Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Manoj K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq ⁶⁰Co source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the use of the method in practical applications, such as the segregation of highly active objects amongst fuel-element debris.
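A minimal sketch of the matrix-inversion step described above, not the authors' code: the response matrix is built from a simple inverse-square model, and the source vector is recovered from the count map by non-negative least squares. The grid geometry, dwell time, and efficiency constant are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def response_matrix(det_xy, src_xy, height, eff=1e-3):
    """R[i, j] = expected count rate at detector position i per unit
    activity at grid cell j (bare inverse-square model, assumed here)."""
    d2 = ((det_xy[:, None, :] - src_xy[None, :, :]) ** 2).sum(-1) + height**2
    return eff / (4.0 * np.pi * d2)

# 1-D grid of candidate source cells and measurement positions (metres)
src_xy = np.array([[x, 0.0] for x in np.linspace(0.0, 1.0, 11)])
det_xy = src_xy.copy()                       # detector scanned over the grid
R = response_matrix(det_xy, src_xy, height=0.5)

rng = np.random.default_rng(0)
truth = np.zeros(len(src_xy)); truth[3] = 1e5    # one 100 kBq-like source
counts = rng.poisson(R @ truth * 60.0)           # 60 s dwell per position

# Non-negative least squares recovers the source vector from the count map.
activity, _ = nnls(R, counts / 60.0)
print(np.round(activity, 0))
```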
9C spectral-index distributions and source-count estimates from 15 to 93 GHz - a re-assessment
NASA Astrophysics Data System (ADS)
Waldram, E. M.; Bolton, R. C.; Riley, J. M.; Pooley, G. G.
2018-01-01
In an earlier paper (2007), we used follow-up observations of a sample of sources from the 9C survey at 15.2 GHz to derive a set of spectral-index distributions up to a frequency of 90 GHz. These were based on simultaneous measurements made at 15.2 GHz with the Ryle telescope and at 22 and 43 GHz with the Karl G. Jansky Very Large Array (VLA). We used these distributions to make empirical estimates of source counts at 22, 30, 43, 70 and 90 GHz. In a later paper (2013), we took data at 15.7 GHz from the Arcminute Microkelvin Imager (AMI) and data at 93.2 GHz from the Combined Array for Research in Millimetre-wave Astronomy (CARMA) and estimated the source count at 93.2 GHz. In this paper, we re-examine the data used in both papers and now believe that the VLA flux densities we measured at 43 GHz were significantly in error, being on average only about 70 per cent of their correct values. Here, we present strong evidence for this conclusion and discuss the effect on the source-count estimates made in the 2007 paper. The source-count prediction in the 2013 paper is also revised. We make comparisons with spectral-index distributions and source counts from other telescopes, in particular with a recent deep 95 GHz source count measured by the South Pole Telescope. We investigate reasons for the problem of the low VLA 43-GHz values and find a number of possible contributory factors, but none is sufficient on its own to account for such a large deficit.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
NASA Astrophysics Data System (ADS)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.
2018-04-01
The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
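A worked example of the broken power law quoted in this abstract, using its break flux (3.5 × 10⁻¹¹ ph cm⁻² s⁻¹) and indices (2.09 above, 1.07 below). The normalization A and the integration limits are arbitrary assumptions, so only relative quantities such as the faint-source flux fraction are meaningful here.

```python
import numpy as np

S_b, n_above, n_below, A = 3.5e-11, 2.09, 1.07, 1.0   # from the abstract; A arbitrary

def dnds(S):
    """Broken power law in the differential source counts dN/dS."""
    return A * np.where(S >= S_b, (S / S_b) ** -n_above, (S / S_b) ** -n_below)

S = np.logspace(-13, -7, 4001)              # flux grid (ph cm^-2 s^-1)
total = np.trapz(S * dnds(S), S)            # total flux from the population
faint = np.trapz(np.where(S < S_b, S * dnds(S), 0.0), S)
print("fraction of flux from sources below the break:", faint / total)
```

Because the faint-end index 1.07 is well below 2, the flux integral converges at the low-flux end, which is why a finite blazar contribution to the background can be quoted.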
Emission Features and Source Counts of Galaxies in Mid-Infrared
NASA Technical Reports Server (NTRS)
Xu, C.; Hacking, P. B.; Fang, F.; Shupe, D. L.; Lonsdale, C. J.; Lu, N. Y.; Helou, G.; Stacey, G. J.; Ashby, M. L. N.
1998-01-01
In this work we incorporate the newest ISO results on the mid-infrared spectral-energy-distributions (MIR SEDs) of galaxies into models for the number counts and redshift distributions of MIR surveys.
Getting something out of nothing in the measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Tan, Yong-Gang; Cai, Qing-Yu; Yang, Hai-Feng; Hu, Yao-Hua
2015-11-01
Because of the monogamy of entanglement, measurement-device-independent quantum key distribution (MDI-QKD) is immune to side-information leakage from the measurement devices. When the correlated measurement outcomes are generated from dark counts, no entanglement is actually obtained. However, secure key bits can still be proven to be generated from these measurement outcomes. In particular, we give numerical studies of the contribution of dark counts to the key generation rate in practical decoy-state MDI-QKD, where a signal source, a weaker decoy source and a vacuum decoy source are used by each legitimate key distributor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; ...
2018-03-29
Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.; Ishii, Takako T.
2004-03-01
Source confusion has been a long-standing problem in astronomy. In previous formulations of the confusion problem, sources were assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for the confusion problem with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects increase the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
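A numerical sketch of the Bayesian construction this paper tabulates, under simplifying assumptions: a uniform prior on the source intensity s ≥ 0, a known mean background B, and an equal-tailed interval (the paper's tabulated limits use a different, minimal-width choice). Grid size and the example numbers are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def bayesian_interval(N, B, cl=0.90, s_max=50.0, n_grid=20001):
    """Equal-tailed credible interval for the source intensity s, given
    N observed counts with mean background B, flat prior on s >= 0."""
    s = np.linspace(0.0, s_max, n_grid)
    post = poisson.pmf(N, s + B)            # likelihood; flat prior in s
    post /= np.trapz(post, s)               # normalize the posterior density
    cdf = np.cumsum(post) * (s[1] - s[0])
    lo = s[np.searchsorted(cdf, (1 - cl) / 2)]
    hi = s[np.searchsorted(cdf, 1 - (1 - cl) / 2)]
    return lo, hi

# e.g. 3 counts observed with 2.0 expected background counts
print(bayesian_interval(N=3, B=2.0))
```

The Bayesian limits stay non-negative by construction even when N is comparable to B, which is the small-count regime the paper emphasizes.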
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/−0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/−13)% (81 (+52/−19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > −1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ −1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = −2.6 (+0.7/−1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
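The standard maximum-likelihood slope estimator for fluences above a completeness limit, sketched below for a pure power law; the paper's full analysis also models beam and rate effects, which are not reproduced here. The synthetic fluences and the seed are illustrative.

```python
import numpy as np

def cumulative_slope_mle(fluence, f_min):
    """For N(>F) ∝ F^alpha the differential pdf is p(F) ∝ F^{-a} with
    a = 1 - alpha; the MLE is a_hat = 1 + n / sum(log(F_i / f_min)),
    so alpha_hat = 1 - a_hat."""
    f = np.asarray(fluence)
    f = f[f >= f_min]
    a_hat = 1.0 + len(f) / np.log(f / f_min).sum()
    return 1.0 - a_hat

rng = np.random.default_rng(1)
# Draw fluences with a true cumulative slope alpha = -1.5 above 2 Jy ms:
# (1 + Lomax(a)) * f_min has survival function (f_min / F)^a with a = 1.5.
f = 2.0 * (1.0 + rng.pareto(1.5, size=200))
print(cumulative_slope_mle(f, f_min=2.0))   # should be close to -1.5
```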
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
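A toy version of the Poisson comparison the patent describes: histogram neutron counts in fixed gates and test for the over-dispersion that correlated fission-chain neutrons produce. The gate rates and the crude compound-Poisson "fissile" model below are illustrative assumptions, not instrument parameters.

```python
import numpy as np

def poisson_excess(counts):
    """Variance-to-mean ratio minus 1: ~0 for a Poisson (random,
    non-fissile) source, positive for grouped fission-chain neutrons."""
    c = np.asarray(counts)
    return c.var() / c.mean() - 1.0

rng = np.random.default_rng(0)
background = rng.poisson(4.0, 10000)        # random source: excess ~ 0
# Crude correlated source: Poisson number of chains per gate, each
# contributing a random multiplet of detected neutrons.
chains = rng.poisson(1.5, 10000)
fissile = np.array([rng.integers(1, 5, n).sum() if n else 0 for n in chains])
print(poisson_excess(background), poisson_excess(fissile))
```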
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/−0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/−13)% (81 (+52/−19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-26
The source-count distribution as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10⁻¹¹ cm⁻² s⁻¹, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/−1.3) × 10⁻⁸ cm⁻² s⁻¹. The power-law index n₁ = 3.1 (+0.7/−0.5) for bright sources above the break hardens to n₂ = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 × 10⁻¹¹ cm⁻² s⁻¹ at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S
2016-04-15
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10⁻¹² ph cm⁻² s⁻¹). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10⁻¹², 1.5 × 10⁻¹¹] ph cm⁻² s⁻¹ and power-law indices below and above the break of α₂ ∈ [1.60, 1.75] and α₁ = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86 (+16/−14)% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Resolving the Extragalactic γ -Ray Background above 50 GeV with the Fermi Large Area Telescope
Ackermann, M.; Ajello, M.; Albert, A.; ...
2016-04-14
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10⁻¹² ph cm⁻² s⁻¹). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10⁻¹², 1.5 × 10⁻¹¹] ph cm⁻² s⁻¹ and power-law indices below and above the break of α₂ ∈ [1.60, 1.75] and α₁ = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86 (+16/−14)% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Radio Sources Toward Galaxy Clusters at 30 GHz
NASA Technical Reports Server (NTRS)
Coble, K.; Bonamente, M.; Carlstrom, J. E.; Dawson, K.; Hasler, N.; Holzapfel, W.; Joy, M.; LaRoque, S.; Marrone, D. P.; Reese, E. D.
2007-01-01
Extragalactic radio sources are a significant contaminant in cosmic microwave background and Sunyaev-Zeldovich effect experiments. Deep interferometric observations with the BIMA and OVRO arrays are used to characterize the spatial, spectral, and flux distributions of radio sources toward massive galaxy clusters at 28.5 GHz. We compute counts of mJy source fluxes from 89 fields centered on known massive galaxy clusters and 8 non-cluster fields. We find that source counts in the inner regions of the cluster fields (within 0.5 arcmin of the cluster center) are a factor of 8.9 (+4.2/−3.8) times higher than counts in the outer regions of the cluster fields (radius greater than 0.5 arcmin). Counts in the outer regions of the cluster fields are in turn a factor of 3.3 (+4.1/−1.8) greater than those in the non-cluster fields. Counts in the non-cluster fields are consistent with extrapolations from the results of other surveys. We compute spectral indices of mJy sources in cluster fields between 1.4 and 28.5 GHz and find a mean spectral index of α = 0.66 with an rms dispersion of 0.36, where the flux S varies as ν^(−α). The distribution is skewed, with a median spectral index of 0.72 and 25th and 75th percentiles of 0.51 and 0.92, respectively. This is steeper than the spectral indices of stronger field sources measured by other surveys.
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects
ERIC Educational Resources Information Center
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.
2018-01-01
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a ²⁴⁰Pu-equivalent mass (²⁴⁰Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the ²⁴⁰Pu eq mass. The presented theory—which will be indicated as Time Interval Analysis (TIA)—is complementary to Time Correlation Analysis (TCA) theories which were developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
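A sketch of the one-dimensional Rossi-alpha construction named above: a histogram of time differences between each trigger pulse and all subsequent pulses within a window. The timestamps here are a simulated uncorrelated train (so the histogram comes out flat; a correlated source would add a decaying excess), and the window and rate are arbitrary assumptions.

```python
import numpy as np

def rossi_alpha(t, window=1e-3, bins=100):
    """One-dimensional Rossi-alpha distribution from sorted timestamps t (s)."""
    diffs = []
    for i, t0 in enumerate(t):
        j = np.searchsorted(t, t0 + window, side='right')
        diffs.append(t[i + 1:j] - t0)          # pairs within the window
    return np.histogram(np.concatenate(diffs), bins=bins, range=(0.0, window))

t = np.sort(np.random.default_rng(2).uniform(0.0, 10.0, 5000))  # ~500 cps train
hist, edges = rossi_alpha(t)
print(hist[:10])
```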
Long-distance practical quantum key distribution by entanglement swapping.
Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang
2011-02-14
We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.
Tanaka, Naoaki; Papadelis, Christos; Tamilia, Eleonora; Madsen, Joseph R; Pearl, Phillip L; Stufflebeam, Steven M
2018-04-27
This study evaluates magnetoencephalographic (MEG) spike populations compared with intracranial electroencephalographic (IEEG) spikes, using a quantitative method based on distributed source analysis. We retrospectively studied eight patients with medically intractable epilepsy who had MEG and subsequent IEEG monitoring. Fifty MEG spikes were analyzed in each patient using the minimum norm estimate. For individual spikes, each vertex in the source space was considered activated when its source amplitude at the peak latency was higher than a threshold, which was set at 50% of the maximum amplitude over all vertices. We mapped the total count of activation at each vertex. We also analyzed 50 IEEG spikes in the same manner over the intracranial electrodes and created the activation count map. The location of the electrodes was obtained in the MEG source space by coregistering postimplantation computed tomography to MRI. We estimated the MEG- and IEEG-active regions associated with the spike populations using the vertices/electrodes with a count over 25. The activation count maps of MEG spikes demonstrated the localization associated with the spike population, with variable count values at each vertex. The MEG-active region overlapped with 65 to 85% of the IEEG-active region in our patient group. Mapping the MEG spike population is valid for demonstrating the trend of spike clustering in patients with epilepsy. In addition, comparing MEG and IEEG spikes quantitatively may be informative for understanding their relationship.
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in extracting subtle features from radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors used as the signal's subtle features, which were fed to a grey relation algorithm for radiation source signal recognition. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved algorithm better extracts the signal's subtle distribution characteristics under different reconstructed phase spaces, and achieves better recognition with good real-time performance.
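For concreteness, a plain fractal box-counting dimension for the graph of a 1-D signal, the baseline quantity the paper refines; the "improved/generalized" weighting and the grey relation classifier are not reproduced here, and the scales and test signal are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(x, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of the graph of signal x."""
    x = (x - x.min()) / (x.max() - x.min())     # normalize amplitude to [0, 1]
    counts = []
    for k in scales:
        n_cols = len(x) // k                    # columns of width k samples
        seg = x[:n_cols * k].reshape(n_cols, k)
        eps = k / len(x)                        # box size in normalized units
        # boxes needed to cover the graph within each column
        counts.append(np.sum(np.ceil((seg.max(1) - seg.min(1)) / eps) + 1))
    eps_arr = np.array(scales) / len(x)
    slope, _ = np.polyfit(np.log(1.0 / eps_arr), np.log(counts), 1)
    return slope

sig = np.cumsum(np.random.default_rng(3).normal(size=4096))  # Brownian test
print(box_counting_dimension(sig))   # expect ~1.5 for a Brownian graph
```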
Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y
2012-06-01
Recently, pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work, the methods and data-record length required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean to within ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first-order model was shown to produce count series with 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely monitored water quality indicators.
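A small simulation in the spirit of the record-length question above: draw a heavy-tailed discrete count distribution and watch how slowly the running mean stabilizes. Inverse-transform sampling of the type-I discrete Weibull, P(X ≥ k) = q^(k^β), is used; the parameter values are illustrative only, not fitted to any water dataset.

```python
import numpy as np

def discrete_weibull(q, beta, size, rng):
    """Type-I discrete Weibull via inverse transform: P(X >= k) = q**(k**beta)."""
    u = rng.uniform(size=size)
    return np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int)

rng = np.random.default_rng(4)
x = discrete_weibull(q=0.5, beta=0.3, size=2000, rng=rng)   # strongly skewed
running = np.cumsum(x) / np.arange(1, len(x) + 1)
for n in (50, 100, 500, 1000, 2000):
    print(n, running[n - 1])     # convergence toward the long-term mean
```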
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2016-06-01
This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
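A hedged Python sketch (the report's MATLAB scripts are not reproduced) of the two estimates named above for a pulsed-source count histogram: an exponential fit to the prompt die-away and the Sjöstrand area-ratio method. The synthetic data, beta_eff, and all rates are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5e-3, 200)                    # time since pulse (s)
rng = np.random.default_rng(5)
prompt, delayed = 1e4 * np.exp(-t / 5e-4), 50.0    # model: A*exp(-alpha*t) + C
counts = rng.poisson(prompt + delayed)

# (1) prompt-decay ("alpha") fit
model = lambda t, A, alpha, C: A * np.exp(-alpha * t) + C
(A, alpha, C), _ = curve_fit(model, t, counts, p0=(1e4, 1e3, 10.0))
print("fitted prompt decay constant:", alpha)

# (2) area method: reactivity in dollars from prompt vs. delayed areas
dt = t[1] - t[0]
area_prompt = ((counts - C) * dt).sum()            # area above the flat level
area_delayed = C * t[-1]                           # delayed (flat) area
rho_dollars = -area_prompt / area_delayed          # Sjostrand area ratio
beta_eff = 0.0065                                  # assumed effective beta
k_eff = 1.0 / (1.0 - rho_dollars * beta_eff)
print("k_eff estimate:", k_eff)
```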
Support of selected X-ray studies to be performed using data from the Uhuru (SAS-A) satellite
NASA Technical Reports Server (NTRS)
Garmire, G. P.
1976-01-01
A new measurement of the diffuse X-ray emission sets more stringent upper limits on the fluctuations of the background and on the number counts of X-ray sources with |b| > 20 deg than previous measurements. A random sample of background data from the Uhuru satellite gives a relative fluctuation in excess of statistics of 2.0% between 2.4 and 6.9 keV. The hypothesis that the relative fluctuation exceeds 2.9% can be rejected at the 90% confidence level. No discernible energy dependence is evident in the fluctuations in the pulse height data when separated into three energy channels of nearly equal width from 1.8 to 10.0 keV. The probability distribution of fluctuations was convolved with the photon noise and cosmic ray background deviation (obtained from the earth-viewing data) to yield the differential source count distribution for high latitude sources. The results imply that a maximum of 160 sources could lie between 1.7 × 10⁻¹¹ and 5.1 × 10⁻¹¹ erg/sq cm/sec (1-3 Uhuru counts).
Deep 3 GHz number counts from a P(D) fluctuation analysis
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of −1.7, decreasing to about −1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam⁻¹, and the radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean-normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
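A toy P(D) calculation to make the idea concrete: the pixel-flux ("deflection") histogram produced by an unresolved population, simulated directly by scattering power-law sources into beams. Real P(D) fitting, as in the paper, also folds in the synthesized beam shape and instrumental noise; every parameter below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
n_beams, mean_per_beam = 20000, 5.0     # sources per beam solid angle
s_min, gamma = 0.05, 1.7                # dN/dS ∝ S^-1.7 above s_min (uJy)

def draw_flux(n):
    """Inverse-CDF sampler for a pure power-law dN/dS truncated at s_min."""
    u = rng.uniform(size=n)
    return s_min * (1.0 - u) ** (-1.0 / (gamma - 1.0))

n_src = rng.poisson(mean_per_beam, n_beams)
D = np.array([draw_flux(n).sum() for n in n_src])   # deflection per beam
hist, edges = np.histogram(D, bins=100)
print("P(D) peak near", edges[np.argmax(hist)], "uJy")
```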
NASA Astrophysics Data System (ADS)
Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.
2011-09-01
We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented.
AMI-LA observations of the SuperCLASS supercluster
NASA Astrophysics Data System (ADS)
Riseley, C. J.; Grainge, K. J. B.; Perrott, Y. C.; Scaife, A. M. M.; Battye, R. A.; Beswick, R. J.; Birkinshaw, M.; Brown, M. L.; Casey, C. M.; Demetroullas, C.; Hales, C. A.; Harrison, I.; Hung, C.-L.; Jackson, N. J.; Muxlow, T.; Watson, B.; Cantwell, T. M.; Carey, S. H.; Elwood, P. J.; Hickish, J.; Jin, T. Z.; Razavi-Ghods, N.; Scott, P. F.; Titterington, D. J.
2018-03-01
We present a deep survey of the Super-Cluster Assisted Shear Survey (SuperCLASS) supercluster - a region of sky known to contain five Abell clusters at redshift z ˜ 0.2 - performed using the Arcminute Microkelvin Imager (AMI) Large Array (LA) at 15.5 GHz. Our survey covers an area of approximately 0.9 deg2. We achieve a nominal sensitivity of 32.0 μJy beam-1 towards the field centre, finding 80 sources above a 5σ threshold. We derive the radio colour-colour distribution for sources common to three surveys that cover the field and identify three sources with strongly curved spectra - a high-frequency-peaked source and two GHz-peaked-spectrum sources. The differential source count (i) agrees well with previous deep radio source counts, (ii) exhibits no evidence of an emerging population of star-forming galaxies, down to a limit of 0.24 mJy, and (iii) disagrees with some models of the 15 GHz source population. However, our source count is in agreement with recent work that provides an analytical correction to the source count from the Square Kilometre Array Design Study (SKADS) Simulated Sky, supporting the suggestion that this discrepancy is caused by an abundance of flat-spectrum galaxy cores as yet not included in source population models.
Li, Gang; Xu, Jiayun; Zhang, Jie
2015-01-01
Neutron radiation protection is an important research area because of the strong radiation biological effect of neutron fields. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. the Am-Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field caused by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained through a ³He proportional counter at different locations around the source. The results indicate that the count rates outside of the source room are negligible compared with the count rates measured in the source room. In the source room, a ³He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances to the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield with different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances to the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter.
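A sketch of the extrapolation idea: fit the exponential fall-off of the count rate with distance, then convert to dose rate through a proportionality calibrated where the dosimeter still responds. The distances, count rates, and conversion factor below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

r = np.array([1.0, 2.0, 3.0, 4.0, 5.0])              # distance to source (m)
count_rate = np.array([820., 310., 120., 44., 17.])  # 3He counter (cps), made up

expo = lambda r, a, mu: a * np.exp(-mu * r)           # exponential fall-off
(a, mu), _ = curve_fit(expo, r, count_rate, p0=(2000.0, 1.0))

k = 2.5e-3                  # assumed uSv/h per cps, from a calibration point
for r_far in (8.0, 10.0):   # too far for the commercial dosimeter to resolve
    print(r_far, "m:", k * expo(r_far, a, mu), "uSv/h (extrapolated)")
```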
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
Photocounting distributions for exponentially decaying sources.
Teich, M C; Card, H C
1979-05-01
Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
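A numerical counterpart of the closed-form result, built from the same ingredients the abstract states: average the Poisson counting distribution over a uniformly distributed start time t0, with intensity I(t) = I0·exp(−t/τ), so the gate mean is μ(t0) = I0·τ·(e^(−t0/τ) − e^(−(t0+T)/τ)). The grid size and parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import poisson

I0, tau, T, T0 = 50.0, 1.0, 0.5, 5.0   # peak rate, decay time, gate, start span

t0 = np.linspace(0.0, T0, 2001)
mu = I0 * tau * (np.exp(-t0 / tau) - np.exp(-(t0 + T) / tau))  # gate mean counts

def p_n(n):
    """P(n) = (1/T0) * integral over t0 of Poisson(n; mu(t0))."""
    return np.trapz(poisson.pmf(n, mu), t0) / T0

print([round(p_n(n), 4) for n in range(5)])
```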
Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M
2015-10-30
Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data.
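To make the two-component mixture concrete, a minimal ZIP log-likelihood and fit, where pi is the non-risk (structural-zero) fraction; this is a generic sketch of the standard parametric model, not the distribution-free estimator the paper proposes, and the simulation parameters are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def zip_negloglik(params, y):
    """Negative log-likelihood of the ZIP mixture; params are unconstrained
    (logit of pi, log of lambda) to keep the optimizer in-bounds."""
    pi, lam = 1.0 / (1.0 + np.exp(-params[0])), np.exp(params[1])
    p_zero = pi + (1.0 - pi) * np.exp(-lam)        # P(Y = 0) under the mixture
    ll = np.where(y == 0, np.log(p_zero),
                  np.log(1.0 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

rng = np.random.default_rng(7)
y = np.where(rng.uniform(size=5000) < 0.3, 0, rng.poisson(2.5, 5000))  # ZIP data
res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
pi_hat, lam_hat = 1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1])
print(pi_hat, lam_hat)      # should recover roughly 0.3 and 2.5
```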
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
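A grid-based sketch of the simplest case: the posterior for a source intensity s when the source aperture holds N counts with mean f·s + b (fraction f of the PSF plus background b) and a background aperture holds M counts with mean r·b. Flat priors, the aperture parameters, and the counts below are illustrative assumptions, not the paper's worked cases.

```python
import numpy as np
from scipy.stats import poisson

N, M = 28, 40                # counts in source and background apertures
f, r = 0.9, 4.0              # PSF fraction in aperture; background area ratio

s = np.linspace(0.0, 60.0, 601)[:, None]     # source-intensity grid
b = np.linspace(0.01, 30.0, 600)[None, :]    # background-intensity grid
joint = poisson.pmf(N, f * s + b) * poisson.pmf(M, r * b)   # flat priors

post_s = joint.sum(axis=1)                   # marginalize over b
post_s /= np.trapz(post_s, s[:, 0])          # normalize the posterior

print("posterior mean source intensity:", np.trapz(s[:, 0] * post_s, s[:, 0]))
```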
The 2-24 μm source counts from the AKARI North Ecliptic Pole survey
NASA Astrophysics Data System (ADS)
Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.
2014-11-01
We present herein galaxy number counts in the nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from the NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg2, respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and the difference between observed and intrinsic magnitudes were corrected by Monte Carlo simulation. Stellar counts were subtracted using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit: 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with the Euclidean universe, while on the faint side, the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the waveband.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and demonstrated the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
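The core of Poisson resampling can be sketched as per-pixel binomial thinning, which preserves Poisson statistics exactly; the image size and mean below are hypothetical, and White and Lawson's exact procedure may differ in implementation detail:

    import numpy as np

    rng = np.random.default_rng(0)
    full = rng.poisson(lam=50.0, size=(128, 128))   # full-count image
    half = rng.binomial(full, 0.5)                  # keep each count w.p. 1/2
    # Binomial thinning of a Poisson variate is again Poisson, so mean,
    # variance and higher moments of the half-count image are correct.
    print(full.mean() / 2.0, half.mean(), half.var())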
Analysis of neutron propagation from the skyshine port of a fusion neutron source facility
NASA Astrophysics Data System (ADS)
Wakisaka, M.; Kaneko, J.; Fujita, F.; Ochiai, K.; Nishitani, T.; Yoshida, S.; Sawamura, T.
2005-12-01
The process of neutron leakage from a 14 MeV neutron source facility was analyzed by calculations and experiments. The experiments were performed at the Fusion Neutron Source (FNS) facility of the Japan Atomic Energy Research Institute, Tokai-mura, Japan, which has a port on the roof for skyshine experiments, and a 3He counter surrounded with polyethylene moderators of different thicknesses was used to estimate the energy spectra and dose distributions. The 3He counter with a 3-cm-thick moderator was also used for dose measurements, and the doses evaluated from the counter counts and the calculated count-to-dose conversion factor agreed with the calculations to within ∼30%. The dose distribution was found to fit a simple analytical expression, D(r) = Q exp(-r/λD)/r, and the parameters Q and λD are discussed.
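A short sketch of fitting the quoted expression to measured doses with scipy; the distances, the true parameter values, and the noise level are synthetic stand-ins, not the paper's data:

    import numpy as np
    from scipy.optimize import curve_fit

    def dose(r, Q, lam):
        # Skyshine model D(r) = Q * exp(-r / lambda_D) / r
        return Q * np.exp(-r / lam) / r

    rng = np.random.default_rng(2)
    r = np.array([50.0, 100.0, 200.0, 400.0, 800.0])           # distances (m)
    D = dose(r, 150.0, 200.0) * rng.normal(1.0, 0.05, r.size)  # synthetic doses
    (Q_fit, lam_fit), _ = curve_fit(dose, r, D, p0=(100.0, 100.0))
    print(Q_fit, lam_fit)    # should recover roughly 150 and 200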
Detecting fission from special nuclear material sources
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-06-05
A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and in real time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate the unknown source in order to accentuate differences in neutron emission from the unknown source from Poisson distributions and/or environmental sources.
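One simple way to quantify "excess grouped neutrons" relative to a Poisson background is the variance-to-mean excess (Feynman-Y); the sketch below uses a crude stand-in for fission-chain bursts and is not the patented system's algorithm:

    import numpy as np

    def feynman_y(counts):
        # Excess of the variance-to-mean ratio over the Poisson value of 1;
        # ~0 for random (background) neutrons, > 0 for correlated chains.
        c = np.asarray(counts, dtype=float)
        return c.var() / c.mean() - 1.0

    rng = np.random.default_rng(3)
    background = rng.poisson(4.0, 100_000)                   # pure Poisson
    bursts = rng.poisson(2.0, 100_000) * rng.integers(1, 4, 100_000)
    print(feynman_y(background), feynman_y(bursts))          # ~0 vs ~2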
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinohara, K., E-mail: shinohara.koji@jaea.go.jp; Ochiai, K.; Sukegawa, A.
In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out on the Fusion Neutronics Source. Comparing four alignment configurations, it was found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using 25-mm-thick scintillators had about 1.65 times the count rate capability of a single-detector system for d-D neutrons and about 1.8 times the count rate capability for d-T neutrons. The results suggested that the concept of a multi-stage detection system will work in practice.
Galaxy evolution and large-scale structure in the far-infrared. II - The IRAS faint source survey
NASA Astrophysics Data System (ADS)
Lonsdale, Carol J.; Hacking, Perry B.; Conrow, T. P.; Rowan-Robinson, M.
1990-07-01
The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no one structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling.
Galaxy evolution and large-scale structure in the far-infrared. II. The IRAS faint source survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lonsdale, C.J.; Hacking, P.B.; Conrow, T.P.
1990-07-01
The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no one structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling.
Galaxy evolution and large-scale structure in the far-infrared. II - The IRAS faint source survey
NASA Technical Reports Server (NTRS)
Lonsdale, Carol J.; Hacking, Perry B.; Conrow, T. P.; Rowan-Robinson, M.
1990-01-01
The new IRAS Faint Source Survey data base is used to confirm the conclusion of Hacking et al. (1987) that the 60 micron source counts fainter than about 0.5 Jy lie in excess of predictions based on nonevolving model populations. The existence of an anisotropy between the northern and southern Galactic caps discovered by Rowan-Robinson et al. (1986) and Needham and Rowan-Robinson (1988) is confirmed, and it is found to extend below their sensitivity limit to about 0.3 Jy in 60 micron flux density. The count anisotropy at f(60) greater than 0.3 Jy can be interpreted reasonably as due to the Local Supercluster; however, no one structure accounting for the fainter anisotropy can be easily identified in either optical or far-IR two-dimensional sky distributions. The far-IR galaxy sky distributions are considerably smoother than distributions from the published optical galaxy catalogs. It is likely that structures of the large size discussed here have been discriminated against in earlier studies due to insufficient volume sampling.
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Estimating the mass variance in neutron multiplicity counting - A comparison of approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubi, C.; Croft, S.; Favalli, A.
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Estimating the mass variance in neutron multiplicity counting - A comparison of approaches
Dubi, C.; Croft, S.; Favalli, A.; ...
2017-09-14
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
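A sketch of the two ingredients shared by these comparisons: the first three sampled factorial moments of the gate count distribution, and a bootstrap uncertainty for one of them. The Poisson gate counts and the 200 resamples are illustrative stand-ins for real NMC cycle data:

    import numpy as np

    def factorial_moments(counts, kmax=3):
        # Sampled factorial moments m_k = E[n (n-1) ... (n-k+1)]
        n = np.asarray(counts, dtype=float)
        return [np.mean(np.prod([n - j for j in range(k)], axis=0))
                for k in range(1, kmax + 1)]

    rng = np.random.default_rng(4)
    gates = rng.poisson(3.0, 20_000)           # stand-in gate count data
    m1, m2, m3 = factorial_moments(gates)

    # Bootstrap: resample the gate counts with replacement and repeat.
    boot_m3 = np.array([factorial_moments(rng.choice(gates, gates.size))[2]
                        for _ in range(200)])
    print(m1, m2, m3, boot_m3.std())           # moments and m3 uncertainty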
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
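For exponential inter-pulse intervals, the Bayesian update has a conjugate form, which the following sketch uses to trigger an alarm; the prior, the rates, and the 0.95 decision threshold are assumptions for illustration, not the paper's exact algorithm:

    import numpy as np
    from scipy.stats import gamma

    rng = np.random.default_rng(5)
    a0, b0 = 1.0, 1.0                   # Gamma prior on the count rate (cps)
    alarm_rate = 20.0                   # alarm if rate exceeds this (cps)
    intervals = rng.exponential(1.0 / 50.0, 200)   # pulses from a 50 cps source

    # With exponential intervals, the posterior after n pulses is conjugate:
    # rate | data ~ Gamma(a0 + n, b0 + sum of the first n intervals).
    for n in range(1, intervals.size + 1):
        a, b = a0 + n, b0 + intervals[:n].sum()
        if gamma.sf(alarm_rate, a, scale=1.0 / b) > 0.95:
            print("alarm after", n, "pulses")
            break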
Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J
2012-08-13
The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates requires a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within-animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
Recovery and diversity of heterotrophic bacteria from chlorinated drinking waters.
Maki, J S; LaCroix, S J; Hopkins, B S; Staley, J T
1986-01-01
Heterotrophic bacteria were enumerated from the Seattle drinking water catchment basins and distribution system. The highest bacterial recoveries were obtained by using a very dilute medium containing 0.01% peptone as the primary carbon source. Other factors favoring high recovery were the use of incubation temperatures close to that of the habitat and an extended incubation (28 days or longer provided the highest counts). Total bacterial counts were determined by using acridine orange staining. With one exception, all acridine orange counts in chlorinated samples were lower than those in prechlorinated reservoir water, indicating that chlorination often reduces the number of acridine orange-detectable bacteria. Source waters had higher diversity index values than did samples examined following chlorination and storage in reservoirs. Shannon index values based upon colony morphology were in excess of 4.0 for prechlorinated source waters, whereas the values for final chlorinated tap waters were lower than 2.9. It is not known whether the reduction in diversity was due solely to chlorination or in part to other factors in the water treatment and distribution system. Based upon the results of this investigation, we provide a list of recommendations for changes in the procedures used for the enumeration of heterotrophic bacteria from drinking waters. PMID:3524453
Does small-perimeter fencing inhibit mule deer or pronghorn use of water developments?
Larsen, R.T.; Bissonette, J.A.; Flinders, J.T.; Robinson, A.C.
2011-01-01
Wildlife water development can be an important habitat management strategy in western North America for many species, including both pronghorn (Antilocapra americana) and mule deer (Odocoileus hemionus). In many areas, water developments are fenced (often with small-perimeter fencing) to exclude domestic livestock and feral horses. Small-perimeter exclosures could limit wild ungulate use of fenced water sources, as exclosures present a barrier pronghorn and mule deer must negotiate to gain access to fenced drinking water. To evaluate the hypothesis that exclosures limit wild ungulate access to water sources, we compared use (photo counts) of fenced versus unfenced water sources for both pronghorn and mule deer between June and October 2002-2008 in western Utah. We used model selection to identify an adequate distribution and best approximating model. We selected a zero-inflated negative binomial distribution for both pronghorn and mule deer photo counts. Both pronghorn and mule deer photo counts were positively associated with sampling time and average daily maximum temperature in top models. A fence effect was present in top models for both pronghorn and mule deer, but mule deer response to small-perimeter fencing was much more pronounced than pronghorn response. For mule deer, we estimated that the presence of a fence around water developments reduced photo counts by a factor of 0.25. We suggest eliminating fencing of water developments whenever possible or fencing a big enough area around water sources to avoid inhibiting mule deer. More generally, our results provide additional evidence that water development design and placement influence wildlife use. Failure to account for species-specific preferences will limit effectiveness of management actions and could compromise research results. Copyright © 2011 The Wildlife Society.
Data-optimized source modeling with the Backwards Liouville Test–Kinetic method
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...
2017-09-14
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution were used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
pyblocxs: Bayesian Low-Counts X-ray Spectral Analysis in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, A.; Kashyap, V.; Refsdal, B.; van Dyk, D.; Connors, A.; Park, T.
2011-07-01
Typical X-ray spectra have low counts and should be modeled using the Poisson distribution. However, χ2 statistic is often applied as an alternative and the data are assumed to follow the Gaussian distribution. A variety of weights to the statistic or a binning of the data is performed to overcome the low counts issues. However, such modifications introduce biases or/and a loss of information. Standard modeling packages such as XSPEC and Sherpa provide the Poisson likelihood and allow computation of rudimentary MCMC chains, but so far do not allow for setting a full Bayesian model. We have implemented a sophisticated Bayesian MCMC-based algorithm to carry out spectral fitting of low counts sources in the Sherpa environment. The code is a Python extension to Sherpa and allows to fit a predefined Sherpa model to high-energy X-ray spectral data and other generic data. We present the algorithm and discuss several issues related to the implementation, including flexible definition of priors and allowing for variations in the calibration information.
Radio Source Contributions to the Microwave Sky
NASA Astrophysics Data System (ADS)
Boughn, S. P.; Partridge, R. B.
2008-03-01
Cross-correlations of the Wilkinson Microwave Anisotropy Probe (WMAP) full sky K-, Ka-, Q-, V-, and W-band maps with the 1.4 GHz NVSS source count map and the HEAO I A2 2-10 keV full sky X-ray flux map are used to constrain rms fluctuations due to unresolved microwave sources in the WMAP frequency range. In the Q band (40.7 GHz), a lower limit, taking account of only those fluctuations correlated with the 1.4 GHz radio source counts and X-ray flux, corresponds to an rms Rayleigh-Jeans temperature of ∼2 μK for a solid angle of 1 deg² assuming that the cross-correlations are dominated by clustering, and ∼1 μK if dominated by Poisson fluctuations. The correlated fluctuations at the other bands are consistent with a β = -2.1 ± 0.4 frequency spectrum. If microwave sources are distributed similarly in redshift to the radio and X-ray sources and are similarly clustered, then the implied total rms microwave fluctuations correspond to ∼5 μK. While this value should be considered no more than a plausible estimate, it is similar to that implied by the excess, small angular scale fluctuations observed in the Q band by WMAP and is consistent with estimates made by extrapolating low-frequency source counts.
Wilkes, E J A; Cowling, A; Woodgate, R G; Hughes, K J
2016-10-15
Faecal egg counts (FEC) are used widely for monitoring of parasite infection in animals, treatment decision-making and estimation of anthelmintic efficacy. When a single count or sample mean is used as a point estimate of the expectation of the egg distribution over some time interval, the variability in the egg density is not accounted for. Although the variability of egg count data, including the quantification of its sources, has been described, the spatiotemporal distribution of nematode eggs in faeces is not well understood. We believe that statistical inference about the mean egg count for treatment decision-making has not been used previously. The aim of this study was to examine the density of Parascaris eggs in solution and faeces and to describe the use of hypothesis testing for decision-making. Faeces from two foals with Parascaris burdens were mixed with magnesium sulphate solution and 30 McMaster chambers were examined to determine the egg distribution in a well-mixed solution. To examine the distribution of eggs in faeces from an individual animal, three faecal piles from a foal with a known Parascaris burden were obtained, from which 81 counts were performed. A single faecal sample was also collected daily from 20 foals on three consecutive days and a FEC was performed on three separate portions of each sample. As appropriate, Poisson or negative binomial confidence intervals for the distribution mean were calculated. Parascaris eggs in a well-mixed solution conformed to a homogeneous Poisson process, while the egg density in faeces was not homogeneous, but aggregated. This study provides an extension from homogeneous to inhomogeneous Poisson processes, leading to an understanding of why Poisson and negative binomial distributions correspondingly provide a good fit for egg count data. The application of one-sided hypothesis tests for decision-making is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
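A sketch of the corresponding interval estimate and one-sided test for a single Poisson count (the Garwood exact interval); the observed count and the treatment threshold below are hypothetical:

    from scipy.stats import chi2, poisson

    def poisson_ci(k, conf=0.95):
        # Garwood exact confidence interval for a Poisson mean, given count k.
        a = 1.0 - conf
        lo = 0.0 if k == 0 else chi2.ppf(a / 2.0, 2 * k) / 2.0
        hi = chi2.ppf(1.0 - a / 2.0, 2 * (k + 1)) / 2.0
        return lo, hi

    k = 12                            # eggs counted in one McMaster chamber
    print(poisson_ci(k))              # interval for the chamber mean

    # One-sided test of H0: mean <= 5 eggs per chamber (treat if exceeded).
    p_value = poisson.sf(k - 1, 5.0)  # P(K >= k | mean = 5)
    print(p_value)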
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieira, J. D.; Crawford, T. M.; Switzer, E. R.
2010-08-10
We report the results of an 87 deg² point-source survey centered at R.A. 5h30m, decl. -55°, taken with the South Pole Telescope at 1.4 and 2.0 mm wavelengths with arcminute resolution and milli-Jansky depth. Based on the ratio of flux in the two bands, we separate the detected sources into two populations, one consistent with synchrotron emission from active galactic nuclei and the other consistent with thermal emission from dust. We present source counts for each population from 11 to 640 mJy at 1.4 mm and from 4.4 to 800 mJy at 2.0 mm. The 2.0 mm counts are dominated by synchrotron-dominated sources across our reported flux range; the 1.4 mm counts are dominated by synchrotron-dominated sources above ≈15 mJy and by dust-dominated sources below that flux level. We detect 141 synchrotron-dominated sources and 47 dust-dominated sources at signal-to-noise ratio S/N > 4.5 in at least one band. All of the most significantly detected members of the synchrotron-dominated population are associated with sources in previously published radio catalogs. Some of the dust-dominated sources are associated with nearby (z << 1) galaxies whose dust emission is also detected by the Infrared Astronomy Satellite. However, most of the bright, dust-dominated sources have no counterparts in any existing catalogs. We argue that these sources represent the rarest and brightest members of the population commonly referred to as submillimeter galaxies (SMGs). Because these sources are selected at longer wavelengths than in typical SMG surveys, they are expected to have a higher mean redshift distribution and may provide a new window on galaxy formation in the early universe.
Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions
Talamo, A.; Gohar, Y.; Gabrielli, F.; ...
2017-02-01
This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated with the experimental results from the KUCA facility of Japan by Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. Finally, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with up to nanosecond precision using MCNP6, which allows obtaining the Rossi and Feynman distributions.
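A toy numeric illustration of the area (Sjöstrand) method, splitting the integrated response into prompt and delayed parts to estimate the reactivity in dollars, ρ($) = -A_prompt/A_delayed; the pulse shape, decay constant, and delayed plateau are assumed values, not the KUCA data:

    import numpy as np

    # Toy pulsed-source response over one pulse period: a prompt decay on
    # top of a delayed-neutron plateau (delayed neutrons at equilibrium).
    t = np.linspace(0.0, 0.02, 2001)             # time (s)
    dt = t[1] - t[0]
    prompt = 1.0e5 * np.exp(-t / 1.0e-3)         # assumed prompt component
    delayed = np.full_like(t, 2.0e3)             # assumed delayed plateau
    counts = prompt + delayed

    area_delayed = delayed.sum() * dt
    area_prompt = (counts - delayed).sum() * dt
    print(-area_prompt / area_delayed)           # reactivity in dollars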
Relativistic Transformations of Light Power.
ERIC Educational Resources Information Center
McKinley, John M.
1979-01-01
Using a photon-counting technique, finds the angular distribution of emitted and detected power and the total radiated power of an arbitrary moving source, and uses the technique to verify the predicted effect of the earth's motion through the cosmic blackbody radiation. (Author/GA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution were used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
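A simulation sketch of such a renewal count distribution with a two-component exponential mixture duration, showing the overdispersion (variance-to-mean ratio above 1); all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(6)

    def renewal_counts(T, w, rate1, rate2, n):
        # Counts in [0, T] of a renewal process whose durations are
        # Exp(rate1) with probability w and Exp(rate2) otherwise.
        out = np.empty(n, dtype=int)
        for i in range(n):
            t, k = 0.0, 0
            while True:
                rate = rate1 if rng.random() < w else rate2
                t += rng.exponential(1.0 / rate)
                if t > T:
                    break
                k += 1
            out[i] = k
        return out

    c = renewal_counts(T=10.0, w=0.3, rate1=0.5, rate2=5.0, n=20_000)
    print(c.mean(), c.var() / c.mean())    # variance/mean > 1: overdispersion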
Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.
2006-01-01
Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
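The error structure described above can be sketched as a binomial detection stage plus a Poisson count of false identifications; the true redd number and the false-identification rate below are hypothetical, with p = 0.83 taken from the abstract:

    import numpy as np

    rng = np.random.default_rng(7)
    n_true = 120        # hypothetical true number of redds
    p_detect = 0.83     # mean detection probability (from the abstract)
    lam_false = 3.0     # assumed mean number of false identifications

    # Observed count = Binomial(n_true, p) detections + Poisson false IDs.
    obs = rng.binomial(n_true, p_detect, 10_000) + rng.poisson(lam_false, 10_000)
    print(obs.mean())                              # raw counts are biased
    print((obs.mean() - lam_false) / p_detect)     # moment-corrected estimate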
Development of a beta-spectrometer using PIPS technology
Courti; Goutelard; Burger; Blotin
2000-07-01
Various anthropogenic sources contribute to the inventory of long-lived beta-emitters in the environment. Studies have been carried out to obtain the 90Sr distribution in the environment in order to estimate its impact in terms of radiation exposure to humans. The laboratory routinely measures 90Sr by proportional counter after radiochemistry. An incomplete radiochemical separation leads to a deposit whose count is polluted by natural beta-emitters. In order to confirm the result, 90Y (the daughter of 90Sr) is extracted from the final radiochemical fraction and counted. The 90Y decay (T1/2 = 2.67 days) is checked by successive counts over 64 h. A delay of 15 days between the end of radiochemistry and the counting is imposed to allow radioactive equilibrium between 90Sr and 90Y to be established. In order to remove this delay, the purity of the 90Sr fraction can be verified by beta-spectrometry. Thus, a beta-spectrometer is under development in collaboration with Canberra Semi-Conductor and Canberra Electronic. It consists of a PIPS detector in which several silicon layers are combined. Initial results are presented in this paper.
A time-domain fluorescence diffusion optical tomography system for breast tumor diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Wei; Gao, Feng; Wu, LinHui; Ma, Wenjuan; Yang, Fang; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan
2011-02-01
A prototype time-domain fluorescence diffusion optical tomography (FDOT) system using near-infrared light is presented. The system employs two pulsed light sources, 32 source fibers and 32 detection channels, working separately for acquiring the temporal distribution of the photon flux on the tissue surface. The light sources are provided by low power picosecond pulsed diode lasers at wavelengths of 780 nm and 830 nm, and a 1×32-fiber-optic-switch sequentially directs light sources to the object surface through 32 source fibers. The light signals re-emitted from the object are collected by 32 detection fibers connected to four 8×1 fiber-optic-switch and then routed to four time-resolved measuring channels, each of which consists of a collimator, a filter wheel, a photomultiplier tube (PMT) photon-counting head and a time-correlated single photon counting (TCSPC) channel. The performance and efficacy of the designed multi-channel PMT-TCSPC system are assessed by reconstructing the fluorescent yield and lifetime images of a solid phantom.
Gravitational wave source counts at high redshift and in models with extra dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es
2016-07-01
Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high-redshift regime, and we discuss the complications of applying this methodology to high-redshift sources. We also allow for models with compactified extra dimensions as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining not only GW physics but also the underlying cosmology, by simulating sources distributed over a finite range of signal-to-noise, with the number of sources ranging from 10 to 500 as expected from future detectors. We find that with 500 events it will be possible to constrain the present matter density parameter Ω_m,0 to within a few percent, with the precision growing fast with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to place stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.
The Norma arm region Chandra survey catalog: X-ray populations in the spiral arms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fornasini, Francesca M.; Tomsick, John A.; Bodaghee, Arash
2014-12-01
We present a catalog of 1415 X-ray sources identified in the Norma Arm Region Chandra Survey (NARCS), which covers a 2° × 0.8° region in the direction of the Norma spiral arm to a depth of ≈20 ks. Of these sources, 1130 are point-like sources detected with ≥3σ confidence in at least one of three energy bands (0.5-10, 0.5-2, and 2-10 keV), five have extended emission, and the remainder are detected at low significance. Since most sources have too few counts to permit individual classification, they are divided into five spectral groups defined by their quantile properties. We analyze stacked spectra of X-ray sources within each group, in conjunction with their fluxes, variability, and infrared counterparts, to identify the dominant populations in our survey. We find that ∼50% of our sources are foreground sources located within 1-2 kpc, which is consistent with expectations from previous surveys. Approximately 20% of sources are likely located in the proximity of the Scutum-Crux and near Norma arms, while 30% are more distant, in the proximity of the far Norma arm or beyond. We argue that a mixture of magnetic and nonmagnetic cataclysmic variables dominates the Scutum-Crux and near Norma arms, while intermediate polars and high-mass stars (isolated or in binaries) dominate the far Norma arm. We also present the cumulative number count distribution for sources in our survey that are detected in the hard energy band. A population of very hard sources in the vicinity of the far Norma arm and active galactic nuclei dominate the hard X-ray emission down to f_X ≈ 10⁻¹⁴ erg cm⁻² s⁻¹, but the distribution curve flattens at fainter fluxes. We find good agreement between the observed distribution and predictions based on other surveys.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," a strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon comparison with the input source data. It turns out that sources with 10 photons or fewer may be confidently detected near the image center in medium-length (~10⁴ s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration
2017-10-01
We present a morphological classification of J-PLUS EDR sources into compact (i.e. stars) and extended (i.e. galaxies) objects. The classification is based on Bayesian modelling of the concentration distribution, including observational errors and magnitude + sky position priors. We provide the star/galaxy probability of each source, computed from the gri images. The comparison with SDSS number counts supports our classification up to r ∼ 21. The 31.7 deg² analysed comprise 150k stars and 101k galaxies.
Statistical modeling of dental unit water bacterial test kit performance.
Cohen, Mark E; Harte, Jennifer A; Stone, Mark E; O'Connor, Karen H; Coen, Michael L; Cullum, Malford E
2007-01-01
While it is important to monitor dental water quality, it is unclear whether in-office test kits provide bacterial counts comparable to the gold-standard method (R2A). Studies were conducted on specimens with known bacterial concentrations, and from dental units, to evaluate test kit accuracy across a range of bacterial types and loads. Colony-forming units (CFU) were counted for samples from each source, using R2A and two types of test kits, and conformity to Poisson distribution expectations was evaluated. Poisson regression was used to test for effects of source and device, and to estimate rate ratios for kits relative to R2A. For all devices, distributions were Poisson for low CFU/mL when only beige-pigmented bacteria were considered. For higher counts, R2A remained Poisson, but kits exhibited overdispersion. Both kits undercounted relative to R2A, but the degree of undercounting was reasonably stable. Kits did not grow pink-pigmented bacteria from dental-unit water identified as Methylobacterium rhodesianum. Only one of the test kits provided results with adequate reliability at higher bacterial concentrations. Undercount bias could be estimated for this device and used to adjust test kit results. Insensitivity to Methylobacterium spp. is problematic.
Water quality problems associated with intermittent water supply.
Tokajian, S; Hashwa, F
2003-01-01
A controlled study was conducted in Lebanon over a period of 12 months to determine bacterial regrowth in a small network supplying the Beirut suburb of Naccache that had a population of about 3,000. The residential area, which is fed by gravity, is supplied twice a week with chlorinated water from two artesian wells of a confined aquifer. A significant correlation was detected between the turbidity and the levels of heterotrophic plate count bacteria (HPC) in the samples from the distribution network as well as from the artesian wells. However, a negative significant correlation was found between the temperature and the HPC count in the samples collected from the source. A statistically significant increase in counts, possibly due to regrowth, was repeatedly established between two sampling points lying on a straight distribution line but 1 km apart. Faecal coliforms were detected in the source water but none in the network except during a pipe breakage incident with confirmed Escherichia coli reaching 40 CFU/100 mL. However, coliforms such as Citrobacter freundii, Enterobacter agglomerans, E. cloacae and E. skazakii were repeatedly isolated from the network, mainly due to inadequate chlorination. A second controlled study was conducted to determine the effect of storage on the microbial quality of household storage tanks (500 L), which were of two main types - galvanized cast iron and black polyethylene. The mean bacterial count increased significantly after 7 d storage in both tank types. A significant difference was found in the mean HPC/mL between the winter and the summer. Highest counts were found April-June although the maximum temperature was reported later in the summer. A positive correlation was established between the HPC/mL and pH, temperature and storage time.
Marzocchi, O; Breustedt, B; Mostacci, D; Zankl, M; Urban, M
2011-03-01
A goal of whole body counting (WBC) is the estimation of the total body burden of radionuclides disregarding the actual position within the body. To achieve the goal, the detectors need to be placed in regions where the photon flux is as independent as possible from the distribution of the source. At the same time, the detectors need high photon fluxes in order to achieve better efficiency and lower minimum detectable activities. This work presents a method able to define the layout of new WBC systems and to study the behaviour of existing ones using both detection efficiency and its dependence on the position of the source within the body of computational phantoms.
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.
2013-02-01
We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg² (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at 353 GHz or higher (or 217 GHz and lower) frequencies, the number is dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and models taking into account both radio and infrared galaxies, and covering a large range of flux densities. We derive the multi-frequency Euclidean level - the plateau in the normalised differential counts at high flux density - and compare it to WMAP, Spitzer and IRAS results. The submillimetre number counts are not well reproduced by current evolution models of dusty galaxies, whereas the millimetre part appears reasonably well fitted by the most recent model for synchrotron-dominated sources. Finally we provide estimates of the local luminosity density of dusty galaxies, providing the first such measurements at 545 and 857 GHz.
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
We consider the determination of the slope of a power-law number-flux relationship in the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
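A simulation sketch of the underlying bias: fluxes drawn from a power law, measured with Poisson noise, and a naive maximum-likelihood slope estimate at two photon-collection depths; the slope, limiting flux, and exposures are assumed values, and the paper's bias-free estimator is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(8)
    gamma_true, s_lim = 1.5, 1.0          # true slope; survey flux limit

    # Power-law fluxes extending well below the survey limit.
    S = 0.2 * (1.0 - rng.random(200_000)) ** (-1.0 / gamma_true)

    for exposure in (25.0, 400.0):        # expected counts per unit flux
        s_hat = rng.poisson(S * exposure) / exposure   # noisy flux estimates
        x = s_hat[s_hat >= s_lim]
        slope = x.size / np.log(x / s_lim).sum()       # ML (Pareto) estimate
        print(exposure, slope)            # approaches 1.5 as noise decreases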
The first Extreme Ultraviolet Explorer source catalog
NASA Technical Reports Server (NTRS)
Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.
1994-01-01
The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.
Modeling unobserved sources of heterogeneity in animal abundance using a Dirichlet process prior
Dorazio, R.M.; Mukherjee, B.; Zhang, L.; Ghosh, M.; Jelks, H.L.; Jordan, F.
2008-01-01
In surveys of natural populations of animals, a sampling protocol is often spatially replicated to collect a representative sample of the population. In these surveys, differences in abundance of animals among sample locations may induce spatial heterogeneity in the counts associated with a particular sampling protocol. For some species, the sources of heterogeneity in abundance may be unknown or unmeasurable, leading one to specify the variation in abundance among sample locations stochastically. However, choosing a parametric model for the distribution of unmeasured heterogeneity is potentially subject to error and can have profound effects on predictions of abundance at unsampled locations. In this article, we develop an alternative approach wherein a Dirichlet process prior is assumed for the distribution of latent abundances. This approach allows for uncertainty in model specification and for natural clustering in the distribution of abundances in a data-adaptive way. We apply this approach in an analysis of counts based on removal samples of an endangered fish species, the Okaloosa darter. Results of our data analysis and simulation studies suggest that our implementation of the Dirichlet process prior has several attractive features not shared by conventional, fully parametric alternatives. © 2008, The International Biometric Society.
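A minimal sketch of drawing latent abundances from a truncated stick-breaking Dirichlet process prior; the lognormal base measure, the concentration parameter, and the truncation level are assumptions for illustration, not the authors' exact specification:

    import numpy as np

    rng = np.random.default_rng(9)
    alpha, K = 2.0, 50          # DP concentration; truncation level

    # Truncated stick-breaking draw of a Dirichlet process over mean
    # abundances, with a lognormal base measure (illustrative choice).
    v = rng.beta(1.0, alpha, K)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    lam = rng.lognormal(1.0, 1.0, K)       # candidate cluster abundances

    # Latent abundance at each of 30 sites: pick a cluster, then count.
    z = rng.choice(K, 30, p=w / w.sum())
    counts = rng.poisson(lam[z])
    print(counts)        # sites cluster naturally around shared abundances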
Operating envelopes of particle sizing instrumentation used for icing research
NASA Technical Reports Server (NTRS)
Hovenac, Edward A.
1987-01-01
The Forward Scattering Spectrometer Probe and the Optical Array Probe are analyzed in terms of their ability to make accurate determinations of water droplet size distributions. Sources of counting and sizing errors are explained. The paper describes ways of identifying these errors and how they can affect measurement.
Lehtola, Markku J; Juhna, Tālis; Miettinen, Ilkka T; Vartiainen, Terttu; Martikainen, Pertti J
2004-12-01
The formation of biofilms in drinking water distribution networks is a significant technical, aesthetic and hygienic problem. In this study, the effects of assimilable organic carbon, microbially available phosphorus (MAP), residual chlorine, temperature and corrosion products on the formation of biofilms were studied in two full-scale water supply systems in Finland and Latvia. Biofilm collectors consisting of polyvinyl chloride pipes were installed in several waterworks and distribution networks, which were supplied with chemically precipitated surface waters and groundwater from different sources. During a 1-year study, the biofilm density was measured by heterotrophic plate counts on R2A-agar, acridine orange direct counting and ATP-analyses. A moderate level of residual chlorine decreased biofilm density, whereas an increase of MAP in water and accumulated cast iron corrosion products significantly increased biofilm density. This work confirms, in a full-scale distribution system in Finland and Latvia, our earlier in vitro finding that biofilm formation is affected by the availability of phosphorus in drinking water.
Multi-channel photon counting DOT system based on digital lock-in detection technique
NASA Astrophysics Data System (ADS)
Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng
2011-02-01
Relying on the deep penetration of light in tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis and can provide information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status and blood oxygen kinetics, owing to its non-invasiveness, safety and other advantages. Continuous-wave DOT image reconstruction algorithms require measurement of the surface distribution of the output photon flux excited by more than one driving source, which makes source coding necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which uses an optical switch to direct light into optical fibers at different locations. However, when there are many source locations or multiple wavelengths are used, the TDM measurement time and the interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is recovered from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for a photon counting measurement system is implemented on an FPGA development platform. A dual-channel DOT photon counting experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method greatly accelerates DOT measurement and also provides simultaneous measurements at different source-detector locations.
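The digital lock-in recovery described above can be sketched as quadrature demodulation followed by averaging; the modulation frequencies, amplitudes, and noise level below are assumed values, not the system's actual parameters:

    import numpy as np

    fs, T = 10_000.0, 1.0                    # sampling rate (Hz), duration (s)
    t = np.arange(0.0, T, 1.0 / fs)
    f1, f2 = 137.0, 229.0                    # assumed modulation frequencies
    a1, a2 = 3.0, 1.2                        # assumed source amplitudes

    # The detector sees the sum of both modulated sources plus noise.
    rng = np.random.default_rng(10)
    sig = (a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * f2 * t)
           + rng.normal(0.0, 1.0, t.size))

    def lock_in(sig, f, t):
        # Multiply by quadrature references and average; the mean acts as
        # the low-pass filter that rejects the other source and the noise.
        i = 2.0 * np.mean(sig * np.sin(2.0 * np.pi * f * t))
        q = 2.0 * np.mean(sig * np.cos(2.0 * np.pi * f * t))
        return np.hypot(i, q)

    print(lock_in(sig, f1, t), lock_in(sig, f2, t))   # close to a1 and a2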
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm which in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
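For contrast with the paper's infinite-sampling algorithm, the conventional finite-sampling estimator it improves on can be sketched in a few lines; the mock catalogue, cell size, and number of sampling cells below are arbitrary.

```python
import numpy as np

# Conventional counts-in-cells: throw a finite number of random square cells
# on a 2D mock catalogue and histogram the occupation numbers. The finite
# number of cells is exactly the "measurement error" source discussed above.
rng = np.random.default_rng(1)
galaxies = rng.uniform(0.0, 100.0, size=(5000, 2))   # mock positions, assumed

cell_size, n_cells = 5.0, 2000
corners = rng.uniform(0.0, 100.0 - cell_size, size=(n_cells, 2))
counts = np.array([
    np.sum(np.all((galaxies >= c) & (galaxies < c + cell_size), axis=1))
    for c in corners
])
P_N = np.bincount(counts) / n_cells    # estimated probability of N per cell
print(P_N[:10])
```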
NASA Astrophysics Data System (ADS)
Béthermin, Matthieu; Wu, Hao-Yi; Lagache, Guilaine; Davidzon, Iary; Ponthieu, Nicolas; Cousin, Morgane; Wang, Lingyu; Doré, Olivier; Daddi, Emanuele; Lapi, Andrea
2017-11-01
Follow-up observations at high angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are blends of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg² simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking these resolution effects into account, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of 2, and that the redshift distributions are biased towards low redshifts. We also show that clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only 60% of the Herschel flux density, but that this number will rise to 95% for future millimeter surveys on 30 m-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of color and flux density distributions. Our simulation, called Simulated Infrared Dusty Extragalactic Sky (SIDES), is publicly available at http://cesam.lam.fr/sides.
A very deep IRAS survey at l(II) = 97 deg, b(II) = +30 deg
NASA Technical Reports Server (NTRS)
Hacking, Perry; Houck, James R.
1987-01-01
A deep far-infrared survey is presented, using over 1000 scans made by IRAS of a 4 to 6 sq deg field at the north ecliptic pole. Point sources from this survey are up to 100 times fainter than those in the IRAS Point Source Catalog at 12 and 25 micrometers, and up to 10 times fainter at 60 and 100 micrometers. The 12 and 25 micrometer maps are instrumental noise-limited, and the 60 and 100 micrometer maps are confusion noise-limited. The majority of the 12 micrometer point sources are stars within the Milky Way. The 25 micrometer sources are composed almost equally of stars and galaxies. About 80% of the 60 micrometer sources correspond to galaxies on Palomar Observatory Sky Survey (POSS) enlargements. The remaining 20% are probably galaxies below the POSS detection limit. The differential source counts are presented and compared with those predicted by the Bahcall and Soneira Standard Galaxy Model using the B-V-12 micrometer colors of stars without circumstellar dust shells given by Waters, Cote and Aumann. The 60 micrometer source counts are inconsistent with those predicted for a uniformly distributed, nonevolving universe. The implications are briefly discussed.
Multiparameter linear least-squares fitting to Poisson data one count at a time
NASA Technical Reports Server (NTRS)
Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.
1995-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(E[n_i]) is used to approximate the uncertainties sigma_i in the data, where E[n_i] is the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting data from single scans in the narrow (≈1.2 keV for HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
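The equivalence can be checked numerically: iterating the weighted least-squares solution with weights computed from the current model expectations (not from the data) reaches a fixed point that satisfies the Poisson maximum-likelihood score equation. The response matrix and fluxes below are invented for the sketch.

```python
import numpy as np

# PLLSQ sketch: weights come from the model expectations E[n_i] = (A x)_i,
# updated each iteration, so the fixed point obeys the Poisson ML condition
# sum_i A_ij (n_i - mu_i) / mu_i = 0 rather than the biased sqrt(n_i) version.
rng = np.random.default_rng(2)
A = np.abs(rng.normal(1.0, 0.5, size=(200, 2)))   # hypothetical response matrix
x_true = np.array([0.5, 3.0])                     # hypothetical component fluxes
n = rng.poisson(A @ x_true)                       # observed counts (zeros allowed)

x = np.ones(2)
for _ in range(50):
    mu = np.clip(A @ x, 1e-9, None)               # current model expectations
    Aw = A / mu[:, None]                          # weighted design, W = diag(1/mu)
    x = np.linalg.solve(A.T @ Aw, Aw.T @ n)       # weighted LLSQ normal equations
print("estimated fluxes:", x, "true:", x_true)
```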
Lensing corrections to features in the angular two-point correlation function and power spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam
2008-01-15
It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ) and the angular power spectrum C_ℓ, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift, and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_ℓ occur for measurements at high redshifts (z ≳ 1.5) and low multipole moments (ℓ ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ≈ 1.5, and by z ≈ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲1% and the width is typically increased by ~10%. Shifts of ≳0.5% and broadening ≳20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data are fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.
FPGA-based gating and logic for multichannel single photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G
2012-01-01
We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode-gating frontend and the logic-counting backend has the advantage of low cost compared to custom-built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post-processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization-entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).
Probing the Cosmological Principle in the counts of radio galaxies at different frequencies
NASA Astrophysics Data System (ADS)
Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.
2018-04-01
According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check the consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
Bondi Accretion and the Problem of the Missing Isolated Neutron Stars
NASA Technical Reports Server (NTRS)
Perna, Rosalba; Narayan, Ramesh; Rybicki, George; Stella, Luigi; Treves, Aldo
2003-01-01
A large number of neutron stars (NSs), approximately 10(exp 9), populate the Galaxy, but only a tiny fraction of them is observable during the short radio pulsar lifetime. The majority of these isolated NSs, too cold to be detectable by their own thermal emission, should be visible in X-rays as a result of accretion from the interstellar medium. The ROSAT All-Sky Survey has, however, shown that such accreting isolated NSs are very elusive: only a few tentative candidates have been identified, contrary to theoretical predictions that up to several thousand should be seen. We suggest that the fundamental reason for this discrepancy lies in the use of the standard Bondi formula to estimate the accretion rates. We compute the expected source counts using updated estimates of the pulsar velocity distribution, realistic hydrogen atmosphere spectra, and a modified expression for the Bondi accretion rate, as suggested by recent MHD simulations and supported by direct observations in the case of accretion around supermassive black holes in nearby galaxies and in our own. We find that, whereas the inclusion of atmospheric spectra partly compensates for the reduction in the counts due to the higher mean velocities of the new distribution, the modified Bondi formula dramatically suppresses the source counts. The new predictions are consistent with a null detection at the ROSAT sensitivity.
Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert
2014-11-10
Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting zero-inflation and/or overdispersion features, are also considered for comparison purposes. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.
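As a toy illustration of the hurdle structure (without the correlation, overdispersion, or marginalization features that are the paper's actual contribution), the sketch below fits an intercept-only hurdle Poisson model by maximum likelihood; the data-generating mechanism is invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(3)
y = rng.poisson(2.0, 500)
y[rng.random(500) < 0.3] = 0            # inject excess zeros (assumed mechanism)

def negloglik(params):
    logit_p, log_lam = params           # p = P(y > 0); lam = positive-state rate
    p = 1.0 / (1.0 + np.exp(-logit_p))
    lam = np.exp(log_lam)
    ll = np.sum(y == 0) * np.log1p(-p)  # hurdle part: zero observations
    yp = y[y > 0]                       # zero-truncated Poisson for positives
    ll += np.sum(np.log(p) + yp * np.log(lam) - lam
                 - gammaln(yp + 1) - np.log(-np.expm1(-lam)))
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0])
print("P(y>0) =", 1 / (1 + np.exp(-fit.x[0])), " lambda =", np.exp(fit.x[1]))
```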
SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, S; Kaye, W; Jaworski, J
2015-06-15
Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a "shadow" or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real time, on an event-by-event basis, resulting in images with an angular resolution of ~6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
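The backprojection step admits a compact illustration. The 1D sketch below is not the Polaris reconstruction: it just shows how correlating a measured count distribution against shifted copies of an invented random mask pattern recovers the source direction.

```python
import numpy as np

# 1D coded-aperture backprojection: a source direction shifts the mask shadow
# on the detector; correlating counts with every hypothesized shift of the
# mask peaks at the true shift (random masks have sharp autocorrelation).
rng = np.random.default_rng(4)
mask = rng.integers(0, 2, 64)                # open/closed pattern, assumed

true_shift = 17                              # encodes the source direction
shadow = np.roll(mask, true_shift).astype(float)
counts = rng.poisson(200 * shadow + 20)      # signal behind open elements + bkg

image = np.array([np.sum(counts * np.roll(mask, s)) for s in range(mask.size)])
print("reconstructed shift:", int(np.argmax(image)), "(true:", true_shift, ")")
```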
The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts
NASA Astrophysics Data System (ADS)
Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel
2015-01-01
Aims: We present three independent catalogs of point sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood of each peak on the maps being a real source, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using sourceExtractorTimeline, a timeline-based point source fitter that also determines, as part of the fitting procedure, the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps by removing a Gaussian function at every position, with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources in the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, and 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compared them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we selected ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in number density for galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and are not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by European-led principal investigator consortia and with important participation from NASA. The 250, 350, 500 μm, and total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129
Fast coincidence counting with active inspection systems
NASA Astrophysics Data System (ADS)
Mullens, J. A.; Neal, J. S.; Hausladen, P. A.; Pozzi, S. A.; Mihalczo, J. T.
2005-12-01
This paper describes 2nd- and 3rd-order time coincidence distribution measurements with a GHz processor that synchronously samples 5 or 10 channels of data from radiation detectors near fissile material. On-line, time coincidence distributions are measured between detectors or between detectors and an external stimulating source. Detector-to-detector correlations are also useful for passive measurements. The processor also measures the number of times n pulses occur in a selectable time window and compares this multiplet distribution to a Poisson distribution as a method of determining the occurrence of fission. The detectors respond to radiation emitted in the fission process, induced internally by inherent sources or by external sources such as LINACs or DT generators (either pulsed or steady state with alpha detectors). Data can be acquired from prompt emission during the source pulse, prompt emissions immediately after the source pulse, or delayed emissions between source pulses. These types of time coincidence measurements (occurring on the time scale of the fission chain multiplication processes for nuclear-weapons-grade U and Pu) are useful for determining the presence of these fissile materials and quantifying the amount, and are useful for counter-terrorism and nuclear material control and accountability. This paper presents the results for a variety of measurements.
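The multiplet test is easy to sketch: gate the pulse train into fixed windows, histogram the pulses per window, and compare against the Poisson expectation. The rates and gate width below are arbitrary, and the simulated train is purely Poisson, so the two columns should agree; a fission-chain train would show an excess of high multiplets.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
rate, T, window = 1e3, 100.0, 1e-3     # cps, record length (s), gate (s); assumed
times = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))   # Poisson pulse train

edges = np.arange(0.0, T, window)
per_gate = np.bincount(np.digitize(times, edges) - 1, minlength=edges.size)
observed = np.bincount(per_gate) / edges.size               # P(n pulses in gate)
expected = poisson.pmf(np.arange(observed.size), rate * window)
for n, (o, e) in enumerate(zip(observed, expected)):
    print(f"n={n}: observed {o:.4f}  Poisson {e:.4f}")
```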
Highly efficient entanglement swapping and teleportation at telecom wavelength
Jin, Rui-Bo; Takeoka, Masahiro; Takagi, Utako; Shimizu, Ryosuke; Sasaki, Masahide
2015-01-01
Entanglement swapping at telecom wavelengths is at the heart of quantum networking in optical fiber infrastructures. Although entanglement swapping has been demonstrated experimentally so far using various types of entangled photon sources both in near-infrared and telecom wavelength regions, the rate of swapping operation has been too low to be applied to practical quantum protocols, due to limited efficiency of entangled photon sources and photon detectors. Here we demonstrate drastic improvement of the efficiency at telecom wavelength by using two ultra-bright entangled photon sources and four highly efficient superconducting nanowire single photon detectors. We have attained a four-fold coincidence count rate of 108 counts per second, which is three orders of magnitude higher than in previous experiments at telecom wavelengths. A raw (net) visibility in a Hong-Ou-Mandel interference between the two independent entangled sources was 73.3 ± 1.0% (85.1 ± 0.8%). We performed the teleportation and entanglement swapping, and obtained a fidelity of 76.3% in the swapping test. Our results on the coincidence count rates are comparable with the ones ever recorded in teleportation/swapping and multi-photon entanglement generation experiments at around 800 nm wavelengths. Our setup opens the way to practical implementation of device-independent quantum key distribution and its distance extension by the entanglement swapping as well as multi-photon entangled state generation in telecom band infrastructures with both space and fiber links. PMID:25791212
Highly efficient entanglement swapping and teleportation at telecom wavelength.
Jin, Rui-Bo; Takeoka, Masahiro; Takagi, Utako; Shimizu, Ryosuke; Sasaki, Masahide
2015-03-20
Entanglement swapping at telecom wavelengths is at the heart of quantum networking in optical fiber infrastructures. Although entanglement swapping has been demonstrated experimentally so far using various types of entangled photon sources both in near-infrared and telecom wavelength regions, the rate of swapping operation has been too low to be applied to practical quantum protocols, due to limited efficiency of entangled photon sources and photon detectors. Here we demonstrate drastic improvement of the efficiency at telecom wavelength by using two ultra-bright entangled photon sources and four highly efficient superconducting nanowire single photon detectors. We have attained a four-fold coincidence count rate of 108 counts per second, which is three orders of magnitude higher than in previous experiments at telecom wavelengths. A raw (net) visibility in a Hong-Ou-Mandel interference between the two independent entangled sources was 73.3 ± 1.0% (85.1 ± 0.8%). We performed the teleportation and entanglement swapping, and obtained a fidelity of 76.3% in the swapping test. Our results on the coincidence count rates are comparable with the ones ever recorded in teleportation/swapping and multi-photon entanglement generation experiments at around 800 nm wavelengths. Our setup opens the way to practical implementation of device-independent quantum key distribution and its distance extension by the entanglement swapping as well as multi-photon entangled state generation in telecom band infrastructures with both space and fiber links.
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction, these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event-by-event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
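The two dead-time models compared in the paper act as simple filters on a pulse train; a minimal sketch, with arbitrary rate and dead-time parameter, is below.

```python
import numpy as np

def non_paralysable(times, tau):
    # Events within tau of the last *recorded* event are lost.
    kept, t_last = [], -np.inf
    for t in times:
        if t - t_last >= tau:
            kept.append(t)
            t_last = t
    return np.array(kept)

def paralysable(times, tau):
    # Every event, recorded or not, extends the dead-time (extending model).
    kept, t_last = [], -np.inf
    for t in times:
        if t - t_last >= tau:
            kept.append(t)
        t_last = t
    return np.array(kept)

rng = np.random.default_rng(6)
rate, T, tau = 5e4, 1.0, 2e-6          # 50 kcps, 1 s, 2 us dead-time; assumed
times = np.sort(rng.uniform(0, T, rng.poisson(rate * T)))
print("true events:", times.size,
      " non-paralysable:", non_paralysable(times, tau).size,
      " paralysable:", paralysable(times, tau).size)
```

The observed rates should approach the textbook values r/(1 + r·tau) and r·exp(-r·tau), respectively.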
Benchmarking Data for the Proposed Signature of Used Fuel Casks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauch, Eric Benton
2016-09-23
A set of benchmarking measurements to test facets of the proposed extended storage signature was conducted on May 17, 2016. The measurements were designed to test the overall concept of how the proposed signature can be used to identify a used fuel cask based only on the distribution of neutron sources within the cask. To simulate the distribution, 4 Cf-252 sources were chosen and arranged on a 3x3 grid in 3 different patterns, and raw total neutron counts were taken at 6 locations around the grid. This is a very simplified test of the typical geometry studied previously in simulation with simulated used nuclear fuel.
Improved detection of radioactive material using a series of measurements
NASA Astrophysics Data System (ADS)
Mann, Jenelle
The goal of this project is to develop improved algorithms for detection of radioactive sources that have low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background, the decision threshold (y*) is calculated by taking a long background count and limiting the false positive (type I, or alpha) error to 5%. Some problems with this method include: the background is constantly changing due to natural environmental fluctuations, and the large amounts of data taken as the detector continuously scans are not utilized. Rather than looking at a single measurement, this work investigates looking at a series of N measurements and develops an appropriate decision criterion for exceeding the single-measurement decision threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
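If the N measurements are treated as independent, the n-of-N criterion reduces to a binomial tail calculation; the per-measurement false-positive rate and series length below are illustrative.

```python
from scipy.stats import binom

p1 = 0.05      # false-positive rate of a single count at threshold y*, assumed
N = 10         # measurements in the series
alpha = 0.05   # desired overall false-positive rate for the series

# Find the smallest n such that P(at least n of N exceed y* | background) <= alpha.
for n in range(1, N + 1):
    tail = binom.sf(n - 1, N, p1)
    if tail <= alpha:
        print(f"require n = {n} of {N} exceedances "
              f"(series false-positive rate {tail:.4f})")
        break
```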
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near the PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to perform well even in cases of extended sources and crowded fields.
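The core of such a detection scheme is compact enough to sketch. The example below convolves a mock counts image with a Mexican-hat wavelet and thresholds the result; unlike the paper, which derives significance thresholds analytically from the WT distribution of a uniform background, this sketch simply calibrates the threshold on a pure-background simulation. All image parameters are invented.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat(scale, size=25):
    # 2D Marr ("Mexican hat") wavelet kernel, zero-mean by construction.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = (x**2 + y**2) / scale**2
    return (2 - r2) * np.exp(-r2 / 2)

rng = np.random.default_rng(7)
image = rng.poisson(5.0, (128, 128)).astype(float)     # uniform background
image[60:64, 60:64] += rng.poisson(15.0, (4, 4))       # injected source

wt = fftconvolve(image, mexican_hat(3.0), mode="same")
bkg = fftconvolve(rng.poisson(5.0, (128, 128)).astype(float),
                  mexican_hat(3.0), mode="same")[12:-12, 12:-12]
threshold = bkg.mean() + 5 * bkg.std()                 # empirical 5-sigma level

inner = wt[12:-12, 12:-12]                             # avoid convolution edges
ys, xs = np.where(inner > threshold)
print("candidate source near pixel:", ys.mean() + 12, xs.mean() + 12)
```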
A new approach to counting measurements: Addressing the problems with ISO-11929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
A new approach to counting measurements: Addressing the problems with ISO-11929
Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie
2017-12-23
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
A new approach to counting measurements: Addressing the problems with ISO-11929
NASA Astrophysics Data System (ADS)
Klumpp, John; Miller, Guthrie; Poudel, Deepesh
2018-06-01
We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength" that depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength which is similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
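The factorization quoted above (posterior odds = measurement strength × prior odds) invites a numerical illustration. The sketch below is not the paper's analytical formula: it merely computes, for a rare-source prior invented for the example, the Bayes factor between "contaminated" and "clean" hypotheses from a Poisson count and multiplies it by the prior odds.

```python
import numpy as np
from scipy.stats import poisson

n_obs = 12                  # gross counts observed, assumed
b = 5.0                     # well-characterized expected background counts, assumed
k = 1.0                     # expected net counts per unit true amount, assumed

amounts = np.linspace(0.0, 30.0, 3001)       # grid over possible true amounts
dx = amounts[1] - amounts[0]
prior = np.exp(-amounts / 10.0)              # broad contamination prior, assumed
prior /= prior.sum() * dx                    # normalize on the grid

like_clean = poisson.pmf(n_obs, b)
like_dirty = (poisson.pmf(n_obs, b + k * amounts) * prior).sum() * dx
bayes_factor = like_dirty / like_clean       # data-dependent factor
prior_odds = 0.01 / 0.99                     # 1% of samples contaminated, assumed
print("Bayes factor:", bayes_factor, " posterior odds:", bayes_factor * prior_odds)
```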
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
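A minimal version of the headline comparison can be sketched by fitting Poisson and negative binomial (Poisson-gamma) distributions by maximum likelihood and comparing AIC on mock overdispersed counts; the generating parameters are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(8)
y = rng.negative_binomial(0.4, 0.4 / 2.4, size=959)   # overdispersed mock counts

lam = y.mean()                                        # Poisson MLE is the mean
aic_pois = 2 * 1 - 2 * poisson.logpmf(y, lam).sum()

def nb_nll(params):
    r, m = np.exp(params)                 # log-parameterized size r and mean m
    return -nbinom.logpmf(y, r, r / (r + m)).sum()

fit = minimize(nb_nll, x0=[0.0, 0.0])
aic_nb = 2 * 2 + 2 * fit.fun
print(f"AIC: Poisson {aic_pois:.1f}  negative binomial {aic_nb:.1f}")
```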
NASA Astrophysics Data System (ADS)
Taut, A.; Drews, C.; Berger, L.; Wimmer-Schweingruber, R. F.
2015-12-01
The 1D Velocity Distribution Function (VDF) of He+ pickup ions shows two distinct populations that reflect the sources of these ions. The highly suprathermal population is the result of the ionization and pickup of almost resting interstellar neutrals that are injected into the solar wind as a highly anisotropic torus distribution. The nearly thermalized population is centered around the solar wind bulk speed and is mainly attributed to inner-source pickup ions that originate in the inner heliosphere. It is generally believed that the initial torus distribution of interstellar pickup ions is rapidly isotropized by resonant wave-particle interactions, but recent observations by Drews et al. (2015) of a torus-like VDF strongly limit this isotropization. This in turn means that more observational data are needed to further characterize the kinetic behavior of pickup ions. In this study we use data from the Charge-Time-Of-Flight sensor on board SOHO. As this sensor offers unrivaled counting statistics for He+ together with a sufficient mass-per-charge resolution, it is well suited for investigating the He+ VDF on comparatively short timescales. We combine this data with the high-resolution magnetic field data from WIND via an extrapolation to the location of SOHO. With this combination of instruments we investigate the He+ VDF for time periods of different solar wind speeds, magnetic field directions, and wave power. We find a systematic trend of the short-term He+ VDF with these parameters. Especially by varying the considered magnetic field directions, we observe a 1D projection of the anisotropic torus-like VDF. In addition, we investigate stream interaction regions and coronal mass ejections. In the latter we observe an excess of inner-source He+ that is accompanied by a significant increase of heavy pickup ion count rates. This may be linked to the as-yet ill-understood production mechanism of inner-source pickup ions.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
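The model pipeline (a dark-pulse height distribution convolved with Gaussian TIA noise, then a CDF versus threshold) can be sketched by simulation. The gain distribution below is a placeholder exponential tail, not the paper's weighted McIntyre mixture, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n_events = 200_000
apd_peaks = rng.exponential(1.0, n_events)     # placeholder gain tail, NOT McIntyre
tia_noise = rng.normal(0.0, 0.3, n_events)     # input-referred TIA noise, assumed
pulse_heights = apd_peaks + tia_noise          # convolution via the uncorrelated sum

for thr in np.linspace(0, 8, 5):
    prob = (pulse_heights > thr).mean()        # count probability above threshold
    print(f"threshold {thr:4.1f}: count probability {prob:.2e}")
```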
NASA Astrophysics Data System (ADS)
Yin, Guoyan; Zhang, Limin; Zhang, Yanqi; Liu, Han; Du, Wenwen; Ma, Wenjuan; Zhao, Huijuan; Gao, Feng
2018-02-01
Pharmacokinetic diffuse fluorescence tomography (DFT) can describe the metabolic processes of fluorescent agents in biomedical tissue and provide helpful information for tumor differentiation. In this paper, a dynamic DFT system was developed by employing digital lock-in photon counting with square-wave modulation, whose strengths are ultra-high sensitivity and measurement parallelism. In this system, 16 frequency-encoded laser diodes (LDs), driven by a self-designed light-source system, were distributed evenly in the imaging plane and irradiated simultaneously. Meanwhile, 16 detection fibers collected the emission light in parallel through the digital lock-in photon-counting module. The fundamental performance of the proposed system was assessed with phantom experiments in terms of stability, linearity, crosstalk rejection, and image reconstruction. The results validated the feasibility of the proposed dynamic DFT system.
Time encoded radiation imaging
Marleau, Peter; Brubaker, Erik; Kiff, Scott
2014-10-21
The various technologies presented herein relate to detecting nuclear material at a large stand-off distance. An imaging system is presented which can detect nuclear material by utilizing time encoded imaging relating to maximum and minimum radiation particle counts rates. The imaging system is integrated with a data acquisition system that can utilize variations in photon pulse shape to discriminate between neutron and gamma-ray interactions. Modulation in the detected neutron count rates as a function of the angular orientation of the detector due to attenuation of neighboring detectors is utilized to reconstruct the neutron source distribution over 360 degrees around the imaging system. Neutrons (e.g., fast neutrons) and/or gamma-rays are incident upon scintillation material in the imager, the photons generated by the scintillation material are converted to electrical energy from which the respective neutrons/gamma rays can be determined and, accordingly, a direction to, and the location of, a radiation source identified.
SPERM COUNT DISTRIBUTIONS IN FERTILE MEN
Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of fertil...
Extending pure luminosity evolution models into the mid-infrared, far-infrared and submillimetre
NASA Astrophysics Data System (ADS)
Hill, Michael D.; Shanks, Tom
2011-07-01
Simple pure luminosity evolution (PLE) models, in which galaxies brighten at high redshift due to increased star formation rates (SFRs), are known to provide a good fit to the colours and number counts of galaxies throughout the optical and near-infrared. We show that optically defined PLE models, where dust reradiates absorbed optical light into infrared spectra composed of local galaxy templates, fit galaxy counts and colours out to 8 μm and to at least z≈ 2.5. At 24-70 μm, the model is able to reproduce the observed source counts with reasonable success if 16 per cent of spiral galaxies show an excess in mid-IR flux due to a warmer dust component and a higher SFR, in line with observations of local starburst galaxies. There remains an underprediction of the number of faint-flux, high-z sources at 24 μm, so we explore how the evolution may be altered to correct this. At 160 μm and longer wavelengths, the model fails, with our model of normal galaxies accounting for only a few percent of sources in these bands. However, we show that a PLE model of obscured AGN, which we have previously shown to give a good fit to observations at 850 μm, also provides a reasonable fit to the Herschel/BLAST number counts and redshift distributions at 250-500 μm. In the context of a ΛCDM cosmology, an AGN contribution at 250-870 μm would remove the need to invoke a top-heavy IMF for high-redshift starburst galaxies.
Native Amazonian children forego egalitarianism in merit-based tasks when they learn to count.
Jara-Ettinger, Julian; Gibson, Edward; Kidd, Celeste; Piantadosi, Steve
2016-11-01
Cooperation often results in a final material resource that must be shared, but deciding how to distribute that resource is not straightforward. A distribution could count as fair if all members receive an equal reward (egalitarian distributions), or if each member's reward is proportional to their merit (merit-based distributions). Here, we propose that the acquisition of numerical concepts influences how we reason about fairness. We explore this possibility in the Tsimane', a farming-foraging group who live in the Bolivian rainforest. The Tsimane' learn to count in the same way children from industrialized countries do, but at a delayed and more variable timeline, allowing us to de-confound number knowledge from age and years in school. We find that Tsimane' children who can count produce merit-based distributions, while children who cannot count produce both merit-based and egalitarian distributions. Our findings establish that the ability to count - a non-universal, language-dependent, cultural invention - can influence social cognition. © 2015 John Wiley & Sons Ltd.
How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology
NASA Astrophysics Data System (ADS)
Ekers, Ron
2012-09-01
In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.
The Planck Catalogue of Galactic Cold Clumps : PGCC
NASA Astrophysics Data System (ADS)
Montier, L.
The Planck satellite has provided an unprecedented view of the submm sky, allowing us to search for the dust emission of Galactic cold sources. Combining Planck-HFI all-sky maps in the high frequency channels with the IRAS map at 100 μm, we built the Planck catalogue of Galactic Cold Clumps (PGCC, Planck 2015 results. XXVIII), counting 13188 sources distributed over the whole sky, and following mainly the Galactic structures at low and intermediate latitudes. This is the first all-sky catalogue of Galactic cold sources obtained with a single instrument at this resolution and sensitivity, which opens a new window on star-formation processes in our Galaxy.
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts followed a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution well. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
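A likelihood-ratio (G) test of this kind is short to sketch: estimate the Poisson parameter by MLE (the sample mean), then compare the fitted log-likelihood against the saturated empirical one. The mock counts stand in for the plate-count data, and the chi-squared reference is only a large-sample approximation.

```python
import numpy as np
from scipy.stats import chi2, poisson

rng = np.random.default_rng(10)
counts = rng.poisson(1.2, size=200)       # mock per-aliquot cell counts, assumed

lam_hat = counts.mean()                   # Poisson MLE
values, freq = np.unique(counts, return_counts=True)
ll_fit = np.sum(freq * poisson.logpmf(values, lam_hat))
ll_sat = np.sum(freq * np.log(freq / counts.size))      # saturated model
G2 = 2 * (ll_sat - ll_fit)                              # likelihood-ratio statistic
dof = len(values) - 1 - 1                 # categories - 1 - fitted parameters
print("G2 =", G2, " p-value =", chi2.sf(G2, dof))
```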
History of pronghorn population monitoring, research, and management in Yellowstone National Park
Keating, Kim A.
2002-01-01
over time. Despite these deficiencies, considerable information was reviewed, earlier summaries of population classification and count data were updated, and previously uncited sources of information were identified that challenge important aspects of previous interpretations of the history of pronghorns and pronghorn management in YNP. Information is grouped into 4 major subject areas: distribution and habitat use, demographics and management, genetics, and disease.
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs, and particle count by a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test whether image attributes, features captured within the image itself, can provide measurable guidelines for assessing the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
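The image attribute in question, the median area of the illuminated pixel regions, can be computed with standard tools; the mock frame, particle positions, and global threshold below are invented, and real NTA video processing would differ in detail.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(11)
frame = rng.normal(10.0, 2.0, (480, 640))             # mock camera frame
for cy, cx in rng.integers(20, 460, size=(40, 2)):    # sprinkle 3x3 mock particles
    frame[cy - 1:cy + 2, cx - 1:cx + 2] += 40.0

binary = frame > frame.mean() + 5 * frame.std()       # simple global threshold
labels, n = ndimage.label(binary)                     # connected bright regions
areas = ndimage.sum_labels(binary, labels, index=np.arange(1, n + 1))
print("particles:", n, " median illuminated pixel area:", np.median(areas))
```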
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
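As a worked illustration of the gamma-mixture route the abstract starts from (the negative binomial), here is a hedged sketch comparing Poisson and negative-binomial log-likelihoods on simulated overdispersed counts; the paper's inverse Gaussian mixture is not implemented here, and the moment-based fit is an assumption for brevity.

```python
# Sketch: gamma-mixed Poisson = negative binomial. Compare Poisson and
# negative-binomial log-likelihoods on overdispersed counts (moment fits).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulate overdispersed counts: lambda ~ Gamma, count ~ Poisson(lambda)
lam = rng.gamma(shape=2.0, scale=3.0, size=500)
y = rng.poisson(lam)

m, v = y.mean(), y.var()
ll_pois = stats.poisson.logpmf(y, m).sum()

# Negative binomial via moments: v = m + m^2/r  =>  r = m^2/(v - m)
r = m**2 / (v - m)
p = r / (r + m)
ll_nb = stats.nbinom.logpmf(y, r, p).sum()
print(f"mean={m:.2f} var={v:.2f}  logL Poisson={ll_pois:.1f}  NB={ll_nb:.1f}")
```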
Ragweed (Ambrosia) pollen source inventory for Austria.
Karrer, G; Skjøth, C A; Šikoparija, B; Smith, M; Berger, U; Essl, F
2015-08-01
This study improves the spatial coverage of top-down Ambrosia pollen source inventories for Europe by expanding the methodology to Austria, a country that is challenging in terms of topography and the distribution of ragweed plants. The inventory combines annual ragweed pollen counts from 19 pollen-monitoring stations in Austria (2004-2013), 657 geographical observations of Ambrosia plants, a Digital Elevation Model (DEM), local knowledge of ragweed ecology and CORINE land cover information from the source area. The highest mean annual ragweed pollen concentrations were generally recorded in the East of Austria, where the highest densities of possible growth habitats for Ambrosia were situated. Approximately 99% of all observations of Ambrosia populations were below 745 m. The European infection level varies from 0.1% at Freistadt in Northern Austria to 12.8% at Rosalia in Eastern Austria. More top-down Ambrosia pollen source inventories are required for other parts of Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Seshadreesan, Kaushik P.; Takeoka, Masahiro; Sasaki, Masahide
2016-04-01
Device-independent quantum key distribution (DIQKD) guarantees unconditional security of a secret key without making assumptions about the internal workings of the devices used for distribution. It does so using the loophole-free violation of a Bell's inequality. The primary challenge in realizing DIQKD in practice is the detection loophole problem that is inherent to photonic tests of Bell's inequalities over lossy channels. We revisit the proposal of Curty and Moroder [Phys. Rev. A 84, 010304(R) (2011), 10.1103/PhysRevA.84.010304] to use a linear optics-based entanglement-swapping relay (ESR) to counter this problem. We consider realistic models for the entanglement sources and photodetectors: more precisely, (a) polarization-entangled states based on pulsed spontaneous parametric down-conversion sources with higher-order multiphoton components and multimode spectral structure, and (b) on-off photodetectors with nonunit efficiencies and nonzero dark-count probabilities. We show that the ESR-based scheme is robust against the above imperfections and enables positive key rates at distances much larger than what is possible otherwise.
Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards sub-fertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of ferti...
NASA Astrophysics Data System (ADS)
Zavala, J. A.; Aretxaga, I.; Geach, J. E.; Hughes, D. H.; Birkinshaw, M.; Chapin, E.; Chapman, S.; Chen, Chian-Chou; Clements, D. L.; Dunlop, J. S.; Farrah, D.; Ivison, R. J.; Jenness, T.; Michałowski, M. J.; Robson, E. I.; Scott, Douglas; Simpson, J.; Spaans, M.; van der Werf, P.
2017-01-01
We present deep observations at 450 and 850 μm in the Extended Groth Strip field taken with the SCUBA-2 camera mounted on the James Clerk Maxwell Telescope as part of the deep SCUBA-2 Cosmology Legacy Survey (S2CLS), achieving a central instrumental depth of σ450 = 1.2 mJy beam-1 and σ850 = 0.2 mJy beam-1. We detect 57 sources at 450 μm and 90 at 850 μm with signal-to-noise ratio >3.5 over ˜70 arcmin2. From these detections, we derive the number counts at flux densities S450 > 4.0 mJy and S850 > 0.9 mJy, which represent the deepest number counts at these wavelengths derived using directly extracted sources from only blank-field observations with a single-dish telescope. Our measurements smoothly connect the gap between previous shallower blank-field single-dish observations and deep interferometric ALMA results. We estimate the contribution of our SCUBA-2 detected galaxies to the cosmic infrared background (CIB), as well as the contribution of 24 μm-selected galaxies through a stacking technique, which add a total of 0.26 ± 0.03 and 0.07 ± 0.01 MJy sr-1, at 450 and 850 μm, respectively. These surface brightnesses correspond to 60 ± 20 and 50 ± 20 per cent of the total CIB measurements, where the errors are dominated by those of the total CIB. Using the photometric redshifts of the 24 μm-selected sample and the redshift distributions of the submillimetre galaxies, we find that the redshift distribution of the recovered CIB is different at each wavelength, with a peak at z ˜ 1 for 450 μm and at z ˜ 2 for 850 μm, consistent with previous observations and theoretical models.
Chapter 11: Web-based Tools - VO Region Inventory Service
NASA Astrophysics Data System (ADS)
Good, J. C.
As the size and number of datasets available through the VO grows, it becomes increasingly critical to have services that aid in locating and characterizing data pertinent to a particular scientific problem. At the same time, this same increase makes that goal more and more difficult to achieve. With a small number of datasets, it is feasible to simply retrieve the data itself (as the NVO DataScope service does). At intermediate scales, "count" DBMS searches (searches of the actual datasets which return record counts rather than full data subsets) sent to each data provider will work. However, neither of these approaches scale as the number of datasets expands into the hundreds or thousands. Dealing with the same problem internally, IRSA developed a compact and extremely fast scheme for determining source counts for positional catalogs (and in some cases image metadata) over arbitrarily large regions for multiple catalogs in a fraction of a second. To show applicability to the VO in general, this service has been extended with indices for all 4000+ catalogs in CDS Vizier (essentially all published catalogs and source tables). In this chapter, we will briefly describe the architecture of this service, and then describe how this can be used in a distributed system to retrieve rapid inventories of all VO holdings in a way that places an insignificant load on any data supplier. Further, we show how this tool can be used in conjunction with VO Registries and catalog services to zero in on those datasets that are appropriate to the user's needs. The initial implementation of this service consolidates custom binary index file structures (external to any DBMS and therefore portable) at a single site to minimize search times and implements the search interface as a simple CGI program. However, the architecture is amenable to distribution. The next phase of development will focus on metadata harvesting from data archives through a standard program interface and distribution of the search processing across multiple service providers for redundancy and parallelization.
NASA Astrophysics Data System (ADS)
Hirayama, Hideo; Kondo, Kenjiro; Suzuki, Seishiro; Hamamoto, Shimpei; Iwanaga, Kohei
2017-09-01
Pulse height distributions were measured using a LaBr3 detector set in a 1 cm lead collimator to investigate the main radiation sources at the operation floor of Fukushima Daiichi Nuclear Power Station Unit 4. It was confirmed that the main radiation source above the reactor well was Co-60 from the activated steam dryer in the DS pool (Dryer-Separator pool), and that at the standby area it was Cs-134 and Cs-137 from contaminated buildings and debris at the lower floor. The full energy peak count rate of Co-60 was reduced to about 1/3 by a 12 mm lead sheet placed on the floor of the fuel handling machine.
Distribution and density of bird species hazardous to aircraft
Robbins, C.S.; Gauthreaux, Sidney A.
1975-01-01
Only in the past 5 years has it become feasible to map the relative abundance of North American birds. Two programs presently under way and a third that is in the experimental phase are making possible the up-to-date mapping of abundance as well as distribution. A fourth program that has been used successfully in Europe and on a small scale in parts of North America yields detailed information on breeding distribution. The Breeding Bird Survey, sponsored by the U.S. Bureau of Sport Fisheries and Wildlife and the Canadian Wildlife Service, involves 2,000 randomly distributed roadside counts that are conducted during the height of the breeding season in all U.S. States and Canadian Provinces. Observations of approximately 1.4 million birds per year are entered on magnetic tape and subsequently used both for statistical analysis of population trends and for computer mapping of distribution and abundance. The National Audubon Society's Christmas Bird Count is conducted in about 1,000 circles, each 15 miles (24 km) in diameter, in the latter half of December. Raw data for past years have been published in voluminous reports, but not in a form for ready analysis. Under a contract between the U.S. Air Force and the U.S. Bureau of Sport Fisheries and Wildlife (in cooperation with the National Audubon Society), preliminary maps showing distribution and abundance of selected species that are potential hazards to aircraft are presently being prepared for publication. The Winter Bird Survey, which is in its fifth season of experimental study in a limited area in Central Maryland, may ultimately replace the Christmas Bird Count. This Survey consists of a standardized 8-kilometer (5-mile) route covered uniformly once a year during midwinter. Bird Atlas programs, which map distribution but not abundance, are well established in Europe and are gaining interest in America.
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, where the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
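The sketch below illustrates the general idea of wavelet-based compression of count data; it is not the CCSDS DWT/BPE ASIC used on MMS/FPI. It assumes the PyWavelets (pywt) package, a Haar wavelet, and a simple hard threshold in place of bit-plane encoding.

```python
# Sketch: wavelet-transform compression of a 1-D count spectrum.
# Not the CCSDS DWT/BPE ASIC used on MMS/FPI -- just an illustration of
# how thresholding wavelet coefficients trades ratio against error.
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
counts = rng.poisson(lam=50 * np.exp(-np.linspace(0, 3, 256)))  # synthetic spectrum

coeffs = pywt.wavedec(counts.astype(float), "haar", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

thresh = 2.0                       # zero out small detail coefficients
arr_c = np.where(np.abs(arr) > thresh, arr, 0.0)

recon = pywt.waverec(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec"),
                     "haar")[: len(counts)]
ratio = arr.size / max(np.count_nonzero(arr_c), 1)
print(f"nonzero coeffs kept: {np.count_nonzero(arr_c)}  (~{ratio:.1f}:1)")
print(f"max abs reconstruction error: {np.abs(recon - counts).max():.2f}")
```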
CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng
2014-12-01
Discovering motifs in protein-protein interaction networks is becoming a current major challenge in computational biology, since the distribution of the number of network motifs can reveal significant systemic differences among species. However, this task can be computationally expensive because of the involvement of graph isomorphism detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.
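The combinatorial idea of counting non-induced tree subgraphs without isomorphism testing can be shown in miniature for the simplest tree motif, the 3-node path: its non-induced count is a sum of binomial coefficients over vertex degrees. This toy sketch (using networkx's bundled karate-club graph, an assumption for illustration) is not the CombiMotif algorithm itself.

```python
# Sketch of the combinatorial idea: non-induced occurrences of the simplest
# tree motif (a path on 3 nodes) can be counted from degrees alone,
# with no subgraph isomorphism testing: sum over nodes of C(deg, 2).
from math import comb
import networkx as nx

G = nx.karate_club_graph()
two_stars = sum(comb(d, 2) for _, d in G.degree())
print("non-induced 3-node paths:", two_stars)
```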
NASA Astrophysics Data System (ADS)
Li, Zheng; Guan, Jun; Yang, Xudong; Lin, Chao-Hsin
2014-06-01
Airborne particles are an important type of air pollutant in aircraft cabins. Finding the sources of particles is conducive to taking appropriate measures to remove them. In this study, measurements of the concentration and size distribution of particles larger than 0.3 μm (PM>0.3) were made on nine short-haul flights from September 2012 to March 2013. Particle counts in supply air and breathing zone air were both obtained. Results indicate that the number concentrations of particles ranged from 3.6 × 102 counts L-1 to 1.2 × 105 counts L-1 in supply air and breathing zone air, and they first decreased and then increased in general during the flight duration. Peaks of particle concentration were found at climbing, descending, and cruising phases in several flights. Percentages of particle concentration in the breathing zone contributed by the bleed air (originating from outside) and cabin interior sources were calculated. The bleed air ratios, outside airflow rates and total airflow rates were calculated by using carbon dioxide as a ventilation tracer in five of the nine flights. The calculated results indicate that PM>0.3 in the breathing zone mainly came from unfiltered bleed air, especially for particle sizes from 0.3 to 2.0 μm. For particles larger than 2.0 μm, contributions from the bleed air and cabin interior were both important. The results would be useful for developing better cabin air quality control strategies.
Johnson, J. R.; Feldman, W.C.; Lawrence, D.J.; Maurice, S.; Swindle, T.D.; Lucey, P.G.
2002-01-01
Initial studies of neutron spectrometer data returned by Lunar Prospector concentrated on the discovery of enhanced hydrogen abundances near both lunar poles. However, the nonpolar data exhibit intriguing patterns that appear spatially correlated with surface features such as young impact craters (e.g., Tycho). Such immature crater materials may have low hydrogen contents because of their relative lack of exposure to solar wind-implanted volatiles. We tested this hypothesis by comparing epithermal* neutron counts (i.e., epithermal − 0.057 × thermal neutrons) for Copernican-age craters classified as relatively young, intermediate, and old (as determined by previous studies of Clementine optical maturity variations). The epithermal* counts of the crater and continuous ejecta regions suggest that the youngest impact materials are relatively devoid of hydrogen in the upper 1 m of regolith. We also show that the mean hydrogen contents measured in Apollo and Luna landing site samples are only moderately well correlated to the epithermal* neutron counts at the landing sites, likely owing to the effects of rare earth elements. These results suggest that further work is required to define better how hydrogen distribution can be revealed by epithermal neutrons in order to understand more fully the nature and sources (e.g., solar wind, meteorite impacts) of volatiles in the lunar regolith.
NASA Astrophysics Data System (ADS)
Hirayama, Hideo; Kondo, Kenjiro; Suzuki, Seishiro; Tanimura, Yoshihiko; Iwanaga, Kohei; Nagata, Hiroshi
2017-09-01
Pulse height distributions were measured using a CdZnTe detector inside a lead collimator to investigate the main source producing high dose rates above the shield plugs of Unit 3 at Fukushima Daiichi Nuclear Power Station. It was confirmed that low energy photons are dominant. Concentrations of Cs-137 under the 60 cm concrete of the shield plug were estimated to be between 8.1E+9 and 5.7E+10 Bq/cm2 from the measured peak count rate of 0.662 MeV photons. If Cs-137 was distributed on the surfaces of the gaps with a radius of 6 m and with the average concentration of the 5 measured points, 2.6E+10 Bq/cm2, the total amount of Cs-137 is estimated to be 30 PBq.
Byappanahalli, M.N.; Whitman, R.L.; Shively, D.A.; Sadowsky, M.J.; Ishii, S.
2006-01-01
The common occurrence of Escherichia coli in temperate soils has previously been reported; however, there are few studies to date that characterize its source, distribution, persistence and genetic diversity. In this study, undisturbed forest soils within six randomly selected 0.5 m2 exclosure plots (covered by netting of 2.3 mm2 mesh size) were monitored from March to October 2003 for E. coli in order to describe its numerical and population characteristics. Culturable E. coli occurred in 88% of the samples collected, with overall mean counts of 16 MPN g-1, ranging from <1 to 1657 (n = 66). Escherichia coli counts did not correlate with substrate moisture content, air, or soil temperatures, suggesting that seasonality was not a strong factor in population density control. Mean E. coli counts in soil samples (n = 60) were significantly higher inside than immediately outside the exclosures; E. coli distribution within the exclosures was patchy. Repetitive extragenic palindromic polymerase chain reaction (Rep-PCR) demonstrated genetic heterogeneity of E. coli within and among exclosure sites, and the soil strains were genetically distinct from the animal (E. coli) strains tested (i.e. gulls, terns, deer and most geese). These results suggest that E. coli can occur and persist for extended periods in undisturbed temperate forest soils independent of recent allochthonous input and season, and that the soil E. coli populations formed a cohesive phylogenetic group in comparison to the set of fecal strains with which they were compared. Thus, in assessing E. coli sources within a stream, it is important to differentiate background soil loadings from inputs derived from animal and human fecal contamination. © 2005 Society for Applied Microbiology and Blackwell Publishing Ltd.
Aerial Measuring System Sensor Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. S. Detwiler
2002-04-01
This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m². The helicopter calculations modeled the transport of americium-241 (241Am) as this is the "marker" isotope utilized by the system for Pu detection. The helicopter sensor array consists of 2 six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty times (to convert to beats per minute) the frequency of the fixed frequency source.
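A worked version of the patent's arithmetic may help; the clock frequency and cycle count below are hypothetical. Counting N cycles of a fixed clock of frequency f between two beats gives an interval of N/f seconds, so the rate is 60·f/N beats per minute, which is what the repeated count-down to zero computes.

```python
# Worked example of the patent's arithmetic (values hypothetical):
# counting N cycles of a fixed clock between two beats gives the interval
# N/f seconds, so the rate is 60*f/N beats per minute -- i.e., the number
# of times N can be counted down to zero out of a total of 60*f.
f_clock = 1000.0       # Hz, assumed reference frequency
n_cycles = 833         # cycles counted between two successive beats

interval_s = n_cycles / f_clock
rate_bpm = 60.0 * f_clock / n_cycles
print(f"interval = {interval_s:.3f} s  ->  {rate_bpm:.1f} beats/min")
```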
Barnes, Neil; Ishii, Takeo; Hizawa, Nobuyuki; Midwinter, Dawn; James, Mark; Hilton, Emma; Jones, Paul W
2018-01-01
Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients. A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II-IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment. Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100-280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had a percentage eosinophil ≥2%. Among non-Japanese patients, these values were 160 (100-250) cells/mm3, 2,842/5,151 patients (55%), and 2,937/5,155 patients (57%), respectively. The eosinophil distribution among Japanese patients was similar to that among non-Japanese patients. Within multi-country studies with similar inclusion criteria, the eosinophil count was numerically lower in Japanese compared with non-Japanese patients (median 120 vs 160 cells/mm3). The eosinophil distribution in Japanese patients seems comparable to that of non-Japanese patients; although within multi-country studies, there was a slightly lower median eosinophil count for Japanese patients compared with non-Japanese patients. These findings suggest that blood eosinophil data from global studies are of relevance in Japan.
Multiplicity counting from fission detector signals with time delay effects
NASA Astrophysics Data System (ADS)
Nagy, L.; Pázsit, I.; Pál, L.
2018-03-01
In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems, which would be free from the dead time problems that would be encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated, by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time-dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as they typically are in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested by both analytical models of the time delay as well as with numerical simulations. Methods are suggested also for the modification of the method for large time delay effects (for thermalised neutrons).
NASA Astrophysics Data System (ADS)
Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.
2017-12-01
We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using an optimized data analysis toolkit specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8 and 24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or indicating the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.
Walker, R.S.; Novare, A.J.; Nichols, J.D.
2000-01-01
Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step for any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that the p values are the same for all times or places being compared (p_i = p). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption p_i = p, or to incorporate the estimation of p into the study design.
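A minimal numerical illustration of the E[count] = abundance × p relationship described above, with hypothetical counts and detection probabilities, shows how unequal p can reverse a raw-count comparison:

```python
# Sketch: the count statistic C has E[C] = N * p. Comparing raw counts
# across habitats implicitly assumes equal p; correcting with estimated
# detection probabilities removes that assumption. Values hypothetical.
counts = {"habitat_A": 120, "habitat_B": 90}
p_hat = {"habitat_A": 0.6, "habitat_B": 0.3}   # estimated detectability

for site, c in counts.items():
    print(f"{site}: raw count {c}, abundance estimate {c / p_hat[site]:.0f}")
# Raw counts suggest A > B, but corrected estimates are 200 vs 300.
```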
Positron Scanner for Locating Brain Tumors
DOE R&D Accomplishments Database
Rankowitz, S.; Robertson, J. S.; Higinbotham, W. A.; Rosenblum, M. J.
1962-03-01
A system is described that makes use of positron emitting isotopes for locating brain tumors. This system inherently provides more information about the distribution of radioactivity in the head in less time than existing scanners which use one or two detectors. A stationary circular array of 32 scintillation detectors scans a horizontal layer of the head from many directions simultaneously. The data, consisting of the number of counts in all possible coincidence pairs, are coded and stored in the memory of a Two-Dimensional Pulse-Height Analyzer. A unique method of displaying and interpreting the data is described that enables rapid approximate analysis of complex source distribution patterns. (auth)
Queries over Unstructured Data: Probabilistic Methods to the Rescue
NASA Astrophysics Data System (ADS)
Sarawagi, Sunita
Unstructured data like emails, addresses, invoices, call transcripts, reviews, and press releases are now an integral part of any large enterprise. A challenge of modern business intelligence applications is analyzing and querying data seamlessly across structured and unstructured sources. This requires the development of automated techniques for extracting structured records from text sources and resolving entity mentions in data from various sources. The success of any automated method for extraction and integration depends on how effectively it unifies diverse clues in the unstructured source and in existing structured databases. We argue that statistical learning techniques like Conditional Random Fields (CRFs) provide an accurate, elegant and principled framework for tackling these tasks. Given the inherent noise in real-world sources, it is important to capture the uncertainty of the above operations via imprecise data models. CRFs provide a sound probability distribution over extractions but are not easy to represent and query in a relational framework. We present methods of approximating this distribution to query-friendly row and column uncertainty models. Finally, we present models for representing the uncertainty of de-duplication and algorithms for various Top-K count queries on imprecise duplicates.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution.
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section.
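Class (1) above, Poisson marginals with dependence, has a classic construction worth a small sketch: the common-shock (trivariate-reduction) bivariate Poisson. The parameters below are arbitrary assumptions for illustration.

```python
# Sketch of class (1) in the review: a common-shock bivariate Poisson.
# X = Z1 + Z0, Y = Z2 + Z0 with independent Poisson Z's gives Poisson
# marginals and covariance lambda0 -- dependence without leaving Poisson.
import numpy as np

rng = np.random.default_rng(2)
lam0, lam1, lam2 = 2.0, 3.0, 1.0
z0 = rng.poisson(lam0, 100_000)
x = rng.poisson(lam1, 100_000) + z0
y = rng.poisson(lam2, 100_000) + z0

print("mean X, Y:", x.mean(), y.mean())          # ~5.0, ~3.0
print("cov(X, Y):", np.cov(x, y)[0, 1])          # ~lambda0 = 2.0
```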
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or an airplane passing over a waste storage facility. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
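The following sketch is not the proposed Bayesian multi-energy system; it only illustrates why time-interval data carry the rate information: inter-arrival times of a Poisson process are exponential, so a per-event log-likelihood ratio between a background rate and an elevated rate can be accumulated online. Rates and sample size are assumptions.

```python
# Sketch (not the paper's Bayesian system): inter-arrival times of a
# Poisson process are exponential, so a log-likelihood ratio between a
# background rate b and an elevated rate b+s can be updated per event.
import numpy as np

rng = np.random.default_rng(3)
b, s = 5.0, 2.0                      # counts/s: background, hypothetical source
gaps = rng.exponential(1.0 / (b + s), size=200)   # simulate source present

# log LR for each gap t: log[(b+s) e^{-(b+s)t}] - log[b e^{-bt}]
llr = np.log((b + s) / b) - s * gaps
print(f"cumulative log-likelihood ratio after 200 events: {llr.sum():.1f}")
# Positive and growing totals favour the source-present hypothesis.
```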
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
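A hedged sketch of the distribution in question: with the gross count G and blank count B independent Poisson variables, the probability mass of the net count OC = G − IRR·B can be built by direct convolution (truncated here at kmax; all rates are hypothetical).

```python
# Sketch: distribution of the net count OC = G - IRR*B when the gross
# count G and blank count B are independent Poisson variables. Values
# hypothetical; the paper tabulates confidence intervals from this PDF.
import numpy as np
from scipy import stats

IRR = 3                # sample counted 3x longer than the blank
mu_blank = 2.0         # expected blank count in the blank count time
mu_gross = 15.0        # expected gross count in the sample count time

kmax = 60
g = np.arange(kmax + 1)
b = np.arange(kmax + 1)
pg = stats.poisson.pmf(g, mu_gross)
pb = stats.poisson.pmf(b, mu_blank)

# P(OC = n) = sum over (g, b) with g - IRR*b = n of P(G=g) P(B=b)
pmf = {}
for gi, pgi in zip(g, pg):
    for bi, pbi in zip(b, pb):
        n = gi - IRR * bi
        pmf[n] = pmf.get(n, 0.0) + pgi * pbi

mean_oc = sum(n * p for n, p in pmf.items())
print(f"E[OC] = {mean_oc:.2f}  (analytic: {mu_gross - IRR * mu_blank:.2f})")
```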
Understanding Poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
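A minimal Poisson regression sketch in the spirit of the article (simulated data, not the ENSPIRE study) might look as follows, assuming the statsmodels package; the negative-binomial refit mirrors the article's suggested remedy for overdispersion.

```python
# Minimal Poisson regression sketch (simulated data, not the ENSPIRE
# study): statsmodels GLM with a log link, then a negative-binomial
# refit as the article's suggested remedy for overdispersion.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = rng.normal(size=200)
y = rng.poisson(np.exp(0.5 + 0.8 * x))          # true log-rate: 0.5 + 0.8x

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

print(poisson_fit.params)   # approximately [0.5, 0.8]
print(nb_fit.params)
```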
Ourmazd, Abbas [University of Wisconsin, Milwaukee, Wisconsin, USA
2017-12-09
Ever shattered a valuable vase into 10^6 pieces and tried to reassemble it under a light providing a mean photon count of 10^-2 per detector pixel with shot noise? If you can do that, you can do single-molecule crystallography. This talk will outline how this can be done in principle. In more technical terms, the talk will describe how the combination of scattering physics and Bayesian algorithms can be used to reconstruct the 3-D diffracted intensity distribution from a collection of individual 2-D diffraction patterns down to a mean photon count of 10^-2 per pixel, the signal level anticipated from the Linac Coherent Light Source, and hence determine the structure of individual macromolecules and nanoparticles.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows obtaining both the dead-time value and the real detector counts from the measured detector counts.
Application of the backward extrapolation method to pulsed neutron sources
Talamo, Alberto; Gohar, Yousry
2017-09-23
We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows obtaining both the dead-time value and the real detector counts from the measured detector counts.
Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting
Croft, Stephen; Cleveland, Steve; Favalli, Andrea; ...
2017-04-29
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
Estimating the effective system dead time parameter for correlated neutron counting
NASA Astrophysics Data System (ADS)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.
2017-11-01
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
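For orientation, the sketch below shows the standard non-paralyzable dead-time relation that such an effective dead-time parameter feeds into; it is not the moment-based estimator developed in the paper, and the numbers are hypothetical.

```python
# Sketch: the standard non-paralyzable dead-time relation that an
# effective dead-time parameter tau feeds into (not the paper's
# moment-based estimator): observed rate m = n / (1 + n*tau), so the
# true rate is recovered as n = m / (1 - m*tau).
tau = 2.0e-6          # s, hypothetical effective system dead time
m = 50_000.0          # counts/s, observed singles rate

n_true = m / (1.0 - m * tau)
loss_pct = 100.0 * (n_true - m) / n_true
print(f"true rate ~ {n_true:.0f} c/s, dead-time loss ~ {loss_pct:.1f}%")
```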
NASA Technical Reports Server (NTRS)
Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.;
2012-01-01
We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg^-2. By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg^-2, the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg^-2 and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.
Long-distance quantum key distribution with imperfect devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Piparo, Nicoló; Razavi, Mohsen
2014-12-04
Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.
Properties of the Scuba-2 850 μm Sources in the XMM-LSS Field
NASA Astrophysics Data System (ADS)
Seo, Hyunjong; Jeong, Woong-Seob; Kim, Seong Jin; Pyo, Jeonghyun; Kim, Min Gyu; Ko, Jongwan; Kim, Minjin; Kim, Sam
2017-02-01
We carry out the study of 850 μm sources in a part of the XMM-LSS field. The 850 μm imaging data were obtained by the SCUBA-2 on the James Clerk Maxwell Telescope (JCMT) for three days in July 2015 with an integration time of 6.1 hours, covering a circular area with a radius of 15'. We choose the central area up to a radius of 9.15 arcmin for the study, where the noise distribution is relatively uniform. The root mean square (rms) noise at the center is 2.7 mJy. We identify 17 sources with S/N > 3.5. The differential number count is estimated in the flux range between 3.5 and 9.0 mJy after applying various corrections derived from imaging simulations, and is consistent with previous studies. For a detailed study of the individual sources, we select three sources with more reliable measurements (S/N > 4.5), and construct their spectral energy distributions (SEDs) from optical to far-infrared bands. The redshift distribution of the sources ranges from 0.36 to 3.28, and their physical parameters are extracted using the MAGPHYS model, which yields infrared luminosity L_{IR} = 10^{11.3}-10^{13.4} L_{⊙}, star formation rate SFR = 10^{1.3}-10^{3.2} M_{⊙}yr^{-1} and dust temperature T_{D} = 30-53 K. We investigate the correlation between L_{IR} and T_{D}, which appears to be consistent with previous studies.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for the probabilities of time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting, and from these data the counting distributions.
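A toy Monte Carlo of a single fission chain as a branching (Galton-Watson) process echoes the abstract's point that the chain probability equations have a simple Monte Carlo realization. The induced-fission probability and multiplicity distribution below are illustrative assumptions, not the paper's parameters.

```python
# Toy Monte Carlo of one fission chain as a branching process. The
# multiplicity distribution and fission probability are illustrative
# assumptions chosen to keep the chain subcritical (k ~ 0.74 here).
import numpy as np

rng = np.random.default_rng(5)
p_fission = 0.3                    # chance an internal neutron induces fission
nu_vals = [0, 1, 2, 3, 4]          # induced-fission neutron multiplicity
nu_prob = [0.03, 0.16, 0.32, 0.31, 0.18]

def run_chain():
    alive, fissions, leaked = 1, 0, 0
    while alive:
        alive -= 1
        if rng.random() < p_fission:
            fissions += 1
            alive += rng.choice(nu_vals, p=nu_prob)
        else:
            leaked += 1
    return fissions, leaked

results = np.array([run_chain() for _ in range(20_000)])
print("mean fissions per chain:", results[:, 0].mean())
print("mean leaked neutrons per chain:", results[:, 1].mean())
```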
Ishii, Takeo; Hizawa, Nobuyuki; Midwinter, Dawn; James, Mark; Hilton, Emma; Jones, Paul W
2018-01-01
Background: Blood eosinophil measurements may help to guide physicians on the use of inhaled corticosteroids (ICS) for patients with chronic obstructive pulmonary disease (COPD). Emerging data suggest that COPD patients with higher blood eosinophil counts may be at higher risk of exacerbations and more likely to benefit from combined ICS/long-acting beta2-agonist (LABA) treatment than therapy with a LABA alone. This analysis describes the distribution of blood eosinophil count at baseline in Japanese COPD patients in comparison with non-Japanese COPD patients. Methods: A post hoc analysis of eosinophil distribution by percentage and absolute cell count was performed across 12 Phase II–IV COPD clinical studies (seven Japanese studies [N=848 available absolute eosinophil counts] and five global studies [N=5,397 available eosinophil counts] that included 246 Japanese patients resident in Japan with available counts). Blood eosinophil distributions were assessed at baseline, before blinded treatment assignment. Findings: Among Japanese patients, the median (interquartile range) absolute eosinophil count was 170 cells/mm3 (100–280 cells/mm3). Overall, 612/1,094 Japanese patients (56%) had an absolute eosinophil count ≥150 cells/mm3 and 902/1,304 Japanese patients (69%) had an eosinophil percentage ≥2%. Among non-Japanese patients, these values were 160 (100–250) cells/mm3, 2,842/5,151 patients (55%), and 2,937/5,155 patients (57%), respectively. The eosinophil distribution among Japanese patients was similar to that among non-Japanese patients. Within multi-country studies with similar inclusion criteria, the eosinophil count was numerically lower in Japanese than in non-Japanese patients (median 120 vs 160 cells/mm3). Interpretation: The eosinophil distribution in Japanese patients seems comparable to that of non-Japanese patients, although within multi-country studies there was a slightly lower median eosinophil count for Japanese patients. These findings suggest that blood eosinophil data from global studies are of relevance in Japan. PMID:29440882
ERIC Educational Resources Information Center
Le Corre, Mathieu; Carey, Susan
2007-01-01
Since the publication of [Gelman, R., & Gallistel, C. R. (1978). "The child's understanding of number." Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present…
26 CFR 1.959-4 - Distributions to United States persons not counting as dividends.
Code of Federal Regulations, 2010 CFR
2010-04-01
... normal taxes and surtaxes) of subtitle A (relating to income taxes) of the Code as a distribution which... 26 Internal Revenue 10 2010-04-01 2010-04-01 false Distributions to United States persons not... Distributions to United States persons not counting as dividends. Except as provided in section 960(a)(3) and...
Full counting statistics in a serially coupled double quantum dot system with spin-orbit coupling
NASA Astrophysics Data System (ADS)
Wang, Qiang; Xue, Hai-Bin; Xie, Hai-Qing
2018-04-01
We study the full counting statistics of electron transport through a serially coupled double quantum dot (QD) system with spin-orbit coupling (SOC) weakly coupled to two electrodes. We demonstrate that the spin polarizations of the source and drain electrodes determine whether the shot noise maintains a super-Poissonian distribution, and whether the sign transitions of the skewness from positive to negative values and of the kurtosis from negative to positive values take place. In particular, the interplay between the spin polarizations of the source and drain electrodes and the magnitude of the external magnetic field can give rise to a gate-voltage-tunable strong negative differential conductance (NDC), and the shot noise in this NDC region is significantly enhanced. Importantly, for a given SOC parameter, the pronounced variation of the high-order current cumulants as a function of the energy-level detuning in a certain range, especially the dip position of the Fano factor of the skewness, can be used to qualitatively extract information about the magnitude of the SOC.
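For readers unfamiliar with the quantities involved, the low-order counting cumulants discussed here (Fano factor, skewness, kurtosis) can be computed directly from a record of counts. A minimal illustration with synthetic toy data; the geometric counts below are a stand-in, not a transport calculation for this double-dot model.

import numpy as np

rng = np.random.default_rng(0)
n = rng.geometric(p=0.3, size=200_000) - 1    # super-Poissonian toy counts

c1 = n.mean()                                 # first cumulant: mean
c2 = n.var()                                  # second cumulant: variance
c3 = np.mean((n - c1) ** 3)                   # third cumulant
c4 = np.mean((n - c1) ** 4) - 3 * c2 ** 2     # fourth cumulant

print("Fano factor C2/C1 =", c2 / c1)         # > 1: super-Poissonian
print("skewness    C3/C1 =", c3 / c1)
print("kurtosis    C4/C1 =", c4 / c1)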
Protecting count queries in study design
Sarwate, Anand D; Boxwala, Aziz A
2012-01-01
Objective: Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. Methods: A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Results: Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Conclusions: Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems. PMID:22511018
Protecting count queries in study design.
Vinterbo, Staal A; Sarwate, Anand D; Boxwala, Aziz A
2012-01-01
Today's clinical research institutions provide tools for researchers to query their data warehouses for counts of patients. To protect patient privacy, counts are perturbed before reporting; this compromises their utility for increased privacy. The goal of this study is to extend current query answer systems to guarantee a quantifiable level of privacy and allow users to tailor perturbations to maximize the usefulness according to their needs. A perturbation mechanism was designed in which users are given options with respect to scale and direction of the perturbation. The mechanism translates the true count, user preferences, and a privacy level within administrator-specified bounds into a probability distribution from which the perturbed count is drawn. Users can significantly impact the scale and direction of the count perturbation and can receive more accurate final cohort estimates. Strong and semantically meaningful differential privacy is guaranteed, providing for a unified privacy accounting system that can support role-based trust levels. This study provides an open source web-enabled tool to investigate visually and numerically the interaction between system parameters, including required privacy level and user preference settings. Quantifying privacy allows system administrators to provide users with a privacy budget and to monitor its expenditure, enabling users to control the inevitable loss of utility. While current measures of privacy are conservative, this system can take advantage of future advances in privacy measurement. The system provides new ways of trading off privacy and utility that are not provided in current study design systems.
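The general idea behind both versions of this abstract can be sketched with the standard Laplace mechanism: the reported count is the true count plus noise whose scale is set by a privacy parameter clamped to administrator-specified bounds. This is a simplified stand-in, not the paper's mechanism, which additionally supports user-chosen perturbation direction; the bounds and epsilon below are illustrative assumptions.

import numpy as np

def perturbed_count(true_count, epsilon, eps_min=0.1, eps_max=1.0,
                    rng=np.random.default_rng()):
    """Draw a noisy count satisfying epsilon-differential privacy."""
    eps = min(max(epsilon, eps_min), eps_max)      # clamp to admin bounds
    noise = rng.laplace(loc=0.0, scale=1.0 / eps)  # a count query has sensitivity 1
    return max(0, round(true_count + noise))       # post-processing preserves DP

print(perturbed_count(42, epsilon=0.5))            # e.g. 40, 43, 45, ...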
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2015-12-01
This set of slides begins with background and a review of neutron counting; three attributes of a verification item are discussed: the 240Pu-effective mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and the leakage multiplication. It then takes up neutron detector systems – theory and concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); an introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
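The singles, doubles, and triples mentioned in these slides are built from the reduced factorial moments of measured multiplicity histograms. A minimal sketch of just that step, with an invented histogram; the deadtime, gate-fraction, and accidentals corrections covered in the slides are deliberately omitted.

import numpy as np

P = np.array([0.60, 0.25, 0.10, 0.04, 0.01])     # P(n), n = 0..4, sums to 1
n = np.arange(P.size)

m1 = np.sum(n * P)                               # <n>              -> Singles
m2 = np.sum(n * (n - 1) * P) / 2                 # <n(n-1)>/2       -> Doubles
m3 = np.sum(n * (n - 1) * (n - 2) * P) / 6       # <n(n-1)(n-2)>/6  -> Triples

print(m1, m2, m3)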
NASA Astrophysics Data System (ADS)
Dole, H.
2000-10-01
This thesis deals with the analysis of the FIRBACK deep survey, performed in the far infrared at 170 microns with the Infrared Space Observatory, whose aim is the study of the galaxies contributing to the Cosmic Infrared Background, and with the modeling of galaxy evolution in the mid-infrared to submillimeter range. The FIRBACK survey covers 3.89 square degrees in 3 high galactic latitude and low foreground emission fields (2 of which are in the northern sky). I first present the techniques of reduction, processing and calibration of the ISOPHOT cosmological data. I show that there is a good agreement between PHOT and DIRBE on extended emission, thanks to the derivation of the PHOT footprint. Final maps are created, and the survey is confusion limited (σ = 45 mJy). I then present the source extraction techniques and the photometric simulations needed to build the final catalog of 106 sources between 180 mJy (4σ) and 2.4 Jy. The complementary catalog contains 90 sources between 135 and 180 mJy. Galaxy counts show a large excess with respect to local counts or models (with and without evolution), compatible only with strong evolution scenarios. Four percent of the Cosmic Infrared Background (CIB) is resolved into sources at 170 microns. The identifications of the sources at other wavelengths suggest that most of the sources are local, but a non-negligible part lies above redshift 1. I have developed a phenomenological model of galaxy evolution in order to constrain galaxy evolution in the infrared and to gain a better understanding of what the FIRBACK sources are. Using the local Luminosity Function (LF) and template spectra of starburst galaxies, it is possible to constrain the evolution of the LF using all the available data: deep source counts at 15, 170 and 850 microns and the CIB spectrum. I show that galaxy evolution is dominated by a population of high infrared luminosity, peaking at 2.0 × 10^11 solar luminosities. Redshift distributions are in agreement with available observations. Predictions are possible with our model for forthcoming space missions such as SIRTF, Planck and FIRST.
A Review of Multivariate Distributions for Count Data Derived from the Poisson Distribution
Inouye, David; Yang, Eunho; Allen, Genevera; Ravikumar, Pradeep
2017-01-01
The Poisson distribution has been widely studied and used for modeling univariate count-valued data. Multivariate generalizations of the Poisson distribution that permit dependencies, however, have been far less popular. Yet, real-world high-dimensional count-valued data found in word counts, genomics, and crime statistics, for example, exhibit rich dependencies, and motivate the need for multivariate distributions that can appropriately model this data. We review multivariate distributions derived from the univariate Poisson, categorizing these models into three main classes: 1) where the marginal distributions are Poisson, 2) where the joint distribution is a mixture of independent multivariate Poisson distributions, and 3) where the node-conditional distributions are derived from the Poisson. We discuss the development of multiple instances of these classes and compare the models in terms of interpretability and theory. Then, we empirically compare multiple models from each class on three real-world datasets that have varying data characteristics from different domains, namely traffic accident data, biological next generation sequencing data, and text data. These empirical experiments develop intuition about the comparative advantages and disadvantages of each class of multivariate distribution that was derived from the Poisson. Finally, we suggest new research directions as explored in the subsequent discussion section. PMID:28983398
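The first class surveyed (Poisson marginals with dependence) has a particularly simple instance that makes a useful hands-on illustration: the "common shock" bivariate Poisson, X1 = Y1 + Z, X2 = Y2 + Z with independent Poisson variables Y1, Y2, Z. The rates below are arbitrary illustration values.

import numpy as np

rng = np.random.default_rng(1)
lam1, lam2, lam12 = 3.0, 5.0, 2.0        # Cov(X1, X2) equals the shared rate lam12

y1 = rng.poisson(lam1, 100_000)
y2 = rng.poisson(lam2, 100_000)
z = rng.poisson(lam12, 100_000)          # the common shock
x1, x2 = y1 + z, y2 + z

print(np.cov(x1, x2)[0, 1])              # ~2.0
print(x1.mean(), x2.mean())              # Poisson marginals with means 5.0 and 7.0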
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, M-D.
2000-08-23
Internal combustion engines are a major source of airborne particulate matter (PM). The size of the engine PM is in the sub-micrometer range. The number of engine particles per unit volume is high, normally in the range of 10^12 to 10^14. To measure the size distribution of the engine particles, dilution of an aerosol sample is required. A diluter utilizing a venturi ejector mixing technique is commercially available and was tested. The purpose of this investigation was to determine whether turbulence created by the ejector in the mini-dilutor changes the size of particles passing through it. The results of the NaCl aerosol experiments show no discernible difference in the geometric mean diameter and geometric standard deviation of particles passing through the ejector. Similar results were found for the DOP particles. The ratio of the total number concentrations before and after the ejector indicates that a dilution ratio of approximately 20 applies equally to DOP and NaCl particles, indicating that the dilution capability of the ejector is not affected by particle composition. The statistical analysis of the first and second moments of the distributions indicates that the ejector may not change the major parameters (e.g., the geometric mean diameter and geometric standard deviation) characterizing the size distributions of NaCl and DOP particles. However, the skewness indicates that the ejector modifies the particle size distribution significantly, and it could change the skewness of the distribution in an unpredictable and inconsistent manner. Furthermore, when the variability of particle counts in individual size ranges downstream of the ejector is examined, the variability is greater for DOP particles in the size range of 40-150 nm than for NaCl particles in the size range of 30 to 350 nm. The numbers of particle counts in this size region are high enough that the Poisson counting errors are small (<10%) compared with the tail regions. This result shows that the ejector device could have a higher bin-to-bin counting uncertainty for 'soft' particles such as DOP than for a solid dry particle like NaCl. The results suggest that it may be difficult to precisely characterize the size distribution of particles ejected from the mini-dilution system if the particles are not solid.
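The two summary parameters compared before and after the ejector, geometric mean diameter and geometric standard deviation, are count-weighted statistics of log diameter computed from binned data. A sketch with invented bin midpoints and counts:

import numpy as np

d = np.array([30.0, 50.0, 80.0, 120.0, 200.0])       # bin midpoints, nm
counts = np.array([120.0, 480.0, 900.0, 610.0, 150.0])

logs = np.log(d)
mean_log = np.average(logs, weights=counts)
gsd_log = np.sqrt(np.average((logs - mean_log) ** 2, weights=counts))

print("geometric mean diameter:", np.exp(mean_log), "nm")
print("geometric std deviation:", np.exp(gsd_log))   # dimensionless GSD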
NASA Technical Reports Server (NTRS)
Elvis, Martin; Plummer, David; Schachter, Jonathan; Fabbiano, G.
1992-01-01
A catalog of 819 sources detected in the Einstein IPC Slew Survey of the X-ray sky is presented; 313 of the sources were not previously known as X-ray sources. Typical count rates are 0.1 IPC count/s, roughly equivalent to a flux of 3 × 10^-12 erg cm^-2 s^-1. The sources have positional uncertainties of 1.2 arcmin (90 percent confidence) radius, based on a subset of 452 sources identified with previously known pointlike X-ray sources (i.e., extent less than 3 arcmin). Identifications based on a number of existing catalogs of X-ray and optical objects are proposed for 637 of the sources, 78 percent of the survey (within a 3-arcmin error radius), including 133 identifications of new X-ray sources. A public identification data base for the Slew Survey sources will be maintained at CfA, and contributions to this data base are invited.
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
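The simulation approach described can be sketched as follows: draw negative binomial counts with low means and strong contagion, apply a candidate transformation, and check how well the variance is stabilized across means. The dispersion and mean values below are illustrative assumptions, not the values fitted to the 19 field studies.

import numpy as np

rng = np.random.default_rng(2)
k = 0.5                                        # NB dispersion ("contagion")
for mu in (0.5, 1.0, 2.0, 4.0):
    p = k / (k + mu)                           # numpy's (n, p) parameterization
    x = rng.negative_binomial(k, p, 50_000)
    y = np.log(x + 0.5)                        # one common candidate transform
    print(f"mu={mu}: raw var={x.var():.2f}  transformed var={y.var():.2f}")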
Spatially-explicit Fossil Fuel Carbon Dioxide Inventories for Transportation in the U.S.
NASA Astrophysics Data System (ADS)
Hutchins, M.; Gurney, K. R.
2016-12-01
The transportation sector is the second largest source of Fossil Fuel CO2 (FFCO2) emissions, and is unique in that federal, state, and municipal levels of government are all able to enact transportation policy. However, since data related to transportation activities are reported by multiple different government agencies, the data are not always consistent. As a result, the methods and data used to inventory and account for transportation related FFCO2 emissions have important implications for both science and policy. Aggregate estimates of transportation related FFCO2 emissions can be spatially distributed using traffic data, such as the Highway Performance Monitoring System (HPMS) Average Annual Daily Traffic (AADT). There are currently two datasets that estimate the spatial distribution of transportation related FFCO2 in the United States: Vulcan 3.0 and the Database of Road Transportation Emissions (DARTE). Both datasets are at 1 km resolution, for the year 2011, and utilize HPMS AADT traffic data. However, Vulcan 3.0 and DARTE spatially distribute emissions using different methods and inputs, resulting in a number of differences. Vulcan 3.0 and DARTE estimate national transportation related FFCO2 emissions within 2.5% of each other, with more significant differences at the county and state level. The differences are most notable in urban versus rural regions, and for specific road classes. The origins of these differences are explored in depth to understand the implications of using specific data sources, such as the National Emissions Inventory and other aggregate transportation statistics from the Federal Highway Administration (FHWA). In addition to comparing Vulcan 3.0 and DARTE to each other, the results from both datasets are compared to independent traffic volume measurements acquired from the FHWA Continuous Count Station (CCS) network. The CCS network records hourly traffic counts at fixed locations throughout the U.S. We calculate transportation related FFCO2 emissions at the CCS stations using fuel-specific emission factors combined with the raw traffic counts. The CCS network provides a unique opportunity to compare spatially explicit, "bottom-up" models of transportation related FFCO2 emissions to measured traffic volume at over 300 specific locations.
Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.
Uhl, Wolfgang; Schaule, Gabriela
2004-05-01
Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases or when substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as most probable causes and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient rich agar according to German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and to demonstrate the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were three times less than after 7 days. As the precision of plate count methods is very poor for counts less than 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events HPC(R2A) gives a further margin of about 2 weeks for reaction before HPC(GDWR). Copyright 2003 Elsevier B.V.
Choi, Woo June; Pepple, Kathryn L; Wang, Ruikang K
2018-05-24
In preclinical vision research, cell grading in small animal models is essential for the quantitative evaluation of intraocular inflammation. Here, we present a new and practical optical coherence tomography (OCT) image analysis method for the automated detection and counting of aqueous cells in the anterior chamber (AC) of a rodent model of uveitis. Anterior segment OCT (AS-OCT) images are acquired with a 100 kHz swept-source OCT (SS-OCT) system. The proposed method consists of two steps. In the first step, we despeckle and binarize each OCT image. After removing AS structures in the binary image, we then apply area thresholding to isolate cell-like objects. Potential cell candidates are selected based on their best fit to roundness. The second step performs the cell counting within the whole AC, in which additional cell tracking analysis is conducted on the successive OCT images to eliminate redundancy in cell counting. Finally, 3-D cell grading using the proposed method is demonstrated in longitudinal OCT imaging of a mouse model of anterior uveitis in vivo. (Graphical abstract: rendering of the anterior segment (orange) of a mouse eye and automatically counted anterior chamber cells (green); the inset is a top view of the rendering, showing the cell distribution across the anterior chamber.) This article is protected by copyright. All rights reserved.
Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications
NASA Astrophysics Data System (ADS)
Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.
2002-08-01
We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.
Algorithms development for the GEM-based detection system
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Malinowski, K.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zabolotny, W.
2016-09-01
The measurement system based on the GEM (Gas Electron Multiplier) detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimation of the energy and position distribution of an X-ray source. The central measurement issue is the identification of each charge cluster and the estimation of its value and position. A fast and accurate serial data acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value and time interval. Radiation source characteristics are presented by histograms for a selected range of positions, time intervals and cluster charge values corresponding to the energy spectra.
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
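The point about unadjusted counts is easy to demonstrate: under C = Np, a trend in detection probability p produces a trend in counts C even when the true population N is constant. A sketch with invented numbers:

import numpy as np

rng = np.random.default_rng(3)
N = np.full(10, 500)                 # true population, unchanged across 10 years
p = np.linspace(0.6, 0.3, 10)        # detectability declines over the years
C = rng.binomial(N, p)               # observed counts, C = N*p on average

print(C)   # counts fall by roughly half although N never changed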
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
NASA Astrophysics Data System (ADS)
Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.
2018-04-01
Emission data is an essential tool for understanding environmental problems associated with sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions, and particularly their spatial and temporal distribution, in South America, mainly in medium-sized cities with populations of less than one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. The results allowed the identification of several emission hotspots in the downtown zone and the residential and commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km2, respectively. These indexes were higher than those obtained in the residential-commercial area, with values of 2%/km2 for both pollutants. The temporal distribution showed a strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles on CO emissions both downtown and in the residential-commercial area, and an impact of public transport on PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work compares other simplified top-down methods for spatially assessing the on-road vehicle EI. The results suggest that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by high traffic volumes. The comparison between simplified methods based on total traffic counts and on road density distribution suggests that the use of total traffic counts in a simplified form could introduce higher uncertainties into the spatial disaggregation of emissions. The results add new information that can help to improve the air pollution management system in the city and contribute to local public policy decisions. Additionally, this work provides emission fluxes at a resolution appropriate for ongoing atmospheric modeling research in the city, with the aim of improving the understanding of the transport, transformation and impacts of pollutant emissions on urban air quality.
Cosmological constraints from X-ray all sky surveys, from CODEX to eROSITA
NASA Astrophysics Data System (ADS)
Finoguenov, A.
2017-10-01
Large area cluster cosmology has long since become a multiwavelength discipline. Understanding the effect of various selections is currently the main path to improving the validity of cluster cosmological results. Many of these results are based on the large area sample derived from RASS data. We perform wavelet detection of X-ray sources and make extensive simulations of the detection of clusters in the RASS data. We assign an optical richness to each of the 25,000 detected X-ray sources in the 10,000 square degrees of the SDSS BOSS area. We show that there is no obvious separation of sources into galaxy clusters and AGN based on the distribution of systems in richness. We conclude that previous catalogs, such as MACS and REFLEX, are all subject to a complex optical selection function, in addition to an X-ray selection. We provide a complete model for the identification of cluster counts as galaxy clusters, which includes chance identification, the effect of the AGN halo occupation distribution, and the thermal emission of the ICM. Finally, we present the cosmological results obtained using this sample.
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.
2002-01-01
Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT", part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat", wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
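The core correlation step can be sketched in a few lines: convolve a binned counts image with a zero-mean Marr ("Mexican Hat") kernel and flag pixels with significant coefficients. WAVDETECT derives its thresholds from the coefficient sampling distribution given the local background; the crude constant threshold below is only a stand-in for that step, and all numbers are invented.

import numpy as np
from scipy.signal import fftconvolve

def mexican_hat(scale, size=33):
    """2D Marr wavelet, (2 - r^2/s^2) * exp(-r^2 / 2s^2); integrates to zero."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r2 = (x ** 2 + y ** 2) / scale ** 2
    return (2 - r2) * np.exp(-r2 / 2)

rng = np.random.default_rng(4)
image = rng.poisson(1.0, (256, 256)).astype(float)   # flat Poisson background
image[100:103, 100:103] += 15                        # a small injected source

coeffs = fftconvolve(image, mexican_hat(scale=2.0), mode="same")
detections = coeffs > 5 * coeffs.std()               # crude 5-sigma threshold
print(np.argwhere(detections).mean(axis=0))          # ~ (101, 101), the source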
Statistics of the fractional polarization of extragalactic dusty sources in Planck HFI maps
NASA Astrophysics Data System (ADS)
Bonavera, L.; González-Nuevo, J.; De Marco, B.; Argüeso, F.; Toffolatti, L.
2017-11-01
We estimate the average fractional polarization at 143, 217 and 353 GHz of a sample of 4697 extragalactic dusty sources by applying a stacking technique. The sample is selected from the second version of the Planck Catalogue of Compact Sources at 857 GHz, avoiding the region inside the Planck Galactic mask (fsky ∼ 60 per cent). We recover values for the mean fractional polarization at 217 and 353 GHz of (3.10 ± 0.75) per cent and (3.65 ± 0.66) per cent, respectively, whereas at 143 GHz we give a tentative value of (3.52 ± 2.48) per cent. We discuss the possible origin of the measured polarization, comparing our new estimates with those previously obtained from a sample of radio sources. We test different distribution functions and conclude that the fractional polarization of dusty sources is well described by a log-normal distribution, as determined in the radio band studies. For this distribution we estimate μ217GHz = 0.3 ± 0.5 [which would correspond to a median fractional polarization of Πmed = (1.3 ± 0.7) per cent] and μ353GHz = 0.7 ± 0.4 (Πmed = (2.0 ± 0.8) per cent), with σ217GHz = 1.3 ± 0.2 and σ353GHz = 1.1 ± 0.2. With these values we estimate the source number counts in polarization and the contribution of these sources to the Cosmic Microwave Background B-mode angular power spectrum at 217, 353, 600 and 800 GHz. We conclude that extragalactic dusty sources might be an important contaminant for the primordial B-mode at frequencies >217 GHz.
Native Amazonian Children Forego Egalitarianism in Merit-Based Tasks When They Learn to Count
ERIC Educational Resources Information Center
Jara-Ettinger, Julian; Gibson, Edward; Kidd, Celeste; Piantadosi, Steve
2016-01-01
Cooperation often results in a final material resource that must be shared, but deciding how to distribute that resource is not straightforward. A distribution could count as fair if all members receive an equal reward ("egalitarian distributions"), or if each member's reward is proportional to their merit ("merit-based…
Waiting time distribution revealing the internal spin dynamics in a double quantum dot
NASA Astrophysics Data System (ADS)
Ptaszyński, Krzysztof
2017-07-01
The waiting time distribution and the zero-frequency full counting statistics of unidirectional electron transport through a double quantum dot molecule attached to spin-polarized leads are analyzed using the quantum master equation. The waiting time distribution exhibits a nontrivial dependence on the value of the exchange coupling between the dots and the gradient of the applied magnetic field, which reveals the oscillations between the spin states of the molecule. The zero-frequency full counting statistics, on the other hand, is independent of the aforementioned quantities, thus giving no insight into the internal dynamics. The fact that the waiting time distribution and the zero-frequency full counting statistics give nonequivalent information is associated with two factors. First, it can be explained by their sensitivity to different timescales of the dynamics of the system. Second, it is associated with the presence of correlations between subsequent waiting times, which make the renewal theory relating the full counting statistics and the waiting time distribution no longer applicable. The study highlights the particular usefulness of the waiting time distribution for the analysis of the internal dynamics of mesoscopic systems.
Structured pedigree information for distributed fusion systems
NASA Astrophysics Data System (ADS)
Arambel, Pablo O.
2008-04-01
One of the most critical challenges in distributed data fusion is the avoidance of information double counting (also called "data incest" or "rumor propagation"). This occurs when a node in a network incorporates information into an estimate - e.g. the position of an object - and the estimate is injected into the network. Other nodes fuse this estimate with their own estimates, and continue to propagate estimates through the network. When the first node receives a fused estimate from the network, it does not know if it already contains its own contributions or not. Since the correlation between its own estimate and the estimate received from the network is not known, the node can not fuse the estimates in an optimal way. If it assumes that both estimates are independent from each other, it unknowingly double counts the information that has already being used to obtain the two estimates. This leads to overoptimistic error covariance matrices. If the double-counting is not kept under control, it may lead to serious performance degradation. Double counting can be avoided by propagating uniquely tagged raw measurements; however, that forces each node to process all the measurements and precludes the propagation of derived information. Another approach is to fuse the information using the Covariance Intersection (CI) equations, which maintain consistent estimates irrespective of the cross-correlation among estimates. However, CI does not exploit pedigree information of any kind. In this paper we present an approach that propagates multiple covariance matrices, one for each uncorrelated source in the network. This is a way to compress the pedigree information and avoids the need to propagate raw measurements. The approach uses a generalized version of the Split CI to fuse different estimates with appropriate weights to guarantee the consistency of the estimates.
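The Covariance Intersection rule referenced above fuses two estimates with unknown cross-correlation via a convex combination of their information matrices; a common choice of the weight minimizes the trace of the fused covariance. A minimal sketch of standard CI (the paper's Split CI generalization, which carries one covariance per uncorrelated source, is not reproduced here):

import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse (x1, P1) and (x2, P2) consistently for unknown cross-correlation."""
    best = None
    for w in np.linspace(0.01, 0.99, n_grid):
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

x1, P1 = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.5, 1.5]), np.diag([4.0, 1.0])
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))     # fused estimate stays consistent, never overoptimistic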
Determination of beta activity in water
Barker, F.B.; Robinson, B.P.
1963-01-01
Many elements have one or more naturally radioactive isotopes, and several hundred other radionuclides have been produced artificially. Radioactive substances may be present in natural water as a result of geochemical processes or the release of radioactive waste and other nuclear debris to the environment. The Geological Survey has developed methods for measuring certain of these radioactive substances in water. Radioactive substances often are present in water samples in microgram quantities or less. Therefore, precautions must be taken to prevent loss of material and to assure that the sample truly represents its source at the time of collection. Addition of acids, complexing agents, or stable isotopes often aids in preventing loss of radioactivity on container walls, on sediment, or on other solid materials in contact with the sample. The disintegration of radioactive atoms is a random process subject to established methods of statistical analysis. Because many water samples contain small amounts of radioactivity, low-level counting techniques must be used. The usual assumption that counting data follow a Gaussian distribution is invalid under these conditions, and statistical analyses must be based on the Poisson distribution. The gross beta activity in water samples is determined from the residue left after evaporation of the sample to dryness. Evaporation is accomplished first in a teflon dish; then the residue is transferred with distilled water to a counting planchet and again reduced to dryness. The radioactivity on the planchet is measured with an anticoincidence-shielded, low-background beta counter and is compared with measurements of a strontium-90-yttrium-90 standard prepared and measured in the same manner. Control charts are used to assure consistent operation of the counting instrument.
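The warning about Gaussian statistics at low counting rates can be made concrete: for a Poisson background of b expected counts, the decision level is the smallest count whose upper-tail probability falls below the chosen false-positive rate, and at low b it differs noticeably from the Gaussian b + 1.645·sqrt(b). The values below are illustrative.

from scipy.stats import poisson

b, alpha = 2.0, 0.05
n_c = poisson.ppf(1 - alpha, b)                          # Poisson critical level
print("Poisson decision level:", n_c)                    # 5 counts
print("Gaussian approximation:", b + 1.645 * b ** 0.5)   # ~4.3, too low here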
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.; Kohno, K.; Aretxaga, I.; Austermann, J. E.; Ezawa, H.; Hughes, D. H.; Ikarashi, S.; Iono, D.; Kawabe, R.; Matsuo, H.; Matsuura, S.; Nakanishi, K.; Oshima, T.; Perera, T.; Scott, K. S.; Shirahata, M.; Takeuchi, T. T.; Tamura, Y.; Tanaka, K.; Tosaki, T.; Wilson, G. W.; Yun, M. S.
2010-10-01
We present number counts and clustering properties of millimeter-bright galaxies uncovered by the AzTEC camera mounted on the Atacama Submillimeter Telescope Experiment (ASTE). We surveyed the AKARI Deep Field South (ADF-S), the Subaru/XMM Newton Deep Field (SXDF), and the SSA22 field, with an area of ~0.25 deg2 each and an rms noise level of ~0.4-1.0 mJy. We constructed differential and cumulative number counts, which currently provide the tightest constraints on the faint end. Integration of the best-fit number counts in the ADF-S finds that the contribution of 1.1 mm sources with fluxes >=1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with fluxes >=1 mJy using the best-fit number counts in the ADF-S and find that it is lower by about a factor of 5-10 compared to those derived from UV/optically-selected galaxies at z~2-3. The average mass of dark halos hosting bright 1.1 mm sources was calculated to be 10^13-10^14 Msolar. Comparison of the correlation lengths of 1.1 mm sources with those of other populations and with a bias evolution model suggests that dark halos hosting bright 1.1 mm sources evolve into systems of clusters in the present universe, and that the 1.1 mm sources residing in these dark halos evolve into massive elliptical galaxies located in the centers of clusters.
A physics investigation of deadtime losses in neutron counting at low rates with Cf252
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Louise G; Croft, Stephen
2009-01-01
252Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using 252Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g., AmLi to a close approximation), DT losses do not vanish in the low rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts' and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower, since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high efficiency 3He neutron counter with short die-away time, representing an ideal 3He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
The size distribution of Pacific Seamounts
NASA Astrophysics Data System (ADS)
Smith, Deborah K.; Jordan, Thomas H.
1987-11-01
An analysis of wide-beam, Sea Beam and map-count data in the eastern and southern Pacific confirms the hypothesis that the average number of "ordinary" seamounts with summit heights h ≥ H can be approximated by the exponential frequency-size distribution ν(H) = ν0 e^(-βH). The exponential model, characterized by the single scale parameter β^-1, is found to be superior to a power-law (self-similar) model. The exponential model provides a good first-order description of the summit-height distribution over a very broad spectrum of seamount sizes, from small cones (h < 300 m) to tall composite volcanoes (h > 3500 m). The distribution parameters obtained from 157,000 km of wide-beam profiles in the eastern and southern Pacific Ocean are ν0 = (5.4 ± 0.65) × 10^-9 m^-2 and β = (3.5 ± 0.21) × 10^-3 m^-1, yielding an average of 5400 ± 650 seamounts per million square kilometers, of which 170 ± 17 are greater than one kilometer in height. The exponential distribution provides a reference for investigating the populations of not-so-ordinary seamounts, such as those on hotspot swells and near fracture zones, and seamounts in other ocean basins. If we assume that volcano height is determined by a hydraulic head proportional to the source depth of the magma column, then our observations imply an approximately exponential distribution of source depths. For reasonable values of magma and crustal densities, a volcano with the characteristic height β^-1 = 285 m has an apparent source depth on the order of the crustal thickness.
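As a check on the quoted parameters, the expected number of seamounts per 10^6 km^2 above a summit-height threshold H follows directly from ν(H) = ν0 e^(-βH):

import numpy as np

nu0 = 5.4e-9          # m^-2, from the fit above
beta = 3.5e-3         # m^-1
area = 1.0e12         # 10^6 km^2 expressed in m^2

for H in (0.0, 1000.0):   # summit-height threshold in meters
    n = nu0 * np.exp(-beta * H) * area
    print(f"h >= {H:4.0f} m: {n:6.0f} seamounts")
    # prints ~5400 and ~163, matching the quoted 5400 +/- 650 and 170 +/- 17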
Monitoring bacterial contamination of piped water supply in rural coastal Bangladesh.
Ahsan, Md Sabbir; Akber, Md Ali; Islam, Md Atikul; Kabir, Md Pervez; Hoque, Md Ikramul
2017-10-31
Safe drinking water is scarce in southwest coastal Bangladesh because of the unavailability of fresh water. Given the high salinity of both groundwater and surface water in this area, harvested rainwater and rain-fed pond water have become the main sources of drinking water. Both the government and non-government organizations have recently introduced piped water supply in the rural coastal areas to ensure safe drinking water. We assessed the bacteriological quality of water at different points along the piped water distribution system (i.e., the source, treatment plant, household taps, street hydrants, and household storage containers) of Mongla municipality under Mongla Upazila in Bagerhat district. Water samples were collected at 2-month intervals from May 2014 to March 2015. Median E. coli and total coliform counts at the source, treatment plant, household taps, street hydrants, and household storage containers were, respectively, 225, 4, 7, 7, and 15 cfu/100 ml and 42,000, 545, 5000, 6150, and 18,800 cfu/100 ml. Concentrations of both indicator bacteria were reduced after treatment, although they did not satisfy the WHO drinking water standards. However, re-contamination in the distribution system and household storage containers indicates improper maintenance of the distribution system and a lack of personal hygiene.
Brian S. Cade; Barry R. Noon; Rick D. Scherer; John J. Keane
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors' image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today's DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again, above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today's DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
Poisson mixture model for measurements using counting.
Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz
2010-03-01
Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
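The model structure described (a Poisson count whose mean is itself lognormally distributed) and the resulting gross-count alarm point are easy to explore by Monte Carlo, in place of the paper's analytical expressions. All parameter values below are invented for illustration.

import numpy as np

rng = np.random.default_rng(5)
gm, gsd = 5.0, 1.8                        # lognormal background: geometric mean/GSD
lam = rng.lognormal(np.log(gm), np.log(gsd), 1_000_000)
gross = rng.poisson(lam)                  # Poisson-lognormal mixture of gross counts

alarm = np.quantile(gross, 0.99)          # alarm point for a 1% false-positive rate
print("decision level:", alarm)
print("variance/mean:", gross.var() / gross.mean())   # > 1: extra-Poisson variance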
The continuum spectral characteristics of gamma-ray bursts observed by BATSE
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Briggs, Michael S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Harmon, Alan B.; Kouveliotou, Chryssa
1994-01-01
Distributions of the continuum spectral characteristics of 260 bursts in the first Burst And Transient Source Experiment (BATSE) catalog are presented. The data are derived from fluxes calculated from BATSE Large Area Detector (LAD) four-channel discriminator data. The data are converted from counts to photons using a direct spectral inversion technique to remove the effects of atmospheric scattering and the energy dependence of the detector angular response. Although there are intriguing clusters of bursts in the spectral hardness ratio distributions, no evidence for the presence of distinct burst classes based on spectral hardness ratios alone is found. All subsets of bursts selected for their spectral characteristics in this analysis exhibit spatial distributions consistent with isotropy. The spectral diversity of the burst population appears to be caused largely by the highly variable nature of the burst production mechanisms themselves.
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, S. G.; Trott, C. M.; Jordan, C. H.
We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions and shows that, for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and under-estimate the uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.
Böhning, Dankmar; Kuhnert, Ronny
2006-12-01
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
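The capture-recapture application mentioned here can be sketched for the simplest case of a single (unmixed) zero-truncated Poisson: fit λ by maximum likelihood from the observed nonzero counts, then estimate population size with the Horvitz-Thompson estimator N̂ = n_obs / (1 − e^(−λ̂)). The data below are synthetic.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
full = rng.poisson(1.2, 5000)          # true population of 5000 units
obs = full[full > 0]                   # zero counts are never observed

xbar = obs.mean()
# ZTP likelihood equation: lambda / (1 - exp(-lambda)) = sample mean
lam = brentq(lambda l: l / (1 - np.exp(-l)) - xbar, 1e-6, 50.0)
n_hat = obs.size / (1 - np.exp(-lam))  # Horvitz-Thompson population estimate

print("lambda_hat:", lam, " N_hat:", round(n_hat), " (true N = 5000)")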
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutrons in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. Explicit formulas are given for all correlated moments up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.
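The "remarkably simple Monte Carlo realization" mentioned above can be sketched as a branching process. The toy version below is ours, with an invented multiplicity distribution and fission probability; it is not the code described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative induced-fission multiplicity distribution p(nu), nu = 0..4
# (made-up probabilities, not evaluated nuclear data).
p_nu = np.array([0.03, 0.16, 0.33, 0.30, 0.18])

def chain_leakage(p_fission=0.3):
    """Follow one fission chain started by a single neutron; each neutron
    either induces a fission (releasing nu new neutrons) or leaks out.
    Returns the number of leaked neutrons for this chain."""
    leaked, active = 0, 1
    while active:
        active -= 1
        if rng.random() < p_fission:
            active += rng.choice(len(p_nu), p=p_nu)  # fission adds nu neutrons
        else:
            leaked += 1  # in a real system only a fraction would be detected
    return leaked

# count distribution over many randomly initiated, independent chains
counts = np.array([chain_leakage() for _ in range(100_000)])
print(np.bincount(counts)[:8] / counts.size)  # P(0), P(1), ... leaked neutrons
```

With mean multiplicity about 2.44 and p_fission = 0.3 the chains are subcritical (k about 0.73), so every chain terminates.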
MANTA--an open-source, high density electrophysiology recording suite for MATLAB.
Englitz, B; David, S V; Sorenson, M D; Shamma, S A
2013-01-01
The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.
MANTA—an open-source, high density electrophysiology recording suite for MATLAB
Englitz, B.; David, S. V.; Sorenson, M. D.; Shamma, S. A.
2013-01-01
The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point. PMID:23653593
Alonso Roldán, Virginia; Bossio, Luisina; Galván, David E
2015-01-01
In species showing distributions attached to particular features of the landscape or conspicuous signs, counts are commonly made through focal observations where animals concentrate. However, to obtain density estimates for a given area, independent searching for signs and occupancy rates of suitable sites is needed. In both cases, it is important to estimate detection probability and other possible sources of variation to avoid confounding effects on measurements of abundance variation. Our objective was to assess possible bias and sources of variation in a two-step protocol in which random designs were applied to search for signs while continuously recording video cameras were used to perform abundance counts where animals are concentrated, using mara (Dolichotis patagonum) as a case study. The protocol was successfully applied to maras within the Península Valdés protected area, given that the protocol was logistically suitable, allowed warrens to be found, the associated adults to be counted, and the detection probability to be estimated. Variability was documented in both components of the two-step protocol. These sources of variation should be taken into account when applying this protocol. Warren detectability was approximately 80% with little variation. Factors related to false positive detection were more important than imperfect detection. The detectability for individuals was approximately 90% using the entire day of observations. The shortest sampling period with a detection capacity similar to that of a full day was approximately 10 hours, and during this period the visiting dynamics showed no trends. For individual mara, the detection capacity of the camera was not significantly different from that of the observer during fieldwork. The presence of the camera did not affect the visiting behavior of adults to the warren. Application of this protocol will allow monitoring of the near-threatened mara, providing a minimum local population size and a baseline for measuring long-term trends.
X-ray detection of Nova Del 2013 with Swift
NASA Astrophysics Data System (ADS)
Castro-Tirado, Alberto J.; Martin-Carrillo, Antonio; Hanlon, Lorraine
2013-08-01
Continuous X-ray monitoring by Swift of Nova Del 2013 (see CBET #3628) shows an increase of X-ray emission at the source location compared to previous observations (ATEL #5283, ATEL #5305) during a 3.9 ksec observation at UT 2013-08-22 12:05. With the XRT instrument operating in window timing mode, 744 counts were extracted from a 50 pixel long source region and 324 counts from a similar box for a background region, resulting in a 13-sigma detection with a net count rate of 0.11±0.008 counts/sec.
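As a quick consistency check of the quoted numbers (plain Poisson counting statistics; the arithmetic is ours, not part of the report):

```python
# source-box and background-box counts from the 3.9 ks XRT exposure
src, bkg, t = 744, 324, 3900.0
net_rate = (src - bkg) / t                 # ~0.108 counts/s, i.e. the quoted 0.11
rate_err = (src + bkg) ** 0.5 / t          # ~0.0084 counts/s, i.e. the quoted 0.008
sigma = (src - bkg) / (src + bkg) ** 0.5   # ~12.9, i.e. the quoted ~13-sigma
print(net_rate, rate_err, sigma)
```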
On-line detection of Escherichia coli intrusion in a pilot-scale drinking water distribution system.
Ikonen, Jenni; Pitkänen, Tarja; Kosse, Pascal; Ciszek, Robert; Kolehmainen, Mikko; Miettinen, Ilkka T
2017-08-01
Improvements in microbial drinking water quality monitoring are needed for the better control of drinking water distribution systems and for public health protection. Conventional water quality monitoring programmes are not always able to detect a microbial contamination of drinking water. In the drinking water production chain, in addition to the vulnerability of source waters, the distribution networks are prone to contamination. In this study, a pilot-scale drinking-water distribution network with an on-line monitoring system was utilized for detecting bacterial intrusion. During the experimental Escherichia coli intrusions, the contaminant was measured by applying a set of on-line sensors for electric conductivity (EC), pH, temperature (T), turbidity, UV-absorbance at 254 nm (UVAS SC) and with a device for particle counting. Monitored parameters were compared with the measured E. coli counts using the integral calculations of the detected peaks. EC measurement gave the strongest signal compared with the measured baseline during the E. coli intrusion. Integral calculations showed that the peaks in the EC, pH, T, turbidity and UVAS SC data were detected corresponding to the time predicted. However, the pH and temperature peaks detected were barely above the measured baseline and could easily be mixed with the background noise. The results indicate that on-line monitoring can be utilized for the rapid detection of microbial contaminants in the drinking water distribution system although the peak interpretation has to be performed carefully to avoid being mixed up with normal variations in the measurement data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Inconsistencies in authoritative national paediatric workforce data sources.
Allen, Amy R; Doherty, Richard; Hilton, Andrew M; Freed, Gary L
2017-12-01
Objective National health workforce data are used in workforce projections, policy and planning. If data to measure the current effective clinical medical workforce are not consistent, accurate and reliable, policy options pursued may not be aligned with Australia's actual needs. The aim of the present study was to identify any inconsistencies and contradictions in the numerical count of paediatric specialists in Australia, and discuss issues related to the accuracy of collection and analysis of medical workforce data. Methods This study compared respected national data sources regarding the number of medical practitioners in eight fields of paediatric speciality medical (non-surgical) practice. It also counted the number of doctors listed on the websites of speciality paediatric hospitals and clinics as practicing in these eight fields. Results Counts of medical practitioners varied markedly for all specialties across the data sources examined. In some fields examined, the range of variability across data sources exceeded 450%. Conclusions The national datasets currently available from federal and speciality sources do not provide consistent or reliable counts of the number of medical practitioners. The lack of an adequate baseline for the workforce prevents accurate predictions of future needs to provide the best possible care of children in Australia. What is known about the topic? Various national data sources contain counts of the number of medical practitioners in Australia. These data are used in health workforce projections, policy and planning. What does this paper add? The present study found that the current data sources do not provide consistent or reliable counts of the number of practitioners in eight selected fields of paediatric speciality practice. There are several potential issues in the way workforce data are collected or analysed that cause the variation between sources to occur. What are the implications for practitioners? Without accurate data on which to base decision making, policy options may not be aligned with the actual needs of children with various medical needs, in various geographic areas or the nation as a whole.
A pilot study evaluating the prognostic utility of platelet indices in dogs with septic peritonitis.
Llewellyn, Efa A; Todd, Jeffrey M; Sharkey, Leslie C; Rendahl, Aaron
2017-09-01
To characterize platelet indices at time of diagnosis of septic peritonitis in dogs and to assess the relationship between platelet parameter data and survival to discharge in dogs treated surgically. Retrospective, observational, descriptive pilot study from 2009 to 2014. University teaching hospital. Forty-eight dogs diagnosed with septic peritonitis were included in this study. Thirty-six dogs had surgical source control. Blood samples from 46 healthy control dogs were used for reference interval (RI) generation. None. Dogs with septic peritonitis had significantly increased mean values for mean platelet volume (MPV), plateletcrit (PCT), and platelet distribution width (PDW) with increased proportions of dogs having values above the RI compared to healthy dogs. A significantly increased proportion of dogs with septic peritonitis had platelet counts above (12.5%) and below (8.3%) the RI, with no significant difference in mean platelet count compared to healthy dogs. No significant differences in the mean platelet count, MPV, PCT, or PDW were found between survivors and nonsurvivors in dogs with surgical source control; however, dogs with MPV values above the RI had significantly increased mortality compared to dogs within the RI (P = 0.025). Values outside the RI for other platelet parameters were not associated with significant differences in mortality. Dogs with septic peritonitis have increased frequency of thrombocytosis and thrombocytopenia with increased MPV, PCT, and PDW. An increased MPV may be a useful indicator of increased risk of mortality in dogs treated surgically. © Veterinary Emergency and Critical Care Society 2017.
NASA Astrophysics Data System (ADS)
Everett, Samantha
2010-10-01
A transmission curve experiment was carried out to measure the range of beta particles in aluminum in the health physics laboratory located on the campus of Texas Southern University. The transmission count rate through aluminum for varying radiation lengths was measured using beta particles emitted from a low activity (~1 μCi) Sr-90 source. The count rate intensity was recorded using a Geiger Mueller tube (SGC N210/BNC) with an active volume of 61 cm^3 within a systematic detection accuracy of a few percent. We compared these data with a realistic simulation of the experimental setup using the Geant4 Monte Carlo toolkit (version 9.3). The purpose of this study was to benchmark our Monte Carlo for future experiments as part of a more comprehensive research program. Transmission curves were simulated based on the standard and low-energy electromagnetic physics models, and using the radioactive decay module for the electrons primary energy distribution. To ensure the validity of our measurements, linear extrapolation techniques were employed to determine the in-medium beta particle range from the measured data and was found to be 1.87 g/cm^2 (~0.693 cm), in agreement with literature values. We found that the general shape of the measured data and simulated curves were comparable; however, a discrepancy in the relative count rates was observed. The origin of this disagreement is still under investigation.
Properties and Expected Number Counts of Active Galactic Nuclei and Their Hosts in the Far-infrared
NASA Astrophysics Data System (ADS)
Draper, A. R.; Ballantyne, D. R.
2011-03-01
Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far-infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nucleus (AGN) hosts. The FIR is also an important spectral region for observing AGNs which are heavily enshrouded by dust, such as Compton thick (CT) AGNs. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGNs in the FIR. Expected differential number counts of AGNs and their host galaxies are calculated in the Herschel bands. The expected contribution of AGNs and their hosts to the cosmic infrared background (CIRB) and the infrared luminosity density are also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at ~500 μm are an excellent tool in determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. The contribution of CT AGNs to the bright end differential number counts and to the bright source infrared luminosity density is a good test of AGN evolution models where quasars are triggered by major mergers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aima, M; Viscariello, N; Patton, T
Purpose: The aim of this work is to propose a method to optimize radioactive source localization (RSL) for non-palpable breast cancer surgery. RSL is commonly used as a guiding technique during surgery for excision of non-palpable tumors. A collimated hand-held detector is used to localize radioactive sources implanted in tumors. Incisions made by the surgeon are based on maximum observed detector counts, and tumors are subsequently resected based on an arbitrary estimate of the counts expected at the surgical margin boundary. This work focuses on building a framework to predict the detector counts expected throughout the procedure to improve surgical margins. Methods: A gamma detection system called the Neoprobe GDS was used for this work. The probe consists of a cesium zinc telluride crystal and a collimator. For this work, an I-125 Best Medical model 2301 source was used. The source was placed in three different phantoms: a PMMA phantom, a Breast (25-75) phantom (25% glandular/75% adipose tissue), and a Breast (75-25) phantom, each with a backscatter thickness of 6 cm. Counts detected by the probe were recorded with varying amounts of phantom thickness placed on top of the source. A calibration curve was generated using MATLAB based on the counts recorded for the calibration dataset acquired with the PMMA phantom. Results: The observed detector count data used as the validation set were accurately predicted to within ±3.2%, ±6.9%, and ±8.4% for the PMMA, Breast (75-25), and Breast (25-75) phantoms, respectively. The average difference between predicted and observed counts was -0.4%, 2.4%, and 1.4%, with a standard deviation of 1.2%, 1.8%, and 3.4%, for the PMMA, Breast (75-25), and Breast (25-75) phantoms, respectively. Conclusion: The results of this work provide a basis for characterization of a detector used for RSL. Counts were predicted to within ±9% for three different phantoms without the application of a density correction factor.
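The calibration step described above (counts versus overlying thickness) is, in essence, an exponential-attenuation fit. The study used MATLAB; the sketch below is a Python stand-in with invented I-125 count data, purely to illustrate the procedure:

```python
import numpy as np
from scipy import optimize

# hypothetical probe counts vs. phantom thickness above the source (cm)
thickness = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
counts = np.array([5155.0, 3127.0, 1897.0, 1150.0, 423.0, 156.0])

def attenuation(x, c0, mu):
    # simple exponential attenuation model: counts = c0 * exp(-mu * x)
    return c0 * np.exp(-mu * x)

(c0, mu), _ = optimize.curve_fit(attenuation, thickness, counts, p0=[9000.0, 1.0])
print(c0, mu)  # predicted counts at any depth d: attenuation(d, c0, mu)
```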
Gericke, M T; Bowman, J D; Carlini, R D; Chupp, T E; Coulter, K P; Dabaghyan, M; Desai, D; Freedman, S J; Gentile, T R; Gillis, R C; Greene, G L; Hersman, F W; Ino, T; Ishimoto, S; Jones, G L; Lauss, B; Leuschner, M B; Losowski, B; Mahurin, R; Masuda, Y; Mitchell, G S; Muto, S; Nann, H; Page, S A; Penttila, S I; Ramsay, W D; Santra, S; Seo, P-N; Sharapov, E I; Smith, T B; Snow, W M; Wilburn, W S; Yuan, V; Zhu, H
2005-01-01
The NPDGamma γ-ray detector has been built to measure, with high accuracy, the size of the small parity-violating asymmetry in the angular distribution of gamma rays from the capture of polarized cold neutrons by protons. The high cold neutron flux at the Los Alamos Neutron Scattering Center (LANSCE) spallation neutron source and control of systematic errors require the use of current mode detection with vacuum photodiodes and low-noise solid-state preamplifiers. We show that the detector array operates at counting statistics and that the asymmetries due to B4C and 27Al are zero to within 2 × 10^-6 and 7 × 10^-7, respectively. Boron and aluminum are used throughout the experiment. The results presented here are preliminary.
Moore, Ginny; Stevenson, David; Thompson, Katy-Anne; Parks, Simon; Ngabo, Didier; Bennett, Allan M; Walker, Jimmy T
2015-01-01
Hospital tap water is a recognised source of Pseudomonas aeruginosa. U.K. guidance documents recommend measures to control/minimise the risk of P. aeruginosa in augmented care units but these are based on limited scientific evidence. An experimental water distribution system was designed to investigate colonisation of hospital tap components. P. aeruginosa was injected into 27 individual tap 'assemblies'. Taps were subsequently flushed twice daily and contamination levels monitored over two years. Tap assemblies were systematically dismantled and assessed microbiologically and the effect of removing potentially contaminated components was determined. P. aeruginosa was repeatedly recovered from the tap water at levels above the augmented care alert level. The organism was recovered from all dismantled solenoid valves with colonisation of the ethylene propylene diene monomer (EPDM) diaphragm confirmed by microscopy. Removing the solenoid valves reduced P. aeruginosa counts in the water to below detectable levels. This effect was immediate and sustained, implicating the solenoid diaphragm as the primary contamination source.
Takemoto, Kazuya; Nambu, Yoshihiro; Miyazawa, Toshiyuki; Sakuma, Yoshiki; Yamamoto, Tsuyoshi; Yorozu, Shinichi; Arakawa, Yasuhiko
2015-09-25
Advances in single-photon sources (SPSs) and single-photon detectors (SPDs) promise unique applications in the field of quantum information technology. In this paper, we report long-distance quantum key distribution (QKD) by using state-of-the-art devices: a quantum-dot SPS (QD SPS) emitting a photon in the telecom band of 1.5 μm and a superconducting nanowire SPD (SNSPD). At the distance of 100 km, we obtained the maximal secure key rate of 27.6 bps without using decoy states, which is at least threefold larger than the rate obtained in the previously reported 50-km-long QKD experiment. We also succeeded in transmitting secure keys at the rate of 0.307 bps over 120 km. This is the longest QKD distance yet reported by using known true SPSs. The ultralow multiphoton emissions of our SPS and ultralow dark count of the SNSPD contributed to this result. The experimental results demonstrate the potential applicability of QD SPSs to practical telecom QKD networks.
Effects of 1-MeV gamma radiation on a multi-anode microchannel array detector tube
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Bybee, R. L.
1979-01-01
A multianode microchannel array (MAMA) detector tube without a photocathode was exposed to a total dose of 1,000,000 rads of 1-MeV gamma radiation from a Co-60 source. The high-voltage characteristic of the microchannel array plate, average dark count, gain, and resolution of pulse height distribution characteristics showed no degradation after this total dose. In fact, the degassing of the microchannels induced by the high radiation flux had the effect of cleaning up the array plate and improving its characteristics.
Characterizing isolated attosecond pulses with angular streaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.
Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Characterizing isolated attosecond pulses with angular streaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.
We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Characterizing isolated attosecond pulses with angular streaking
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...
2018-02-12
Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Characterizing isolated attosecond pulses with angular streaking
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.; ...
2018-02-13
We present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Cosmological Distance Scale to Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Azzam, W. J.; Linder, E. V.; Petrosian, V.
1993-05-01
The source counts or the so-called log N–log S relations are the primary data that constrain the spatial distribution of sources with unknown distances, such as gamma-ray bursts. In order to test galactic, halo, and cosmological models for gamma-ray bursts we compare theoretical characteristics of the log N–log S relations to those obtained from data gathered by the BATSE instrument on board the Compton Observatory (GRO) and other instruments. We use a new and statistically correct method, which takes proper account of the variable nature of the triggering threshold, to analyze the data. Constraints on models obtained by this comparison will be presented. This work is supported by NASA grants NAGW 2290, NAG5 2036, and NAG5 1578.
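For orientation, the benchmark against which such counts are judged is the homogeneous Euclidean prediction (standard background material, not a result of this abstract):

```latex
N(>S) \propto S^{-3/2}
\quad\Longrightarrow\quad
\log N(>S) = -\tfrac{3}{2}\,\log S + \mathrm{const}
```

A deficit of faint bursts relative to the -3/2 slope is the signature used to discriminate among galactic, halo, and cosmological spatial distributions.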
NASA Astrophysics Data System (ADS)
Czarski, T.; Chernyshova, M.; Malinowski, K.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zabolotny, W.
2016-11-01
A measurement system based on a gas electron multiplier detector has been developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed to estimate the energy and position distribution of an X-ray source. The central measurement task is the identification of each charge cluster and the estimation of its value and position. A fast and accurate serial data-acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of positions, time intervals, and cluster charge values corresponding to the energy spectra.
Czarski, T; Chernyshova, M; Malinowski, K; Pozniak, K T; Kasprowicz, G; Kolasinski, P; Krawczyk, R; Wojenski, A; Zabolotny, W
2016-11-01
A measurement system based on a gas electron multiplier detector has been developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed to estimate the energy and position distribution of an X-ray source. The central measurement task is the identification of each charge cluster and the estimation of its value and position. A fast and accurate serial data-acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of positions, time intervals, and cluster charge values corresponding to the energy spectra.
A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.
2011-11-02
Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist (classical) statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At such expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
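A minimal sketch of the Poisson-gamma version of such a screen, assuming a conjugate Gamma prior; the prior parameters and historical counts below are illustrative placeholders, not values from the report:

```python
import numpy as np
from scipy import stats

# historical background counts from new, uncontaminated detectors
# (illustrative values in the 0-3 count range mentioned above)
hist = np.array([0, 1, 0, 2, 0, 1, 1, 0, 3, 0, 1, 0])

a0, b0 = 0.5, 0.5                       # vague Gamma(shape, rate) prior (assumed)
a, b = a0 + hist.sum(), b0 + hist.size  # conjugate posterior for the Poisson rate

def tail_prob(observed):
    """P(count >= observed) under the posterior predictive, which is
    negative binomial; a small value flags possible contamination."""
    return stats.nbinom.sf(observed - 1, a, b / (b + 1.0))

print(tail_prob(3), tail_prob(8))  # e.g. flag a detector if below 0.01
```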
Hahn, Robert G
2017-01-01
A high number of blood cells increases the viscosity of the blood. The present study explored whether variations in blood cell counts are relevant to the distribution and elimination of infused crystalloid fluid. On three different occasions, 10 healthy male volunteers received an intravenous infusion of 25 mL/kg of Ringer's acetate, Ringer's lactate, and isotonic saline over 30 min. Blood hemoglobin and urinary excretion were monitored for 4 h and used as input in a two-volume kinetic model, using nonlinear mixed effects software. The covariates used in the kinetic model were red blood cell and platelet counts, the total leukocyte count, the use of isotonic saline, and the arterial pressure. Red blood cell and platelet counts in the upper end of the normal range were associated with a decreased rate of distribution and redistribution of crystalloid fluid. Simulations showed that high counts were correlated with volume expansion of the peripheral (interstitial) fluid space, while the plasma volume was less affected. In contrast, the total leukocyte count had no influence on the distribution, redistribution, or elimination. The use of isotonic saline caused a transient reduction in the systolic arterial pressure (P<0.05) and doubled the half-life of infused fluid in the body when compared to the two Ringer solutions. Isotonic saline did not decrease the serum potassium concentration, despite the fact that saline is potassium-free. High red blood cell and platelet counts are associated with peripheral accumulation of infused crystalloid fluid. Copyright © 2017 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Sp. z o.o. All rights reserved.
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
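A toy numerical illustration of such a normalized fidelity metric, using a plain histogram estimator of mutual information on simulated Poisson-counted pixels (this ignores the Markov-random-field structure developed in the paper and is only a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

def fidelity(source, mean_photons, bins=16):
    """Mutual information between source intensity and photon counts,
    normalized by the (binned) source entropy, so the result lies in [0, 1]."""
    lam = mean_photons * source / source.mean()
    photon_img = rng.poisson(lam)
    h, _, _ = np.histogram2d(source.ravel(), photon_img.ravel(), bins=bins)
    pxy = h / h.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    return mi / hx

# fidelity rises as photon-number uncertainty drops (more photons per pixel),
# consistent with the trend described in the abstract
x = np.linspace(0, 4 * np.pi, 64)
src = 1.5 + np.sin(x)[:, None] * np.sin(x)[None, :]
print(fidelity(src, mean_photons=0.5), fidelity(src, mean_photons=5.0))
```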
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor
2013-01-01
Recent interest in hybrid RF/Optical communications has led to the development and installation of a "polished-panel" optical receiver evaluation assembly on the 34-meter research antenna at Deep-Space Station 13 (DSS-13) at NASA's Goldstone Communications Complex. The test setup consists of a custom aluminum panel polished to optical smoothness, and a large-sensor CCD camera designed to image the point-spread function (PSF) generated by the polished aluminum panel. Extensive data have been obtained via real-time tracking and imaging of planets and stars at DSS-13. Both "on-source" and "off-source" data were recorded at various elevations, enabling the development of realistic simulations and analytic models to help determine the performance of future deep-space communications systems operating with on-off keying (OOK) or pulse-position-modulated (PPM) signaling formats with photon-counting detection, to be compared with the ultimate quantum bound on detection performance for these modulations. Experimentally determined PSFs were scaled to provide realistic signal distributions across a photon-counting detector array when a pulse is received, and uncoded as well as block-coded performance was analyzed and evaluated for a well-known class of block codes.
Mapping of Bird Distributions from Point Count Surveys
John R. Sauer; Grey W. Pendleton; Sandra Orsillo
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes...
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to exhibit overdispersion, and may also contain excess zeros. A simulation study was conducted to create scenarios in which accidents occur at a T-junction, under the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or the negative binomial distribution, for sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. Model fits were validated and compared, and the simulation results show that, for each sample size, not every model fits the data well even when the data are generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.
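A minimal sketch of the comparison described, with simulated overdispersed counts fitted by Poisson and negative binomial maximum likelihood and ranked by AIC (all parameters are invented, and the hurdle component is omitted for brevity):

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)

# overdispersed "accident counts" drawn from a negative binomial
y = rng.negative_binomial(2.0, 0.4, size=200)

# Poisson fit: the MLE of the rate is the sample mean
ll_pois = stats.poisson.logpmf(y, y.mean()).sum()

# negative binomial fit: maximize the log-likelihood over (n, p)
def nll(theta):
    n, p = theta
    return -stats.nbinom.logpmf(y, n, p).sum()

res = optimize.minimize(nll, x0=[1.0, 0.5],
                        bounds=[(1e-3, 50.0), (1e-3, 1 - 1e-3)])
ll_nb = -res.fun

print(2 * 1 - 2 * ll_pois, 2 * 2 - 2 * ll_nb)  # AIC: NB should win here
```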
Towards a census of high-redshift dusty galaxies with Herschel. A selection of "500 μm-risers"
NASA Astrophysics Data System (ADS)
Donevski, D.; Buat, V.; Boone, F.; Pappalardo, C.; Bethermin, M.; Schreiber, C.; Mazyed, F.; Alvarez-Marquez, J.; Duivenvoorden, S.
2018-06-01
Context. Over the last decade a large number of dusty star-forming galaxies has been discovered up to redshift z = 2-3 and recent studies have attempted to push the highly confused Herschel SPIRE surveys beyond that distance. To search for z ≥ 4 galaxies they often consider the sources with fluxes rising from 250 μm to 500 μm (so-called "500 μm-risers"). Herschel surveys offer a unique opportunity to efficiently select a large number of these rare objects, and thus gain insight into the prodigious star-forming activity that takes place in the very distant Universe. Aims: We aim to implement a novel method to obtain a statistical sample of 500 μm-risers and fully evaluate our selection inspecting different models of galaxy evolution. Methods: We consider one of the largest and deepest Herschel surveys, the Herschel Virgo Cluster Survey. We develop a novel selection algorithm which links the source extraction and spectral energy distribution fitting. To fully quantify selection biases we make end-to-end simulations including clustering and lensing. Results: We select 133 500 μm-risers over 55 deg^2, imposing the criteria: S500 > S350 > S250, S250 > 13.2 mJy and S500 > 30 mJy. Differential number counts are in fairly good agreement with models, displaying a better match than other existing samples. The estimated fraction of strongly lensed sources is 24 (+6/-5)% based on models. Conclusions: We present the faintest sample of 500 μm-risers down to S250 = 13.2 mJy. We show that noise and strong lensing have an important impact on measured counts and redshift distribution of selected sources. We estimate the flux-corrected star formation rate density at 4 < z < 5 with the 500 μm-risers and find it to be close to the total value measured in far-infrared. This indicates that colour selection is not a limiting effect to search for the most massive, dusty z > 4 sources.
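The quoted selection reduces to a few vectorized flux comparisons; a minimal sketch (the function name and toy catalogue are ours):

```python
import numpy as np

def is_500um_riser(s250, s350, s500):
    """Apply the paper's cuts: rising SPIRE fluxes S500 > S350 > S250,
    with S250 > 13.2 mJy and S500 > 30 mJy (all flux densities in mJy)."""
    return (s500 > s350) & (s350 > s250) & (s250 > 13.2) & (s500 > 30.0)

# toy catalogue of three sources
s250 = np.array([15.0, 20.0, 12.0])
s350 = np.array([22.0, 18.0, 20.0])
s500 = np.array([35.0, 25.0, 31.0])
print(is_500um_riser(s250, s350, s500))  # [ True False False]
```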
An Improved Statistical Point-source Foreground Model for the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Murray, S. G.; Trott, C. M.; Jordan, C. H.
2017-08-01
We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
NASA Technical Reports Server (NTRS)
Siemiginowska, Aneta
2001-01-01
The predicted counts for the ASCA observation were much higher than the counts actually observed for the quasar. However, there are three weak hard X-ray sources in the GIS field. We are adding them to the source counts in our modeling of the hard X-ray background. The work is in progress. We have published a paper in Ap.J. on the luminosity function and quasar evolution. Based on the theory described in this paper, we are predicting the number of sources and their contribution to the X-ray background at different redshifts. These model predictions will be compared to the observed data in the final paper.
Kurosaki, Hiromu; Mueller, Rebecca J.; Lambert, Susan B.; ...
2016-07-15
An alternate method of preparing actinide alpha counting sources was developed in place of electrodeposition or lanthanide fluoride micro-precipitation. The method uses lanthanide hydroxide micro-precipitation to avoid the use of hazardous hydrofluoric acid. It also provides a quicker, simpler, and safer way of preparing actinide alpha counting sources in routine, production-type laboratories that process many samples daily.
NASA Astrophysics Data System (ADS)
Rogov, A.; Pepyolyshev, Yu.; Carta, M.; d'Angelo, A.
Scintillation detectors (SDs) are widely used in neutron and gamma spectrometry in count mode. Organic scintillators operating in count mode are well studied; they are usually applied to measure the amplitude and time distributions of pulses caused by single interaction events of neutrons or gammas with the scintillator material. In a large area of scientific research, however, scintillation detectors can instead be used in current mode, by recording the average current from the detector, for example in measurements of the neutron pulse shape at pulsed reactors or other pulsed neutron sources. To collect a sufficiently large volume of experimental data at pulsed neutron sources, a current-mode detector must be used for the registration of fast neutrons. Many parameters of the SD change in the transition from counting mode to current mode; for example, the detector efficiency differs between the two modes, and many effects connected with timing accuracy become substantial. Moreover, for the registration of solely fast neutrons, as required in many measurements in the mixed radiation field of a pulsed neutron source, the SD efficiency has to be determined with a gamma-radiation shield present. Until now, no calculations or experimental data on SD current-mode operation have been available. The response functions of the detectors can be either measured in high-precision reference fields or calculated by computer simulation. We used the MCNP code [1] and carried out experiments to investigate the performance of the plastic scintillator in current mode. Numerous programs perform simulations similar to the MCNP code: for neutrons [2-4], and for photons [5-8]. However, all of the known codes (SCINFUL, NRESP4, SANDYL, EGS4) have more stringent restrictions on the source, geometry, and detector characteristics. In the MCNP code many of these restrictions are absent, and one only needs to write special additions for proton and electron recoil and for the transfer of energy to light output. These code modifications allow taking into account all processes in the organic scintillator that influence the light yield.
Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V
2014-11-30
We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
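For reference, the two mixture forms being contrasted can be written as follows (standard definitions in our notation; the last line schematically shows the functional linear predictor entering through the Poisson rate):

```latex
% zero-inflated Poisson (ZIP): point mass at zero mixed with a full Poisson
P(Y=0) = \pi + (1-\pi)e^{-\lambda}, \qquad
P(Y=k) = (1-\pi)\,\frac{e^{-\lambda}\lambda^{k}}{k!}, \quad k \ge 1
% hurdle: point mass at zero mixed with a zero-truncated Poisson
P(Y=0) = \pi_0, \qquad
P(Y=k) = (1-\pi_0)\,\frac{e^{-\lambda}\lambda^{k}}{k!\,(1-e^{-\lambda})}, \quad k \ge 1
% functional linear predictor for subject i (schematic)
\log\lambda_i = \alpha + \mathbf{Z}_i^{\top}\boldsymbol{\gamma} + \int X_i(t)\,\beta(t)\,dt
```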
Long-term spatial heterogeneity in mallard distribution in the Prairie pothole region
Janke, Adam K.; Anteau, Michael J.; Stafford, Joshua D.
2017-01-01
The Prairie Pothole Region (PPR) of north-central United States and south-central Canada supports greater than half of all breeding mallards (Anas platyrhynchos) annually counted in North America and is the focus of widespread conservation and research efforts. Allocation of conservation resources for this socioeconomically important population would benefit from an understanding of the nature of spatiotemporal variation in distribution of breeding mallards throughout the 850,000 km2 landscape. We used mallard counts from the Waterfowl Breeding Population and Habitat Survey to test for spatial heterogeneity and identify high- and low-abundance regions of breeding mallards over a 50-year time series. We found strong annual spatial heterogeneity in all years: 90% of mallards counted annually were on an average of only 15% of surveyed segments. Using a local indicator of spatial autocorrelation, we found a relatively static distribution of low-count clusters in northern Montana, USA, and southern Alberta, Canada, and a dynamic distribution of high-count clusters throughout the study period. Distribution of high-count clusters shifted southeast from northwestern portions of the PPR in Alberta and western Saskatchewan, Canada, to North and South Dakota, USA, during the latter half of the study period. This spatial redistribution of core mallard breeding populations was likely driven by interactions between environmental variation that created favorable hydrological conditions for wetlands in the eastern PPR and dynamic land-use patterns related to upland cropping practices and government land-retirement programs. Our results highlight an opportunity for prioritizing relatively small regions within the PPR for allocation of wetland and grassland conservation for mallard populations. However, the extensive spatial heterogeneity in core distributions over our study period suggests such spatial prioritization will have to overcome challenges presented by dynamic land-use and climate patterns in the region, and thus merits additional monitoring and empirical research to anticipate future population distribution. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
State traffic volume systems council estimation process.
DOT National Transportation Integrated Search
2004-10-01
The Kentucky Transportation Cabinet has an immense traffic data collection program that is an essential source for many other programs. The Division of Planning processes traffic volume counts annually. These counts are maintained in the Counts Datab...
DC KIDS COUNT e-Databook Indicators
ERIC Educational Resources Information Center
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
Sources and magnitude of sampling error in redd counts for bull trout
Jason B. Dunham; Bruce Rieman
2001-01-01
Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...
Computer measurement of particle sizes in electron microscope images
NASA Technical Reports Server (NTRS)
Hall, E. L.; Thompson, W. B.; Varsi, G.; Gauldin, R.
1976-01-01
Computer image processing techniques have been applied to particle counting and sizing in electron microscope images. Distributions of particle sizes were computed for several images and compared to manually computed distributions. The results of these experiments indicate that automatic particle counting is feasible within reasonable error and computer processing time. The significance of the results is that the tedious task of manually counting a large number of particles can be eliminated while still providing the scientist with accurate results.
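In current terms, the counting-and-sizing step amounts to thresholding followed by connected-component labelling; a minimal sketch on a synthetic micrograph (shapes, intensities, and threshold are all invented for illustration):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# synthetic "micrograph": dark circular particles on a bright noisy background
img = rng.normal(200.0, 10.0, (256, 256))
yy, xx = np.mgrid[:256, :256]
for cy, cx, r in [(60, 60, 8), (128, 190, 5), (200, 90, 12)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 80.0

mask = img < 140.0                        # threshold isolates the particles
labels, n = ndimage.label(mask)           # connected-component labelling
areas = ndimage.sum(mask, labels, index=range(1, n + 1))
diameters = 2.0 * np.sqrt(areas / np.pi)  # equivalent-circle diameter (pixels)
print(n, np.sort(diameters))              # 3 particles of ~10, ~16, ~24 px
```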
Photon counting statistics analysis of biophotons from hands.
Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup
2003-05-01
The photon counting statistics of biophotons emitted from hands is studied to test its agreement with the Poisson distribution. The moments of the observed probability up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the theoretical values of the Poisson distribution, while those of the dark counts of the photomultiplier tube show large deviations. The present results are consistent with the conventional delta-value analysis of the second moment of probability.
Li, Aiwei; Yang, Shuo; Zhang, Jie; Qiao, Rui
2017-11-01
To observe the changes of complete blood count (CBC) parameters during pregnancy and establish appropriate reference intervals for healthy pregnant women. Healthy pregnant women took the blood tests at all trimesters. All blood samples were processed on a Sysmex XE-2100. The following CBC parameters were analyzed: red blood cell count (RBC), hemoglobin (Hb), hematocrit (Hct), mean corpuscular volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC), red blood cell distribution width (RDW), platelet count (PLT), mean platelet volume (MPV), platelet distribution width (PDW), white blood cell count (WBC), and leukocyte differential count. Reference intervals were established using the 2.5th and 97.5th percentiles of the distribution. Complete blood count parameters showed dynamic changes across trimesters. RBC, Hb, and Hct declined at trimester 1, reached their lowest point at trimester 2, and began to rise again at trimester 3. WBC, neutrophil count (Neut), monocyte count (MONO), RDW, and PDW went up from trimester 1 to trimester 3. On the contrary, MCHC, lymphocyte count (LYMPH), PLT, and MPV gradually descended during pregnancy. All CBC parameters differed significantly between pregnant women and normal women, regardless of trimester (P<.001). The medians obtained (normal vs pregnancy) were as follows: RBC 4.50 vs 3.94×10^12/L, Hb 137 vs 120 g/L, WBC 5.71 vs 9.06×10^9/L, LYMPH% 32.2 vs 18.0, Neut% 58.7 vs 75.0, and PLT 251 vs 202×10^9/L. The changes of CBC parameters during pregnancy are described, and reference intervals for Beijing pregnant women are demonstrated in this study. © 2017 Wiley Periodicals, Inc.
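The interval construction itself is direct; a sketch with simulated platelet counts (the distribution parameters are illustrative, not the study's data):

```python
import numpy as np

def reference_interval(values):
    """Nonparametric reference interval from the 2.5th and 97.5th
    percentiles, as used in the study."""
    return np.percentile(values, [2.5, 97.5])

rng = np.random.default_rng(5)
plt_counts = rng.normal(202.0, 45.0, 300).clip(min=50.0)  # x10^9/L, simulated
print(reference_interval(plt_counts))
```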
A sampling device for counting insect egg clusters and measuring vertical distribution of vegetation
Robert L. Talerico; Robert W., Jr. Wilson
1978-01-01
The use of a vertical sampling pole that delineates known volumes and position is illustrated and demonstrated for counting egg clusters of N. sertifer. The pole can also be used to estimate vertical and horizontal coverage, distribution or damage of vegetation or foliage.
First β-ν correlation measurement from the recoil-energy spectrum of Penning trapped Ar35 ions
NASA Astrophysics Data System (ADS)
Van Gorp, S.; Breitenfeldt, M.; Tandecki, M.; Beck, M.; Finlay, P.; Friedag, P.; Glück, F.; Herlert, A.; Kozlov, V.; Porobic, T.; Soti, G.; Traykov, E.; Wauters, F.; Weinheimer, Ch.; Zákoucký, D.; Severijns, N.
2014-08-01
We demonstrate a novel method to search for physics beyond the standard model by determining the β-ν angular correlation from the recoil-ion energy distribution after β decay of ions stored in a Penning trap. This recoil-ion energy distribution is measured with a retardation spectrometer. The unique combination of the spectrometer with a Penning trap provides a number of advantages, e.g., a high recoil-ion count rate and low sensitivity to the initial position and velocity distribution of the ions and completely different sources of systematic errors compared to other state-of-the-art experiments. Results of a first measurement with the isotope Ar35 are presented. Although currently at limited precision, we show that a statistical precision of about 0.5% is achievable with this unique method, thereby opening up the possibility of contributing to state-of-the-art searches for exotic currents in weak interactions.
The continuum spectral characteristics of gamma ray bursts observed by BATSE
NASA Technical Reports Server (NTRS)
Pendleton, Geoffrey N.; Paciesas, William S.; Briggs, Michael S.; Mallozzi, Robert S.; Koshut, Tom M.; Fishman, Gerald J.; Meegan, Charles A.; Wilson, Robert B.; Harmon, Alan B.; Kouveliotou, Chryssa
1994-01-01
Distributions of the continuum spectral characteristics of 260 bursts in the first Burst and Transient Source Experiment (BATSE) catalog are presented. The data are derived from flux ratios calculated from the BATSE Large Area Detector (LAD) four channel discriminator data. The data are converted from counts to photons using a direct spectral inversion technique to remove the effects of atmospheric scattering and the energy dependence of the detector angular response. Although there are intriguing clusterings of bursts in the spectral hardness ratio distributions, no evidence for the presence of distinct burst classes based on spectral hardness ratios alone is found. All subsets of bursts selected for their spectral characteristics in this analysis exhibit spatial distributions consistent with isotropy. The spectral diversity of the burst population appears to be caused largely by the highly variable nature of the burst production mechanisms themselves.
How to retrieve additional information from the multiplicity distributions
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2017-01-01
Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
Pitkänen, Tarja; Miettinen, Ilkka T; Nakari, Ulla-Maija; Takkinen, Johanna; Nieminen, Kalle; Siitonen, Anja; Kuusi, Markku; Holopainen, Arja; Hänninen, Marja-Liisa
2008-09-01
After heavy rains, Campylobacter jejuni together with high counts of Escherichia coli, other coliforms and intestinal enterococci was detected in the drinking water of a municipal distribution system in eastern Finland in August 2004. Three patients with a positive C. jejuni finding, who had drunk the contaminated water, were identified and interviewed. The pulsed-field gel electrophoresis (PFGE) genotypes from the patient samples were identical to some of the genotypes isolated from the water of the suspected contamination source. In addition, repetitive DNA element analysis (rep-PCR) revealed identical patterns of E. coli and other coliform isolates along the distribution line. Further on-site technical investigations revealed that one of the two rainwater gutters on the roof of the water storage tower had been in an incorrect position and rainwater had flushed a large amount of faecal material from wild birds into the drinking water. The investigation required close co-operation between civil authorities, and the application of cultivation and genotyping techniques strongly suggested that the municipal drinking water was the source of the infections. The faecal contamination, associated with failures in cleaning and technical management, stresses the importance of instructing waterworks personnel to perform maintenance work properly.
The displacement of the sun from the galactic plane using IRAS and FAUST source counts
NASA Technical Reports Server (NTRS)
Cohen, Martin
1995-01-01
I determine the displacement of the Sun from the Galactic plane by interpreting IRAS point-source counts at 12 and 25 microns in the Galactic polar caps using the latest version of the SKY model for the point-source sky (Cohen 1994). A value of Z_Sun = 15.5 +/- 0.7 pc north of the plane provides the best match to the ensemble of useful IRAS data. Shallow K-band counts in the north Galactic pole are also best fitted by this offset, while limited FAUST far-ultraviolet counts at 1660 A near the same pole favor a value near 14 pc. Combining the many IRAS determinations with the few FAUST values suggests that a value of Z_Sun = 15.0 +/- 0.5 pc (internal error only) would satisfy these high-latitude sets of data in both wavelength regimes, within the context of the SKY model.
Artifact reduction in the CSPAD detectors used for LCLS experiments.
Pietrini, Alberto; Nettelblad, Carl
2017-09-01
The existence of noise and column-wise artifacts in the CSPAD-140K detector and in a module of the CSPAD-2.3M large camera, respectively, is reported for the L730 and L867 experiments performed at the CXI Instrument at the Linac Coherent Light Source (LCLS) in a low-flux, low signal-to-noise regime. Possible remedies are discussed and an additional step in the preprocessing of data is introduced, which consists of performing a median subtraction along the columns of the detector modules. Thus, we reduce the overall variation in the photon count distribution, lowering the mean false-positive photon detection rate by about 4% (from 5.57 × 10⁻⁵ to 5.32 × 10⁻⁵ photon counts pixel⁻¹ frame⁻¹ in L867, cxi86715) and 7% (from 1.70 × 10⁻³ to 1.58 × 10⁻³ photon counts pixel⁻¹ frame⁻¹ in L730, cxi73013), and the standard deviation in false-positive photon counts per shot by 15% and 35%, respectively, while not making our average photon detection threshold more stringent. Such improvements in detector noise reduction and artifact removal constitute a step forward in the development of flash X-ray imaging and serial nano-crystallography techniques for high-resolution, low-signal experiments at X-ray free-electron laser facilities.
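The column-wise median subtraction described above is simple to express in array code. The following is a minimal sketch, assuming one NumPy array per detector module; the frame dimensions and the synthetic stripe are illustrative assumptions, not the actual CSPAD pipeline.

    import numpy as np

    def subtract_column_medians(frame):
        """Suppress column-wise pedestal artifacts by subtracting, from each
        column of a module, the median over that column."""
        frame = np.asarray(frame, dtype=float)
        return frame - np.median(frame, axis=0, keepdims=True)

    # Illustrative frame with a synthetic column artifact.
    rng = np.random.default_rng(1)
    frame = rng.normal(0.0, 1.0, size=(185, 388))
    frame[:, 100] += 5.0                      # fake stripe artifact
    cleaned = subtract_column_medians(frame)
    print(cleaned[:, 100].mean())             # stripe offset largely removed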
A new multivariate zero-adjusted Poisson model with applications to biomedicine.
Liu, Yin; Tian, Guo-Liang; Tang, Man-Lai; Yuen, Kam Chuen
2018-05-25
Recently, although advances have been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, ) cannot be used to fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot be used to model zero-truncated/deflated count data and is difficult to apply to high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations among components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then develop its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of which θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ≈ 2.51 times the squared PSF width σ_PSF². While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
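For reference, Eq. (17) of Li & Ma (1983) and the θ-optimisation it motivates can be sketched as follows. The Gaussian-PSF containment fraction, background density, and exposure ratio below are illustrative assumptions, not values from the paper.

    import numpy as np

    def li_ma_significance(n_on, n_off, alpha):
        # Eq. (17) of Li & Ma (1983); alpha is the on/off exposure ratio.
        n_tot = n_on + n_off
        t_on = n_on * np.log((1.0 + alpha) / alpha * n_on / n_tot)
        t_off = n_off * np.log((1.0 + alpha) * n_off / n_tot)
        return np.sqrt(2.0 * (t_on + t_off))

    # Illustrative scan of the on-region radius theta for a Gaussian PSF of
    # width sigma and a flat background (all numbers are assumptions).
    sigma, n_signal, bkg_density, alpha = 0.1, 50.0, 5000.0, 0.2
    thetas = np.linspace(0.05, 0.5, 200)
    contained = n_signal * (1.0 - np.exp(-thetas**2 / (2.0 * sigma**2)))
    n_bkg = bkg_density * np.pi * thetas**2       # background in the on-region
    sig = li_ma_significance(contained + n_bkg, n_bkg / alpha, alpha)
    best = thetas[np.argmax(sig)]
    print(best**2 / sigma**2)  # roughly the quoted 2.51 in the weak-signal limit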
Muller, Benjamin J.; Cade, Brian S.; Schwarzkoph, Lin
2018-01-01
Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
NASA Technical Reports Server (NTRS)
Hooke, A. J.
1979-01-01
A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system consists thus of a format on a message rather than word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
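As an illustration of the packet idea, the sketch below packs a hypothetical source packet header (application ID, source sequence count, total data length) in front of an instrument data field. The field names and widths are invented for this example and are not the NEEDS specification.

    import struct

    # Hypothetical layout, for illustration only: three big-endian uint16
    # fields followed by the instrument data field.
    SOURCE_PACKET_HEADER = struct.Struct(">HHH")

    def make_source_packet(apid, seq_count, data):
        # The 14-bit mask on the sequence count is an assumption for this sketch.
        header = SOURCE_PACKET_HEADER.pack(apid, seq_count & 0x3FFF, len(data))
        return header + data

    packet = make_source_packet(apid=0x42, seq_count=7, data=b"\x01\x02\x03")
    apid, seq, length = SOURCE_PACKET_HEADER.unpack(packet[:6])
    print(apid, seq, length)  # 66 7 3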
Sources and dispersive modes of micro-fibers in the environment.
Carr, Steve A
2017-05-01
Understanding the sources and distribution of microfibers (MFs) in the environment is critical if control and remediation measures are to be effective. Microfibers comprise an overwhelming fraction (>85%) of microplastic debris found on shorelines around the world. Although primary sources have not been fully vetted, until recently it was widely believed that domestic laundry discharges were the major source. It was also thought that synthetic fibers and particles having dimensions <5 mm easily bypassed filtration and other solid separation processes at wastewater treatment plants (WWTPs) and entered oceans and surface waters. A more thorough assessment of WWTP effluent discharges indicates, however, that fiber and particulate counts do not support the belief that plants are the primary vectors for fibers entering the environment. This finding may bolster concerns that active and pervasive shedding of fibers from common fabrics and textiles could be contributing significantly, via direct pathways, to burgeoning environmental loads. Integr Environ Assess Manag 2017;13:466-469. © 2017 SETAC.
Search for optical bursts from the gamma ray burst source GBS 0526-66
NASA Astrophysics Data System (ADS)
Seetha, S.; Sreenivasaiah, K. V.; Marar, T. M. K.; Kasturirangan, K.; Rao, U. R.; Bhattacharyya, J. C.
1985-08-01
Attempts were made to detect optical bursts from the gamma-ray burst source GBS 0526-66 from Dec. 31, 1984 to Jan. 2, 1985 and from Feb. 23 to 24, 1985, using the one-meter reflector of the Kavalur Observatory. Jan. 1, 1985 coincided with the zero phase of the predicted 164-day period of burst activity from the source (Rothschild and Lingenfelter, 1984). A new optical burst photon counting system with adjustable trigger threshold was used in parallel with a high-speed photometer for the observations. The best time resolution was 1 ms and the maximum count rate capability was 255,000 counts s⁻¹. Details of the instrumentation and observational results are presented.
The Atacama Cosmology Telescope: Extragalactic Sources at 148 GHz in the 2008 Survey
NASA Technical Reports Server (NTRS)
Marriage, T. A.; Juin, J. B.; Lin, Y. T.; Marsden, D.; Nolta, M. R.; Partridge, B.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.;
2011-01-01
We report on extragalactic sources detected in a 455 square-degree map of the southern sky made with data at a frequency of 148 GHz from the Atacama Cosmology Telescope 2008 observing season. We provide a catalog of 157 sources with flux densities spanning two orders of magnitude: from 15 mJy to 1500 mJy. Comparison to other catalogs shows that 98% of the ACT detections correspond to sources detected at lower radio frequencies. Three of the sources appear to be associated with the brightest cluster galaxies of low-redshift X-ray-selected galaxy clusters. Estimates of the radio to mm-wave spectral indices and differential counts of the sources further bolster the hypothesis that they are nearly all radio sources, and that their emission is not dominated by re-emission from warm dust. In a bright (>50 mJy) 148 GHz-selected sample with complete cross-identifications from the Australia Telescope 20 GHz survey, we observe an average steepening of the spectra between 5, 20, and 148 GHz with median spectral indices of α(5-20) = -0.07 ± 0.06, α(20-148) = -0.39 ± 0.04, and α(5-148) = -0.20 ± 0.03. When the measured spectral indices are taken into account, the 148 GHz differential source counts are consistent with previous measurements at 30 GHz in the context of a source count model dominated by radio sources. Extrapolating with an appropriately rescaled model for the radio source counts, the Poisson contribution to the spatial power spectrum from synchrotron-dominated sources with flux density less than 20 mJy is C(Sync) = (2.8 ± 0.3) × 10⁻⁶ μK².
Conventional plating methods were used to quantify heterotrophic bacteria from a drinking water distribution system. Three media, plate count agar (PCA), R2A agar and sheep blood agar (TSA-SB) were used to determine heterotrophic plate count (HPC) levels. Grab samples were collec...
In vitro ovine articular chondrocyte proliferation: experiments and modelling.
Mancuso, L; Liuzzo, M I; Fadda, S; Pisu, M; Cincotti, A; Arras, M; La Nasa, G; Concas, A; Cao, G
2010-06-01
This study focuses on analysis of in vitro cultures of chondrocytes from ovine articular cartilage. Isolated cells were seeded in Petri dishes, then expanded to confluence and phenotypically characterized by flow cytometry. The sigmoidal temporal profile of total counts was obtained by classic haemocytometry and corresponding cell size distributions were measured electronically using a Coulter Counter. A mathematical model recently proposed (1) was adopted for quantitative interpretation of these experimental data. The model is based on a 1-D (that is, mass-structured), single-staged population balance approach capable of taking into account contact inhibition at confluence. The model's parameters were determined by fitting measured total cell counts and size distributions. Model reliability was verified by predicting cell proliferation counts and corresponding size distributions at culture times longer than those used when tuning the model's parameters. It was found that adoption of cell mass as the intrinsic characteristic of a growing chondrocyte population enables sigmoidal temporal profiles of total counts in the Petri dish, as well as cell size distributions at 'balanced growth', to be adequately predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarski, T., E-mail: tomasz.czarski@ifpilm.pl; Chernyshova, M.; Malinowski, K.
2016-11-15
The measurement system based on a gas electron multiplier detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimation of the energy and position distribution of an X-ray source. The central measurement task is identification of each charge cluster and estimation of its value and position. A fast and accurate serial data-acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of positions, time intervals, and cluster charge values corresponding to the energy spectra.
Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.
He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L
2015-10-01
Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
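A ZIB distribution mixes a point mass at zero with a binomial. A minimal pmf sketch follows; the parameters (30-day period, 40% structural zeros) are illustrative, not estimates from the paper.

    import numpy as np
    from scipy.stats import binom

    def zib_pmf(k, n, p, pi0):
        """Zero-inflated binomial pmf: with probability pi0 a structural
        zero, otherwise Binomial(n, p). k may be a scalar or an array."""
        k = np.asarray(k)
        pmf = (1.0 - pi0) * binom.pmf(k, n, p)
        return np.where(k == 0, pi0 + pmf, pmf)

    # Illustration: days of drinking in a 30-day period, 40% structural zeros.
    k = np.arange(31)
    print(zib_pmf(k, n=30, p=0.2, pi0=0.4).sum())  # sums to 1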
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or numbers of events per time interval, are discrete data arising from repeated time-to-event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
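A quick simulation makes the overdispersion diagnostic concrete: a gamma-mixed Poisson keeps the mean of a pure Poisson but inflates the variance. The rates and sample sizes below are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    # Pure Poisson counts: variance equals the mean.
    poisson = rng.poisson(lam=4.0, size=10_000)

    # Gamma-mixed Poisson (equivalent to a negative binomial): same mean,
    # inflated variance, a standard way to represent overdispersion.
    rates = rng.gamma(shape=2.0, scale=2.0, size=10_000)   # mean rate 4.0
    mixed = rng.poisson(lam=rates)

    print(poisson.mean(), poisson.var())   # variance close to the mean
    print(mixed.mean(), mixed.var())       # variance well above the mean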
NASA Astrophysics Data System (ADS)
Mayer, D. P.; Kite, E. S.
2016-12-01
Sandblasting, aeolian infilling, and wind deflation all obliterate impact craters on Mars, complicating the use of crater counts for chronology, particularly on sedimentary rock surfaces. However, crater counts on sedimentary rocks can be exploited to constrain wind erosion rates. Relatively small, shallow craters are preferentially obliterated as a landscape undergoes erosion, so the size-frequency distribution of impact craters in a landscape undergoing steady exhumation will develop a shallower power-law slope than a simple production function. Estimating erosion rates is important for several reasons: (1) Wind erosion is a source of mass for the global dust cycle, so the global dust reservoir will disproportionately sample fast-eroding regions; (2) The pace and pattern of recent wind erosion is a sorely-needed constraint on models of the sculpting of Mars' sedimentary-rock mounds; (3) Near-surface complex organic matter on Mars is destroyed by radiation in <10⁸ years, so high rates of surface exhumation are required for preservation of near-surface organic matter. We use crater counts from 18 HiRISE images over sedimentary rock deposits as the basis for estimating erosion rates. Each image was counted by ≥3 analysts and only features agreed on by ≥2 analysts were included in the erosion rate estimation. Erosion rates range from 0.1 to 0.2 μm/yr across all images. These rates represent an upper limit on surface erosion by landscape lowering. At the conference we will discuss the within- and between-image variability of erosion rates and their implications for recent geological processes on Mars.
The association of trail use with weather-related factors on an urban greenway.
Burchfield, Ryan A; Fitzhugh, Eugene C; Bassett, David R
2012-02-01
To study the association between weather-related measures and objectively measured trail use across 3 seasons. Weather has been reported as a barrier to outdoor physical activity (PA), but previous studies have explained only a small amount of the variance in PA using weather-related measures. The dependent variable of this study was trail use, measured as mean hourly trail counts recorded by an infrared trail counter located on a greenway. Each trail count represents one person breaking the infrared beam of the counter. Weather-related measures were obtained from two sources: a site-specific weather station and a public-domain weather source. Temperature, relative humidity, and precipitation were significantly correlated with trail counts recorded during daylight hours. The more precise hourly weather-related measures explained 42% of the variance in trail counts, regardless of the weather data source, with temperature alone explaining 18% of the variance. After controlling for all seasonal and weekly factors, every 1°F increase in temperature was associated with an increase of 1.1 trail counts/hr up to 76°F, at which point trail use began to decrease slightly. Weather-related factors have a moderate association with trail use along an urban greenway.
Assessment of some important factors affecting the singing-ground survey
Tautin, J.
1982-01-01
A brief history of the procedures used to analyze singing-ground survey data is outlined. Some weaknesses associated with the analytical procedures are discussed, and preliminary results of efforts to improve the procedures are presented. The most significant finding to date is that counts made by new observers need not be omitted when calculating an index of the woodcock population. Also, the distribution of woodcock heard singing, with respect to time after sunset, affirms the appropriateness of recommended starting times for counting woodcock. Woodcock count data fit the negative binomial probability distribution.
Avalanche photodiode photon counting receivers for space-borne lidars
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic M.
1991-01-01
Avalanche photodiodes (APDs) are studied for use as photon counting detectors in spaceborne lidars. Non-breakdown APD photon counters, in which the APDs are biased below the breakdown point, are shown to outperform both conventional APD photon counters biased above the breakdown point and APDs in analog mode when the received optical signal is extremely weak. Non-breakdown APD photon counters were shown experimentally to achieve an effective photon counting quantum efficiency of 5.0 percent at λ = 820 nm with a dead time of 15 ns and a dark count rate of 7000/s, which agreed with the theoretically predicted values. The interarrival times of the counts followed an exponential distribution and the counting statistics appeared to follow a Poisson distribution with no afterpulsing. It is predicted that the effective photon counting quantum efficiency can be improved to 18.7 percent at λ = 820 nm and 1.46 percent at λ = 1060 nm with a dead time of a few nanoseconds by using more advanced commercially available electronic components.
Within-site variability in surveys of wildlife populations
Link, William A.; Barker, Richard J.; Sauer, John R.; Droege, Sam
1994-01-01
Most large-scale surveys of animal populations are based on counts of individuals observed during a sampling period, which are used as indexes to the population. The variability in these indexes not only reflects variability in population sizes among sites but also variability due to the inexactness of the counts. Repeated counts at survey sites can be used to document this additional source of variability and, in some applications, to mitigate its effects. We present models for evaluating the proportion of total variability in counts that is attributable to this within-site variability and apply them in the analysis of data from repeated counts on routes from the North American Breeding Bird Survey. We analyzed data on 98 species, obtaining estimates of these percentages, which ranged from 3.5 to 100% with a mean of 36.25%. For at least 14 of the species, more than half of the variation in counts was attributable to within-site sources. Counts for species with lower average counts had a higher percentage of within-site variability. We discuss the relative cost efficiency of replicating sites or initiating new sites for several objectives, concluding that it is frequently better to initiate new sites than to attempt to replicate existing sites.
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
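The exponential/Poisson relationship demonstrated above with nuclear counting can also be illustrated numerically: draw exponential waiting times at a constant rate, then count events in fixed windows. The rate and duration below are arbitrary illustration values.

    import numpy as np

    rng = np.random.default_rng(4)
    rate, t_total = 50.0, 2000.0     # events per second, total observation time

    # Constant-rate Markov process: exponential waiting times between events.
    gaps = rng.exponential(1.0 / rate, size=int(rate * t_total * 1.2))
    times = np.cumsum(gaps)
    times = times[times < t_total]

    # Counting events in fixed 1-second windows recovers Poisson statistics.
    counts = np.bincount(times.astype(int), minlength=int(t_total))
    print(counts.mean(), counts.var())   # both close to the rate (50)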
Modification of Poisson Distribution in Radioactive Particle Counting.
ERIC Educational Resources Information Center
Drotter, Michael T.
This paper focuses on radioactive particle counting statistics in laboratory and field applications, intended to aid the Health Physics technician's understanding of the effect of indeterminate errors on radioactive particle counting. It indicates that although the statistical analysis of radioactive disintegration is best described by a Poisson…
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
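The information-criterion model comparison recommended above can be sketched with statsmodels, which provides Poisson and negative binomial likelihoods (zero-inflated variants live in statsmodels.discrete.count_model). The simulated counts and intercept-only design below are illustrative, not the paper's datasets.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    # Simulated insect counts with excess zeros and overdispersion.
    lam = rng.gamma(shape=0.5, scale=6.0, size=300)
    y = rng.poisson(lam)
    X = np.ones((y.size, 1))          # intercept-only model

    poisson_fit = sm.Poisson(y, X).fit(disp=False)
    negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

    # Lower AIC indicates the better-supported distribution.
    print("Poisson AIC:", poisson_fit.aic)
    print("NegBin  AIC:", negbin_fit.aic)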
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
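The gamma-Poisson construction at the heart of the NB process is easy to verify by simulation: marginalizing a Poisson whose rate is gamma distributed reproduces the NB pmf. The parameter values below are arbitrary.

    import numpy as np
    from scipy.stats import nbinom

    rng = np.random.default_rng(3)
    r, p = 3.0, 0.4                 # NB shape and probability parameters

    # lambda ~ Gamma(shape=r, scale=(1-p)/p), N | lambda ~ Poisson(lambda)
    # gives N ~ NB(r, p) in SciPy's parameterization.
    lam = rng.gamma(shape=r, scale=(1.0 - p) / p, size=200_000)
    samples = rng.poisson(lam)

    k = np.arange(10)
    empirical = np.bincount(samples, minlength=10)[:10] / samples.size
    print(np.abs(empirical - nbinom.pmf(k, r, p)).max())  # small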
Karulin, Alexey Y; Caspell, Richard; Dittrich, Marcus; Lehmann, Paul V
2015-03-02
Accurate assessment of positive ELISPOT responses for low frequencies of antigen-specific T-cells is controversial. In particular, it is still unknown whether ELISPOT counts within replicate wells follow a theoretical distribution function, and thus whether high-power parametric statistics can be used to discriminate between positive and negative wells. We studied experimental distributions of spot counts for up to 120 replicate wells of IFN-γ production by CD8+ T-cells responding to EBV LMP2A (426-434) peptide in human PBMC. The cells were tested in serial dilutions covering a wide range of average spot counts per condition, from just a few to hundreds of spots per well. Statistical analysis of the data using diagnostic Q-Q plots and the Shapiro-Wilk normality test showed that, across the entire dynamic range, ELISPOT spot counts within replicate wells followed a normal distribution. This result implies that the Student t-test and ANOVA are suited to identify positive responses. We also show experimentally that borderline responses can be reliably detected by involving more replicate wells, plating higher numbers of PBMC, addition of IL-7, or a combination of these. Furthermore, we have experimentally verified that the number of replicates needed for detection of weak responses can be calculated using parametric statistics.
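The statistical recipe (check normality of replicate-well counts, then apply a parametric test) can be sketched with SciPy as follows; the simulated spot counts are placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    # Hypothetical replicate-well spot counts for antigen and medium wells.
    antigen = rng.normal(35.0, 6.0, size=24).round()
    medium = rng.normal(12.0, 4.0, size=24).round()

    # Shapiro-Wilk checks the normality assumption within replicate wells...
    print(stats.shapiro(antigen).pvalue)

    # ...which licenses a parametric test for positivity of the response.
    print(stats.ttest_ind(antigen, medium, equal_var=False).pvalue)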
Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath
2016-06-01
Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study of a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. We introduce two estimation methods for fitting the ZICMP marginal regression model. Finite-sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling, incorporating zero inflation, clustering, and overdispersion, sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.
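The CMP family underlying the ZICMP model has pmf P(y) proportional to λ^y/(y!)^ν, with ν controlling dispersion; zero inflation would mix in a point mass at zero, as in the ZIB example above. A minimal sketch of the pmf via a truncated normalizing series follows, with illustrative parameters.

    import numpy as np
    from scipy.special import gammaln

    def cmp_pmf(y_max, lam, nu, terms=500):
        """Conway-Maxwell-Poisson pmf P(y) = lam**y / (y!)**nu / Z, with the
        normalizing constant Z truncated at `terms` series terms."""
        j = np.arange(terms)
        logw = j * np.log(lam) - nu * gammaln(j + 1)
        logw -= logw.max()              # stabilize before exponentiating
        w = np.exp(logw)
        return w[: y_max + 1] / w.sum()

    # nu < 1 gives overdispersion, nu > 1 underdispersion vs. Poisson (nu = 1).
    pmf = cmp_pmf(y_max=10, lam=3.0, nu=1.5)
    print(pmf.sum())   # close to 1 when the tail beyond y_max is negligible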
Weymann, Alexander; Ali-Hasan-Al-Saegh, Sadeq; Sabashnikov, Anton; Popov, Aron-Frederik; Mirhosseini, Seyed Jalil; Nombela-Franco, Luis; Testa, Luca; Lotfaliani, Mohammadreza; Zeriouh, Mohamed; Liu, Tong; Dehghan, Hamidreza; Yavuz, Senol; de Oliveira Sá, Michel Pompeu Barros; Baker, William L.; Jang, Jae-Sik; Gong, Mengqi; Benedetto, Umberto; Dohmen, Pascal M.; D’Ascenzo, Fabrizio; Deshmukh, Abhishek J.; Biondi-Zoccai, Giuseppe; Calkins, Hugh; Stone, Gregg W.
2017-01-01
Background This systematic review with meta-analysis aimed to determine the strength of evidence for evaluating the association of platelet cellular and functional characteristics, including platelet count (PC), mean platelet volume (MPV), platelet distribution width (PDW), platelet factor 4 (PF4), beta thromboglobulin (BTG), and p-selectin, with the occurrence of atrial fibrillation (AF) and consequent stroke. Material/Methods We conducted a meta-analysis of observational studies evaluating platelet characteristics in patients with paroxysmal, persistent and permanent atrial fibrillation. A comprehensive subgroup analysis was performed to explore potential sources of heterogeneity. Results Literature search of all major databases retrieved 1,676 studies. After screening, a total of 73 studies were identified. Pooled analysis showed significant differences in PC (weighted mean difference (WMD)=−26.93 and p<0.001), MPV (WMD=0.61 and p<0.001), PDW (WMD=−0.22 and p=0.002), BTG (WMD=24.69 and p<0.001), PF4 (WMD=4.59 and p<0.001), and p-selectin (WMD=4.90 and p<0.001). Conclusions Platelets play a critical and precipitating role in the occurrence of AF. Whereas the distribution width of platelets as well as markers of platelet activity were significantly greater in AF patients than in sinus rhythm (SR) patients, platelet count was significantly lower in AF patients. PMID:28302997
Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovard R. Perry; David L. Georgeson
2011-07-01
This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-125 thyroid counting. The source used for the calibration was a DOE manufactured Am-241/Eu-152 source contained in a 22 ml vial (BEA Am-241/Eu-152 RMC II-1) with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor. This position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1 and validation testing was performed using an I-125 source in a 30 ml vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.
NASA Astrophysics Data System (ADS)
Watkins, Stephen E.; Whittaker, Alexander C.; Bell, Rebecca E.; Brooke, Sam A. S.; McNeill, Lisa C.; Gawthorpe, Robert L.
2017-04-01
The volumes, grain sizes and characteristics of sediment supplied from source catchments fundamentally control basin stratigraphy. However, to date, few studies have constrained sediment budgets, including grain size, released into an active rift basin at a regional scale. The Gulf of Corinth, central Greece, is one of the most rapidly extending rifts in the world, with geodetic measurements of 5 mm/yr in the East to 15 mm/yr in the West. It has well-constrained climatic and tectonic boundary conditions and its bedrock lithologies are well-characterised. It is therefore an ideal natural laboratory in which to study the grain-size export of a rift. In the field, we visited the river mouths of 49 catchments draining into the Gulf of Corinth, which in total drain 83% of the rift. At each site, hydraulic geometries, surface grain sizes of channel bars and full weighted grain-size distributions of river sediment were obtained. The surface grain size was measured using the Wolman point count method and the full weighted grain-size distribution of the bedload by in-situ sieving. In total, approximately 17,000 point counts and 3 tonnes of sediment were processed. The grain-size distributions show an overall increase from East to West on the southern coast of the gulf, with the largest grain sizes exported from the western rift catchments. D84 ranges from 20 to 110 mm; however, 50% of D84 grain sizes are less than 40 mm. Subsequently, we derived the full Holocene sediment budget for the Gulf of Corinth by combining our grain-size data with catchment sediment fluxes, constrained using the BQART model and calibrated to known Holocene sediment volumes in the basin from seismic data (cf. Watkins et al., in review). This is the first time such a budget has been derived for the Corinth Rift. Finally, our estimates of sediment budgets and grain sizes were compared to regional uplift constraints, fault distributions, slip rates and lithology to identify the relative importance of these controls on sediment supply to the basin.
Fission product yield measurements using monoenergetic photon beams
NASA Astrophysics Data System (ADS)
Krishichayan; Bhike, M.; Tonchev, A. P.; Tornow, W.
2017-09-01
Measurements of fission product yields (FPYs) are an important source of information on the fission process. During the past couple of years, a TUNL-LANL-LLNL collaboration has provided data on the FPYs from quasi-monoenergetic neutron-induced fission of ²³⁵U, ²³⁸U, and ²³⁹Pu and has revealed an unexpected energy dependence of both asymmetric fission fragments at energies below 4 MeV. This peculiar FPY energy dependence was more pronounced in neutron-induced fission of ²³⁹Pu. In an effort to understand and compare the effect of the incoming probe on the FPY distribution, we have carried out monoenergetic photon-induced fission experiments on the same ²³⁵U, ²³⁸U, and ²³⁹Pu targets. Monoenergetic photon beams of Eγ = 13.0 MeV were provided by the HIγS facility, the world's most intense γ-ray source. In order to determine the total number of fission events, a dual-fission chamber was used during the irradiation. The irradiated samples were counted at TUNL's low-background γ-ray counting facility using high-efficiency HPGe detectors over a period of 10 weeks. Here we report on our first photofission product yield measurements obtained with monoenergetic photon beams. These results are compared with neutron-induced FPY data.
NASA Technical Reports Server (NTRS)
Chartas, G.; Flanagan, K.; Hughes, J. P.; Kellogg, E. M.; Nguyen, D.; Zombek, M.; Joy, M.; Kolodziejezak, J.
1993-01-01
The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.
NASA Technical Reports Server (NTRS)
Chartas, G.; Flanagan, Kathy; Hughes, John P.; Kellogg, Edwin M.; Nguyen, D.; Zombeck, M.; Joy, M.; Kolodziejezak, J.
1992-01-01
The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.
Kathryn L. Purcell; Sylvia R. Mori; Mary K. Chase
2005-01-01
We used data from two oak-woodland sites in California to develop guidelines for the design of bird monitoring programs using point counts. We used power analysis to determine sample size adequacy when varying the number of visits, count stations, and years for examining trends in abundance. We assumed an overdispersed Poisson distribution for count data, with...
XMM-Newton 13H deep field - I. X-ray sources
NASA Astrophysics Data System (ADS)
Loaring, N. S.; Dwelly, T.; Page, M. J.; Mason, K.; McHardy, I.; Gunn, K.; Moss, D.; Seymour, N.; Newsam, A. M.; Takata, T.; Sekguchi, K.; Sasseen, T.; Cordova, F.
2005-10-01
We present the results of a deep X-ray survey conducted with XMM-Newton, centred on the UK ROSAT 13H deep field area. This region covers 0.18 deg², and is the first of the two areas covered with XMM-Newton as part of an extensive multiwavelength survey designed to study the nature and evolution of the faint X-ray source population. We have produced detailed Monte Carlo simulations to obtain a quantitative characterization of the source detection procedure and to assess the reliability of the resultant source list. We use the simulations to establish a likelihood threshold, above which we expect less than seven (3 per cent) of our sources to be spurious. We present the final catalogue of 225 sources. Within the central 9 arcmin, 68 per cent of source positions are accurate to 2 arcsec, making optical follow-up relatively straightforward. We construct the N(>S) relation in four energy bands: 0.2-0.5, 0.5-2, 2-5 and 5-10 keV. In all but our highest energy band we find that the source counts can be represented by a double power law with a bright-end slope consistent with the Euclidean case and a break around 10⁻¹⁴ erg cm⁻² s⁻¹. Below this flux, the counts exhibit a flattening. Our source counts reach densities of 700, 1300, 900 and 300 deg⁻² at fluxes of 4.1 × 10⁻¹⁶, 4.5 × 10⁻¹⁶, 1.1 × 10⁻¹⁵ and 5.3 × 10⁻¹⁵ erg cm⁻² s⁻¹ in the 0.2-0.5, 0.5-2, 2-5 and 5-10 keV energy bands, respectively. We have compared our source counts with those in the two Chandra deep fields and Lockman hole, and found our source counts to be amongst the highest of these fields in all energy bands. We resolve >51 per cent (>50 per cent) of the X-ray background emission in the 1-2 keV (2-5 keV) energy bands.
HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorat, K.; Subrahmanyan, R.; Saripalli, L.
2013-01-01
The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00′00″ and R.A. 00h59m17s, decl. −67°00′00″, J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area correction, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists (as opposed to component lists) and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
Recommended methods for monitoring change in bird populations by counting and capture of migrants
David J. T. Hussell; C. John Ralph
2005-01-01
Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...
Monitoring trends in bird populations: addressing background levels of annual variability in counts
Jared Verner; Kathryn L. Purcell; Jennifer G. Turner
1996-01-01
Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...
NASA Astrophysics Data System (ADS)
Aira, María-Jesús; Rodríguez-Rajo, Francisco-Javier; Fernández-González, María; Seijo, Carmen; Elvira-Rendueles, Belén; Abreu, Ilda; Gutiérrez-Bustillo, Montserrat; Pérez-Sánchez, Elena; Oliveira, Manuela; Recio, Marta; Tormo, Rafael; Morales, Julia
2013-03-01
This paper provides an update of airborne Alternaria spore spatial and temporal distribution patterns in the Iberian Peninsula, using a common non-viable volumetric sampling method. The highest mean annual spore counts were recorded in Sevilla (39,418 spores), Mérida (33,744) and Málaga (12,947), while other sampling stations never exceeded 5,000. The same cities also recorded the highest mean daily spore counts (Sevilla 109 spores m⁻³; Mérida 53 spores m⁻³ and Málaga 35 spores m⁻³) and the highest number of days on which counts exceeded the threshold levels required to trigger allergy symptoms (Sevilla 38% and Mérida 30% of days). Analysis of annual spore distribution patterns revealed either one or two peaks, depending on the location and prevailing climate of sampling stations. For all stations, average temperature was the weather parameter displaying the strongest positive correlation with airborne spore counts, whilst negative correlations were found for rainfall and relative humidity.
Aira, María-Jesús; Rodríguez-Rajo, Francisco-Javier; Fernández-González, María; Seijo, Carmen; Elvira-Rendueles, Belén; Abreu, Ilda; Gutiérrez-Bustillo, Montserrat; Pérez-Sánchez, Elena; Oliveira, Manuela; Recio, Marta; Tormo, Rafael; Morales, Julia
2013-03-01
This paper provides an update of airborne Alternaria spore spatial and temporal distribution patterns in the Iberian Peninsula, using a common non-viable volumetric sampling method. The highest mean annual spore counts were recorded in Sevilla (39,418 spores), Mérida (33,744) and Málaga (12,947), while other sampling stations never exceeded 5,000. The same cities also recorded the highest mean daily spore counts (Sevilla 109 spores m⁻³; Mérida 53 spores m⁻³ and Málaga 35 spores m⁻³) and the highest number of days on which counts exceeded the threshold levels required to trigger allergy symptoms (Sevilla 38% and Mérida 30% of days). Analysis of annual spore distribution patterns revealed either one or two peaks, depending on the location and prevailing climate of sampling stations. For all stations, average temperature was the weather parameter displaying the strongest positive correlation with airborne spore counts, whilst negative correlations were found for rainfall and relative humidity.
Takemoto, Kazuya; Nambu, Yoshihiro; Miyazawa, Toshiyuki; Sakuma, Yoshiki; Yamamoto, Tsuyoshi; Yorozu, Shinichi; Arakawa, Yasuhiko
2015-01-01
Advances in single-photon sources (SPSs) and single-photon detectors (SPDs) promise unique applications in the field of quantum information technology. In this paper, we report long-distance quantum key distribution (QKD) using state-of-the-art devices: a quantum-dot SPS (QD SPS) emitting photons in the 1.5 μm telecom band and a superconducting nanowire SPD (SNSPD). At a distance of 100 km, we obtained a maximal secure key rate of 27.6 bps without using decoy states, which is at least threefold larger than the rate obtained in the previously reported 50-km-long QKD experiment. We also succeeded in transmitting secure keys at a rate of 0.307 bps over 120 km. This is the longest QKD distance yet reported using known true SPSs. The ultralow multiphoton emission of our SPS and the ultralow dark count rate of the SNSPD contributed to this result. The experimental results demonstrate the potential applicability of QD SPSs to practical telecom QKD networks. PMID:26404010
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.
2011-11-01
We present results of a 1.1 mm deep survey of the AKARI Deep Field South (ADF-S) with AzTEC mounted on the Atacama Submillimetre Telescope Experiment (ASTE). We obtained a map of 0.25 deg² area with an rms noise level of 0.32-0.71 mJy. This is one of the deepest and widest maps thus far at millimetre and submillimetre wavelengths. We uncovered 198 sources with a significance of 3.5-15.6σ, providing the largest catalog of 1.1 mm sources in a contiguous region. Most of the sources are not detected in the far-infrared bands of the AKARI satellite, suggesting that they are mostly at z ≥ 1.5 given the detection limits. We construct differential and cumulative number counts in the ADF-S, the Subaru/XMM-Newton Deep Field (SXDF), and the SSA 22 field surveyed by AzTEC/ASTE, which provide currently the tightest constraints on the faint end. Integration of the differential number counts of the ADF-S finds that the contribution of 1.1 mm sources with ≥1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with ≥1 mJy using the differential number counts and find that it is lower by about a factor of 5-10 compared to those derived from UV/optically-selected galaxies at z ~ 2-3. Clustering analyses of AzTEC sources in the ADF-S and the SXDF find that bright (>3 mJy) AzTEC sources are more strongly clustered than faint (<3 mJy) AzTEC sources, and the average mass of dark halos hosting bright AzTEC sources is calculated to be 10¹³-10¹⁴ M⊙. Comparison of the correlation length of AzTEC sources with other populations and with a bias evolution model suggests that dark halos hosting bright AzTEC sources evolve into systems of clusters in the present universe and that the AzTEC sources residing in these dark halos evolve into massive elliptical galaxies located in the centers of clusters.
Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations
NASA Astrophysics Data System (ADS)
Lonsdale, Carol J.; Hacking, Perry B.
1989-04-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.
Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lonsdale, C.J.; Hacking, P.B.
1989-04-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.
Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations
NASA Technical Reports Server (NTRS)
Lonsdale, Carol J.; Hacking, Perry B.
1989-01-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.
SPITZER 70 AND 160 μm OBSERVATIONS OF THE COSMOS FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayer, D. T.; Huynh, M. T.; Bhattacharya, B.
2009-11-15
We present Spitzer 70 and 160 μm observations of the COSMOS Spitzer survey (S-COSMOS). The data processing techniques are discussed for the publicly released products consisting of images and source catalogs. We present accurate 70 and 160 μm source counts of the COSMOS field and find reasonable agreement with measurements in other fields and with model predictions. The previously reported counts for GOODS-North and the extragalactic First Look Survey are updated with the latest calibration, and counts are measured based on the large area SWIRE survey to constrain the bright source counts. We measure an extragalactic confusion noise level of σ_c = 9.4 ± 3.3 mJy (q = 5) for the MIPS 160 μm band based on the deep S-COSMOS data and report an updated confusion noise level of σ_c = 0.35 ± 0.15 mJy (q = 5) for the MIPS 70 μm band.
Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...
Effects of lint cleaning on lint trash particle size distribution
USDA-ARS?s Scientific Manuscript database
Cotton quality trash measurements used today typically yield a single value for trash parameters for a lint sample (i.e. High Volume Instrument – percent area; Advanced Fiber Information System – total count, trash size, dust count, trash count, and visible foreign matter). A Cotton Trash Identifica...
2013 Kids Count in Colorado! Community Matters
ERIC Educational Resources Information Center
Colorado Children's Campaign, 2013
2013-01-01
"Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…
NASA Astrophysics Data System (ADS)
Matsuura, Hideharu
2015-04-01
High-resolution silicon X-ray detectors with a large active area are required for effectively detecting traces of hazardous elements in food and soil through the measurement of the energies and counts of X-ray fluorescence photons radially emitted from these elements. The thicknesses and areas of commercial silicon drift detectors (SDDs) are up to 0.5 mm and 1.5 cm2, respectively. We describe 1.5-mm-thick gated SDDs (GSDDs) that can detect photons with energies up to 50 keV. We simulated the electric potential distributions in GSDDs with a Si thickness of 1.5 mm and areas from 0.18 to 168 cm2 at a single high reverse bias. The area of a GSDD could be enlarged simply by increasing all the gate widths by the same multiple, and the capacitance of the GSDD remained small and its X-ray count rate remained high.
Davis, Letitia; Wellman, Helen; Hart, James; Cleary, Robert; Gardstein, Betsey M; Sciuchetti, Paul
2004-09-01
This study examined whether a state surveillance system for work-related carpal tunnel syndrome (WR-CTS) based on workers' compensation claims (Sentinel Event Notification System for Occupational Risks, SENSOR) and the Annual Survey of Occupational Injuries and Illnesses (SOII) identified the same industries, occupations, sources of injury, and populations for intervention. Trends in counts, rates, and female/male ratios of WR-CTS during 1994-1997, and age distributions were compared across three data sources: SENSOR, Massachusetts SOII, and National SOII. SENSOR and National SOII data on WR-CTS were compared by industry, occupation, and injury source. Due to small sample size and subsequent gaps in available information, state SOII data on WR-CTS were of little use in identifying specific industries and occupations for intervention. SENSOR and National SOII data on the frequency of WR-CTS cases identified many similar occupations and industries, and both surveillance systems pointed to computer use as a risk factor for WR-CTS. Some high rate industries identified by SENSOR were not identified using National SOII rates even when national findings were restricted to take into account the distribution of the Massachusetts workforce. Use of national SOII data on rates of WR-CTS for identifying state industry priorities for WR-CTS prevention should be undertaken with caution. Options for improving state SOII data and use of other state data systems should be pursued.
Is a top-heavy initial mass function needed to reproduce the submillimetre galaxy number counts?
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Lu, Yu; Hayward, Christopher C.
2017-12-01
Matching the number counts and redshift distribution of submillimetre galaxies (SMGs) without invoking modifications to the initial mass function (IMF) has proved challenging for semi-analytic models (SAMs) of galaxy formation. We adopt a previously developed SAM that is constrained to match the z = 0 galaxy stellar mass function and makes various predictions which agree well with observational constraints; we do not recalibrate the SAM for this work. We implement three prescriptions to predict the submillimetre flux densities of the model galaxies; two depend solely on star formation rate, whereas the other also depends on the dust mass. By comparing the predictions of the models, we find that taking into account the dust mass, which affects the dust temperature and thus influences the far-infrared spectral energy distribution, is crucial for matching the number counts and redshift distribution of SMGs. Moreover, despite using a standard IMF, our model can match the observed SMG number counts and redshift distribution reasonably well, which contradicts the conclusions of some previous studies that a top-heavy IMF, in addition to taking into account the effect of dust mass, is needed to match these observations. Although we have not identified the key ingredient that is responsible for our model matching the observed SMG number counts and redshift distribution without IMF variation - which is challenging given the different prescriptions for physical processes employed in the SAMs of interest - our results demonstrate that in SAMs, IMF variation is degenerate with other physical processes, such as stellar feedback.
Are the birch trees in Southern England a source of Betula pollen for North London?
Skjøth, C A; Smith, M; Brandt, J; Emberlin, J
2009-01-01
Birch pollen is highly allergenic. Knowledge of daily variations, atmospheric transport and source areas of birch pollen is important for exposure studies and for warnings to the public, especially for large cities such as London. Our results show that broad-leaved forests with high birch tree densities are located to the south and west of London. Bi-hourly Betula pollen concentrations for all the days included in the study, and for all available days with high birch pollen counts (daily average birch pollen counts > 80 grains/m3), show that, on average, there is a peak between 1400 hours and 1600 hours. Back-trajectory analysis showed that, on days with high birch pollen counts (n=60), 80% of air masses arriving at the time of peak diurnal birch pollen count approached North London from the south in a 180 degree arc from due east to due west. Detailed investigations of three Betula pollen episodes, with distinctly different diurnal patterns compared to the mean daily cycle, were used to illustrate how night-time maxima (2200-0400 hours) in Betula pollen counts could be the result of transport from distant sources or long transport times caused by slow moving air masses. We conclude that the Betula pollen recorded in North London could originate from sources found to the west and south of the city and not just trees within London itself. Possible sources outside the city include Continental Europe and the Betula trees within the broad-leaved forests of Southern England.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter σ and intercept g(0) = 1.0. Bias varied with σ/w; for values of σ inferred from published studies, bias often reached 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
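The bias mechanism described above is easy to reproduce numerically. The sketch below, with made-up density and parameter values, simulates repeated fixed-radius point counts under half-normal distance-dependent detection and reports the fraction of the true population captured by raw counts; it illustrates the simulation design, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_point_count(density, w=100.0, sigma=50.0, n_occasions=4, n_reps=1000):
    """Mean fraction of birds detected at least once over n_occasions,
    with half-normal detection g(r) = exp(-r**2 / (2 * sigma**2)), g(0) = 1."""
    fractions = []
    for _ in range(n_reps):
        n = rng.poisson(density * np.pi * w**2)   # birds within radius w
        if n == 0:
            continue
        r = w * np.sqrt(rng.uniform(size=n))      # uniform positions over the disc
        p = np.exp(-r**2 / (2 * sigma**2))        # per-occasion detection probability
        seen = rng.uniform(size=(n_occasions, n)) < p
        fractions.append(seen.any(axis=0).mean())
    return float(np.mean(fractions))

# With sigma/w = 0.5, raw counts recover well under 100% of the birds
# present, so unadjusted point counts are biased low.
print(simulate_point_count(density=0.001))
```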
NASA Astrophysics Data System (ADS)
Gamage, K. A. A.; Joyce, M. J.; Taylor, G. C.
2013-04-01
In this paper we discuss the possibility of locating radioactive sources in space using a scanning-based method, relative to the three-dimensional location of the detector. The scanning system comprises an organic liquid scintillator detector, a tungsten collimator and an adjustable equatorial mount. The detector output is connected to a bespoke fast digitiser (Hybrid Instruments Ltd., UK) which streams digital samples to a personal computer. A radioactive source has been attached to a vertical wall and the data have been collected in two stages. In the first case, the scanning system was placed a couple of metres away from the wall and in the second case it moved few centimetres from the previous location, parallel to the wall. In each case data were collected from a grid of measurement points (set of azimuth angles for set of elevation angles) which covered the source on the wall. The discrimination of fast neutrons and gamma rays, detected by the organic liquid scintillator detector, is carried out on the basis of pulse gradient analysis. Images are then produced in terms of the angular distribution of events for total counts, gamma rays and neutrons for both cases. The three-dimensional location of the neutron source can be obtained by considering the relative separation of the centres of the corresponding images of angular distribution of events. The measurements have been made at the National Physical Laboratory, Teddington, Middlesex, UK.
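The final step, locating the source from the shift of the image centroids between the two scan positions, is a simple parallax calculation. The sketch below illustrates the geometry with hypothetical numbers (the baseline and centroid azimuths are made up); the real analysis works with the centroids of the measured angular distributions of neutron events.

```python
import math

def source_distance(baseline_m, az1_deg, az2_deg):
    """Perpendicular distance to the source from the shift of the image
    centroid between two scan positions separated by baseline_m."""
    dtheta = math.radians(abs(az1_deg - az2_deg))
    return baseline_m / math.tan(dtheta)

# hypothetical: a 0.30 m sideways move shifts the neutron-image centroid
# from 12 deg to 4 deg azimuth, placing the source about 2 m away
print(f"{source_distance(0.30, 12.0, 4.0):.2f} m")
```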
Data-based Considerations in Portal Radiation Monitoring of Cargo Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weier, Dennis R.; O'Brien, Robert F.; Ely, James H.
2004-07-01
Radiation portal monitoring of cargo vehicles often includes a configuration of four-panel monitors that record gamma and neutron counts from vehicles transporting cargo. As vehicles pass the portal monitors, they generate a count profile over time that can be compared to the average panel background counts obtained just prior to the time the vehicle entered the area of the monitors. Pacific Northwest National Laboratory has accumulated considerable data regarding such background radiation and vehicle profiles from portal installations, as well as in experimental settings using known sources and cargos. Several considerations have a bearing on how alarm thresholds are set in order to maintain sensitivity to radioactive sources while also controlling to a manageable level the rate of false or nuisance alarms. False alarms are statistical anomalies, while nuisance alarms occur due to the presence of naturally occurring radioactive material (NORM) in cargo, for example, kitty litter. Considerations to be discussed include:
• Background radiation suppression due to the shadow shielding from the vehicle.
• The impact of the relative placement of the four panels on alarm decision criteria.
• Use of plastic scintillators to separate gamma counts into energy windows.
• The utility of using ratio criteria for the energy window counts rather than simply using total window counts.
• Detection likelihood for these various decision criteria based on computer simulated injections of sources into vehicle profiles.
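As an illustration of how an alarm threshold trades detection sensitivity against false-alarm rate, the sketch below sets a gross-count threshold for a single gamma channel under a Poisson background model. The background mean and target false-alarm rate are illustrative values; operational criteria also involve the shadow-shielding suppression and energy-window ratios discussed above.

```python
from scipy.stats import poisson

def alarm_threshold(bkg_mean, false_alarm_rate=1e-4):
    """Smallest gross count that triggers an alarm while keeping the
    background-only (false) alarm probability at or below the target,
    assuming Poisson counts with mean bkg_mean per integration period."""
    # poisson.isf gives the smallest k with P(X > k) <= rate,
    # so alarming when the count reaches k + 1 meets the target
    return int(poisson.isf(false_alarm_rate, bkg_mean)) + 1

# illustrative: background of 500 gamma counts per vehicle profile window
print(alarm_threshold(bkg_mean=500.0))
```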
SCUBA-2 follow-up of Herschel-SPIRE observed Planck overdensities
NASA Astrophysics Data System (ADS)
MacKenzie, Todd P.; Scott, Douglas; Bianconi, Matteo; Clements, David L.; Dole, Herve A.; Flores-Cacho, Inés; Guery, David; Kneissl, Ruediger; Lagache, Guilaine; Marleau, Francine R.; Montier, Ludovic; Nesvadba, Nicole P. H.; Pointecouteau, Etienne; Soucail, Genevieve
2017-07-01
We present SCUBA-2 follow-up of 61 candidate high-redshift Planck sources. Of these, 10 are confirmed strong gravitational lenses and comprise some of the brightest such submm sources on the observed sky, while 51 are candidate proto-cluster fields undergoing massive starburst events. With the accompanying Herschel-Spectral and Photometric Imaging Receiver observations and assuming an empirical dust temperature prior of 34^{+13}_{-9} K, we provide photometric redshift and far-IR luminosity estimates for 172 SCUBA-2-selected sources within these Planck overdensity fields. The redshift distribution of the sources peaks between redshifts of 2 and 4, with one-third of the sources having S500/S350 > 1. For the majority of the sources, we find far-IR luminosities of approximately 10^{13} L⊙, corresponding to star formation rates of around 1000 M⊙ yr-1. For S850 > 8 mJy sources, we show that, compared to typical cosmological survey fields, there is up to an order-of-magnitude increase in star formation rate density and a factor of 6 increase in uncorrected number counts. The sources detected with SCUBA-2 account for only approximately 5 per cent of the Planck flux at 353 GHz, and thus many more faint sources are expected in these fields.
A statistical treatment of bioassay pour fractions
NASA Astrophysics Data System (ADS)
Barengoltz, Jack; Hughes, David
A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider, for simplicity, a set of samples from randomly selected, known areas of one spacecraft surface. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density, commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample's counts due to the pour fraction, one seeks to answer the question: what is the probability that, if a certain number of spores are counted with a known pour fraction, an additional number of spores remain in the part of the rinse not poured? This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding over all possible values of the true total number of spores, is performed. This distribution, when normalized, will almost yield the desired quantity: the probability that the additional number of spores does not exceed a certain value. Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty, so it is useful. In probability speak, one has the conditional probability given any true total number of spores; therefore one must multiply it by the probability of each possible true count before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (overestimate the number of spores at some level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when the actual total count is zero.
Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (e.g., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero. No adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5. 1. NASA. (2005) Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC. 2. NASA. (2010) Handbook for the Microbiological Examination of Space Hardware, NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
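As a sketch of the binomial treatment described above, the function below computes the probability that the unpoured portion of a rinse contains at most a given number of additional spores, under the conservative flat prior over the true total (in which case the posterior over the extras is negative binomial). The numbers in the example are hypothetical.

```python
from math import comb

def prob_additional_at_most(c, f, m_max):
    """Probability that the unpoured fraction holds at most m_max additional
    spores, given c colonies counted with pour fraction f, under a flat
    prior over the true total (the conservative assumption above). The
    posterior over extras m is then negative binomial:
    P(m | c) = C(c + m, m) * f**(c + 1) * (1 - f)**m, which sums to 1."""
    return sum(comb(c + m, m) * f**(c + 1) * (1 - f)**m for m in range(m_max + 1))

# hypothetical: 10 colonies counted with an 80% pour fraction
print(prob_additional_at_most(c=10, f=0.8, m_max=5))
```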
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
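For the unconstrained case, the number of fully ranked binary trees on n contemporaneous samples follows a classic product recursion, implemented in the sketch below; the paper's contribution is the harder constrained and sampled-ancestor variants, which this toy does not attempt.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def ranked_trees(n):
    """Number of fully ranked binary trees (labelled histories) on n
    contemporaneous tips: going backwards in time, each of the n - 1
    coalescence events merges one of C(k, 2) pairs when k lineages remain."""
    if n <= 1:
        return 1
    return comb(n, 2) * ranked_trees(n - 1)

print([ranked_trees(n) for n in range(2, 7)])  # 1, 3, 18, 180, 2700
```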
Concurrent generation of multivariate mixed data with variables of dissimilar types.
Amatya, Anup; Demirtas, Hakan
2016-01-01
Data sets originating from a wide range of research studies are composed of multiple variables that are correlated and of dissimilar types, primarily count, binary/ordinal and continuous attributes. The present paper builds on previous work on multivariate data generation and develops a framework for generating multivariate mixed data with a pre-specified correlation matrix. The generated data consist of components that are marginally count, binary, ordinal and continuous, where the count and continuous variables follow the generalized Poisson and normal distributions, respectively. The use of the generalized Poisson distribution provides a flexible mechanism which allows the under- and over-dispersed count variables generally encountered in practice. A step-by-step algorithm is provided and its performance is evaluated using simulated and real-data scenarios.
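A minimal sketch of the count component: the generalized Poisson pmf and an inverse-transform sampler, showing the over-dispersion that motivates its use in the framework above. Parameter values are illustrative, and the paper's machinery for inducing a pre-specified correlation across the mixed-type components is omitted here.

```python
from math import exp, lgamma, log

import numpy as np

def gpoisson_pmf(k, theta, lam):
    """Generalized Poisson pmf (Consul), allowing over- (lam > 0) or
    under-dispersion (lam < 0); reduces to ordinary Poisson at lam = 0."""
    return exp(log(theta) + (k - 1) * log(theta + lam * k)
               - (theta + lam * k) - lgamma(k + 1))

def gpoisson_sample(theta, lam, rng, kmax=1000):
    """Inverse-transform sampling from the generalized Poisson."""
    u, cdf = rng.uniform(), 0.0
    for k in range(kmax):
        cdf += gpoisson_pmf(k, theta, lam)
        if u <= cdf:
            return k
    return kmax

rng = np.random.default_rng(0)
draws = [gpoisson_sample(theta=2.0, lam=0.3, rng=rng) for _ in range(10000)]
# mean is theta / (1 - lam); variance exceeds the mean (over-dispersion)
print(np.mean(draws), np.var(draws))
```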
Characterization of gigahertz (GHz) bandwidth photomultipliers
NASA Technical Reports Server (NTRS)
Abshire, J. B.; Rowe, H. E.
1977-01-01
The average impulse response, root-mean-square (rms) time jitter as a function of signal level, single-photoelectron distribution, and multiphotoelectron dark-count distribution have been measured for two static crossed-field and five electrostatic photomultipliers. The optical signal source for the first three of these tests was a 30 picosecond mode-locked laser pulse at 0.53 micron. The static crossed-field detectors had 2-photoelectron resolution, less than 200 ps rise times, and rms time jitters of 30 ps at the single photoelectron level. The electrostatic photomultipliers had rise times from 1 to 2.5 nanoseconds, and rms time jitters from 160 to 650 ps at the same signal level. The two static crossed-field photomultipliers had ion-feedback-generated dark pulses to the 50-photoelectron level, whereas one electrostatic photomultiplier had dark pulses to the 30-photoelectron level.
Mapping of bird distributions from point count surveys
Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemcov, M.; Cooray, A.; Bock, J.
We have observed four massive galaxy clusters with the SPIRE instrument on the Herschel Space Observatory and measure a deficit of surface brightness within their central region after removing detected sources. We simulate the effects of instrumental sensitivity and resolution, the source population, and the lensing effect of the clusters to estimate the shape and amplitude of the deficit. The amplitude of the central deficit is a strong function of the surface density and flux distribution of the background sources. We find that for the current best fitting faint end number counts, and excellent lensing models, the most likely amplitude of the central deficit is the full intensity of the cosmic infrared background (CIB). Our measurement leads to a lower limit to the integrated total intensity of the CIB of I_{250 μm} > 0.69^{+0.03}_{-0.03}(stat.)^{+0.11}_{-0.06}(sys.) MJy sr^{-1}, with more CIB possible from both low-redshift sources and from sources within the target clusters. It should be possible to observe this effect in existing high angular resolution data at other wavelengths where the CIB is bright, which would allow tests of models of the faint source component of the CIB.
Tillman, P. Glynn; Cottrell, Ted E.
2015-01-01
The green stink bug, Chinavia hilaris (Say) (Hemiptera: Pentatomidae), is a pest of cotton in the southeastern United States, but little is known concerning its spatiotemporal distribution in agricultural farmscapes. Therefore, the spatiotemporal distribution of C. hilaris in farmscapes where cotton fields adjoined peanut was examined weekly. Spatial patterns of C. hilaris counts were analyzed using SADIE (Spatial Analysis by Distance Indices) methodology. Interpolated maps of C. hilaris density were used to visualize abundance and distribution of C. hilaris in crops. For the six peanut-cotton farmscapes studied, the frequency of C. hilaris in cotton (94.8%) was significantly higher than in peanut (5.2%), and nymphs were rarely detected in peanut, indicating that peanut was not a source of C. hilaris into cotton. Significantly aggregated spatial distributions were detected in cotton. Maps of local clustering indices depicted patches of C. hilaris in cotton, mainly at field edges including the peanut-to-cotton interface. Black cherry (Prunus serotina Ehrh.) and elderberry (Sambucus nigra subsp. canadensis [L.] R. Bolli) grew in habitats adjacent to crops, C. hilaris were captured in pheromone-baited stink bug traps in these habitats, and in most instances, C. hilaris were observed feeding on black cherry and elderberry in these habitats before colonization of cotton. Spatial distribution of C. hilaris in these farmscapes revealed that C. hilaris colonized cotton field edges near these two noncrop hosts. Altogether, these findings suggest that black cherry and elderberry were sources of C. hilaris into cotton. Factors affecting the spatiotemporal dynamics of C. hilaris in peanut-cotton farmscapes are discussed. PMID:26175464
2010-01-01
Background In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. Results The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also makes it possible to avoid any product of convolutions of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Conclusions Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We conclude with a discussion of our method and its potential improvements. PMID:20205909
Nuel, Gregory; Regad, Leslie; Martin, Juliette; Camproux, Anne-Claude
2010-01-26
In bioinformatics it is common to search for a pattern of interest in a potentially large set of rather short sequences (upstream gene regions, proteins, exons, etc.). Although many methodological approaches allow practitioners to compute the distribution of a pattern count in a random sequence generated by a Markov source, no specific developments have taken into account the counting of occurrences in a set of independent sequences. We aim to address this problem by deriving efficient approaches and algorithms to perform these computations both for low and high complexity patterns in the framework of homogeneous or heterogeneous Markov models. The latest advances in the field allowed us to use a technique of optimal Markov chain embedding based on deterministic finite automata to introduce three innovative algorithms. Algorithm 1 is the only one able to deal with heterogeneous models. It also makes it possible to avoid any product of convolutions of the pattern distribution in individual sequences. When working with homogeneous models, Algorithm 2 yields a dramatic reduction in the complexity by taking advantage of previous computations to obtain moment generating functions efficiently. In the particular case of low or moderate complexity patterns, Algorithm 3 exploits power computation and binary decomposition to further reduce the time complexity to a logarithmic scale. All these algorithms and their relative interest in comparison with existing ones were then tested and discussed on a toy example and three biological data sets: structural patterns in protein loop structures, PROSITE signatures in a bacterial proteome, and transcription factors in upstream gene regions. On these data sets, we also compared our exact approaches to the tempting approximation that consists in concatenating the sequences in the data set into a single sequence. Our algorithms prove to be effective and able to handle real data sets with multiple sequences, as well as biological patterns of interest, even when the latter display a high complexity (PROSITE signatures for example). In addition, these exact algorithms allow us to avoid the edge effect observed under the single sequence approximation, which leads to erroneous results, especially when the marginal distribution of the model displays a slow convergence toward the stationary distribution. We conclude with a discussion of our method and its potential improvements.
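The core idea, embedding the pattern-matching automaton state together with the running occurrence count in a Markov chain, can be illustrated compactly. The sketch below computes the exact count distribution of a short word under an i.i.d. source using the KMP automaton; the paper's algorithms generalize this to (heterogeneous) Markov sources and to sets of sequences, and the `cmax` parameter here simply truncates the count support.

```python
import numpy as np

def count_distribution(pattern, probs, length, cmax):
    """Exact distribution of the number of (possibly overlapping)
    occurrences of `pattern` in `length` i.i.d. symbols with probabilities
    `probs`, via a Markov chain over (KMP automaton state, count)."""
    m = len(pattern)

    def step(q, a):
        # longest suffix of pattern[:q] + a that is also a pattern prefix
        s = pattern[:q] + a
        while s and not pattern.startswith(s):
            s = s[1:]
        return len(s)

    # automaton state after a complete match (longest proper border)
    border = max(k for k in range(m) if pattern.endswith(pattern[:k]))

    P = np.zeros((m, cmax + 1))      # P[q, c]: state q, c occurrences so far
    P[0, 0] = 1.0
    for _ in range(length):
        new = np.zeros_like(P)
        for q in range(m):
            for a, pa in probs.items():
                q2 = step(q, a)
                if q2 == m:          # full match: increment the count
                    new[border, 1:] += pa * P[q, :-1]
                else:
                    new[q2, :] += pa * P[q, :]
        P = new
    return P.sum(axis=0)             # marginal over automaton states

# P(k occurrences of "AB" in 10 symbols with P(A) = 0.6, P(B) = 0.4)
print(count_distribution("AB", {"A": 0.6, "B": 0.4}, 10, 5))
```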
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining INAR(1) representation with a random effects approach.…
Puc, Małgorzata; Wolski, Tomasz
2013-01-01
The allergenic pollen content of the atmosphere varies according to climate, biogeography and vegetation. Minimisation of pollen allergy symptoms is related to the possibility of avoiding large doses of the allergen. Measurements performed in Szczecin over a period of 13 years (2000-2012 inclusive) permitted prediction of theoretical maximum concentrations of pollen grains and their probability for the pollen seasons of Poaceae, Artemisia and Ambrosia. Moreover, the probabilities were determined of a given date being the beginning of the pollen season, of the date of the maximum pollen count, of the Seasonal Pollen Index value and of the number of days with pollen counts above threshold values. Aerobiological monitoring was conducted using a Hirst volumetric trap (Lanzoni VPPS). Linear trends with coefficients of determination (R^2) were calculated. A model for long-term forecasting was constructed using a method based on Gumbel's distribution. A statistically significant negative correlation was determined between the duration of the pollen seasons of Poaceae and Artemisia and the Seasonal Pollen Index value. Seasonal total pollen counts of Artemisia and Ambrosia showed a strong and statistically significant decreasing tendency. On the basis of Gumbel's distribution, a model was proposed for Szczecin, allowing prediction of the probabilities of the maximum pollen count values that can appear once in e.g. 5, 10 or 100 years. Short pollen seasons are characterised by a higher intensity of pollination than long ones. Prediction of the maximum pollen count values, the dates of the pollen season beginning, and the number of days with pollen counts above the threshold, on the basis of Gumbel's distribution, is expected to lead to improvements in the prophylaxis and therapy of persons allergic to pollen.
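The Gumbel-based return-level prediction described above can be sketched in a few lines: fit a Gumbel distribution to annual maximum counts and read off the quantile exceeded on average once in N years. The data values below are hypothetical stand-ins for the 13-year Szczecin record, not the study data.

```python
import numpy as np
from scipy.stats import gumbel_r

# illustrative annual maximum daily Poaceae pollen counts (grains/m3)
annual_max = np.array([95, 120, 88, 140, 110, 75, 160, 98, 130, 105, 85, 150, 115])

loc, scale = gumbel_r.fit(annual_max)

def return_level(years):
    """Maximum count expected to be exceeded once in `years` years,
    i.e. the (1 - 1/years) quantile of the fitted Gumbel distribution."""
    return gumbel_r.ppf(1.0 - 1.0 / years, loc, scale)

for n in (5, 10, 100):
    print(n, round(return_level(n), 1))
```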
Modeling the evolution of infrared galaxies: a parametric backward evolution model
NASA Astrophysics Data System (ADS)
Béthermin, M.; Dole, H.; Lagache, G.; Le Borgne, D.; Penin, A.
2011-05-01
Aims: We attempt to model the infrared galaxy evolution in as simple a way as possible and reproduce statistical properties such as the number counts between 15 μm and 1.1 mm, the luminosity functions, and the redshift distributions. We then use the fitted model to interpret observations from Spitzer, AKARI, BLAST, LABOCA, AzTEC, SPT, and Herschel, and make predictions for Planck and future experiments such as CCAT or SPICA. Methods: This model uses an evolution in density and luminosity of the luminosity function parametrized by broken power-laws with two breaks at redshift ~0.9 and 2, and contains the two populations of the Lagache model: normal and starburst galaxies. We also take into account the effect of the strong lensing of high-redshift sub-millimeter galaxies. This effect is significant in the sub-mm and mm range near 50 mJy. The model has 13 free parameters and eight additional calibration parameters. We fit the parameters to the IRAS, Spitzer, Herschel, and AzTEC measurements with a Markov chain Monte Carlo method. Results: The model adjusted to deep counts at key wavelengths reproduces the counts from mid-infrared to millimeter wavelengths, as well as the mid-infrared luminosity functions. We discuss the contribution to both the cosmic infrared background (CIB) and the infrared luminosity density of the different populations. We also estimate the effect of lensing on the number counts, and discuss the discovery by the South Pole Telescope (SPT) of a very bright population lying at high redshift. We predict the contribution of the lensed sources to the Planck number counts, the confusion level for future missions using a P(D) formalism, and the opacity of the Universe to TeV photons caused by the CIB. Material for the model (software, tables and predictions) is available online.
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
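A minimal sketch of this kind of preprocessing is shown below: crop a flood image to its useful field of view (UFOV) by masking low-count pixels, then take the central 75% as the CFOV, following the usual NEMA convention. The authors worked in MATLAB; this is an illustrative Python analogue with made-up threshold values and synthetic data, and it omits the NEMA smoothing step.

```python
import numpy as np

def extract_fov(flood, threshold_frac=0.1, cfov_frac=0.75):
    """Return (UFOV, CFOV) crops of a flood image: mask pixels below a
    fraction of the mean, bound the remaining region, then keep the
    central cfov_frac of the UFOV in each dimension."""
    mask = flood > threshold_frac * flood[flood > 0].mean()
    rows, cols = np.where(mask)
    ufov = flood[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    h, w = ufov.shape
    dh, dw = int(h * (1 - cfov_frac) / 2), int(w * (1 - cfov_frac) / 2)
    return ufov, ufov[dh:h - dh, dw:w - dw]

# synthetic flood image; integral uniformity = (max - min) / (max + min) * 100
flood = np.random.default_rng(2).poisson(1000, size=(256, 256)).astype(float)
ufov, cfov = extract_fov(flood)
iu = (cfov.max() - cfov.min()) / (cfov.max() + cfov.min()) * 100
print(f"integral uniformity (unsmoothed): {iu:.1f}%")
```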
Legionella in industrial cooling towers: monitoring and control strategies.
Carducci, A; Verani, M; Battistini, R
2010-01-01
Legionella contamination of industrial cooling towers has been identified as the cause of sporadic cases and outbreaks of legionellosis among people living nearby. To evaluate and control Legionella contamination in industrial cooling tower water, microbiological monitoring was carried out to determine the effectiveness of the following disinfection treatments: (i) a continuous chlorine concentration of 0.01 ppm with monthly chlorine shock dosing (5 ppm) on a single cooling tower; (ii) a continuous chlorine concentration of 0.4 ppm with monthly shocks of the biocide P3 FERROCID 8580 (BKG Water Solution) on seven towers. Legionella spp. and total bacterial count (TBC) were determined 3 days before and after each shock dose. Both strategies demonstrated that when chlorine was maintained at low levels, the Legionella count grew to levels above 10^4 CFU l^-1 while TBC remained above 10^8 CFU l^-1. Chlorine shock dosing was able to eliminate bacterial contamination, but only for 10-15 days. Biocide shock dosing was also insufficient to control the problem when the disinfectant was administered at only one point in the plant and at a concentration of 30 ppm. On the other hand, when a biocide concentration of 30 or 50 ppm was distributed throughout a number of points, depending on the plant hydrodynamics, Legionella counts decreased significantly and often remained below the warning limit. Moreover, the contamination of water entering the plant and the presence of sediment were also important factors for Legionella growth. For effective decontamination of outdoor industrial cooling towers, disinfectants should be distributed in a targeted way, taking into account the possible sources of contamination. The data from this research made it possible to modify the disinfection procedure to better reduce water and aerosol contamination and, consequently, the exposure risk.
Quantum Key Distribution Using Polarized Single Photons
2009-04-01
[Report text garbled in extraction; the recoverable fragments describe reading out the SSPD with a low-noise, cryogenic high-electron-mobility transistor (HEMT) of high input impedance, and cite references on measurements of amplitude distributions of dark counts and photon counts in NbN detectors and on fiber-coupled NbN superconducting single-photon detectors for quantum correlation measurements.]
Shokrollahi, Borhan; Mansouri, Marouf; Amanlou, Hamid
2013-06-01
Thirty newborn Markhoz goat kids (n = 15 of each sex, aged 7 ± 3 days) were distributed in a randomized block design in a 2 × 2 + 1 factorial arrangement: two levels of sodium selenite as a source of selenium (0.2 or 0.3 ppm Se), two levels of α-tocopherol acetate as a source of vitamin E (150 or 200 IU Vit E), and one control treatment, with six repetitions per treatment (each replicate included three male and three female kids). Animals were fed daily with Se-Vit E-enriched milk (Se-Vit E treatments) or non-enriched milk (control treatment). Growth rate, hematology, and serum biological parameters were measured. The levels of serum albumin (P < 0.01), serum globulin (P < 0.05), total serum protein (P < 0.01), erythrocyte counts (RBC) (P < 0.001), hemoglobin (P < 0.001), hematocrit (P < 0.001), leukocyte counts (WBC) (P < 0.001), IgA (P < 0.05), IgG (P < 0.01), and IgM (P < 0.01) differed significantly among treatments, while no significant differences were observed for calcium, lymphocytes, neutrophils, average daily gain, or body weight. Kids fed milk enriched with 0.3 ppm Se and 200 IU Vit E had significantly higher serum total protein, globulin, RBC, IgA, IgG, and IgM than controls, and those fed milk enriched with 0.2 ppm Se and 200 IU Vit E had significantly higher WBC counts.
Properties and determinants of codon decoding time distributions
2014-01-01
Background Codon decoding time is a fundamental property of mRNA translation believed to affect the abundance, function, and properties of proteins. Recently, a novel experimental technology, ribosome profiling, was developed to measure the density, and thus the speed, of ribosomes at codon resolution. Specifically, this method is based on next-generation sequencing, which theoretically can provide footprint counts that correspond, for each nucleotide in each transcript, to the probability of observing a ribosome at that position. Results In this study, we report for the first time various novel properties of the distribution of codon footprint counts in five organisms, based on large-scale analysis of ribosomal profiling data. We show that codons have distinctive footprint count distributions. These tend to be preserved along the inner part of the ORF, but differ at the 5' and 3' ends of the ORF, suggesting that the translation-elongation stage actually includes three biophysical sub-steps. In addition, we study various basic properties of the codon footprint count distributions and show that some of them correlate with the abundance of the tRNA molecule types recognizing them. Conclusions Our approach emphasizes the advantages of analyzing ribosome profiling and similar types of data via a comparative genomic codon-distribution-centric view. Thus, our methods can be used in future studies related to translation and even transcription elongation. PMID:25572668
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to the choice of prior on p and the substantially different estimates of abundance that result.
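Royle's estimator integrates the site abundances out of the likelihood. A compact sketch of that marginal likelihood and its maximization on simulated data follows; this is a minimal illustration with a Poisson mixing distribution and a truncated summation over N, not the paper's full treatment.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

def nmix_negloglik(params, y, n_max=200):
    """Negative log-likelihood of the basic N-mixture model: counts
    y[i, t] ~ Binomial(N_i, p) with N_i ~ Poisson(lam), summing N_i out
    up to n_max at each site."""
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    N = np.arange(n_max + 1)
    log_prior = poisson.logpmf(N, lam)
    ll = 0.0
    for counts in y:                      # loop over sites
        lp = log_prior.copy()
        for c in counts:                  # replicate counts at the site
            lp += binom.logpmf(c, N, p)
        m = lp.max()                      # log-sum-exp over N
        ll += m + np.log(np.exp(lp - m).sum())
    return -ll

# toy data: 50 sites, 4 visits, true lambda = 5 and p = 0.4
rng = np.random.default_rng(3)
N_true = rng.poisson(5, size=50)
y = rng.binomial(N_true[:, None], 0.4, size=(50, 4))
fit = minimize(nmix_negloglik, x0=[0.0, 0.0], args=(y,))
print(np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1])))  # lambda-hat, p-hat
```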
The Herschel-ATLAS data release 1 - I. Maps, catalogues and number counts
NASA Astrophysics Data System (ADS)
Valiante, E.; Smith, M. W. L.; Eales, S.; Maddox, S. J.; Ibar, E.; Hopwood, R.; Dunne, L.; Cigan, P. J.; Dye, S.; Pascale, E.; Rigby, E. E.; Bourne, N.; Furlanetto, C.; Ivison, R. J.
2016-11-01
We present the first major data release of the largest single key-project in area carried out in open time with the Herschel Space Observatory. The Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS) is a survey of 600 deg2 in five photometric bands - 100, 160, 250, 350 and 500 μm - with the Photoconductor Array Camera and Spectrometer and Spectral and Photometric Imaging Receiver (SPIRE) cameras. In this paper and the companion Paper II, we present the survey of three fields on the celestial equator, covering a total area of 161.6 deg2 and previously observed in the Galaxy and Mass Assembly (GAMA) spectroscopic survey. This paper describes the Herschel images and catalogues of the sources detected on the SPIRE 250 μm images. The 1σ noise for source detection, including both confusion and instrumental noise, is 7.4, 9.4 and 10.2 mJy at 250, 350 and 500 μm. Our catalogue includes 120 230 sources in total, with 113 995, 46 209 and 11 011 sources detected at >4σ at 250, 350 and 500 μm. The catalogue contains detections at >3σ at 100 and 160 μm for 4650 and 5685 sources, and the typical noise at these wavelengths is 44 and 49 mJy. We include estimates of the completeness of the survey and of the effects of flux bias and also describe a novel method for determining the true source counts. The H-ATLAS source counts are very similar to the source counts from the deeper HerMES survey at 250 and 350 μm, with a small difference at 500 μm. Appendix A provides a quick start in using the released data sets, including instructions and cautions on how to use them.
The SWIFT AGN and Cluster Survey. I. Number Counts of AGNs and Galaxy Clusters
NASA Astrophysics Data System (ADS)
Dai, Xinyu; Griffin, Rhiannon D.; Kochanek, Christopher S.; Nugent, Jenna M.; Bregman, Joel N.
2015-05-01
The Swift active galactic nucleus (AGN) and Cluster Survey (SACS) uses 125 deg2 of Swift X-ray Telescope serendipitous fields with variable depths surrounding γ-ray bursts to provide a medium depth (4 × 10^{-15} erg cm^{-2} s^{-1}) and area survey filling the gap between deep, narrow Chandra/XMM-Newton surveys and wide, shallow ROSAT surveys. Here, we present a catalog of 22,563 point sources and 442 extended sources and examine the number counts of the AGN and galaxy cluster populations. SACS provides excellent constraints on the AGN number counts at the bright end with negligible uncertainties due to cosmic variance, and these constraints are consistent with previous measurements. We use Wide-field Infrared Survey Explorer mid-infrared (MIR) colors to classify the sources. For AGNs we can roughly separate the point sources into MIR-red and MIR-blue AGNs, finding roughly equal numbers of each type in the soft X-ray band (0.5-2 keV), but fewer MIR-blue sources in the hard X-ray band (2-8 keV). The cluster number counts, with 5% uncertainties from cosmic variance, are also consistent with previous surveys but span a much larger continuous flux range. Deep optical or IR follow-up observations of this cluster sample will significantly increase the number of higher-redshift (z > 0.5) X-ray-selected clusters.
Golwala, Zainab Mohammedi; Shah, Hardik; Gupta, Neeraj; Sreenivas, V; Puliyel, Jacob M
2016-06-01
Thrombocytopenia has been shown to predict mortality. We hypothesize that platelet indices may be more useful prognostic indicators. Our study subjects were children one month to 14 years old admitted to our hospital. The aim was to determine whether platelet count, plateletcrit (PCT), mean platelet volume (MPV), platelet distribution width (PDW) and their ratios can predict mortality in hospitalised children. Children who died during hospital stay were the cases. Controls were age-matched children admitted contemporaneously. The first blood sample after admission was used for analysis. A receiver operating characteristic (ROC) curve was used to identify the best threshold for the measured variables and the ratios studied. Multiple regression analysis was done to identify independent predictors of mortality. Forty cases and forty controls were studied. Platelet count, PCT and the ratios MPV/platelet count, MPV/PCT, PDW/platelet count, PDW/PCT and (MPV × PDW)/(platelet count × PCT) were significantly different between children who survived and those who died. On multiple regression analysis, the ratios MPV/PCT, PDW/platelet count and MPV/platelet count were risk factors for mortality, with odds ratios of 4.31 (95% CI, 1.69-10.99), 3.86 (95% CI, 1.53-9.75) and 3.45 (95% CI, 1.38-8.64), respectively. In 67% of the patients who died, the MPV/PCT ratio was above 41.8 and the PDW/platelet count ratio was above 3.86. In 65% of the patients who died, the MPV/platelet count ratio was above 3.45. In this case-control study, the MPV/PCT, PDW/platelet count and MPV/platelet count ratios in the first sample after admission were predictors of mortality and could accurately predict 65-67% of deaths.
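The ROC threshold step described above can be sketched with the Youden index (maximizing sensitivity plus specificity minus one). The arrays below are synthetic stand-ins for a platelet ratio such as MPV/PCT, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(values, died):
    """Youden-index threshold for a prognostic ratio against mortality:
    the cut point maximizing (true positive rate - false positive rate)."""
    fpr, tpr, thr = roc_curve(died, values)
    return thr[np.argmax(tpr - fpr)]

rng = np.random.default_rng(5)
ratio = np.r_[rng.normal(50, 15, 40), rng.normal(35, 12, 40)]  # cases, controls
died = np.r_[np.ones(40), np.zeros(40)]
print(best_threshold(ratio, died))
```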
Impact of particles on sediment accumulation in a drinking water distribution system.
Vreeburg, J H G; Schippers, D; Verberk, J Q J C; van Dijk, J C
2008-10-01
Discolouration of drinking water is one of the main reasons customers complain to their water company. Though corrosion of cast iron is often seen as the main source of this problem, particles originating from the treatment plant play an important and potentially dominant role in generating a discolouration risk in drinking water distribution systems. To investigate this hypothesis, a study was performed in a drinking water distribution system. In two similar isolated network areas, the effect of particles on discolouration risk was studied with particle counting, the Resuspension Potential Method (RPM) and assessment of the total accumulated sediment. In the 'Control Area', supplied with normal drinking water, the discolouration risk was regenerated within 1.5 years. In the 'Research Area', supplied with particle-free water, this will take 10-15 years. An obvious remedy for controlling the discolouration risk is to improve the treatment with respect to the short peaks that are caused by particle breakthrough.
Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-07-15
In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper's capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ≈ 10^{-7}).
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and a variance of one is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
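The pipeline described above (spline detrend, standardize, then transform to uniforms) can be sketched as follows. The series here is a synthetic stand-in for one-minute count data, and the spline smoothing factor is chosen to match its known noise variance; real use would first test the residuals for normality, as the authors do.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

def monitor_to_uniform(counts, smoothing):
    """Approximately uniform variates from a count series: remove the slow
    trend with a smoothing spline, scale residuals to zero mean and unit
    variance, then map through the standard normal CDF."""
    t = np.arange(len(counts), dtype=float)
    trend = UnivariateSpline(t, counts, s=smoothing)(t)
    resid = counts - trend
    z = (resid - resid.mean()) / resid.std()
    return norm.cdf(z)

# synthetic stand-in: slow drift plus Gaussian noise of variance 25**2
rng = np.random.default_rng(4)
counts = 6000 + 50 * np.sin(np.arange(1440) / 200) + rng.normal(0, 25, 1440)
u = monitor_to_uniform(counts, smoothing=len(counts) * 25.0**2)
print(u.min(), u.max(), u.mean())  # roughly uniform on (0, 1)
```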
Cryptographic robustness of practical quantum cryptography: BB84 key distribution protocol
NASA Astrophysics Data System (ADS)
Molotkov, S. N.
2008-07-01
In real fiber-optic quantum cryptography systems, the avalanche photodiodes are not perfect, the source of quantum states is not a single-photon one, and the communication channel is lossy. For these reasons, key distribution is impossible under certain conditions for the system parameters. A simple analysis is performed to find relations between the parameters of real cryptography systems and the length of the quantum channel that guarantee secure quantum key distribution when the eavesdropper’s capabilities are limited only by fundamental laws of quantum mechanics while the devices employed by the legitimate users are based on current technologies. Critical values are determined for the rate of secure real-time key generation that can be reached under the current technology level. Calculations show that the upper bound on channel length can be as high as 300 km for imperfect photodetectors (avalanche photodiodes) with present-day quantum efficiency (η ≈ 20%) and dark count probability (p_dark ∼ 10^{-7}).
Yu, Xubiao; Ladewig, Samantha; Bao, Shaowu; Toline, Catherine A; Whitmire, Stefanie; Chow, Alex T
2018-02-01
To investigate the occurrence and distribution of microplastics in the southeastern coastal region of the United States, we quantified the amount of microplastics in sand samples from multiple coastal sites and developed a predictive model to understand the drift of plastics via ocean currents. Sand samples from eighteen National Park Service (NPS) beaches in the Southeastern Region were collected and microplastics were isolated from each sample. Microplastic counts were compared among sites and local geography was used to make inferences about sources and modes of distribution. Samples were analyzed to identify the composition of particles using Fourier transform infrared spectroscopy (FTIR). To predict the spatiotemporal distribution and movements of particles via coastal currents, a Regional Ocean Modeling System (ROMS) was applied. Microplastics were detected in each of the sampled sites although abundance among sites was highly variable. Approximately half of the samples were dominated by thread-like and fibrous materials as opposed to beads and particles. Results of FTIR suggested that 24% consisted of polyethylene terephthalate (PET), while about 68% of the fibers tested were composed of man-made cellulosic materials such as rayon. Based on published studies examining sources of microplastics, the shape of the particles found here (mostly fibers) and the presence of PET, we infer the source of microplastics in coastal areas is mainly from urban areas, such as wastewater discharge, rather than breakdown of larger marine debris drifting in the ocean. Local geographic features, e.g., the nearness of sites to large rivers and urbanized areas, explain variance in amount of microplastics among sites. Additionally, the distribution of simulated particles is explained by ocean current patterns; computer simulations were correlated with field observations, reinforcing the idea that ocean currents can be a good predictor of the fate and distribution of microplastics at the sites sampled here. Copyright © 2017 Elsevier B.V. All rights reserved.
Detection of anomalies in radio tomography of asteroids: Source count and forward errors
NASA Astrophysics Data System (ADS)
Pursiainen, S.; Kaasalainen, M.
2014-09-01
The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant to the recovery of internal cavities.
Dermatoglyphic analysis of La Liébana (Cantabria, Spain). 2. Finger ridge counts.
Martín, J; Gómez, P
1993-06-01
The results of univariate and multivariate analyses of the quantitative finger dermatoglyphic traits (i.e. ridge counts) of a sample of 109 males and 88 females from La Liébana (Cantabria, Spain) are reported. Univariate results follow the trends usually found in previous studies, e.g., the ranking of finger ridge counts, bilateral asymmetry, and the shape of the frequency distributions. However, sexual dimorphism is nearly nonexistent for finger ridge counts. This lack of dimorphism could be related to certain characteristics of the distribution of finger dermatoglyphic patterns previously reported by the same authors. The multivariate description has been carried out by means of principal component analysis (with varimax rotation to obtain the final solution) of the correlation matrices computed from the 10 maximal finger ridge counts. Although the results do not necessarily prove the concept of developmental fields ("field theory" and later modifications), some precepts of the theory are present: field polarization and field overlapping.
Neelon, Brian; O'Malley, A James; Smith, Valerie A
2016-11-30
Health services data often contain a high proportion of zeros. In studies examining patient hospitalization rates, for instance, many patients will have no hospitalizations, resulting in a count of zero. When the number of zeros is greater or less than expected under a standard count model, the data are said to be zero modified relative to the standard model. A similar phenomenon arises with semicontinuous data, which are characterized by a spike at zero followed by a continuous distribution with positive support. When analyzing zero-modified count and semicontinuous data, flexible mixture distributions are often needed to accommodate both the excess zeros and the typically skewed distribution of nonzero values. Various models have been introduced over the past three decades to accommodate such data, including hurdle models, zero-inflated models, and two-part semicontinuous models. This tutorial describes recent modeling strategies for zero-modified count and semicontinuous data and highlights their role in health services research studies. Part 1 of the tutorial, presented here, provides a general overview of the topic. Part 2, appearing as a companion piece in this issue of Statistics in Medicine, discusses three case studies illustrating applications of the methods to health services research. Copyright © 2016 John Wiley & Sons, Ltd.
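For readers who want a concrete starting point, here is a minimal sketch of fitting one of the models the tutorial covers, a zero-inflated Poisson, using statsmodels; the data are simulated, and the covariate structure is purely illustrative:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)
    structural_zero = rng.random(n) < 0.4            # 40% excess zeros
    y = np.where(structural_zero, 0, rng.poisson(np.exp(0.5 + 0.3 * x)))

    X = sm.add_constant(x)
    res = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
    print(res.summary())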
Galaxy and Mass Assembly (GAMA): Exploring the WISE Web in G12
NASA Astrophysics Data System (ADS)
Jarrett, T. H.; Cluver, M. E.; Magoulas, C.; Bilicki, M.; Alpaslan, M.; Bland-Hawthorn, J.; Brough, S.; Brown, M. J. I.; Croom, S.; Driver, S.; Holwerda, B. W.; Hopkins, A. M.; Loveday, J.; Norberg, P.; Peacock, J. A.; Popescu, C. C.; Sadler, E. M.; Taylor, E. N.; Tuffs, R. J.; Wang, L.
2017-02-01
We present an analysis of the mid-infrared Wide-field Infrared Survey Explorer (WISE) sources seen within the equatorial GAMA G12 field, located in the North Galactic Cap. Our motivation is to study and characterize the behavior of WISE source populations in anticipation of the deep multiwavelength surveys that will define the next decade, with the principal science goal of mapping the 3D large-scale structures and determining the global physical attributes of the host galaxies. In combination with cosmological redshifts, we identify galaxies from their WISE W1 (3.4 μm) resolved emission, and we also perform a star-galaxy separation using apparent magnitude, colors, and statistical modeling of star counts. The resulting galaxy catalog has ≃590,000 sources in 60 deg2, reaching a W1 5σ depth of 31 μJy. At the faint end, where redshifts are not available, we employ a luminosity function analysis to show that approximately 27% of all WISE extragalactic sources to a limit of 17.5 mag (31 μJy) are at high redshift, z> 1. The spatial distribution is investigated using two-point correlation functions and a 3D source density characterization at 5 Mpc and 20 Mpc scales. For angular distributions, we find that brighter and more massive sources are strongly clustered relative to fainter sources with lower mass; likewise, based on WISE colors, spheroidal galaxies have the strongest clustering, while late-type disk galaxies have the lowest clustering amplitudes. In three dimensions, we find a number of distinct groupings, often bridged by filaments and superstructures. Using special visualization tools, we map these structures, exploring how clustering may play a role with stellar mass and galaxy type.
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
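The core idea, that simulating cascade gammas one primary at a time overestimates full-energy-peak efficiency, can be shown with a toy Monte Carlo (ours, not the Geant4 algorithm itself); the per-gamma peak probability is an arbitrary assumption:

    import numpy as np

    rng = np.random.default_rng(1)
    n_decays = 1_000_000
    p_peak = 0.05                              # assumed full-energy probability per gamma
    g1 = rng.random(n_decays) < p_peak         # 1.17 MeV gamma in its photopeak
    g2 = rng.random(n_decays) < p_peak         # 1.33 MeV gamma in its photopeak

    naive = (g1.sum() + g2.sum()) / n_decays                  # single-primary estimate
    coinc = ((g1 & ~g2).sum() + (g2 & ~g1).sum()) / n_decays  # coincident events sum out
    print(naive, coinc)                        # the naive estimate is higher

When both gammas of a decay deposit full energy in the same event, the pulses pile up into the sum peak and leave the individual photopeaks, which is what a multi-primary treatment accounts for.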
Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts
NASA Technical Reports Server (NTRS)
Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.
2010-01-01
We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg. (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r-21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically-confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV=23 and FUV=23.5, and has a limiting depth at NUV=24.5 and FUV=25.0 which corresponds to a star formation rate of 10(exp -3) solar mass yr(sup -1) at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington Bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here: conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV approximately 23 and FUV approximately 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV=22-23.5 and NUV=21.5-23 relative to previous GALEX measurements, which is not attributed to cluster member galaxies. Our galaxy counts are a better match to deeper UV counts measured with HST.
Automatic, time-interval traffic counts for recreation area management planning
D. L. Erickson; C. J. Liu; H. K. Cordell
1980-01-01
Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...
Guarddon, Mónica; Miranda, Jose M; Vázquez, Beatriz I; Cepeda, Alberto; Franco, Carlos M
2012-07-01
The evolution of antimicrobial-resistant bacteria has become a threat to food safety and methods to control them are necessary. Counts of tetracycline-resistant (TR) bacteria by microbiological methods were compared with those obtained by quantitative PCR (qPCR) in 80 meat samples. TR Enterobacteriaceae counts were similar between the count plate method and qPCR (P= 0.24), whereas TR aerobic mesophilic bacteria counts were significantly higher by the microbiological method (P < 0.001). The distribution of tetA and tetB genes was investigated in different types of meat. tetA was detected in chicken meat (40%), turkey meat (100%), pork (20%), and beef (40%) samples, whereas tetB was detected in chicken meat (45%), turkey meat (70%), pork (30%), and beef (35%) samples. The presence of tetracycline residues was also investigated by a receptor assay. This study offers an alternative and rapid method for monitoring the presence of TR bacteria in meat and furthers the understanding of the distribution of tetA and tetB genes. © 2012 Institute of Food Technologists®
Method for spatially distributing a population
Bright, Edward A [Knoxville, TN; Bhaduri, Budhendra L [Knoxville, TN; Coleman, Phillip R [Knoxville, TN; Dobson, Jerome E [Lawrence, KS
2007-07-24
A process for spatially distributing a population count within a geographically defined area can include the steps of logically correlating land usages apparent from a geographically defined area to geospatial features in the geographically defined area and allocating portions of the population count to regions of the geographically defined area having the land usages, according to the logical correlation. The process can also include weighting the logical correlation for determining the allocation of portions of the population count and storing the allocated portions within a searchable data store. The logically correlating step can include the step of logically correlating time-based land usages to geospatial features of the geographically defined area. The process can also include obtaining a population count for the geographically defined area, organizing the geographically defined area into a plurality of sectors, and verifying the allocated portions according to direct observation.
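The allocation step lends itself to a very small sketch; the region labels and land-use weights below are hypothetical:

    def allocate_population(total_count, weights):
        # weights: dict mapping region id -> land-use weight
        wsum = sum(weights.values())
        return {region: total_count * w / wsum for region, w in weights.items()}

    print(allocate_population(10_000,
          {"residential": 6.0, "commercial": 1.5, "parkland": 0.5}))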
10C survey of radio sources at 15.7 GHz - II. First results
NASA Astrophysics Data System (ADS)
AMI Consortium; Davies, Matthew L.; Franzen, Thomas M. O.; Waldram, Elizabeth M.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony; Olamaie, Malak; Pooley, Guy G.; Riley, Julia M.; Rodríguez-Gonzálvez, Carmen; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel P.; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Zwart, Jonathan T. L.
2011-08-01
In a previous paper (Paper I), the observational, mapping and source-extraction techniques used for the Tenth Cambridge (10C) Survey of Radio Sources were described. Here, the first results from the survey, carried out using the Arcminute Microkelvin Imager Large Array (LA) at an observing frequency of 15.7 GHz, are presented. The survey fields cover an area of ≈27 deg2 to a flux-density completeness of 1 mJy. Results for some deeper areas, covering ≈12 deg2, wholly contained within the total areas and complete to 0.5 mJy, are also presented. The completeness for both areas is estimated to be at least 93 per cent. The 10C survey is the deepest radio survey of any significant extent (≳0.2 deg2) above 1.4 GHz. The 10C source catalogue contains 1897 entries and is available online. The source catalogue has been combined with that of the Ninth Cambridge Survey to calculate the 15.7-GHz source counts. A broken power law is found to provide a good parametrization of the differential count between 0.5 mJy and 1 Jy. The measured source count has been compared with that predicted by de Zotti et al.; the model is found to display good agreement with the data at the highest flux densities. However, over the entire flux-density range of the measured count (0.5 mJy to 1 Jy), the model is found to underpredict the integrated count by ≈30 per cent. Entries from the source catalogue have been matched with those contained in the catalogues of the NRAO VLA Sky Survey and the Faint Images of the Radio Sky at Twenty-cm survey (both of which have observing frequencies of 1.4 GHz). This matching provides evidence for a shift in the typical 1.4-to-15.7-GHz spectral index of the 15.7-GHz-selected source population with decreasing flux density towards sub-mJy levels: the spectra tend to become less steep. Automated methods for detecting extended sources, developed in Paper I, have been applied to the data; ≈5 per cent of the sources are found to be extended relative to the LA-synthesized beam of ≈30 arcsec. Investigations using higher resolution data showed that most of the genuinely extended sources at 15.7 GHz are classical doubles, although some nearby galaxies and twin-jet sources were also identified.
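A broken power law of the kind used to parametrize the differential count can be fitted in a few lines; the synthetic bin values, break flux S_b, and slopes below are placeholders, not the 10C measurements:

    import numpy as np
    from scipy.optimize import curve_fit

    def broken_power_law(S, A, S_b, a1, a2):
        return np.where(S < S_b, A * (S / S_b) ** a1, A * (S / S_b) ** a2)

    S = np.logspace(np.log10(0.5), 3.0, 20)                    # flux bins, mJy
    rng = np.random.default_rng(2)
    data = broken_power_law(S, 100.0, 10.0, -1.8, -2.2) * rng.normal(1.0, 0.05, S.size)
    popt, pcov = curve_fit(broken_power_law, S, data, p0=[100.0, 10.0, -1.8, -2.2])
    print(popt)                                                # A, S_b, a1, a2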
Pearson-type goodness-of-fit test with bootstrap maximum likelihood estimation.
Yin, Guosheng; Ma, Yanyuan
2013-01-01
The Pearson test statistic is constructed by partitioning the data into bins and computing the difference between the observed and expected counts in these bins. If the maximum likelihood estimator (MLE) of the original data is used, the statistic generally does not follow a chi-squared distribution or any explicit distribution. We propose a bootstrap-based modification of the Pearson test statistic to recover the chi-squared distribution. We compute the observed and expected counts in the partitioned bins by using the MLE obtained from a bootstrap sample. This bootstrap-sample MLE adjusts exactly the right amount of randomness to the test statistic, and recovers the chi-squared distribution. The bootstrap chi-squared test is easy to implement, as it only requires fitting exactly the same model to the bootstrap data to obtain the corresponding MLE, and then constructs the bin counts based on the original data. We examine the test size and power of the new model diagnostic procedure using simulation studies and illustrate it with a real data set.
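The procedure translates almost directly into code. A minimal sketch for a normal model follows; the bin count and data are illustrative, and the degrees of freedom follow the recovered chi-squared claim:

    import numpy as np
    from scipy import stats

    def bootstrap_pearson(x, bins=10, seed=0):
        rng = np.random.default_rng(seed)
        xb = rng.choice(x, size=x.size, replace=True)   # bootstrap sample
        mu, sd = xb.mean(), xb.std()                    # MLE from the bootstrap sample
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf           # close the tails
        obs, _ = np.histogram(x, edges)                 # observed counts, original data
        exp = x.size * np.diff(stats.norm.cdf(edges, mu, sd))
        stat = ((obs - exp) ** 2 / exp).sum()
        return stat, stats.chi2.sf(stat, df=bins - 1)

    x = np.random.default_rng(3).normal(2.0, 1.5, 500)
    print(bootstrap_pearson(x))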
Point count length and detection of forest neotropical migrant birds
Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
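One simple way to see how count length drives detection is the constant-rate model below (our illustration, not the authors' estimator): if a species is detected at rate r per minute, the chance of at least one detection in T minutes is 1 - exp(-rT):

    import numpy as np

    r = 0.15                                   # assumed detections per minute
    for T in (3, 5, 10):
        print(T, "min:", round(1 - np.exp(-r * T), 3))

Species-, year-, and observer-specific values of r are exactly the quantities whose variation the study documents.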
A Statistical Treatment of Bioassay Pour Fractions
NASA Technical Reports Server (NTRS)
Barengoltz, Jack; Hughes, David W.
2014-01-01
The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (the pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence.
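Inverting the binomial numerically is straightforward. In this sketch, k colonies are observed in a pour fraction f, and we search for the largest total count that is still consistent with the observation at the stated confidence; the numbers are illustrative:

    from scipy.stats import binom

    def total_upper_bound(k, f, conf=0.95):
        n = k
        while binom.cdf(k, n, f) > 1 - conf:   # P(observe <= k | n, f)
            n += 1
        return n - 1                           # largest total still plausible

    print(total_upper_bound(k=5, f=0.25, conf=0.95))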
Fast distributed large-pixel-count hologram computation using a GPU cluster.
Pan, Yuechao; Xu, Xuewu; Liang, Xinan
2013-09-10
Large-pixel-count holograms are one essential part for big size holographic three-dimensional (3D) display, but the generation of such holograms is computationally demanding. In order to address this issue, we have built a graphics processing unit (GPU) cluster with 32.5 Tflop/s computing power and implemented distributed hologram computation on it with speed improvement techniques, such as shared memory on GPU, GPU level adaptive load balancing, and node level load distribution. Using these speed improvement techniques on the GPU cluster, we have achieved 71.4 times computation speed increase for 186M-pixel holograms. Furthermore, we have used the approaches of diffraction limits and subdivision of holograms to overcome the GPU memory limit in computing large-pixel-count holograms. 745M-pixel and 1.80G-pixel holograms were computed in 343 and 3326 s, respectively, for more than 2 million object points with RGB colors. Color 3D objects with 1.02M points were successfully reconstructed from 186M-pixel hologram computed in 8.82 s with all the above three speed improvement techniques. It is shown that distributed hologram computation using a GPU cluster is a promising approach to increase the computation speed of large-pixel-count holograms for large size holographic display.
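To make the decomposition concrete, here is a toy point-source hologram kernel (not the authors' GPU code) in which each worker computes one horizontal stripe, mirroring the node-level load distribution described above; the wavelength and pixel pitch are assumed values:

    import numpy as np

    def hologram_stripe(points, amps, y0, y1, nx, wavelength=633e-9, pitch=8e-6):
        k = 2 * np.pi / wavelength
        ys, xs = np.mgrid[y0:y1, 0:nx]
        hx, hy = xs * pitch, ys * pitch
        field = np.zeros((y1 - y0, nx), dtype=complex)
        for (px, py, pz), a in zip(points, amps):      # superpose spherical waves
            r = np.sqrt((hx - px) ** 2 + (hy - py) ** 2 + pz ** 2)
            field += a * np.exp(1j * k * r) / r
        return np.angle(field)                         # phase-only hologram stripe

    stripe = hologram_stripe([(0.0, 0.0, 0.5)], [1.0], 0, 64, 1024)

Stripes computed on different GPUs are simply stacked, which is why row-wise splitting balances load well for point-source methods.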
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
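A negative binomial analysis of episode counts with a treatment indicator takes only a few lines in statsmodels; the data are simulated here, not the solifenacin or mirabegron trial data, and the exponentiated slope is the rate ratio clinicians would report:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 400
    treat = rng.integers(0, 2, n)
    mu = np.exp(1.0 - 0.3 * treat)                    # true rate ratio exp(-0.3)
    y = rng.negative_binomial(2, 2 / (2 + mu))        # overdispersed counts with mean mu

    res = sm.NegativeBinomial(y, sm.add_constant(treat)).fit(disp=0)
    print("rate ratio:", np.exp(res.params[1]))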
Estimation of Confidence Intervals for Multiplication and Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J
2009-07-17
Helium-3 tubes are used to detect thermal neutrons by charge collection using the {sup 3}He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoretical count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency ε. While the optimal pair of M and ε can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and ε, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and ε vary with the different count distributions.
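One standard way to quantify such uncertainties is a chi-square surface over the (M, ε) plane. In the sketch below, model_dist(M, eps) stands in for the theoretical fission-chain count distribution (e.g., as evaluated by a code like BigFit) and is not defined here; the 1σ contour for two parameters sits at Δχ² ≈ 2.30:

    import numpy as np

    def chi2_surface(observed, sigma, model_dist, Ms, epss):
        surf = np.empty((Ms.size, epss.size))
        for i, M in enumerate(Ms):
            for j, eps in enumerate(epss):
                resid = (observed - model_dist(M, eps)) / sigma
                surf[i, j] = (resid ** 2).sum()
        return surf            # 1-sigma region: surf <= surf.min() + 2.30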
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY ...
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY COUNTING MEASURES RADIOACTIVE MATERIAL IN THE BODY. DESIGNED TO MINIMIZE EXTERNAL SOURCES OF RADIATION, BODY COUNTING ROOMS ARE CONSTRUCTED OF PRE-WORLD WAR II (WWII) STEEL. PRE-WWII STEEL, WHICH HAS NOT BEEN AFFECTED BY NUCLEAR FALLOUT, IS LOWER IN RADIOACTIVITY THAN STEEL CREATED AFTER WWII. (10/25/85) - Rocky Flats Plant, Emergency Medical Services Facility, Southwest corner of Central & Third Avenues, Golden, Jefferson County, CO
General relativistic corrections in density-shear correlations
NASA Astrophysics Data System (ADS)
Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena
2018-06-01
We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in the standard galaxy-galaxy analyses, and the additional lensing of a counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, these relativistic and lensing corrections become, however, increasingly important, and we argue that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.
NASA Astrophysics Data System (ADS)
Nishizawa, Yukiyasu; Sugita, Takeshi; Sanada, Yukihisa; Torii, Tatsuo
2015-04-01
Since 2011, MEXT (the Ministry of Education, Culture, Sports, Science and Technology, Japan) has been conducting aerial monitoring to investigate the distribution of radioactive cesium dispersed into the atmosphere after the accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP), Tokyo Electric Power Company. Distribution maps of the air dose-rate at 1 m above the ground and the radioactive cesium deposition concentration on the ground are prepared using spectra obtained by aerial monitoring. The radioactive cesium deposition is derived from its dose rate, which is calculated by excluding the dose rate of the background radiation due to natural radionuclides from the air dose-rate at 1 m above the ground. The first step of the current method of calculating the dose rate due to natural radionuclides is to calculate, in areas where no radioactive cesium is detected, the ratio of the total count rate to the count rate above 1,400 keV (the BG-Index). The natural-background air dose rate in areas where radioactive cesium is distributed is then obtained by multiplying the BG-Index by the integrated count rate of 1,400 keV or higher for that area. In high dose-rate areas, however, the count rate of the 1,365-keV peak of Cs-134, though small, is included in the integrated count rate of 1,400 keV or higher, which could cause an overestimation of the air dose rate of natural radionuclides. We developed a method for accurately evaluating the distribution maps of the natural air dose-rate by excluding the effect of radioactive cesium, even in contaminated areas, and obtained an accurate map of the air dose-rate attributable to the radioactive cesium deposition on the ground. Furthermore, the natural dose-rate distribution throughout Japan has been obtained by this method.
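A compact sketch of the BG-Index bookkeeping follows; the spectra are arrays of count rates per energy bin, and the counts-to-dose conversion factor is an assumed placeholder:

    import numpy as np

    def cesium_dose_rate(total_dose, spec_contam, spec_clean, e_bins,
                         e_thresh=1400.0, dose_per_count=1.0e-3):
        hi = e_bins >= e_thresh
        bg_index = spec_clean.sum() / spec_clean[hi].sum()   # from cesium-free areas
        natural = bg_index * spec_contam[hi].sum() * dose_per_count
        return total_dose - natural                          # cesium contribution

As we read the abstract, the reported refinement corresponds to removing the small Cs-134 contribution from the counts above 1,400 keV before this multiplication.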
Kaur, S; Nieuwenhuijsen, M J
2009-07-01
Short-term human exposure concentrations of PM2.5, ultrafine particle counts (particle range: 0.02-1 μm), and carbon monoxide (CO) were investigated at and around a street canyon intersection in Central London, UK. During a four week field campaign, groups of four volunteers collected samples at three timings (morning, lunch, and afternoon), along two different routes (a heavily trafficked route and a backstreet route), via five modes of transport (walking, cycling, bus, car, and taxi). This was followed by an investigation into the determinants of exposure using a regression technique which incorporated the site-specific traffic counts, meteorological variables (wind speed and temperature), and the mode of transport used. The analyses explained 9, 62, and 43% of the variability observed in the exposure concentrations of PM2.5, ultrafine particle counts, and CO, respectively. The mode of transport was a statistically significant determinant of personal exposure to PM2.5, ultrafine particle counts, and CO, and for PM2.5 and ultrafine particle counts it was the most important determinant. Traffic count explained little of the variability in the PM2.5 concentrations, but it had a greater influence on ultrafine particle count and CO concentrations. The analyses showed that temperature had a statistically significant impact on ultrafine particle count and CO concentrations. Wind speed also had a statistically significant effect, albeit smaller. The small proportion of variability explained for PM2.5, compared to the larger proportions for ultrafine particle counts and CO, may be due to long-range transboundary sources of PM2.5, whereas local traffic is the main source of ultrafine particles and CO.
Local time variations of high-energy plasmaspheric ion pitch angle distributions
Sarno-Smith, Lois K.; Liemohn, Michael W.; Skoug, Ruth M.; ...
2016-07-01
Recent observations from the Van Allen Probes Helium Oxygen Proton Electron (HOPE) instrument revealed a persistent depletion in the 1–10 eV ion population in the postmidnight sector during quiet times in the 2 < L < 3 region. This study explores the source of this ion depletion by developing an algorithm to classify 26 months of pitch angle distributions measured by the HOPE instrument. We correct the HOPE low energy fluxes for spacecraft potential using measurements from the Electric Field and Waves (EFW) instrument. A high percentage of low-count pitch angle distributions is found in the postmidnight sector, coupled with a low percentage of ion distributions peaked perpendicular to the field line. A peak in loss cone distributions in the dusk sector is also observed. These results characterize the nature of the dearth of the near 90° pitch angle 1–10 eV ion population in the near-Earth postmidnight sector. This study also shows, for the first time, low-energy HOPE differential number fluxes corrected for spacecraft potential and 1–10 eV H+ fluxes at different levels of geomagnetic activity.
NASA Technical Reports Server (NTRS)
Weedman, Daniel W.
1987-01-01
The infrared properties of star-forming galaxies, primarily as determined by the Infrared Astronomy Satellite (IRAS), are compared to X-ray, optical, and radio properties. Luminosity functions are reviewed and combined with those derived from optically discovered samples using 487 Markarian galaxies with redshifts and published IRAS 60 micron fluxes, and 1074 such galaxies in the Center for Astrophysics redshift survey. It is found that the majority of infrared galaxies which could be detected are low luminosity sources already known from the optical samples, but non-infrared surveys have found only a very small fraction of the highest luminosity sources. Distributions of infrared to optical fluxes and available spectra indicate that the majority of IRAS-selected galaxies are starburst galaxies. Having a census of starburst galaxies and associated dust allows several important global calculations. The source counts are predicted as a function of flux limits for both infrared and radio fluxes. These galaxies are found to be important radio sources at faint flux limits. Taking the integrated flux to z = 3 indicates that such galaxies are a significant component of the diffuse X-ray background, and could be the dominant component depending on the nature of the X-ray spectra and source evolution.
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be done concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell to be occupied is bigger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
NASA Astrophysics Data System (ADS)
Fenske, Roger; Näther, Dirk U.; Dennis, Richard B.; Smith, S. Desmond
2010-02-01
Commercial fluorescence lifetime spectrometers have long suffered from the lack of a simple, compact, and relatively inexpensive broad-spectral-band light source that can be flexibly employed for both quasi-steady-state and time resolved measurements (using Time Correlated Single Photon Counting [TCSPC]). This paper reports the integration of an optically pumped photonic crystal fibre supercontinuum source (Fianium model SC400PP) as a light source in fluorescence lifetime spectrometers (Edinburgh Instruments FLS920 and Lifespec II), with single photon counting detectors (micro-channel plate photomultiplier and a near-infrared photomultiplier) covering the UV to NIR range. An innovative method of spectral selection of the supercontinuum source involving wedge interference filters is also discussed.
Legionella prevalence and risk of legionellosis in Hungarian hospitals.
Barna, Zsófia; Kádár, Mihály; Kálmán, Emese; Róka, Eszter; Szax, Anita Sch; Vargha, Márta
2015-12-01
Nosocomial legionellosis is a growing concern worldwide. In Hungary, about 20% of the reported cases are health-care associated, but in the absence of legal regulation, environmental monitoring of Legionella is not routinely performed in hospitals. In the present study, 23 hospitals were investigated. The hot water distribution system was colonized by Legionella in over 90%; counts generally exceeded the public health limit value. Hot water temperature was critically low in all systems (<45 °C), and large differences (3-38 °C temperature drop) were observed within buildings, indicating insufficient circulation. Most facilities were older than 30 years (77%); however, new systems (n = 3) were also shown to be rapidly colonized at low hot water temperature. Vulnerable source of drinking water, complex distribution system, and large volume hot water storage increased the risk of Legionella prevalence (OR = 28.0, 27.3, 27.7, respectively). Risk management interventions (including thermal or chemical disinfection) were only efficient if the system operation was optimized. Though the risk factors were similar, in those hospitals where nosocomial legionellosis was reported, Legionella counts and the proportion of L. pneumophila sg 1 isolates were significantly higher. The results of environmental prevalence of legionellae in hospitals suggest that the incidence of nosocomial legionellosis is likely to be underreported. The observed colonization rates call for the introduction of a mandatory environmental monitoring scheme.
High-redshift radio galaxies and divergence from the CMB dipole
NASA Astrophysics Data System (ADS)
Colin, Jacques; Mohayaee, Roya; Rameez, Mohamed; Sarkar, Subir
2017-10-01
Previous studies have found our velocity in the rest frame of radio galaxies at high redshift to be much larger than that inferred from the dipole anisotropy of the cosmic microwave background. We construct a full sky catalogue, NVSUMSS, by merging the NRAO VLA Sky Survey and the Sydney University Molonglo Sky Survey catalogues and removing local sources by various means, including cross-correlating with the 2MASS Redshift Survey catalogue. We take into account both aberration and Doppler boost to deduce our velocity from the hemispheric number count asymmetry, as well as via a three-dimensional linear estimator. Both its magnitude and direction depend on cuts made to the catalogue, e.g. on the lowest source flux; however, these effects are small. From the hemispheric number count asymmetry we obtain a velocity of 1729 ± 187 km s-1, i.e. about four times larger than that obtained from the cosmic microwave background dipole, but close in direction, towards RA = 149° ± 2°, Dec. = -17° ± 12°. With the three-dimensional estimator, the derived velocity is 1355 ± 174 km s-1 towards RA = 141° ± 11°, Dec. = -9° ± 10°. We assess the statistical significance of these results by comparison with catalogues of random distributions, finding it to be 2.81σ (99.75 per cent confidence).
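For orientation, the standard kinematic-dipole conversion (following Ellis & Baldwin 1984) from a hemispheric count asymmetry A = (N_forward - N_backward)/(N_forward + N_backward) to a velocity looks like this; the count slope x, spectral index α, and the sample asymmetry are assumed values, not the paper's fitted numbers:

    C_KM_S = 299_792.458

    def velocity_from_asymmetry(A, x=1.0, alpha=0.75):
        d = 2 * A                        # dipole amplitude for N ~ 1 + d*cos(theta)
        return C_KM_S * d / (2 + x * (1 + alpha))

    print(velocity_from_asymmetry(0.011))   # ~1760 km/s for this hypothetical A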
Ventilation System Effectiveness and Tested Indoor Air Quality Impacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rudd, Armin; Bergey, Daniel
Ventilation system effectiveness testing was conducted at two unoccupied, single-family, detached lab homes at the University of Texas - Tyler. Five ventilation system tests were conducted with various whole-building ventilation systems. Multizone fan pressurization testing characterized building and zone enclosure leakage. PFT testing showed multizone air change rates and interzonal airflow. Cumulative particle counts for six particle sizes, and formaldehyde and other Top 20 VOC concentrations, were measured in multiple zones. The testing showed that single-point exhaust ventilation was inferior as a whole-house ventilation strategy: the outside air was not drawn directly from outside, the ventilation air was not distributed, and no provision existed for air filtration. Indoor air recirculation by a central air distribution system can help improve the exhaust ventilation system by way of air mixing and filtration. In contrast, the supply and balanced ventilation systems showed that there is a significant benefit to drawing outside air from a known outside location, and filtering and distributing that air. Compared to the exhaust systems, the CFIS and ERV systems showed better ventilation air distribution and lower concentrations of particulates, formaldehyde, and other VOCs. System improvement percentages were estimated based on four System Factor Categories: Balance, Distribution, Outside Air Source, and Recirculation Filtration. Recommended System Factors could be applied to reduce ventilation fan airflow rates relative to ASHRAE Standard 62.2 to save energy and reduce moisture control risk in humid climates. HVAC energy savings were predicted to be 8-10%, or $50-$75/year.
Events and the Ontology of Individuals: Verbs as a Source of Individuating Mass and Count Nouns
ERIC Educational Resources Information Center
Barner, David; Wagner, Laura; Snedeker, Jesse
2008-01-01
What does mass-count syntax contribute to the interpretation of noun phrases (NPs), and how much of NP meaning is contributed by lexical items alone? Many have argued that count syntax specifies reference to countable individuals (e.g., "cats") while mass syntax specifies reference to unindividuated entities (e.g., "water"). We evaluated this…
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2010 CFR
2010-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... Subsidies Income § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2011 CFR
2011-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... Subsidies Income § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
Passive decoy-state quantum key distribution with practical light sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curty, Marcos; Ma, Xiongfeng; Qi, Bing
2010-02-15
Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. Although active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase-randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty et al., Opt. Lett. 34, 3238 (2009)]. This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy-state setup with an infinite number of decoy settings. In this article we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photodetectors. In particular, we consider sources emitting thermal states, phase-randomized WCP, and strong coherent light in combination with several types of photodetectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photodetectors. Our analysis includes as well the effect that detection inefficiencies and noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations on the effects that statistical fluctuations due to a finite data size can have in practical implementations.
Thermospheric temperature measurement technique.
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Fowler, P.
1972-01-01
A method for measurement of temperature in the earth's lower thermosphere from a high-velocity probe is described. An undisturbed atmospheric sample is admitted to the instrument by means of a free molecular flow inlet system of skimmers which avoids surface collisions of the molecules prior to detection. Measurement of the time-of-flight distribution of an initially well-localized group of nitrogen metastable molecular states, produced in an open, crossed electron-molecular beam source, yields information on the atmospheric temperature. It is shown that for high vehicle velocities, the time-of-flight distribution of the metastable flux is a sensitive indicator of atmospheric temperature. The temperature measurement precision should be greater than 94% at the 99% confidence level over the range of altitudes from 120-170 km. These precision and altitude range estimates are based on the statistical consideration of the counting rates achieved with a multichannel analyzer using realistic values for system parameters.
Evidence for ion heat flux in the light ion polar wind
NASA Technical Reports Server (NTRS)
Biddle, A. P.; Moore, T. E.; Chappell, C. R.
1985-01-01
Cold flowing hydrogen and helium ions have been observed using the retarding ion mass spectrometer on board the Dynamics Explorer 1 spacecraft in the dayside magnetosphere at subauroral latitudes. The ions show a marked flux asymmetry with respect to the relative wind direction. The observed data are fitted by a model of drifting Maxwellian distributions perturbed by a first-order Spitzer-Härm heat flux distribution function. It is shown that both ion species are supersonic just equatorward of the auroral zone at L = 14, and the shape and direction of the asymmetry are consistent with the presence of an upward heat flux. At L = 6, both species evolve smoothly into warmer subsonic upward flows with downward heat fluxes. In the case of subsonic flows the downward heat flux implies a significant heat source at higher altitudes. Spin curves of the spectrometer count rate versus the spin phase angle are provided.
The Hawaii SCUBA-2 Lensing Cluster Survey: Number Counts and Submillimeter Flux Ratios
NASA Astrophysics Data System (ADS)
Hsu, Li-Yen; Cowie, Lennox L.; Chen, Chian-Chou; Barger, Amy J.; Wang, Wei-Hao
2016-09-01
We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L IR < 1012 L ⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L IR < 1012 L ⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
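The EBL bookkeeping in the text is a single integral of S·dN/dS over flux. The sketch below uses a hypothetical power-law count, not the measured SCUBA-2 counts:

    import numpy as np

    S = np.logspace(-1, 1, 200)          # flux grid, mJy
    dNdS = 5e3 * S ** -2.2               # assumed counts per mJy per deg^2
    ebl = np.trapz(S * dNdS, S)          # background light, mJy per deg^2
    print(ebl)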
A Predictive Model for Microbial Counts on Beaches where Intertidal Sand is the Primary Source
Feng, Zhixuan; Reniers, Ad; Haus, Brian K.; Solo-Gabriele, Helena M.; Wang, John D.; Fleming, Lora E.
2015-01-01
Human health protection at recreational beaches requires accurate and timely information on microbiological conditions to issue advisories. The objective of this study was to develop a new numerical mass balance model for enterococci levels on nonpoint source beaches. The significant advantage of this model is its easy implementation, and it provides a detailed description of the cross-shore distribution of enterococci that is useful for beach management purposes. The performance of the balance model was evaluated by comparing predicted exceedances of a beach advisory threshold value to field data, and to a traditional regression model. Both the balance model and regression equation predicted approximately 70% the advisories correctly at the knee depth and over 90% at the waist depth. The balance model has the advantage over the regression equation in its ability to simulate spatiotemporal variations of microbial levels, and it is recommended for making more informed management decisions. PMID:25840869
Automated food microbiology: potential for the hydrophobic grid-membrane filter.
Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L
1978-01-01
Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054
Color quench correction for low level Cherenkov counting.
Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B
2009-05-01
The Cherenkov counting efficiency varies strongly with color quenching, thus correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench indicative parameter based on spectra area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectra indicators is its usefulness also for low level Cherenkov counting.
The topology of galaxy clustering.
NASA Astrophysics Data System (ADS)
Coles, P.; Plionis, M.
The authors discuss an objective method for quantifying the topology of the galaxy distribution using only projected galaxy counts. The method is a useful complement to fully three-dimensional studies of topology based on the genus by virtue of the enormous projected data sets available. Applying the method to the Lick counts they find no evidence for large-scale non-gaussian behaviour, whereas the small-scale distribution is strongly non-gaussian, with a shift in the meatball direction.
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
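The variance-partitioning idea can be illustrated with a short simulation: log-scale site, year, and site-by-year effects plus negative binomial observation noise. The component magnitudes are invented, and an actual fit would use a negative binomial mixed model (e.g., glmer.nb in R's lme4), which is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(5)
    n_sites, n_years = 20, 15
    site = rng.normal(0, 0.6, n_sites)[:, None]       # variance 0.36
    year = rng.normal(0, 0.4, n_years)[None, :]       # variance 0.16
    sxy = rng.normal(0, 0.3, (n_sites, n_years))      # interaction variance 0.09
    mu = np.exp(1.5 + site + year + sxy)              # expected catch per site-year
    k = 1.5                                           # NB scaling (size) parameter
    catch = rng.negative_binomial(k, k / (k + mu))    # simulated gill-net counts

    total = 0.36 + 0.16 + 0.09
    print("share of log-scale variance among years:", 0.16 / total)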
Automatic measurements and computations for radiochemical analyses
Rosholt, J.N.; Dooley, J.R.
1960-01-01
In natural radioactive sources the most important radioactive daughter products useful for geochemical studies are protactinium-231, the alpha-emitting thorium isotopes, and the radium isotopes. To resolve the abundances of these thorium and radium isotopes by their characteristic decay and growth patterns, a large number of repeated alpha activity measurements on the two chemically separated elements were made over extended periods of time. Alpha scintillation counting with automatic measurements and sample changing is used to obtain the basic count data. Generation of the required theoretical decay and growth functions, varying with time, and the least squares solution of the overdetermined simultaneous count rate equations are done with a digital computer. Examples of the complex count rate equations which may be solved and results of a natural sample containing four α-emitting isotopes of thorium are illustrated. These methods facilitate the determination of the radioactive sources on the large scale required for many geochemical investigations.
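The least-squares step can be sketched directly: measured count rates are modeled as a linear combination of known decay (or growth) basis functions, and the activities follow from an overdetermined solve. The half-lives below are illustrative stand-ins, not the full thorium/radium system:

    import numpy as np

    t = np.linspace(0, 300, 40)                          # measurement times, days
    half_lives = np.array([18.7, 1.9 * 365.25])          # e.g. two thorium-like isotopes
    lam = np.log(2) / half_lives
    A = np.exp(-np.outer(t, lam))                        # decay basis functions
    true_act = np.array([120.0, 80.0])                   # initial count rates
    y = A @ true_act + np.random.default_rng(6).normal(0, 2, t.size)

    est, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(est)                                           # recovered activities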
The Planck Catalogue of Galactic Cold Clumps : Looking at the early stages of star-formation
NASA Astrophysics Data System (ADS)
Montier, Ludovic
2015-08-01
The Planck satellite has provided an unprecedented view of the submm sky, allowing us to search for the dust emission of Galactic cold sources. Combining Planck-HFI all-sky maps in the high frequency channels with the IRAS map at 100um, we built the Planck catalogue of Galactic Cold Clumps (PGCC, Planck 2015 results XXVIII 2015), counting 13188 sources distributed over the whole sky, and following mainly the Galactic structures at low and intermediate latitudes. This is the first all-sky catalogue of Galactic cold sources obtained with a single instrument at this resolution and sensitivity, which opens a new window on star-formation processes in our Galaxy.I will briefly describe the colour detection method used to extract the Galactic cold sources, i.e., the Cold Core Colour Detection Tool (CoCoCoDeT, Montier et al. 2010), and its application to the Planck data. I will discuss the statistical distribution of the properties of the PGCC sources (in terms of dust temperature, distance, mass, density and luminosity), which illustrates that the PGCC catalogue spans a large variety of environments and objects, from molecular clouds to cold cores, and covers various stages of evolution. The Planck catalogue is a very powerful tool to study the formation and the evolution of prestellar objects and star-forming regions.I will finally present an overview of the Herschel Key Program Galactic Cold Cores (PI. M.Juvela), which allowed us to follow-up about 350 Planck Galactic Cold Clumps, in various stages of evolution and environments. With this program, the nature and the composition of the 5' Planck sources have been revealed at a sub-arcmin resolution, showing very different configurations, such as starless cold cores or multiple Young Stellar objects still embedded in their cold envelope.
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be done concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell to be occupied is bigger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
High-spatial-resolution nanoparticle x-ray fluorescence tomography
NASA Astrophysics Data System (ADS)
Larsson, Jakob C.; Vågberg, William; Vogt, Carmen; Lundström, Ulf; Larsson, Daniel H.; Hertz, Hans M.
2016-03-01
X-ray fluorescence tomography (XFCT) has potential for high-resolution 3D molecular x-ray bio-imaging. In this technique the fluorescence signal from targeted nanoparticles (NPs) is measured, providing information about the spatial distribution and concentration of the NPs inside the object. However, present laboratory XFCT systems typically have limited spatial resolution (>1 mm) and suffer from long scan times and high radiation dose even at high NP concentrations, mainly due to low efficiency and poor signal-to-noise ratio. We have developed a laboratory XFCT system with high spatial resolution (sub-100 μm), low NP concentration and vastly decreased scan times and dose, opening up the possibilities for in-vivo small-animal imaging research. The system consists of a high-brightness liquid-metal-jet microfocus x-ray source, x-ray focusing optics and an energy-resolving photon-counting detector. By using the source's characteristic 24 keV line-emission together with carefully matched molybdenum nanoparticles the Compton background is greatly reduced, increasing the SNR. Each measurement provides information about the spatial distribution and concentration of the Mo nanoparticles. A filtered back-projection method is used to produce the final XFCT image.
High sensitivity pulse-counting mass spectrometer system for noble gas analysis
NASA Technical Reports Server (NTRS)
Hohenberg, C. M.
1980-01-01
A pulse-counting mass spectrometer is described which is comprised of a new ion source of cylindrical geometry with exceptional optical properties (the Baur source), dual focal-plane, externally adjustable collector slits, and a 17-stage Allen-type electron multiplier, all housed in a metal 21 cm radius, 90 deg magnetic sector flight tube. Mass discrimination of the instrument is less than 1 per mil per mass unit; the optical transmission is more than 90%; the source sensitivity (Faraday collection) is 4 mA/torr at 250 micron emission; and the abundance sensitivity is 30,000.
Cade, Brian S.; Noon, Barry R.; Scherer, Rick D.; Keane, John J.
2017-01-01
Counts of avian fledglings, nestlings, or clutch size that are bounded below by zero and above by some small integer form a discrete random variable distribution that is not approximated well by conventional parametric count distributions such as the Poisson or negative binomial. We developed a logistic quantile regression model to provide estimates of the empirical conditional distribution of a bounded discrete random variable. The logistic quantile regression model requires that counts are randomly jittered to a continuous random variable, logit transformed to bound them between specified lower and upper values, then estimated in conventional linear quantile regression, repeating the 3 steps and averaging estimates. Back-transformation to the original discrete scale relies on the fact that quantiles are equivariant to monotonic transformations. We demonstrate this statistical procedure by modeling 20 years of California Spotted Owl fledgling production (0−3 per territory) on the Lassen National Forest, California, USA, as related to climate, demographic, and landscape habitat characteristics at territories. Spotted Owl fledgling counts increased nonlinearly with decreasing precipitation in the early nesting period, in the winter prior to nesting, and in the prior growing season; with increasing minimum temperatures in the early nesting period; with adult compared to subadult parents; when there was no fledgling production in the prior year; and when percentage of the landscape surrounding nesting sites (202 ha) with trees ≥25 m height increased. Changes in production were primarily driven by changes in the proportion of territories with 2 or 3 fledglings. Average variances of the discrete cumulative distributions of the estimated fledgling counts indicated that temporal changes in climate and parent age class explained 18% of the annual variance in owl fledgling production, which was 34% of the total variance. Prior fledgling production explained as much of the variance in the fledgling counts as climate, parent age class, and landscape habitat predictors. Our logistic quantile regression model can be used for any discrete response variables with fixed upper and lower bounds.
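The three-step estimator translates into a short script. This sketch simulates counts bounded on [0, 3], jitters them into the open interval (0, 4), logit-transforms, fits a median regression, and averages over jitters; the data and rep count are illustrative, and the floor back-transform is one convention for returning to the count scale:

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.quantile_regression import QuantReg

    rng = np.random.default_rng(7)
    n = 300
    x = rng.normal(size=n)
    y = np.clip(rng.poisson(np.exp(0.2 + 0.4 * x)), 0, 3)   # counts in 0..3
    X = sm.add_constant(x)

    lo, hi, tau, reps = 0.0, 4.0, 0.5, 20
    betas = []
    for _ in range(reps):
        z = y + rng.random(n)                     # jitter to continuous (0, 4)
        w = np.log((z - lo) / (hi - z))           # logit with fixed bounds
        betas.append(QuantReg(w, X).fit(q=tau).params)
    beta = np.mean(betas, axis=0)

    fitted = lo + (hi - lo) / (1 + np.exp(-(X @ beta)))     # back-transform
    print(np.floor(fitted[:5]))                             # median counts

Back-transformation is legitimate because quantiles are equivariant to monotonic transformations, the property the abstract relies on.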
Simulated fissioning of uranium and testing of the fission-track dating method
McGee, V.E.; Johnson, N.M.; Naeser, C.W.
1985-01-01
A computer program (FTD-SIM) faithfully simulates the fissioning of 238U with time and of 235U with neutron dose. The simulation is based on first principles of physics, where the fissioning of 238U over time is described by Ns = λf · 238U · t and the fissioning of 235U with neutron fluence is described by Ni = σ · 235U · Φ. The Poisson law is used to set the stochastic variation of fissioning within the uranium population. The life history of a given crystal can thus be traced under an infinite variety of age and irradiation conditions. A single dating attempt or up to 500 dating attempts on a given crystal population can be simulated by specifying the age of the crystal population, the size and variation in the areas to be counted, the amount and distribution of uranium, the neutron dose to be used and its variation, and the desired ratio of 238U to 235U. A variety of probability distributions can be applied to uranium and counting area. The Price and Walker age equation is used to estimate age. The output of FTD-SIM includes the tabulated results of each individual dating attempt (sample) on demand and/or the summary statistics and histograms for multiple dating attempts (samples), including the sampling age. An analysis of the results from FTD-SIM shows that: (1) The external detector method is intrinsically more precise than the population method. (2) For the external detector method, a correlation between spontaneous track count, Ns, and induced track count, Ni, results when the population of grains has a stochastic uranium content and/or when the counting areas between grains are stochastic. For the population method no correlation can exist. (3) In the external detector method the sampling distribution of age is independent of the number of grains counted. In the population method the sampling distribution of age is highly dependent on the number of grains counted. (4) Grains with zero track counts, either in Ns or Ni, are an integral part of fissioning theory and under certain circumstances must be included in any estimate of age. (5) In estimating the standard error of age, the standard errors of Ns, Ni, and Φ must be accurately estimated and propagated through the age equation. Several statistical models are presently available to do so.
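A minimal sketch of the Poisson track-count simulation and a linearized (young-age) form of the age estimate; the sigma-Phi product, atom counts, and grain count are illustrative assumptions, not FTD-SIM's parameters:

    import numpy as np

    rng = np.random.default_rng(42)
    lam_f = 8.46e-17       # 238U spontaneous-fission decay constant (1/yr), assumed
    sigma_phi = 1e-8       # sigma * Phi: fissions per 235U atom, assumed
    u_ratio = 137.88       # natural 238U/235U atom ratio

    def simulate_age(true_age_yr, n235, n_grains=30):
        """Draw Poisson track counts for n_grains and return the estimated age."""
        n238 = n235 * u_ratio
        ns = rng.poisson(lam_f * n238 * true_age_yr, size=n_grains)  # spontaneous
        ni = rng.poisson(sigma_phi * n235, size=n_grains)            # induced
        # linear approximation: t = (Ns/Ni) * (sigma*Phi/lam_f) * (235U/238U)
        return ns.sum() / ni.sum() * sigma_phi / lam_f / u_ratio

    print(simulate_age(1.0e6, n235=1.0e10))   # scatters around the true 1 Myr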
NASA Astrophysics Data System (ADS)
Uttley, P.; Gendreau, K.; Markwardt, C.; Strohmayer, T. E.; Bult, P.; Arzoumanian, Z.; Pottschmidt, K.; Ray, P. S.; Remillard, R.; Pasham, D.; Steiner, J.; Neilsen, J.; Homan, J.; Miller, J. M.; Iwakiri, W.; Fabian, A. C.
2018-03-01
NICER observed the new X-ray transient MAXI J1820+070 (ATel #11399, #11400, #11403, #11404, #11406, #11418, #11420, #11421) on multiple occasions from 2018 March 12 to 14. During this time the source brightened rapidly, from a total NICER mean count rate of 880 count/s on March 12 to 2800 count/s by March 14 17:00 UTC, corresponding to a change in 2-10 keV modelled flux (see below) from 1.9E-9 to 5E-9 erg cm^-2 s^-1. The broadband X-ray spectrum is absorbed by a low column density (fitting the model given below, we obtain 1.5E21 cm^-2), in keeping with the low Galactic column in the direction of the source (ATel #11418; Dickey & Lockman, 1990, ARAA, 28, 215; Kalberla et al. 2005, A&A, 440, 775) and consists of a hard power-law component with weak reflection features (broad iron line and narrow 6.4 keV line core) and an additional soft X-ray component.
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of the effective interaction depth (EID) principle for parametric normalization of full-energy-peak efficiencies at different counting positions, originally developed for quasi-point sources, has been extended to bulky sources (within ∅30 mm×40 mm) with arbitrary matrices. It is also proved that the EID function for a quasi-point source can be directly used for cylindrical bulky sources (within ∅30 mm×40 mm), with the geometric center taken as the effective point source, for low atomic number (Z) and low density (D) media and high-energy γ-rays. In general, however, the EID for bulky sources depends on the Z and D of the medium and on the energy of the γ-rays in question. In addition, the EID principle was verified theoretically by MCNP calculations. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pan, Tony; Flick, Patrick; Jain, Chirag; Liu, Yongchao; Aluru, Srinivas
2017-10-09
Counting and indexing fixed length substrings, or k-mers, in biological sequences is a key step in many bioinformatics tasks including genome alignment and mapping, genome assembly, and error correction. While advances in next generation sequencing technologies have dramatically reduced the cost and improved latency and throughput, few bioinformatics tools can efficiently process the datasets at the current generation rate of 1.8 terabases every 3 days. We present Kmerind, a high performance parallel k-mer indexing library for distributed memory environments. The Kmerind library provides a set of simple and consistent APIs with sequential semantics and parallel implementations that are designed to be flexible and extensible. Kmerind's k-mer counter performs similarly or better than the best existing k-mer counting tools even on shared memory systems. In a distributed memory environment, Kmerind counts k-mers in a 120 GB sequence read dataset in less than 13 seconds on 1024 Xeon CPU cores, and fully indexes their positions in approximately 17 seconds. Querying for 1% of the k-mers in these indices can be completed in 0.23 seconds and 28 seconds, respectively. Kmerind is the first k-mer indexing library for distributed memory environments, and the first extensible library for general k-mer indexing and counting. Kmerind is available at https://github.com/ParBLiSS/kmerind.
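Kmerind's distributed API is not reproduced here, but the core operation it scales out is easy to state; a serial toy version for comparison (the library itself targets distributed-memory systems via parallel implementations of the same counting semantics):

    from collections import Counter

    def count_kmers(seq, k):
        """Count every length-k substring (k-mer) of a sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    counts = count_kmers("ACGTACGTGACG", k=4)
    print(counts.most_common(2))   # e.g. [('ACGT', 2), ...]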
Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.
2013-01-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
Dorazio, Robert M; Martin, Julien; Edwards, Holly H
2013-07-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
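For reference, the basic N-mixture likelihood that the hurdle/beta-binomial model extends can be written in a few lines; this sketch (a standard Poisson-binomial mixture, truncated at n_max) is context, not the authors' code:

    import numpy as np
    from scipy.stats import poisson, binom

    def nmixture_loglik(counts, lam, p, n_max=200):
        """Log-likelihood of repeated point counts at one site.
        counts: detections per survey; lam: mean abundance; p: detection prob."""
        n = np.arange(n_max + 1)
        prior = poisson.pmf(n, lam)               # abundance distribution
        lik = np.ones_like(prior)
        for y in counts:                          # binomial detection per survey
            lik *= binom.pmf(y, n, p)
        return np.log(np.sum(prior * lik))        # marginalize over latent N

    print(nmixture_loglik([3, 5, 2], lam=8.0, p=0.4))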
Lehtola, Markku J; Miettinen, Ilkka T; Hirvonen, Arja; Vartiainen, Terttu; Martikainen, Pertti J
2007-12-01
The numbers of bacteria generally increase in distributed water. Often household pipelines or water fittings (e.g., taps) represent the most critical location for microbial growth in water distribution systems. According to the European Union drinking water directive, there should not be abnormal changes in the colony counts in water. We used a pilot distribution system to study the effects of water stagnation on drinking water microbial quality, the concentration of copper and the formation of biofilms with two pipeline materials commonly used in households: copper and plastic (polyethylene, PE). Water stagnation for more than 4 h significantly increased both the copper concentration and the number of bacteria in water. Heterotrophic plate counts were six times higher in PE pipes and ten times higher in copper pipes after 16 h of stagnation than after only 40 min of stagnation. The increase in the heterotrophic plate counts was linear with time in both copper and plastic pipelines. In the distribution system, bacteria originated mainly from biofilms, because in laboratory tests with water there was only minor growth of bacteria after 16 h of stagnation. Our study indicates that water stagnation in the distribution system clearly affects microbial numbers and the concentration of copper in water, and should be considered when planning the sampling strategy for drinking water quality control in distribution systems.
Neutron multiplicity measurements with a 3He alternative: Straw neutron detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sanjoy; Wolff, Ronald S.; Meade, John A.
Counting neutrons emitted by special nuclear material (SNM) and differentiating them from the background neutrons of various origins is the most effective passive means of detecting SNM. Unfortunately, neutron detection, counting, and partitioning in a maritime environment are complex due to the presence of high-multiplicity spallation neutrons (commonly known as "ship effect") and to the complicated nature of the neutron scattering in that environment. In this study, a prototype neutron detector was built using 10B as the converter in a special form factor called "straws" that addresses the above problems by looking into the details of multiplicity distributions of neutrons originating from a fissioning source. This paper describes the straw neutron multiplicity counter (NMC) and compares its performance with that of a commercially available fission meter. The prototype straw neutron detector provides a large-area, efficient, lightweight, more granular (than the fission meter) neutron-responsive detection surface (to facilitate imaging) to enhance the ease of application of fission meters. Presented here are the results of preliminary investigations, modeling, and engineering considerations leading to the construction of this prototype. This design is capable of multiplicity and Feynman variance measurements. This prototype may lead to a near-term solution to the crisis that has arisen from the global scarcity of 3He by offering a viable alternative to fission meters. This paper describes the work performed during a 2-year site-directed research and development (SDRD) project that incorporated straw detectors for neutron multiplicity counting. The NMC is a two-panel detector system. We used 10B (in the form of enriched boron carbide, 10B4C) for neutron detection instead of 3He. In the first year, the project worked with a panel of straw neutron detectors, investigated its characteristics, and developed a data acquisition (DAQ) system to collect neutron multiplicity information from spontaneous fission sources using a single panel consisting of 60 straws equally distributed over three rows in a high-density polyethylene moderator. In the following year, we developed the field-programmable gate array and associated DAQ software. Finally, this SDRD effort successfully produced a prototype NMC with ~33% detection efficiency compared to a commercial fission meter.
Neutron multiplicity measurements with a 3He alternative: Straw neutron detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, Sanjoy; Wolff, Ronald; Detwiler, Ryan
Counting neutrons emitted by special nuclear material (SNM) and differentiating them from the background neutrons of various origins is the most effective passive means of detecting SNM. Unfortunately, neutron detection, counting, and partitioning in a maritime environment are complex due to the presence of high-multiplicity spallation neutrons (commonly known as "ship effect") and to the complicated nature of the neutron scattering in that environment. A prototype neutron detector was built using 10B as the converter in a special form factor called "straws" that addresses the above problems by looking into the details of multiplicity distributions of neutrons originating from a fissioning source. This paper describes the straw neutron multiplicity counter (NMC) and compares its performance with that of a commercially available fission meter. The prototype straw neutron detector provides a large-area, efficient, lightweight, more granular (than the fission meter) neutron-responsive detection surface (to facilitate imaging) to enhance the ease of application of fission meters. Presented here are the results of preliminary investigations, modeling, and engineering considerations leading to the construction of this prototype. This design is capable of multiplicity and Feynman variance measurements. This prototype may lead to a near-term solution to the crisis that has arisen from the global scarcity of 3He by offering a viable alternative to fission meters. This paper describes the work performed during a 2-year site-directed research and development (SDRD) project that incorporated straw detectors for neutron multiplicity counting. The NMC is a two-panel detector system. We used 10B (in the form of enriched boron carbide, 10B4C) for neutron detection instead of 3He. In the first year, the project worked with a panel of straw neutron detectors, investigated its characteristics, and developed a data acquisition (DAQ) system to collect neutron multiplicity information from spontaneous fission sources using a single panel consisting of 60 straws equally distributed over three rows in a high-density polyethylene moderator. In the following year, we developed the field-programmable gate array and associated DAQ software. This SDRD effort successfully produced a prototype NMC with ~33% detection efficiency compared to a commercial fission meter.
Neutron multiplicity measurements with a 3He alternative: Straw neutron detectors
Mukhopadhyay, Sanjoy; Wolff, Ronald S.; Meade, John A.; ...
2015-01-27
Counting neutrons emitted by special nuclear material (SNM) and differentiating them from the background neutrons of various origins is the most effective passive means of detecting SNM. Unfortunately, neutron detection, counting, and partitioning in a maritime environment are complex due to the presence of high-multiplicity spallation neutrons (commonly known as "ship effect") and to the complicated nature of the neutron scattering in that environment. In this study, a prototype neutron detector was built using 10B as the converter in a special form factor called "straws" that addresses the above problems by looking into the details of multiplicity distributions of neutrons originating from a fissioning source. This paper describes the straw neutron multiplicity counter (NMC) and compares its performance with that of a commercially available fission meter. The prototype straw neutron detector provides a large-area, efficient, lightweight, more granular (than the fission meter) neutron-responsive detection surface (to facilitate imaging) to enhance the ease of application of fission meters. Presented here are the results of preliminary investigations, modeling, and engineering considerations leading to the construction of this prototype. This design is capable of multiplicity and Feynman variance measurements. This prototype may lead to a near-term solution to the crisis that has arisen from the global scarcity of 3He by offering a viable alternative to fission meters. This paper describes the work performed during a 2-year site-directed research and development (SDRD) project that incorporated straw detectors for neutron multiplicity counting. The NMC is a two-panel detector system. We used 10B (in the form of enriched boron carbide, 10B4C) for neutron detection instead of 3He. In the first year, the project worked with a panel of straw neutron detectors, investigated its characteristics, and developed a data acquisition (DAQ) system to collect neutron multiplicity information from spontaneous fission sources using a single panel consisting of 60 straws equally distributed over three rows in a high-density polyethylene moderator. In the following year, we developed the field-programmable gate array and associated DAQ software. Finally, this SDRD effort successfully produced a prototype NMC with ~33% detection efficiency compared to a commercial fission meter.
Counts-in-cylinders in the Sloan Digital Sky Survey with Comparisons to N-body Simulations
NASA Astrophysics Data System (ADS)
Berrier, Heather D.; Barton, Elizabeth J.; Berrier, Joel C.; Bullock, James S.; Zentner, Andrew R.; Wechsler, Risa H.
2011-01-01
Environmental statistics provide a necessary means of comparing the properties of galaxies in different environments and a vital test of models of galaxy formation within the prevailing hierarchical cosmological model. We explore counts-in-cylinders, a common statistic defined as the number of companions of a particular galaxy found within a given projected radius and redshift interval. Galaxy distributions with the same two-point correlation functions do not necessarily have the same companion-count distributions. We use this statistic to examine the environments of galaxies in the Sloan Digital Sky Survey Data Release 4 (SDSS DR4). We also make preliminary comparisons to four models for the spatial distributions of galaxies, based on N-body simulations and data from SDSS DR4, to study the utility of the counts-in-cylinders statistic. There is a very large scatter between the number of companions a galaxy has and both the mass of its parent dark matter halo and the halo occupation, limiting the utility of this statistic for certain kinds of environmental studies. We also show that prevalent empirical models of galaxy clustering that match observed two- and three-point clustering statistics well fail to reproduce some aspects of the observed distribution of counts-in-cylinders on 1, 3, and 6 h^-1 Mpc scales. All models that we explore underpredict the fraction of galaxies with few or no companions in 3 and 6 h^-1 Mpc cylinders. Roughly 7% of galaxies in the real universe are significantly more isolated within a 6 h^-1 Mpc cylinder than the galaxies in any of the models we use. Simple phenomenological models that map galaxies to dark matter halos fail to reproduce high-order clustering statistics in low-density environments.
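Counts-in-cylinders itself is straightforward to compute on a mock catalog; a minimal sketch (brute force, with assumed column conventions and cylinder dimensions):

    import numpy as np

    def counts_in_cylinders(xy, z, r_proj=1.0, dz=10.0):
        """Companions per galaxy within projected radius r_proj and
        line-of-sight window +/- dz (h^-1 Mpc units assumed).
        xy: (N, 2) projected positions; z: (N,) line-of-sight positions."""
        counts = np.zeros(len(z), dtype=int)
        for i in range(len(z)):
            sep2 = np.sum((xy - xy[i]) ** 2, axis=1)
            in_cyl = (sep2 < r_proj ** 2) & (np.abs(z - z[i]) < dz)
            counts[i] = in_cyl.sum() - 1          # exclude the galaxy itself
        return counts

    rng = np.random.default_rng(0)
    print(counts_in_cylinders(rng.random((500, 2)) * 100, rng.random(500) * 100))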
THE HAWAII SCUBA-2 LENSING CLUSTER SURVEY: NUMBER COUNTS AND SUBMILLIMETER FLUX RATIOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Li-Yen; Cowie, Lennox L.; Barger, Amy J.
2016-09-20
We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10^12 L_⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10^12 L_⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
NASA Astrophysics Data System (ADS)
Kitaygorsky, J.; Słysz, W.; Shouten, R.; Dorenbos, S.; Reiger, E.; Zwiller, V.; Sobolewski, Roman
2017-01-01
We present a new operation regime of NbN superconducting single-photon detectors (SSPDs), obtained by integrating them with a low-noise cryogenic high-electron-mobility transistor and a high-load resistor. The integrated sensors are designed to give a better understanding of the origin of dark counts triggered by the detector, as our scheme allows us to distinguish dark pulses from actual photon pulses in SSPDs. The presented approach is based on a statistical analysis of the amplitude distributions of recorded trains of SSPD photoresponse transients. It also enables information to be obtained on the energy of the incident photons, and demonstrates some photon-number-resolving capability of meander-type SSPDs.
Corsi, Steven R.; Walker, John F.; Graczyk, D.J.; Greb, S.R.; Owens, D.W.; Rappold, K.F.
1995-01-01
A special study was done to determine the effect of holding time on fecal coliform colony counts. A linear regression indicated that the mean decrease in colony counts over 72 hours was 8.2 percent per day. Results after 24 hours showed that colony counts increased in some samples and decreased in others.
Predicting Attack-Prone Components with Source Code Static Analyzers
2009-05-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications...
Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance
NASA Technical Reports Server (NTRS)
Cohen, Martin
1994-01-01
I report further developments of the Wainscoat et al. (1992) model originally created for the point-source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds and by the inclusion of a component representing Gould's Belt. The newest version of the model is very well validated by Infrared Astronomical Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 Å) and the TD1 satellite (for the 1565 Å band).
Evolution of Combustion-Generated Particles at Tropospheric Conditions
NASA Technical Reports Server (NTRS)
Tacina, Kathleen M.; Heath, Christopher M.
2012-01-01
This paper describes particle evolution measurements taken in the Particulate Aerosol Laboratory (PAL). The PAL consists of a burner capable of burning jet fuel that exhausts into an altitude chamber that can simulate temperature and pressure conditions up to 13,700 m. After initial temperature distributions inside the chamber are presented, particle count data measured in the altitude chamber are shown. Initial particle count data show that the sampling system can have a significant effect on the measured particle distribution: both the value of the particle number concentration and the shape of its radial distribution depend on whether the measurement probe is heated or unheated.
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
NASA Astrophysics Data System (ADS)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy
2018-03-01
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
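For context on the quantities named above: given an n-electron transfer distribution P(n) at time t, the charge cumulants are derivatives of the logarithm of the generating function at zero counting field, which numerically reduce to central-moment combinations. A small illustrative sketch, unrelated to the inchworm Monte Carlo machinery itself:

    import numpy as np
    from scipy.stats import poisson

    def charge_cumulants(pn):
        """First four cumulants of a transfer distribution pn[n] = P(n)."""
        n = np.arange(len(pn))
        mu = np.sum(n * pn)
        c2 = np.sum((n - mu) ** 2 * pn)               # variance (noise)
        c3 = np.sum((n - mu) ** 3 * pn)
        c4 = np.sum((n - mu) ** 4 * pn) - 3 * c2 ** 2
        return mu, c2, c3, c4

    # Poissonian transfer statistics: all cumulants equal the mean.
    print(charge_cumulants(poisson.pmf(np.arange(200), 5.0)))  # ~(5, 5, 5, 5)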
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; ...
2018-03-06
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Numerically exact full counting statistics of the nonequilibrium Anderson impurity model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Michael; Singh, Viveka N.; Gull, Emanuel
The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.
Radionuclide counting technique for measuring wind velocity and direction
NASA Technical Reports Server (NTRS)
Singh, J. J. (Inventor)
1984-01-01
An anemometer utilizing a radionuclide counting technique for measuring both the velocity and the direction of wind is described. A pendulum consisting of a wire and a ball, with a source of radiation on the lower surface of the ball, is positioned by the wind. Detectors are located in a plane perpendicular to the undisturbed (no-wind) pendulum. The detectors lie on the circumference of a circle and are equidistant from each other as well as from the undisturbed (no-wind) source-ball position.
OpenCFU, a new free and open-source software to count cell colonies and other circular objects.
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
High-Mass X-ray Binaries in hard X-rays
NASA Astrophysics Data System (ADS)
Lutovinov, Alexander
We present a review of the latest results of the all-sky survey performed with the INTEGRAL observatory. The deep exposure accumulated by INTEGRAL in the Galactic plane region, as well as toward nearby galaxies, allowed us to obtain a flux-limited sample of High-Mass X-ray Binaries (HMXBs) in the Local Galactic Group and measure their physical properties, such as the luminosity function and spatial density distribution. In particular, the most accurate spatial density distribution of HMXBs in the Galaxy to date was determined, together with its correlation with the star formation rate distribution. Based on the measured vertical distribution of HMXBs (scale height h ~ 85 pc), we also estimated the kinematical age of HMXBs. The properties of the HMXB population are explained in the framework of a population synthesis model. Based on this model, we argue that the flaring activity of so-called supergiant fast X-ray transients (SFXTs), a recently recognized sub-sample of HMXBs, is likely related to the magnetic arrest of their accretion. The resulting global characteristics of the HMXB population are used to predict source number counts in sky surveys of future X-ray missions.
Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas
2014-09-30
A condition for predicting unbiased estimated breeding values by best linear unbiased prediction is to use all available data simultaneously. However, this condition is often not fully met. For example, in dairy cattle, internal (i.e., local) populations lead to evaluations based only on internal records, while widely used foreign sires have been selected using external records that are unavailable internally. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e., external estimated breeding values and associated reliabilities, with internal records to improve the accuracy of internal genetic evaluations. Two issues with these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and was combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend several sources of information simultaneously into an internal genetic evaluation while avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that do so, and their performance was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation while avoiding double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated information and, therefore, more reliable estimated breeding values were obtained. The unified method can also be extended to other types of situations, such as single-step genomic or multi-trait evaluations, combining information across different traits.
A tritium activity monitor for the KATRIN Experiment
NASA Astrophysics Data System (ADS)
Schmitt, Udo
2008-06-01
The KArlsruhe TRItium Neutrino experiment KATRIN is designed to measure the absolute neutrino mass scale by analyzing the endpoint region of the tritium beta-decay spectrum with a sensitivity of 0.2 eV/c^2 (90% C.L.). A high-luminosity windowless gaseous tritium source with an activity of 1.7 · 10^11 Bq will produce the decay electrons; their energy spectrum will be analyzed by a combination of two electrostatic retarding spectrometers with magnetic adiabatic collimation (MAC-E filter). Fluctuations of the source column density and inelastic scattering processes within the source affect the energy distribution of the decay electrons. Hence, precise and continuous monitoring of the source activity is necessary to correct the data taken by the main detector. A prototype of the beam monitor detector, based on a silicon drift diode, has been developed to measure an expected counting rate in the range of 10^6/(s · mm^2). The detector element shall be movable across the complete beam in a magnetic field of 0.8 T, resulting in a beam diameter of 20 cm. A precise sensor-positioning device has been designed and built to be compatible with the primary beamline vacuum of 10^-11 mbar.
NASA Astrophysics Data System (ADS)
Carrasco, D.; Trenti, M.; Mutch, S.; Oesch, P. A.
2018-06-01
The luminosity function is a fundamental observable for characterising how galaxies form and evolve throughout cosmic history. One key ingredient for deriving this measurement from the number counts in a survey is the characterisation of the completeness and redshift selection functions of the observations. In this paper, we present GLACiAR, an open Python tool available on GitHub for estimating the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman-break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indices and with customisable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z ~ 10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey. Our comparison with a previous completeness analysis on the same dataset shows overall agreement, but also highlights how different modelling assumptions for the artificial sources can impact completeness estimates.
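The completeness GLACiAR estimates is an injection-recovery fraction; schematically (source injection and detection are stubbed out below, since they depend on the survey images and photometry pipeline, and the falloff magnitude is invented):

    import numpy as np

    def completeness(mag_bins, n_per_bin, inject_and_recover):
        """Fraction of artificial sources recovered per input magnitude bin."""
        return np.array([
            sum(inject_and_recover(m) for _ in range(n_per_bin)) / n_per_bin
            for m in mag_bins
        ])

    # Toy stand-in: recovery probability drops smoothly around mag 29.
    rng = np.random.default_rng(3)
    toy = lambda m: rng.random() < 1.0 / (1.0 + np.exp(2.0 * (m - 29.0)))
    print(completeness(np.arange(26.0, 31.0), n_per_bin=500, inject_and_recover=toy))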
Heterotrophic plate count and consumer's health under special consideration of water softeners.
Hambsch, Beate; Sacré, Clara; Wagner, Ivo
2004-05-01
The phenomenon of bacterial growth in water softeners has been well known for years. To upgrade the hygienic safety of water softeners, the German DIN Standard 19636 was developed to assure that the distribution system cannot be contaminated by these devices and that the drinking water used in the household still meets the microbiological standards of the German drinking water guidelines, i.e., among other requirements, a heterotrophic plate count (HPC) below 100 CFU/ml. Moreover, the standard for water softeners includes a test for contamination with Pseudomonas aeruginosa, which has to be disinfected during the regeneration phase. This is possible by sanitizing the resin bed during regeneration by producing chlorine. The results of the last 10 years of tests of water softeners according to DIN 19636 show that it is possible to produce water softeners that comply with the standard; approximately 60% of the tested models were accepted. P. aeruginosa is used as an indicator for potentially pathogenic bacteria able to grow in the low-nutrient conditions that normally prevail in drinking water. Like other heterotrophs, the numbers of P. aeruginosa increase rapidly as stagnation occurs. Normally P. aeruginosa is not present in the distributed drinking water. However, under certain conditions, P. aeruginosa can be introduced into the drinking water distribution system, for instance during construction work. The occurrence of P. aeruginosa is shown in different cases in treatment plants, public drinking water systems and in-house installations. Compliance with DIN 19636 provides assurance that a water softener will not be a constant source of contamination, even if it is once inoculated with a potentially pathogenic bacterium like P. aeruginosa. Copyright 2003 Elsevier B.V.
Towards a street-level pollen concentration and exposure forecast
NASA Astrophysics Data System (ADS)
van der Molen, Michiel; Krol, Maarten; van Vliet, Arnold; Heuvelink, Gerard
2015-04-01
Atmospheric pollen is an increasing source of nuisance for people in industrialised countries and is associated with significant costs for medication and sick leave. Citizen pollen warnings are often based on emission mapping using local temperature-sum approaches or on long-range atmospheric model approaches. In practice, locally observed pollen may originate both from local sources (plants in streets and gardens) and from long-range transport. We argue that making this distinction is relevant because the diurnal and spatial variation in pollen concentrations is much larger for pollen from local sources than for pollen from long-range transport, due to boundary layer processes. This may have an important impact on citizens' exposure to pollen and on mitigation strategies. However, little is known about the partitioning of pollen into local and long-range origin categories. Our objective is to study how the concentrations of pollen from different sources vary temporally and spatially, and how the source region influences exposure and mitigation strategies. We built a Hay Fever Forecast system (HFF) based on WRF-chem, Allergieradar.nl, and geo-statistical downscaling techniques. HFF distinguishes between local sources (individual trees) and regional sources (based on tree distribution maps). We show first results on how the diurnal variation of pollen concentrations depends on source proximity. Ultimately, we will compare the model with local pollen counts, patient nuisance scores and medicine use.
SU-E-I-79: Source Geometry Dependence of Gamma Well-Counter Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, M; Belanger, A; Kijewski, M
Purpose: To determine the effect of liquid sample volume and geometry on counting efficiency in a gamma well-counter, and to assess the relative contributions of sample geometry and self-attenuation. Gamma well-counters are standard equipment in clinical and preclinical studies, for measuring patient blood radioactivity and quantifying animal tissue uptake for tracer development and other purposes; accurate measurements are crucial. Methods: Count rates were measured for aqueous solutions of 99m-Tc at four liquid volume values in a 1-cm-diam tube and at six volume values in a 2.2-cm-diam vial. Total activity was constant for all volumes, and data were corrected for decay. Count rates from a point source in air, supported by a filter paper, were measured at seven heights between 1.3 and 5.7 cm from the bottom of a tube. Results: Sample volume effects were larger for the tube than for the vial. For the tube, count efficiency relative to a 1-cc volume ranged from 1.05 at 0.05 cc to 0.84 at 3 cc. For the vial, relative count efficiency ranged from 1.02 at 0.05 cc to 0.87 at 15 cc. For the point source, count efficiency relative to 1.3 cm from the tube bottom ranged from 0.98 at 1.8 cm to 0.34 at 5.7 cm. The relative efficiency of a 3-cc liquid sample in a tube compared to a 1-cc sample is 0.84; the average relative efficiency for the solid sample in air between heights in the tube corresponding to the surfaces of those volumes (1.3 and 4.8 cm) is 0.81, implying that the major contribution to efficiency loss is geometry, rather than attenuation. Conclusion: Volume-dependent correction factors should be used for accurate quantitation of radioactive liquid samples. Solid samples should be positioned at the bottom of the tube for maximum count efficiency.
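The "volume-dependent correction factors" recommended in the conclusion amount to dividing measured counts by a relative efficiency interpolated between calibration volumes. A minimal sketch using the tube-geometry values quoted above; linear interpolation between the quoted points is an assumption:

    import numpy as np

    # Tube geometry: relative count efficiency vs. sample volume (from the abstract).
    vol_cc = np.array([0.05, 1.0, 3.0])
    rel_eff = np.array([1.05, 1.00, 0.84])

    def corrected_counts(measured, volume_cc):
        """Correct a measurement to the 1-cc reference geometry."""
        return measured / np.interp(volume_cc, vol_cc, rel_eff)

    print(corrected_counts(10000.0, 2.0))   # ~10870 (2-cc sample, eff ~0.92)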
Lithium and boron based semiconductors for thermal neutron counting
NASA Astrophysics Data System (ADS)
Kargar, Alireza; Tower, Joshua; Hong, Huicong; Cirignano, Leonard; Higgins, William; Shah, Kanai
2011-09-01
Thermal neutron detectors in planar configuration were fabricated from LiInSe2 and B2Se3 crystals grown at RMD Inc. All fabricated semiconductor devices were characterized by current-voltage (I-V) measurements and neutron counting measurements. Pulse height spectra were collected with a 241AmBe neutron source (used on all samples), as well as with 137Cs and 60Co gamma-ray sources. In this study, the resistivity of all crystals is reported and the collected pulse height spectra are presented for the fabricated devices. Note that the 241AmBe neutron source was custom designed with polyethylene around the source as a neutron moderator, mainly to thermalize the fast neutrons before they reach the detectors. Both LiInSe2 and B2Se3 devices showed response to thermal neutrons from the 241AmBe source.
Quantifying evenly distributed states in exclusion and nonexclusion processes
NASA Astrophysics Data System (ADS)
Binder, Benjamin J.; Landman, Kerry A.
2011-04-01
Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
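The index described above is built from the variance between equal-bin counts; the exact Pólya-based normalization is in the paper, so the ratio used below (variance over mean, the classical index of dispersion) is only an illustrative stand-in:

    import numpy as np

    def bin_count_index(points, n_bins=10, domain=(0.0, 1.0)):
        """Variance-to-mean ratio of counts in equally sized bins."""
        counts, _ = np.histogram(points, bins=n_bins, range=domain)
        return counts.var() / counts.mean()

    rng = np.random.default_rng(7)
    print(bin_count_index(rng.random(1000)))          # CSR-like data: near 1
    print(bin_count_index(np.linspace(0, 1, 1000)))   # most evenly distributed: 0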
Szakács, Zoltán; Mészáros, Tamás; de Jonge, Marien I; Gyurcsányi, Róbert E
2018-05-30
Detection and counting of single virus particles in liquid samples are largely limited to viruses with narrow size distributions and to purified formulations. To address these limitations, here we propose a calibration-free method that concurrently enables the selective recognition, counting and sizing of virus particles, as demonstrated through the detection of human respiratory syncytial virus (RSV), an enveloped virus with a broad size distribution, in throat swab samples. RSV viruses were selectively labeled through their attachment glycoproteins (G) with fluorescent aptamers, which further enabled their identification, sizing and counting at the single-particle level by fluorescent nanoparticle tracking analysis. The proposed approach appears to be generally applicable to virus detection and quantification. Moreover, it could be successfully applied to detect single RSV particles in swab samples of diagnostic relevance. Since the selective recognition is associated with the sizing of each detected particle, the method can discriminate viral elements linked to the virus as well as various virus forms and associations.
Montesinos-López, Osval A.; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H.; Montesinos-López, José C.; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-01-01
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy is to analyze a single trait at a time, taking into account genotype × environment interaction (G × E), because comprehensive models that simultaneously account for correlated count traits and G × E are lacking. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm, for which we developed a Markov chain Monte Carlo (MCMC) scheme with noninformative priors. This allows all required full conditional distributions of the parameters to be obtained, leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments. PMID:28364037
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollister, R
2009-08-26
Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.
Ultraviolet Communication for Medical Applications
2013-06-01
sky was clear and no moonlight was visible during testing. There was light fog and a high pollen count (9 grains per m3), and relative humidity was... The improved LED light source was evaluated outdoors using the test bench system at a range of 50 m, and received photon counts were consistent with medium-data-rate communication. Future Phase II efforts will develop...
NuSTAR Reveals Extreme Absorption in z < 0.5 Type 2 Quasars
NASA Astrophysics Data System (ADS)
Lansbury, G. B.; Gandhi, P.; Alexander, D. M.; Assef, R. J.; Aird, J.; Annuar, A.; Ballantyne, D. R.; Baloković, M.; Bauer, F. E.; Boggs, S. E.; Brandt, W. N.; Brightman, M.; Christensen, F. E.; Civano, F.; Comastri, A.; Craig, W. W.; Del Moro, A.; Grefenstette, B. W.; Hailey, C. J.; Harrison, F. A.; Hickox, R. C.; Koss, M.; LaMassa, S. M.; Luo, B.; Puccetti, S.; Stern, D.; Treister, E.; Vignali, C.; Zappacosta, L.; Zhang, W. W.
2015-08-01
The intrinsic column density (NH) distribution of quasars is poorly known. At the high obscuration end of the quasar population and for redshifts z < 1, the X-ray spectra can only be reliably characterized using broad-band measurements that extend to energies above 10 keV. Using the hard X-ray observatory NuSTAR, along with archival Chandra and XMM-Newton data, we study the broad-band X-ray spectra of nine optically selected (from the SDSS), candidate Compton-thick (NH > 1.5 × 10^24 cm^-2) type 2 quasars (CTQSO2s); five new NuSTAR observations are reported herein, and four have been previously published. The candidate CTQSO2s lie at z < 0.5, have observed [O III] luminosities in the range 8.4 < log(L[O III]/L_⊙) < 9.6, and show evidence for extreme, Compton-thick absorption when indirect absorption diagnostics are considered. Among the nine candidate CTQSO2s, five are detected by NuSTAR in the high-energy (8-24 keV) band: two are weakly detected at the ≈3σ confidence level and three are strongly detected with sufficient counts for spectral modeling (≳90 net source counts at 8-24 keV). For these NuSTAR-detected sources direct (i.e., X-ray spectral) constraints on the intrinsic active galactic nucleus properties are feasible, and we measure column densities ≈2.5-1600 times higher and intrinsic (unabsorbed) X-ray luminosities ≈10-70 times higher than pre-NuSTAR constraints from Chandra and XMM-Newton. Assuming the NuSTAR-detected type 2 quasars are representative of other Compton-thick candidates, we make a correction to the NH distribution for optically selected type 2 quasars as measured by Chandra and XMM-Newton for 39 objects. With this approach, we predict a Compton-thick fraction of f_CT = 36 (+14/-12)%, although higher fractions (up to 76%) are possible if indirect absorption diagnostics are assumed to be reliable.
X-ray-bright optically faint active galactic nuclei in the Subaru Hyper Suprime-Cam wide survey
NASA Astrophysics Data System (ADS)
Terashima, Yuichi; Suganuma, Makoto; Akiyama, Masayuki; Greene, Jenny E.; Kawaguchi, Toshihiro; Iwasawa, Kazushi; Nagao, Tohru; Noda, Hirofumi; Toba, Yoshiki; Ueda, Yoshihiro; Yamashita, Takuji
2018-01-01
We construct a sample of X-ray-bright optically faint active galactic nuclei by combining Subaru Hyper Suprime-Cam, XMM-Newton, and infrared source catalogs. Fifty-three X-ray sources satisfying an i-band magnitude fainter than 23.5 mag and X-ray counts with the EPIC-PN detector larger than 70 are selected from 9.1 deg^2, and their spectral energy distributions (SEDs) and X-ray spectra are analyzed. Forty-four objects with an X-ray to i-band flux ratio FX/Fi > 10 are classified as extreme X-ray-to-optical flux sources. The SEDs of 48 of the 53 are represented by templates of type 2 AGNs or star-forming galaxies and show the optical signature of stellar emission from host galaxies in the source rest frame. Infrared/optical SEDs indicate a significant contribution of emission from dust to the infrared fluxes, and that the central AGN is dust obscured. The photometric redshifts determined from the SEDs are in the range 0.6-2.5. The X-ray spectra are fitted by an absorbed power-law model, and the intrinsic absorption column densities are modest (best-fit log NH = 20.5-23.5 cm^-2 in most cases). The absorption-corrected X-ray luminosities are in the range 6 × 10^42 - 2 × 10^45 erg s^-1. Twenty objects are classified as type 2 quasars based on X-ray luminosity and NH. The optical faintness is explained by a combination of redshifts (mostly z > 1.0), strong dust extinction, and in part a large dust/gas ratio.
Emerson, Jane F; Emerson, Scott S
2005-01-01
A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.
The effect of public awareness campaigns on suicides: evidence from Nagoya, Japan.
Matsubayashi, Tetsuya; Ueda, Michiko; Sawada, Yasuyuki
2014-01-01
Public awareness campaigns about depression and suicide have been viewed as highly effective strategies for preventing suicide, yet their effectiveness has not been established in previous studies. This study evaluates the effectiveness of a public-awareness campaign by comparing suicide counts before and after a city-wide campaign in Nagoya, Japan, where the city government distributed promotional materials aimed at stimulating public awareness of depression and promoting care-seeking behavior during the period 2010-2012. In each of the sixteen wards of the city of Nagoya, we count the number of times the promotional materials were distributed per month and then examine the association between suicide counts and the frequency of distributions in the following months. We run a Poisson regression model that controls for the effects of ward-specific observed and unobserved heterogeneities and temporal shocks. Our analysis indicates that more frequent distribution of the campaign material is associated with a decrease in the number of suicides in the subsequent months. The campaign was estimated to have been especially effective for the male residents of the city. The underlying mechanism of how the campaign reduced suicides remains unclear. Public awareness campaigns can be an effective strategy for preventing suicide. © 2013 Elsevier B.V. All rights reserved.
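A minimal sketch of the kind of count regression described: a Poisson GLM of monthly ward-level suicide counts on distribution frequency, with ward and month dummies standing in for the paper's fixed effects and temporal shocks (the column names are invented):

    import pandas as pd
    import statsmodels.api as sm

    def fit_campaign_model(df):
        """df columns (hypothetical): suicides, distributions, ward, month."""
        X = pd.get_dummies(df[['distributions', 'ward', 'month']],
                           columns=['ward', 'month'], drop_first=True)
        X = sm.add_constant(X.astype(float))
        fit = sm.GLM(df['suicides'], X, family=sm.families.Poisson()).fit()
        # A negative coefficient on 'distributions' indicates fewer suicides
        # in months following more frequent distribution of campaign material.
        return fit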
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background: mRNA expression data from next generation sequencing platforms are obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance equals the mean. The negative binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is also commonly used to model count data. Results: In mRNA-Seq data from 25 subjects, we found technical variation to generally follow a Poisson distribution, as has been reported previously, while biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a negative binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions: These findings will guide development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
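A minimal sketch of the Poisson-versus-NB comparison for a single gene's counts; the method-of-moments NB fit is a simplification chosen for brevity:

    import numpy as np
    from scipy.stats import poisson, nbinom

    def gof_loglik(counts):
        """Log-likelihoods of Poisson and moment-fitted NB for one gene."""
        counts = np.asarray(counts)
        m, v = counts.mean(), counts.var(ddof=1)
        ll_pois = poisson.logpmf(counts, m).sum()
        if v <= m:                        # no over-dispersion: NB degenerates
            return ll_pois, ll_pois
        r = m * m / (v - m)               # NB parameterization: var = m + m^2/r
        ll_nb = nbinom.logpmf(counts, r, r / (r + m)).sum()
        return ll_pois, ll_nb

    print(gof_loglik([3, 7, 2, 12, 5, 9, 1, 15]))   # NB fits better here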
Playing at Statistical Mechanics
ERIC Educational Resources Information Center
Clark, Paul M.; And Others
1974-01-01
Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)
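The standard occupancy-counting formulas behind these distributions are easy to state in code. The sketch below gives the textbook state counts for n particles in g single-particle states; it illustrates the combinatorics, not the sorting game itself:

```python
# Occupancy counting for n particles in g single-particle states:
# Bose-Einstein counts multisets, Fermi-Dirac counts subsets (at most one
# particle per state), and classical Maxwell-Boltzmann counts maps.
from math import comb

def bose_einstein(n, g):      # indistinguishable, any occupancy
    return comb(n + g - 1, n)

def fermi_dirac(n, g):        # indistinguishable, exclusion principle
    return comb(g, n)

def maxwell_boltzmann(n, g):  # distinguishable particles
    return g ** n

print(bose_einstein(3, 4), fermi_dirac(3, 4), maxwell_boltzmann(3, 4))  # 20 4 64
```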
Neutronic analysis of the 1D and 1E banks reflux detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
1999-12-21
Two H Canyon neutron monitoring systems for early detection of postulated abnormal reflux conditions in the Second Uranium Cycle 1E and 1D Mixer-Settler Banks have been designed and built. Monte Carlo neutron transport simulations using the general-purpose, general-geometry, n-particle MCNP code have been performed to model the expected response of the monitoring systems to varying conditions. The confirmatory studies documented herein conclude that the 1E and 1D neutron monitoring systems are able to achieve adequate neutron count rates for various neutron source and detector configurations, thereby eliminating excessive integration count time. Neutron count rate sensitivity studies were also performed. The transport studies further concluded that the neutron count rates are statistically insensitive to nitric acid content in the aqueous region and to the transition region length. These studies conclude that the 1E and 1D neutron monitoring systems are able to predict the postulated reflux conditions for all examined perturbations in the neutron source and detector configurations. In the cases examined, the relative change in the neutron count rates due to postulated transitions from normal {sup 235}U concentration levels to reflux levels remains satisfactorily detectable.
Active Galactic Nuclei, Host Star Formation, and the Far Infrared
NASA Astrophysics Data System (ADS)
Draper, Aden R.; Ballantyne, D. R.
2011-05-01
Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nuclei (AGN) hosts. The FIR is also an important spectral region for observing AGN which are heavily enshrouded by dust, such as Compton-thick (CT) AGN. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGN in the FIR. Expected differential number counts of AGN and their host galaxies are calculated in the Herschel bands. The expected contribution of AGN and their hosts to the cosmic infrared background (CIRB) is also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at 350 and 500 μm are an excellent tool for determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. AGN and host differential number counts are dominated by CT AGN in the Herschel-SPIRE bands. Therefore, X-ray stacking of bright SPIRE sources is likely to disclose a large fraction of the CT AGN population.
NASA Astrophysics Data System (ADS)
Taut, Andreas; Drews, Christian; Berger, Lars; Wimmer-Schweingruber, Robert
2016-04-01
The 1D Velocity Distribution Function (VDF) of He+ pickup ions shows two distinct populations that reflect the sources of these ions. The highly suprathermal population is the result of the ionization and pickup of almost resting interstellar neutrals that are injected into the solar wind as a highly anisotropic torus distribution. The nearly thermalized population is centered around the solar wind bulk speed and is mainly attributed to inner-source pickup ions that originate in the inner heliosphere. Current pickup ion models assume a rapid isotropization of the initial VDF by resonant wave-particle interactions, but recent observations by Drews et al. (2015) of a torus-like VDF strongly limit this isotropization. This in turn means that more observational data are needed to further characterize the kinetic behavior of pickup ions. The Charge-Time-Of-Flight sensor on board SOHO offers unrivaled counting statistics for He+ and a sufficient mass-per-charge resolution; thus, the He+ VDF can be observed on comparatively short timescales. We combine these data with magnetic field data from WIND via an extrapolation to the location of SOHO. On the one hand, we investigate the 1D VDF of He+ pickup ions with respect to different magnetic field orientations. Our findings complement previous studies with other instruments that show an anisotropy of the VDF linked to the initial torus VDF. On the other hand, we find a significant modification of the VDF during stream-interaction regions. This may be linked to a different cooling behaviour in these regions and/or the absence of inner-source He+ during these times. Here, we report on our preliminary results.
Estimate of main local sources to ambient ultrafine particle number concentrations in an urban area
NASA Astrophysics Data System (ADS)
Rahman, Md Mahmudur; Mazaheri, Mandana; Clifford, Sam; Morawska, Lidia
2017-09-01
Quantifying and apportioning the contribution of a range of sources to ultrafine particles (UFPs, D < 100 nm) is a challenge due to the complex nature of urban environments. Although vehicular emissions have long been considered one of the major sources of ultrafine particles in urban areas, the contribution of other major urban sources is not yet fully understood. This paper aims to determine and quantify the contribution of local ground traffic, nucleated particle (NP) formation and distant non-traffic (e.g. airport, oil refineries, and seaport) sources to the total ambient particle number concentration (PNC) in a busy, inner-city area in Brisbane, Australia using Bayesian statistical modelling and other exploratory tools. The Bayesian model was trained on PNC data from days on which NP formation was known not to have occurred, hourly traffic counts, solar radiation data, and a smooth daily trend. The model was applied to apportion and quantify the contribution of NP formation and of local traffic and non-traffic sources to UFPs. The data analysis incorporated long-term measured time series of total PNC (D ≥ 6 nm), particle number size distributions (PSD, D = 8 to 400 nm), PM2.5, PM10, NOx, CO, meteorological parameters and traffic counts at a stationary monitoring site. The developed Bayesian model showed reliable predictive performance in quantifying the contribution of NP formation events to UFPs (up to 4 × 10⁴ particles cm⁻³), with significant day-to-day variability. The model identified potential NP formation and no-formation days based on PNC data and quantified the source contributions to UFPs. Exploratory statistical analyses show that total mean PNC during the middle of the day was up to 32% higher than during peak morning and evening traffic periods, an elevation associated with NP formation events. The majority of UFPs measured during the peak traffic and NP formation periods were between 30-100 nm and smaller than 30 nm, respectively. To date, this is the first application of a Bayesian model to apportion the contributions of different sources to UFPs, and therefore the importance of this study lies not only in its modelling outcomes but in demonstrating the applicability and advantages of this statistical approach to air pollution studies.
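A much simpler, non-Bayesian caricature of the apportionment idea is to fit a traffic-driven baseline on days without NP formation and attribute the excess PNC on other days to formation events. The sketch below uses simulated hourly data and invented coefficients:

```python
# Crude sketch of the apportionment idea (not the paper's Bayesian model):
# fit a traffic baseline on a no-formation day, then treat positive residuals
# on an event day as the NP-formation contribution.
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(24)
traffic = 200 + 150 * np.exp(-0.5 * ((hours - 8) / 2) ** 2)    # morning peak
pnc_base = 50 * traffic + rng.normal(0, 500, 24)               # no-formation day
pnc_event = pnc_base + 15000 * np.exp(-0.5 * ((hours - 12) / 2) ** 2)  # midday burst

X = np.c_[np.ones(24), traffic]
beta = np.linalg.lstsq(X, pnc_base, rcond=None)[0]             # baseline fit
np_contribution = np.clip(pnc_event - X @ beta, 0, None)       # excess over baseline
print(f"peak NP contribution ≈ {np_contribution.max():.0f} particles/cm^3")
```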
NASA Astrophysics Data System (ADS)
Scott, K. S.; Yun, M. S.; Wilson, G. W.; Austermann, J. E.; Aguilar, E.; Aretxaga, I.; Ezawa, H.; Ferrusca, D.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Giavalisco, M.; Kawabe, R.; Kohno, K.; Mauskopf, P. D.; Oshima, T.; Perera, T. A.; Rand, J.; Tamura, Y.; Tosaki, T.; Velazquez, M.; Williams, C. C.; Zeballos, M.
2010-07-01
We present the first results from a confusion-limited map of the Great Observatories Origins Deep Survey-South (GOODS-S) taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment. We imaged a field to a 1σ depth of 0.48-0.73 mJy beam⁻¹, making this one of the deepest blank-field surveys at mm-wavelengths ever achieved. Although by traditional standards our GOODS-S map is extremely confused due to a sea of faint underlying sources, we demonstrate through simulations that our source identification and number counts analyses are robust, and the techniques discussed in this paper are relevant for other deeply confused surveys. We find a total of 41 dusty starburst galaxies with signal-to-noise ratios S/N ≥ 3.5 within this uniformly covered region, where only two are expected to be false detections, and an additional seven robust source candidates located in the noisier (1σ ~ 1 mJy beam⁻¹) outer region of the map. We derive the 1.1 mm number counts from this field using two different methods: a fluctuation or "P(d)" analysis and a semi-Bayesian technique, and find that both methods give consistent results. Our data are well fit by a Schechter function model. Given the depth of this survey, we put the first tight constraints on the 1.1 mm number counts at S(1.1 mm) = 0.5 mJy, and we find evidence that the faint end of the number counts from various SCUBA surveys towards lensing clusters is biased high. In contrast to the 870 μm survey of this field with the LABOCA camera, we find no apparent underdensity of sources compared to previous surveys at 1.1 mm; the estimates of the number counts of SMGs at flux densities >1 mJy determined here are consistent with those measured from the AzTEC/SHADES survey. Additionally, we find a significant number of SMGs not identified in the LABOCA catalogue. We find that, in contrast to observations at λ ≤ 500 μm, MIPS 24 μm sources do not resolve the total energy density in the cosmic infrared background at 1.1 mm, demonstrating that a population of z ≳ 3 dust-obscured galaxies that are unaccounted for at these shorter wavelengths potentially contributes to a large fraction (~2/3) of the infrared background at 1.1 mm.
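For reference, a Schechter-form fit to differential number counts has the shape dN/dS ∝ (S/S*)^α exp(−S/S*). The sketch below evaluates such a model and integrates it into cumulative counts; the parameter values are placeholders, not the paper's fit:

```python
# Sketch: Schechter-form differential number counts and the implied cumulative
# counts N(>S). Parameter values are illustrative placeholders.
import numpy as np

def schechter_counts(S, N0, S_star, alpha):
    """Differential counts dN/dS at flux density S (units set by N0)."""
    x = S / S_star
    return (N0 / S_star) * x**alpha * np.exp(-x)

S = np.logspace(np.log10(0.5), np.log10(16.0), 60)        # 0.5-16 mJy grid
dNdS = schechter_counts(S, N0=2e3, S_star=1.5, alpha=-2.0)
seg = 0.5 * (dNdS[1:] + dNdS[:-1]) * np.diff(S)           # trapezoid segments
N_gt = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])  # N(>S) by integration
print(f"N(>0.5 mJy) ≈ {N_gt[0]:.0f} per unit area")
```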
RELATING WEIGHT AND COUNT DISTRIBUTIONS OF STREAM BED GRAVEL
The size distribution of particles in a stream bed reflects the stream hydrology as well as its physical and chemical water quality characteristics. In environmental assessments, gravel distribution determines habitat quality for aquatic insects and stream suitability for spawning…
Whitman, R.L.; Nevers, M.B.; Byappanahalli, M.N.
2006-01-01
Recent research has highlighted the occurrence of Escherichia coli in natural habitats not directly influenced by sewage inputs. Most studies on E. coli in recreational water typically focus on discernible sources (e.g., effluent discharge and runoff) and fall short of integrating riparian, nearshore, onshore, and outfall sources. An integrated “beachshed” approach that links E. coli inputs and interactions would be helpful to understand the difference between background loading and sewage pollution; to develop more accurate predictive models; and to understand the differences between potential, net, and apparent culturable E. coli. The objective of this study was to examine the interrelatedness of E. coli occurrence from various coastal watershed components along southern Lake Michigan. The study shows that once established in forest soil, E. coli can persist throughout the year, potentially acting as a continuous non-point source of E. coli to nearby streams. Year-round background stream loading of E. coli can influence beach water quality. E. coli is present in highly variable counts in beach sand to depths just below the water table and to distances at least 5 m inland from the shore, providing a large potential area of input to beach water. In summary, E. coli in the fluvial-lacustrine system may be stored in forest soils, sediments surrounding springs, bank seeps, stream margins and pools, foreshore sand, and surface groundwater. While rainfall events may increase E. coli counts in the foreshore sand and lake water, concentrations quickly decline to prerain concentrations. Onshore winds cause an increase in E. coli in shallow nearshore water, likely resulting from resuspension of E. coli-laden beach sand. When examining indicator bacteria source, flux, and context, the entire “beachshed” as a dynamic interacting system should be considered.
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct and when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches, since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model, in order to account for both the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assume that there exists a cut-off value such that counts smaller than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, so that mutation counts exceeding a certain value are considered significant. Simulated and protein domain data sets are used to illustrate this procedure in estimation of the empirical null using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
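A minimal version of the zero-inflated-plus-cutoff idea can be written down with a zero-inflated Poisson, a simpler special case of the ZIGP family. In the sketch below the fitted parameters are invented, and the 0.01 tail threshold stands in for a proper FDR calculation:

```python
# Sketch: zero-inflated Poisson null and a cutoff rule flagging counts whose
# upper-tail probability under that null is below a threshold. Illustrative
# parameters; a real analysis would fit them and control FDR formally.
import numpy as np
from scipy import stats

def zip_pmf(k, pi, lam):
    """P(K = k) for a zero-inflated Poisson with extra-zero probability pi."""
    pmf = (1 - pi) * stats.poisson.pmf(k, lam)
    return np.where(k == 0, pi + pmf, pmf)

pi_hat, lam_hat = 0.6, 1.2             # pretend these were fitted to domain counts
counts = np.arange(0, 15)
tail = np.array([zip_pmf(np.arange(c, 200), pi_hat, lam_hat).sum() for c in counts])
cutoff = counts[tail < 0.01].min()     # smallest count called "significant"
print(f"counts >= {cutoff} flagged as significant")
```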
A new model to predict weak-lensing peak counts. II. Parameter constraint strategies
NASA Astrophysics Data System (ADS)
Lin, Chieh-An; Kilbinger, Martin
2015-11-01
Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining a parameter using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
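ABC rejection sampling itself is compact enough to sketch: draw from the prior, forward-simulate, and keep draws whose summary statistic lands within a tolerance of the observed one. The toy model below is a stand-in for the authors' lensing pipeline:

```python
# Minimal ABC rejection sketch: accept prior draws whose simulated summary
# lies within a tolerance of the observed summary. Toy Poisson forward model.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.poisson(40, 10)                        # pretend observed peak counts

def simulate(theta):                             # toy forward model
    return rng.poisson(theta, 10)

accepted = []
for _ in range(20000):
    theta = rng.uniform(10, 80)                  # prior draw
    if abs(simulate(theta).mean() - obs.mean()) < 1.0:   # distance < tolerance
        accepted.append(theta)
posterior = np.array(accepted)
print(f"posterior mean ≈ {posterior.mean():.1f} (truth 40)")
```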
A pilot study of physical activity and sedentary behavior distribution patterns in older women.
Fortune, Emma; Mundell, Benjamin; Amin, Shreyasee; Kaufman, Kenton
2017-09-01
The study aims were to investigate free-living physical activity and sedentary behavior distribution patterns in a group of older women, and to assess the cross-sectional associations with body mass index (BMI). Eleven older women (mean (SD) age: 77 (9) yr) wore custom-built activity monitors, each containing a tri-axial accelerometer (±16 g, 100 Hz), on the waist and ankle for lab-based walking trials and 4 days in free-living conditions. Daily active time, step counts, cadence, and number of sedentary breaks were estimated from acceleration data. The sedentary bout length distribution and the sedentary time accumulation pattern, using the Gini index, were investigated. Associations of the parameters' total daily values and coefficients of variation (CVs) of their hourly values with BMI were assessed using linear regression. The algorithm demonstrated median sensitivity, positive predictive value, and agreement values >98% and <1% mean error in cadence calculations against video identification during lab trials. Participants' sedentary bouts were found to be power-law distributed, with 56% of their sedentary time occurring in bouts of 20 min or longer. Meaningful associations were detectable in the relationships of total active time, step count, sedentary break number and their CVs with BMI. Active time and step counts had moderate negative associations with BMI, while sedentary break number had a strong negative association. Active time, step count and sedentary break number CVs also had strong positive associations with BMI. The results highlight the importance of measuring sedentary behavior and suggest that a more even distribution of physical activity throughout the day is associated with lower BMI. Copyright © 2017 Elsevier B.V. All rights reserved.
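The Gini index used to summarize sedentary time accumulation can be computed directly from bout lengths (0 means time is spread evenly over equal bouts; values near 1 mean a few long bouts dominate). Bout lengths below are illustrative:

```python
# Sketch: Gini index of how sedentary time accumulates across bouts.
import numpy as np

def gini(x):
    """Gini index of a set of non-negative values (here, bout lengths)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

bouts_min = [3, 5, 5, 8, 12, 20, 25, 40, 75, 120]   # sedentary bout lengths (min)
print(f"Gini = {gini(bouts_min):.2f}")
```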
Xiong, Xi-Xi; Ding, Gao-Zhong; Zhao, Wen-E; Li, Xue; Ling, Yu-Ting; Sun, Li; Gong, Qing-Li; Lu, Yan
2017-07-01
Skin color is determined by the number of melanin granules that are produced by melanocytes and transferred to keratinocytes. Melanin synthesis and the distribution of melanosomes to keratinocytes within the epidermal melanin unit (EMU) in the skin of vitiligo patients have been poorly studied. The ultrastructure and distribution of melanosomes in melanocytes and surrounding keratinocytes in perilesional vitiligo and normal skin were investigated using transmission electron microscopy (TEM). Furthermore, we performed a quantitative analysis of melanosome distribution within the EMUs with scatter plots. The melanosome count within keratinocytes increased significantly compared with melanocytes in perilesional stable vitiligo (P < 0.001), perilesional halo nevi (P < 0.01) and the controls (P < 0.01), but not in perilesional active vitiligo. Furthermore, melanosome counts within melanocytes and their surrounding keratinocytes in perilesional active vitiligo skin decreased significantly compared with the other groups. In addition, taking the mean minus the standard error of the melanosome count within melanocytes and keratinocytes in healthy controls as a normal lower limit, EMUs were graded into three stages (I-III). Perilesional active vitiligo presented a significantly different constitution of stages compared to the other groups (P < 0.001). The distribution and constitution of melanosomes were normal in halo nevi. Impaired melanin synthesis and melanosome transfer are involved in the pathogenesis of vitiligo. Active vitiligo varies in stage, and in stage II the EMUs are only slightly impaired and can be resuscitated, providing a golden opportunity to achieve the desired repigmentation with an appropriate therapeutic choice. An adverse milieu may also contribute to the low melanosome count in keratinocytes.
Werner, S.C.; Tanaka, K.L.
2011-01-01
For the boundaries of each chronostratigraphic epoch on Mars, we present systematically derived crater-size frequencies based on crater counts of geologic referent surfaces and three proposed "standard" crater size-frequency production distributions as defined by (a) a simple -2 power law, (b) Neukum and Ivanov, (c) Hartmann. In turn, these crater count values are converted to model-absolute ages based on the inferred cratering rate histories. We present a new boundary definition for the Late Hesperian-Early Amazonian transition. Our fitting of crater size-frequency distributions to the chronostratigraphic record of Mars permits the assignment of cumulative counts of craters down to 100 m, 1 km, 2 km, 5 km, and 16 km diameters to martian epochs. Due to differences in the "standard" crater size-frequency production distributions, a generalized crater-density-based definition of the chronostratigraphic system cannot be provided. For the diameter range used for the boundary definitions, the resulting model absolute age fits vary within 1.5% for a given set of production function and chronology model ages. Crater distributions translated to absolute ages utilizing different curve descriptions can result in absolute age differences exceeding 10%. © 2011 Elsevier Inc.
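As a minimal illustration of the simplest of the three production functions, the sketch below fits a cumulative power-law slope N(>D) ∝ D^b to synthetic crater diameters by log-log least squares:

```python
# Sketch: fit a cumulative power-law slope to a crater size-frequency
# distribution via log-log least squares. Diameters are synthetic, drawn so
# that N(>D) ∝ D^-2, matching the "-2 power law" production function.
import numpy as np

rng = np.random.default_rng(4)
diam_km = rng.pareto(2.0, 500) + 1.0          # Pareto: survival D^-2 for D >= 1
D = np.sort(diam_km)
N_cum = np.arange(len(D), 0, -1)              # N(>D) by rank
b, log_a = np.polyfit(np.log10(D), np.log10(N_cum), 1)
print(f"fitted cumulative slope b ≈ {b:.2f} (expect ≈ -2)")
```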
Wei, Ziping; McEvoy, Matt; Razinkov, Vladimir; Polozova, Alla; Li, Elizabeth; Casas-Finet, Jose; Tous, Guillermo I; Balu, Palani; Pan, Alfred A; Mehta, Harshvardhan; Schenerman, Mark A
2007-09-01
Adequate biophysical characterization of influenza virions is important for vaccine development. Influenza virus vaccines are produced from the allantoic fluid of developing chicken embryos. The process of viral replication produces a heterogeneous mixture of infectious and non-infectious viral particles with varying states of aggregation. The study of the relative distribution and behavior of different subpopulations and their inter-correlation can assist in the development of a robust process for a live virus vaccine. This report describes a field flow fractionation and multiangle light scattering (FFF-MALS) method optimized for the analysis of size distribution and total particle counts. The FFF-MALS method was compared with several other methods, such as transmission electron microscopy (TEM), atomic force microscopy (AFM), size exclusion chromatography followed by MALS (SEC-MALS), quantitative reverse transcription polymerase chain reaction (RT Q-PCR), median tissue culture infectious dose (TCID50), and the fluorescent focus assay (FFA). The correlation between the various methods for determining total particle counts, infectivity and size distribution is reported. The pros and cons of each of the analytical methods are discussed.
Bai, Kelvin; Barnett, Gregory V; Kar, Sambit R; Das, Tapan K
2017-04-01
Characterization of submicron protein particles continues to be challenging despite active developments in the field. Nanoparticle tracking analysis (NTA) is a submicron particle enumeration technique that optically tracks the light-scattering signal from suspended particles undergoing Brownian motion. The submicron particle size range that NTA can monitor in common protein formulations is not well established. We conducted a comprehensive investigation of several protein formulations, along with corresponding placebos, using NTA to determine submicron particle size distributions and to shed light on a potential non-particle origin of size distributions in the range of approximately 50-300 nm. NTA and dynamic light scattering (DLS) were performed on polystyrene size standards as well as protein and placebo formulations. Protein formulations filtered through a 20 nm filter, with and without polysorbate-80, show NTA particle counts, even though particle counts above 20 nm are not expected in these solutions. Several other systems, including positive and negative controls, were studied using NTA and DLS. The apparent particles measured by NTA are not observed in DLS measurements and may not correspond to real particles. The intent of this article is to raise awareness of the need to interpret particle counts and size distributions from NTA with caution.
CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.
Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T
2017-12-28
In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and the resulting normalized gene counts are then used as input for parametric or non-parametric differential gene expression tests. A distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information determined from the concentration of an RNA sample to estimate the posterior distribution of a true gene count. Our method has better or comparable performance compared to NOISeq and GFOLD, according to results from simulations and experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq) and is available at https://github.com/joel-lzb/CORNAS.
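The general idea of coverage-aware posterior inference on a true count (not the CORNAS model itself) can be illustrated with a toy binomial thinning model: if each true transcript is seen with probability c, the observed count o is Binomial(t, c), and a flat prior on t gives a posterior proportional to that likelihood:

```python
# Toy sketch of coverage-aware count inference, not the CORNAS model itself:
# observed count o ~ Binomial(t, c) given true count t and coverage c; with a
# flat prior, the posterior over t is proportional to the binomial likelihood.
import numpy as np
from scipy import stats

o, c = 50, 0.25                        # observed count, assumed coverage
t = np.arange(o, 801)                  # candidate true counts
post = stats.binom.pmf(o, t, c)
post /= post.sum()
mean_t = (t * post).sum()
print(f"posterior mean true count ≈ {mean_t:.0f}")   # near o/c = 200
```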
A very deep IRAS survey - Constraints on the evolution of starburst galaxies
NASA Astrophysics Data System (ADS)
Hacking, Perry; Condon, J. J.; Houck, J. R.
1987-05-01
Counts of sources (primarily starburst galaxies) from a deep 60 μm IRAS survey published by Hacking and Houck (1987) are compared with four evolutionary models. The counts below 100 mJy are higher than expected if no evolution has taken place out to a redshift of approximately 0.2. Redshift measurements of the survey sources should be able to distinguish between luminosity-evolution and density-evolution models and to detect as little as a 20 percent brightening or increase in density of infrared sources per billion years (H₀ = 100 km s⁻¹ Mpc⁻¹). Starburst galaxies cannot account for the reported 100 μm background without extreme evolution at high redshifts.
OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. PMID:23457446
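A generic approach to the same task, distinct from OpenCFU's actual algorithm, is a Hough circle transform. In the sketch below, 'plate.png' and the radius and sensitivity parameters are placeholders that would need tuning per image:

```python
# Sketch: automated colony counting with OpenCV's Hough circle transform.
# Generic approach, not OpenCFU's algorithm; file name and parameters are
# placeholders to adjust for real plate images.
import cv2

img = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)                      # suppress speckle noise
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                           param1=100, param2=20, minRadius=3, maxRadius=30)
n = 0 if circles is None else circles.shape[1]
print(f"detected {n} colony-like circular objects")
```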
NASA Technical Reports Server (NTRS)
Turner, J. W. (Inventor)
1973-01-01
A measurement system is described for providing an indication of a varying physical quantity represented by or converted to a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal and these timing pulses are employed to control the period of counting of cycles of a higher fixed and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is made such that each measurement is presented as a direct digital representation of the quantity measured.
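The arithmetic of this reciprocal counting scheme is simple: gating the reference counter over M input cycles gives f_in = M · f_ref / N_ref. A sketch:

```python
# Sketch: reciprocal (period-averaging) frequency measurement. A fixed set of
# M input cycles gates a counter running at a known reference frequency; the
# reference count N_ref then yields the average input frequency.
def measured_frequency(m_cycles: int, ref_count: int, f_ref_hz: float) -> float:
    """Average frequency of the sampled signal over the M-cycle gate."""
    return m_cycles * f_ref_hz / ref_count

# e.g. 100 input cycles gating 250,000 counts of a 10 MHz reference -> 4 kHz
print(measured_frequency(100, 250_000, 10e6))
```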
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
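One simple reading of the proposed framework is dasymetric allocation of a known regional total across grid cells in proportion to ancillary weights, with multinomial sampling providing the Monte Carlo input realizations. The weights below are invented:

```python
# Sketch: allocate a known regional population total across grid cells in
# proportion to ancillary (e.g. land-use) weights, then draw multinomial
# realizations as Monte Carlo inputs. Weights are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
regional_total = 10_000
weights = np.array([0.0, 0.1, 0.4, 0.3, 0.2])    # ancillary land-use weights
p = weights / weights.sum()

draws = rng.multinomial(regional_total, p, size=1000)  # 1000 MC realizations
print("mean cell counts:", draws.mean(axis=0).round())
print("cell std devs:   ", draws.std(axis=0).round(1))
```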
The Fifth Cell: Correlation Bias in U.S. Census Adjustment.
ERIC Educational Resources Information Center
Wachter, Kenneth W.; Freedman, David A.
2000-01-01
Presents a method for estimating the total national number of doubly missing people (missing from Census counts and adjusted counts as well) and their distribution by race and sex. Application to the 1990 U.S. Census yields an estimate of three million doubly-missing people. (SLD)
Programmable random interval generator
NASA Technical Reports Server (NTRS)
Lindsey, R. S., Jr.
1973-01-01
Random pulse generator can supply constant-amplitude randomly distributed pulses with average rate ranging from a few counts per second to more than one million counts per second. Generator requires no high-voltage power supply or any special thermal cooling apparatus. Device is uniquely versatile and provides wide dynamic range of operation.
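A software analogue of such a generator draws exponentially distributed inter-arrival times, so that counts in any fixed window are Poisson at the chosen average rate:

```python
# Sketch: randomly distributed pulse times with exponential inter-arrival
# gaps, giving Poisson-distributed counts in fixed windows at the chosen rate.
import numpy as np

rng = np.random.default_rng(6)
rate_hz, t_total = 1e6, 0.01                    # 10^6 counts/s for 10 ms
gaps = rng.exponential(1 / rate_hz, int(rate_hz * t_total * 2))
times = np.cumsum(gaps)
times = times[times < t_total]                  # pulse arrival times in seconds
print(f"generated {times.size} pulses (expected ~{rate_hz * t_total:.0f})")
```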
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
D, Wang John; M, Solo-Gabriele Helena; M, Abdelzaher Amir; E, Fleming Lora
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals, to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the largest source of enterococci relative to humans and birds. PMID:20381094
2017-01-01
The annual report presents data tables describing the electricity industry in each State. Data include: summary statistics; the 10 largest plants by generating capacity; the top five entities ranked by sector; electric power industry generating capacity by primary energy source; electric power industry generation by primary energy source; utility delivered fuel prices for coal, petroleum, and natural gas; electric power industry emissions estimates; retail sales, revenue, and average retail price by sector; retail electricity sales statistics; supply and disposition of electricity; net metering counts and capacity by technology and customer type; and advanced metering counts by customer type.
Population Census of a Large Common Tern Colony with a Small Unmanned Aircraft
Chabot, Dominique; Craik, Shawn R.; Bird, David M.
2015-01-01
Small unmanned aircraft systems (UAS) may be useful for conducting high-precision, low-disturbance waterbird surveys, but limited data exist on their effectiveness. We evaluated the capacity of a small UAS to census a large (>6,000 nests) coastal Common tern (Sterna hirundo) colony of which ground surveys are particularly disruptive and time-consuming. We compared aerial photographic tern counts to ground nest counts in 45 plots (5-m radius) throughout the colony at three intervals over a nine-day period in order to identify sources of variation and establish a coefficient to estimate nest numbers from UAS surveys. We also compared a full colony ground count to full counts from two UAS surveys conducted the following day. Finally, we compared colony disturbance levels over the course of UAS flights to matched control periods. Linear regressions between aerial and ground counts in plots had very strong correlations in all three comparison periods (R² = 0.972–0.989, P < 0.001) and regression coefficients ranged from 0.928–0.977 terns/nest. Full colony aerial counts were 93.6% and 94.0%, respectively, of the ground count. Varying visibility of terns with ground cover, weather conditions and image quality, and changing nest attendance rates throughout incubation were likely sources of variation in aerial detection rates. Optimally timed UAS surveys of Common tern colonies following our method should yield population estimates in the 93–96% range of ground counts. Although the terns were initially disturbed by the UAS flying overhead, they rapidly habituated to it. Overall, we found no evidence of sustained disturbance to the colony by the UAS. We encourage colonial waterbird researchers and managers to consider taking advantage of this burgeoning technology. PMID:25874997
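The calibration step described above amounts to regressing per-plot aerial counts on ground nest counts and dividing a full aerial count by the resulting terns-per-nest coefficient. The plot data below are invented for illustration:

```python
# Sketch: calibrate aerial photo counts against ground nest counts, then scale
# a full-colony aerial count by the fitted terns-per-nest coefficient.
import numpy as np

ground = np.array([20, 35, 12, 50, 8, 27, 41, 15])   # nests per plot (invented)
aerial = np.array([19, 33, 11, 48, 7, 26, 38, 14])   # terns per plot image
coef = np.polyfit(ground, aerial, 1)[0]              # terns per nest
full_aerial_count = 6100
print(f"coef = {coef:.3f}; estimated nests ≈ {full_aerial_count / coef:.0f}")
```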
Le Corre, Mathieu; Carey, Susan
2007-11-01
Since the publication of Gelman and Gallistel's seminal work on the development of verbal counting as a representation of number [Gelman, R., & Gallistel, C. R. (1978). The child's understanding of number. Cambridge, MA: Harvard University Press], the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present experiments explore proposals according to which the verbal counting principles are acquired by mapping numerals in the count list onto systems of numerical representation for which there is evidence in infancy, namely, analog magnitudes, parallel individuation, and set-based quantification. By asking 3- and 4-year-olds to estimate the number of elements in sets without counting, we investigate whether the numerals that are assigned cardinal meaning as part of the acquisition process display the signatures of what we call "enriched parallel individuation" (which combines properties of parallel individuation and of set-based quantification) or analog magnitudes. Two experiments demonstrate that while "one" to "four" are mapped onto core representations of small sets prior to the acquisition of the counting principles, numerals beyond "four" are only mapped onto analog magnitudes about six months after the acquisition of the counting principles. Moreover, we show that children's numerical estimates of sets of 1 to 4 elements fail to show the signature of numeral use based on analog magnitudes - namely, scalar variability. We conclude that, while representations of small sets provided by parallel individuation, enriched by the resources of set-based quantification, are recruited in the acquisition process to provide the first numerical meanings for "one" to "four", analog magnitudes play no role in this process.
A double-observer approach for estimating detection probability and abundance from point counts
Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.
2000-01-01
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
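A simplified version of the double-observer logic: when an observer is primary, the fraction of all detected birds that the primary found estimates that observer's detection probability, and combining the two gives the probability that at least one observer detects a bird. The paper fits fuller models in program SURVIV; the sketch below captures only the intuition, with invented counts:

```python
# Simplified double-observer sketch (intuition only, not the fitted models):
# each observer's detection probability is estimated from the share of all
# detected birds found while that observer was primary.
primary_A, missed_by_A = 84, 9     # A primary: A's detections, extras added by B
primary_B, missed_by_B = 77, 13    # B primary: B's detections, extras added by A

p_A = primary_A / (primary_A + missed_by_A)
p_B = primary_B / (primary_B + missed_by_B)
p_either = 1 - (1 - p_A) * (1 - p_B)   # detected by at least one observer
print(f"p_A={p_A:.2f}, p_B={p_B:.2f}, P(at least one)={p_either:.3f}")
```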
Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro
2016-02-10
For a network, the accomplishment of its functions despite perturbations is called robustness. Although this property has been extensively studied, in most cases, the network is modified by removing nodes. In our approach, it is no longer perturbed by site percolation, but evolves after site invasion. The process transforming resident/healthy nodes into invader/mutant/diseased nodes is described by the Moran model. We explore the sources of robustness (or its counterpart, the propensity to spread favourable innovations) of the US high-voltage power grid network, the Internet2 academic network, and the C. elegans connectome. We compare them to three modular and non-modular benchmark networks, and samples of one thousand random networks with the same degree distribution. It is found that, contrary to what happens with networks of small order, fixation probability and robustness are poorly correlated with most standard statistics, but they depend strongly on the degree distribution. While community detection techniques are able to detect the existence of a central core in Internet2, they are not effective in detecting hierarchical structures whose topological complexity arises from the repetition of a few rules. Box counting dimension and Rent's rule are applied to show a subtle trade-off between topological and wiring complexity.
Alcalde Cuesta, Fernando; González Sequeiros, Pablo; Lozano Rojo, Álvaro
2016-01-01
For a network, the accomplishment of its functions despite perturbations is called robustness. Although this property has been extensively studied, in most cases, the network is modified by removing nodes. In our approach, it is no longer perturbed by site percolation, but evolves after site invasion. The process transforming resident/healthy nodes into invader/mutant/diseased nodes is described by the Moran model. We explore the sources of robustness (or its counterpart, the propensity to spread favourable innovations) of the US high-voltage power grid network, the Internet2 academic network, and the C. elegans connectome. We compare them to three modular and non-modular benchmark networks, and samples of one thousand random networks with the same degree distribution. It is found that, contrary to what happens with networks of small order, fixation probability and robustness are poorly correlated with most standard statistics, but they depend strongly on the degree distribution. While community detection techniques are able to detect the existence of a central core in Internet2, they are not effective in detecting hierarchical structures whose topological complexity arises from the repetition of a few rules. Box counting dimension and Rent’s rule are applied to show a subtle trade-off between topological and wiring complexity. PMID:26861189
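Fixation probability under the Moran model is straightforward to estimate by direct simulation. The sketch below uses a toy ring graph rather than the networks analysed above; fitness r > 1 favours the invader:

```python
# Sketch: estimate the fixation probability of a single invader under the
# Moran model on a small ring graph by direct simulation.
import numpy as np

rng = np.random.default_rng(7)
n, r, trials = 20, 1.5, 2000
neigh = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # ring adjacency

fixed = 0
for _ in range(trials):
    mutant = np.zeros(n, bool)
    mutant[rng.integers(n)] = True                 # one initial invader
    while 0 < mutant.sum() < n:
        w = np.where(mutant, r, 1.0)
        repro = rng.choice(n, p=w / w.sum())       # fitness-proportional birth
        dead = rng.choice(neigh[repro])            # a neighbour is replaced
        mutant[dead] = mutant[repro]
    fixed += mutant.all()
print(f"estimated fixation probability ≈ {fixed / trials:.3f}")
```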
Origin and spatial-temporal distribution of faecal bacteria in a bay of Lake Geneva, Switzerland.
Poté, John; Goldscheider, Nico; Haller, Laurence; Zopfi, Jakob; Khajehnouri, Fereidoun; Wildi, Walter
2009-07-01
The origin and distribution of microbial contamination in Lake Geneva's most polluted bay were assessed using faecal indicator bacteria (FIB). The lake is used as a source of drinking water and for recreation and fishing. During one year, water samples were taken at 23 points in the bay and at three contamination sources: a wastewater treatment plant (WWTP), a river and a storm water outlet. Analyses included Escherichia coli, enterococci (ENT), total coliforms (TC), and heterotrophic plate counts (HPC). E. coli input flux rates from the WWTP can reach 2.5 × 10¹⁰ CFU/s; those from the river are one to three orders of magnitude lower. Different pathogenic Salmonella serotypes were identified in water from these sources. FIB levels in the bay are highly variable. Results demonstrate that (1) the WWTP outlet at 30 m depth impacts near-surface water quality during holomixis in winter; (2) when the lake is stratified, the effluent water is generally trapped below the thermocline; (3) during major floods, upwelling across the thermocline may occur; (4) the river permanently contributes to contamination, mainly near the river mouth and during floods, when the storm water outlet contributes additionally; (5) the lowest FIB levels in the near-surface water occur during low-flow periods in the bathing season.
Singh, Bismark; Meyers, Lauren Ancel
2017-05-08
We provide a methodology for estimating counts of single-year-of-age live-births, fetal-losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live-births, fetal-losses, and abortions by women of each single-year-of-age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single-year-of-age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live-births, fetal-losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
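One common way to disaggregate age-group counts into single years (not necessarily the authors' interpolation) is to interpolate the cumulative count over the bin edges with a monotone spline and difference it at one-year steps. The bin edges and counts below are illustrative:

```python
# Sketch: disaggregate age-group counts into single years of age by monotone
# interpolation of the cumulative count. Edges/counts are illustrative.
import numpy as np
from scipy.interpolate import PchipInterpolator

edges = np.array([15, 20, 25, 30, 35, 40, 45])       # age-group boundaries
group_counts = np.array([120, 340, 410, 300, 150, 40])
cum = np.concatenate([[0], np.cumsum(group_counts)])

spline = PchipInterpolator(edges, cum)               # monotone => no negatives
single_year = np.diff(spline(np.arange(15, 46)))     # counts for ages 15..44
print(single_year.round(1))
```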
High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey
NASA Astrophysics Data System (ADS)
Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.
2018-06-01
We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ860 = 1.5 mJy beam⁻¹ in our primary-beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr⁻¹ sources to be ~3(+0.7/−0.6) × 10⁻⁷ Mpc⁻³. We show that the descendants of these galaxies could be ≳4 × 10¹¹ M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Polack, J. K.; Febbraro, Michael; Kolata, J. J.; Flaska, Marek; Pozzi, S. A.; Becchetti, F. D.
2017-02-01
The literature discussing pulse-shape discrimination (PSD) in organic scintillators dates back several decades. However, little has been written about PSD techniques that are optimized for neutron spectrum unfolding. Variation in n-γ misclassification rates and in γ/n ratio of incident fields can distort the neutron pulse-height response of scintillators and these distortions can in turn cause large errors in unfolded spectra. New applications in arms-control verification call for detection of lower-energy neutrons, for which PSD is particularly problematic. In this article, we propose techniques for removing distortions on pulse-height response that result from the merging of PSD distributions in the low-pulse-height region. These techniques take advantage of the repeatable shapes of PSD distributions that are governed by the counting statistics of scintillation-photon populations. We validate the proposed techniques using accelerator-based time-of-flight measurements and then demonstrate them by unfolding the Watt spectrum from measurement with a 252Cf neutron source.
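For context, the baseline technique that the proposed corrections build on is standard charge-integration PSD: classify each pulse by the ratio of its tail integral to its total integral, which is larger for neutrons in organic scintillators. The two-exponential pulses below are toys:

```python
# Sketch: standard charge-integration PSD (not the authors' unfolding-specific
# corrections). Neutron pulses carry more slow (delayed) scintillation light,
# so their tail-to-total ratio is larger.
import numpy as np

def tail_total_ratio(pulse, peak_idx, tail_start=10, end=80):
    """PSD parameter from a baseline-subtracted digitized pulse."""
    total = pulse[peak_idx:peak_idx + end].sum()
    tail = pulse[peak_idx + tail_start:peak_idx + end].sum()
    return tail / total

t = np.arange(80)
gamma_pulse = np.exp(-t / 5) + 0.02 * np.exp(-t / 30)    # toy fast + slow light
neutron_pulse = np.exp(-t / 5) + 0.12 * np.exp(-t / 30)
for name, p in [("gamma", gamma_pulse), ("neutron", neutron_pulse)]:
    print(name, round(tail_total_ratio(p, 0), 3))
```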
Distribution of pink-pigmented facultative methylotrophs on leaves of vegetables.
Mizuno, Masayuki; Yurimoto, Hiroya; Yoshida, Naoko; Iguchi, Hiroyuki; Sakai, Yasuyoshi
2012-01-01
The distribution of pink-pigmented facultative methylotrophs (PPFMs) on the leaves of various vegetables was studied. All kinds of vegetable leaves tested gave pink-pigmented colonies on agar plates containing methanol as sole carbon source. The numbers of PPFMs on the leaves, colony-forming units (CFU)/g of fresh leaves, differed among the plants, although they were planted and grown at the same farm. Commercial green perilla, Perilla frutescens viridis (Makino) Makino, gave the highest counts of PPFMs (2.0-4.1 × 10⁷ CFU/g) of all the commercial vegetable leaves tested, amounting to 15% of total microbes on the leaves. The PPFMs isolated from seeds of two varieties of perilla, the red and green varieties, exhibited high sequence similarity as to the 16S rRNA gene to two different Methylobacterium species, M. fujisawaense DSM5686(T) and M. radiotolerans JCM2831(T), respectively, suggesting that there is a specific interaction between perilla and the PPFMs.
Distributing entanglement and single photons through an intra-city, free-space quantum channel.
Resch, K; Lindenthal, M; Blauensteiner, B; Böhm, H; Fedrizzi, A; Kurtsiefer, C; Poppe, A; Schmitt-Manderbach, T; Taraba, M; Ursin, R; Walther, P; Weier, H; Weinfurter, H; Zeilinger, A
2005-01-10
We have distributed entangled photons directly through the atmosphere to a receiver station 7.8 km away over the city of Vienna, Austria at night. Detection of one photon from our entangled pairs constitutes a triggered single-photon source at the sender. With no direct time-stable connection, the two stations found coincidence counts in the detection events by calculating the cross-correlation of locally recorded time stamps shared over a public internet channel. For this experiment, our quantum channel was maintained for a total of 40 minutes, during which time a coincidence lock found approximately 60000 coincident detection events. The polarization correlations in those events yielded a Bell parameter, S = 2.27 ± 0.019, which violates the CHSH-Bell inequality by 14 standard deviations. This result is promising for entanglement-based free-space quantum communication in high-density urban areas. It is also encouraging for optical quantum communication between ground stations and satellites, since the length of our free-space link exceeds the atmospheric equivalent.
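The offset-recovery step can be sketched directly: bin the two locally recorded time-stamp streams, cross-correlate them, and read the clock offset from the correlation peak. The streams below are simulated with a known offset:

```python
# Sketch: recover the clock offset between two time-stamp streams by binning
# them and cross-correlating the binned sequences (FFT-based for speed).
import numpy as np
from scipy import signal

rng = np.random.default_rng(8)
T, bin_w, true_offset = 0.1, 1e-7, 3.2e-6            # 100 ns bins, 3.2 us offset
t_a = np.sort(rng.uniform(0, T, 20000))              # station A time stamps
t_b = t_a[rng.random(t_a.size) < 0.3] + true_offset  # coincident partners at B

edges = np.arange(0, T + bin_w, bin_w)
h_a, _ = np.histogram(t_a, edges)
h_b, _ = np.histogram(t_b, edges)
xc = signal.correlate(h_b, h_a, mode="full", method="fft")
lag_bins = np.argmax(xc) - (len(h_a) - 1)            # peak location = offset
print(f"recovered offset ≈ {lag_bins * bin_w * 1e6:.2f} us")
```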
The 124Sb activity standardization by gamma spectrometry for medical applications
NASA Astrophysics Data System (ADS)
de Almeida, M. C. M.; Iwahara, A.; Delgado, J. U.; Poledna, R.; da Silva, R. L.
2010-07-01
This work describes a metrological activity determination of 124Sb, which can be used as a radiotracer, applying gamma spectrometry methods with a high-purity germanium detector and efficiency curves. This isotope, with good activity and high radionuclidic purity, is employed in the form of meglumine antimoniate (Glucantime) or sodium stibogluconate (Pentostam) to treat leishmaniasis. 124Sb is also applied in animal organ distribution studies to answer some questions in pharmacology. 124Sb decays by β-emission and produces several photons (X and gamma rays) with energies varying from 27 to 2700 keV. Efficiency curves to measure solid 124Sb point sources were obtained from a 166mHo standard, which is a multi-gamma reference source. These curves depend on radiation energy, sample geometry, photon attenuation, dead time and sample-detector position. Results for activity determination of 124Sb samples using the efficiency curves and a high-purity coaxial germanium detector were consistent in different counting geometries, and uncertainties of about 2% (k = 2) were obtained.
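The underlying photopeak computation is A = N_net / (ε · P_γ · t_live), with the efficiency ε read from a curve calibrated against a multi-gamma standard such as 166mHo. In the sketch below the efficiency-curve coefficients are hypothetical; the 602.7 keV line and its ~0.98 emission probability are the values usually quoted for 124Sb:

```python
# Sketch: activity from a single photopeak, A = N_net / (eff * P_gamma * t).
# The log-log efficiency-curve coefficients below are hypothetical.
import numpy as np

def efficiency(e_kev, a=-2.0, b=-0.85):
    """Hypothetical efficiency curve: eff = 10**(a + b*log10(E / 1 MeV))."""
    return 10 ** (a + b * np.log10(e_kev / 1000.0))

def activity_bq(net_counts, e_kev, p_gamma, t_live_s):
    """Source activity in Bq from net photopeak counts."""
    return net_counts / (efficiency(e_kev) * p_gamma * t_live_s)

# 124Sb 602.7 keV line, emission probability ~0.978, 3600 s live time
print(f"A ≈ {activity_bq(54000, 602.7, 0.978, 3600):.0f} Bq")
```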
NASA Astrophysics Data System (ADS)
Zhao, S.
2014-12-01
Levels of microplastics (MPs) in China are completely unknown. Here, suspended MPs were characterized quantitatively and qualitatively for the Yangtze Estuary and the East China Sea. MPs were extracted via a floatation method, then counted and categorized according to shape and size under a dissecting microscope. The MP densities were 4137.3 ± 2461.5 n/m³ and 0.167 ± 0.138 n/m³ in the estuarine and sea waters, respectively. Plastic abundances varied strongly in the estuary. The higher density in the C transect corroborated that rivers are important sources of MPs to the marine environment. MPs of 0.5-5 mm constituted more than 90% of the total plastics. Plastic particles larger than 5 mm were observed, with a maximum size of 12.46 mm. The most frequent plastics were fibres, followed by granules and films. Plastic spherules occurred sparsely. Transparent and coloured plastics comprised the majority of particle colours. This study provides clues for understanding MP fate and potential sources.
Scientific applications of frequency-stabilized laser technology in space
NASA Technical Reports Server (NTRS)
Schumaker, Bonny L.
1990-01-01
A synoptic investigation of the uses of frequency-stabilized lasers for scientific applications in space is presented. It begins by summarizing the properties of lasers, characterizing their frequency stability, and describing limitations on, and techniques to achieve, certain levels of frequency stability. Limits to precision set by laser frequency stability for various kinds of measurements are investigated and compared with other sources of error. These other sources include photon-counting statistics, scattered laser light, fluctuations in laser power and in the intensity distribution across the beam, propagation effects, mechanical and thermal noise, and radiation pressure. Methods are explored to improve the sensitivity of laser-based interferometric and range-rate measurements. Several specific types of science experiments that rely on highly precise measurements made with lasers are analyzed, and anticipated errors and overall performance are discussed. Qualitative descriptions are given of a number of other possible science applications involving frequency-stabilized lasers and related laser technology in space. These applications will warrant more careful analysis as the technology develops.
Interferometry meets the third and fourth dimensions in galaxies
NASA Astrophysics Data System (ADS)
Trimble, Virginia
2015-02-01
Radio astronomy began with one array (Jansky's) and one paraboloid of revolution (Reber's) as collecting areas and has now reached the point where a large number of facilities are arrays of paraboloids, each of which would have looked enormous to Reber in 1932. In the process, interferometry has contributed to the counting of radio sources, establishing superluminal velocities in AGN jets, mapping of sources from the bipolar cow shape on up to full grey-scale and colored images, determining spectral energy distributions requiring non-thermal emission processes, and much else. The process has not been free of competition and controversy, at least partly because it is just a little difficult to understand how earth-rotation, aperture-synthesis interferometry works. Some very important results, for instance the mapping of HI in the Milky Way to reveal spiral arms, warping, and flaring, actually came from single moderate-sized paraboloids. The entry of China into the radio astronomy community has given large (40-110 meter) paraboloids a new lease on life.
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than Gaussian methods and may be more suitable when counts are low.
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
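The central statistical point above — Gaussian approximations misbehave when counts are low, while genuine count distributions respect the non-negative support — can be illustrated in a few lines. The sketch below uses plain scipy with invented monthly counts; it is not the authors' GSARIMA code, only a moment-matched comparison of predictive intervals:

```python
# Sketch (not the authors' GSARIMA code): compare Gaussian and negative
# binomial 95% intervals for a low-count series. Counts are invented.
import numpy as np
from scipy import stats

counts = np.array([0, 2, 1, 4, 0, 3, 7, 12, 5, 2, 1, 0])  # hypothetical monthly cases
mu, var = counts.mean(), counts.var(ddof=1)

# Gaussian interval can dip below zero when counts are low.
g_lo, g_hi = stats.norm.interval(0.95, loc=mu, scale=np.sqrt(var))

# Negative binomial by moment matching: var = mu + mu^2/r.
r = mu**2 / (var - mu)          # assumes overdispersion (var > mu)
p = r / (r + mu)
nb_lo, nb_hi = stats.nbinom.interval(0.95, r, p)

print(f"Gaussian 95% interval: ({g_lo:.1f}, {g_hi:.1f})")    # may include negative values
print(f"NegBin   95% interval: ({nb_lo:.0f}, {nb_hi:.0f})")  # respects count support
```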
Reply-frequency interference/jamming detector
NASA Astrophysics Data System (ADS)
Bishop, Walton B.
1995-01-01
Received IFF reply-frequency signals are examined to determine whether they are being interfered with by enemy sources and indication of the extent of detected interference is provided. The number of correct replies received from selected range bins surrounding and including the center one in which a target leading edge is first declared is counted and compared with the count of the number of friend-accept decisions made based on replies from the selected range bins. The level of interference is then indicated by the ratio between the two counts.
The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability
NASA Technical Reports Server (NTRS)
Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.
1979-01-01
This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 2σ) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.
Reduction of CMOS Image Sensor Read Noise to Enable Photon Counting.
Guidash, Michael; Ma, Jiaju; Vogelsang, Thomas; Endsley, Jay
2016-04-09
Recent activity in photon counting CMOS image sensors (CIS) has been directed to reduction of read noise. Many approaches and methods have been reported. This work is focused on providing sub-1 e⁻ read noise by design and operation of the binary and small-signal readout of photon counting CIS. Compensation of transfer gate feed-through was used to provide substantially reduced CDS time and source follower (SF) bandwidth. SF read noise was reduced by a factor of 3 with this method. This method can be applied broadly to CIS devices to reduce the read noise for small signals to enable use as a photon counting sensor.
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and, with it, the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown and we discuss the application of this approach to the 3-point correlation function.
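A minimal sketch of the voxel idea described above (illustrative, not the authors' released code): bin galaxies into cells no smaller than the largest separation of interest, then compare each point only against points in its own and neighbouring cells, so distant pairs are never examined.

```python
# Voxel-accelerated pair counting for the 2-point correlation function.
# Function and variable names here are illustrative assumptions.
import numpy as np
from collections import defaultdict
from itertools import product

def pair_counts(points, r_max, voxel_size=None):
    """Count pairs with separation < r_max, comparing only neighbouring voxels."""
    voxel_size = voxel_size or r_max
    grid = defaultdict(list)
    for p in points:
        grid[tuple((p // voxel_size).astype(int))].append(p)

    npairs = 0
    for key, cell in grid.items():
        for offset in product((-1, 0, 1), repeat=3):
            nkey = tuple(k + o for k, o in zip(key, offset))
            if nkey < key or nkey not in grid:
                continue                                  # visit each cell pair once
            if nkey == key:
                for i, a in enumerate(cell):              # within-cell pairs
                    for b in cell[i + 1:]:
                        npairs += np.linalg.norm(a - b) < r_max
            else:
                for a in cell:                            # cross-cell pairs
                    for b in grid[nkey]:
                        npairs += np.linalg.norm(a - b) < r_max
    return int(npairs)

rng = np.random.default_rng(0)
galaxies = rng.uniform(0, 100, size=(2000, 3))            # mock catalogue in a 100^3 box
print(pair_counts(galaxies, r_max=5.0))
```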
Microbiological Quality Assessment of Game Meats at Retail in Japan.
Asakura, Hiroshi; Kawase, Jun; Ikeda, Tetsuya; Honda, Mioko; Sasaki, Yoshimasa; Uema, Masashi; Kabeya, Hidenori; Sugiyama, Hiromu; Igimi, Shizunobu; Takai, Shinji
2017-12-01
In this study, we examined the prevalence of Shiga toxin-producing Escherichia coli and Salmonella spp. and the distribution of indicator bacteria in 248 samples of game meats (120 venison and 128 wild boar) retailed between November 2015 and March 2016 in Japan. No Salmonella spp. were detected in any of the samples, whereas Shiga toxin-producing Escherichia coli serotype OUT:H25 (stx2d+, eae−) was isolated from one deer meat sample, suggesting a possible source of human infection. Plate count assays indicated a greater prevalence of coliforms and E. coli in wild boar meat than in venison, whereas their prevalence varied more across processing facilities than across animal species. The 16S rRNA ion semiconductor sequencing analysis of 24 representative samples revealed that the abundances of Acinetobacter and Arthrobacter spp. significantly correlated with the prevalence of E. coli, and quantitative PCR analyses in combination with selective plate count assay verified these correlations. To our knowledge, this is the first report to characterize the diversity of microorganisms of game meats at retail in Japan, together with identification of dominant microbiota. Our data suggest the necessity of bottom-up hygienic assessment in areas of slaughtering and processing facilities to improve microbiological safety.
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration.
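The counting step can be emulated with an off-the-shelf Gaussian mixture: fit mixtures of increasing order to the scattered column intensities and select the number of components with an information criterion. The sketch below uses BIC and simulated intensities; the paper's own order-selection criterion and estimation procedure differ in detail:

```python
# Illustrative sketch (not the authors' code): atom counting as Gaussian
# mixture estimation on column intensities, with BIC order selection.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical scattered intensities for columns of 3, 4, and 5 atoms.
intensities = np.concatenate([
    rng.normal(3.0, 0.15, 80),
    rng.normal(4.1, 0.15, 60),
    rng.normal(5.3, 0.15, 40),
]).reshape(-1, 1)

models = [GaussianMixture(k, random_state=0).fit(intensities) for k in range(1, 8)]
best = min(models, key=lambda m: m.bic(intensities))
print("estimated number of components:", best.n_components)

# Each column is then assigned the component (atom count) with the
# highest posterior probability:
labels = best.predict(intensities)
```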
NASA Technical Reports Server (NTRS)
Wang, Alian; Kuebler, Karla E.; Jolliff, Brad L.
2000-01-01
The distributions of pyroxenes of different Mg' and of olivines of different Fo in lithologies A and B were obtained. Three types of olivine, formed at different stages of rock formation, were identified by point-counting Raman measurements along linear traverses.
Drivers of Water Quality Variability in Northern Coastal Ecuador
Hubbard, Alan E.; Nelson, Kara L.; Eisenberg, Joseph N.S.
2012-01-01
The microbiological safety of water is commonly measured using indicator organisms, but the spatiotemporal variability of these indicators can make interpretation of data difficult. Here we systematically explore variability in E. coli concentrations in surface source and household drinking water in a rural Ecuadorian village over one year. We observed more variability in water quality on an hourly basis (up to 2.4-log difference) than on a daily (2.2-log difference) or weekly basis (up to 1.8-log difference). E. coli counts were higher in the wet season than in the dry season for both source (0.42-log difference; p<0.0001) and household (0.11-log difference; p=0.077) samples. In the wet season, a one-cm increase in weekly rainfall was associated with a 3% decrease (p=0.006) in E. coli counts in source samples and a 6% decrease (p=0.012) in household samples. Each additional person in the river when source samples were collected was associated with a 4% increase (p=0.026) in E. coli counts in the wet season. Factors affecting household water quality included rainfall, water source, and covering the container. The variability can be understood as a combination of environmental (e.g., seasonal and soil processes) and other drivers (e.g., human river use, water practices and sanitation), each working at different timescales. PMID:19368173
Montesinos-López, Osval A; Montesinos-López, Abelardo; Crossa, José; Toledo, Fernando H; Montesinos-López, José C; Singh, Pawan; Juliana, Philomin; Salinas-Ruiz, Josafhat
2017-05-05
When a plant scientist wishes to make genomic-enabled predictions of multiple traits measured in multiple individuals in multiple environments, the most common strategy for performing the analysis is to use a single trait at a time taking into account genotype × environment interaction (G × E), because there is a lack of comprehensive models that simultaneously take into account the correlated count traits and G × E. For this reason, in this study we propose a multiple-trait and multiple-environment model for count data. The proposed model was developed under the Bayesian paradigm, for which we developed a Markov chain Monte Carlo (MCMC) scheme with noninformative priors. This allows obtaining all required full conditional distributions of the parameters, leading to an exact Gibbs sampler for the posterior distribution. Our model was tested with simulated data and a real data set. Results show that the proposed multi-trait, multi-environment model is an attractive alternative for modeling multiple count traits measured in multiple environments.
Dead time corrections for in-beam γ-spectroscopy measurements
NASA Astrophysics Data System (ADS)
Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.
2017-08-01
Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si, performed with HPGe detectors at the Tandem facility of IFIN-HH, Bucharest. In consequence, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter: the mean rate of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely eludes the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
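The statistical picture described above is easy to reproduce numerically: Poisson arrivals have exponential inter-arrival times Δt, a dead time τ removes events that arrive too soon (a non-paralyzable model is assumed here), and the standard correction n = m/(1 − mτ) recovers the true rate. All numbers below are hypothetical:

```python
# Minimal dead-time sketch: simulate exponential inter-arrivals, impose a
# non-paralyzable dead time, then correct the measured rate.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 5e4                      # events per second (hypothetical)
tau = 5e-6                           # dead time in seconds (hypothetical)
dt = rng.exponential(1.0 / true_rate, size=1_000_000)
t = np.cumsum(dt)                    # arrival times over ~20 s

# Drop events arriving within tau of the last *accepted* event.
accepted, last = 0, -np.inf
for ti in t:
    if ti - last >= tau:
        accepted += 1
        last = ti

measured = accepted / t[-1]
corrected = measured / (1.0 - measured * tau)   # n = m / (1 - m*tau)
print(f"measured {measured:.0f}/s, corrected {corrected:.0f}/s, true {true_rate:.0f}/s")
```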
Statistical models for RNA-seq data derived from a two-condition 48-replicate experiment.
Gierliński, Marek; Cole, Christian; Schofield, Pietà; Schurch, Nicholas J; Sherstnev, Alexander; Singh, Vijender; Wrobel, Nicola; Gharbi, Karim; Simpson, Gordon; Owen-Hughes, Tom; Blaxter, Mark; Barton, Geoffrey J
2015-11-15
High-throughput RNA sequencing (RNA-seq) is now the standard method to determine differential gene expression. Identifying differentially expressed genes crucially depends on estimates of read-count variability. These estimates are typically based on statistical models such as the negative binomial distribution, which is employed by the tools edgeR, DESeq and cuffdiff. Until now, the validity of these models has usually been tested on either low-replicate RNA-seq data or simulations. A 48-replicate RNA-seq experiment in yeast was performed and data tested against theoretical models. The observed gene read counts were consistent with both log-normal and negative binomial distributions, while the mean-variance relation followed the line of constant dispersion parameter of ∼0.01. The high-replicate data also allowed for strict quality control and screening of 'bad' replicates, which can drastically affect the gene read-count distribution. RNA-seq data have been submitted to ENA archive with project ID PRJEB5348.
Statistical models for RNA-seq data derived from a two-condition 48-replicate experiment
Cole, Christian; Schofield, Pietà; Schurch, Nicholas J.; Sherstnev, Alexander; Singh, Vijender; Wrobel, Nicola; Gharbi, Karim; Simpson, Gordon; Owen-Hughes, Tom; Blaxter, Mark; Barton, Geoffrey J.
2015-01-01
Motivation: High-throughput RNA sequencing (RNA-seq) is now the standard method to determine differential gene expression. Identifying differentially expressed genes crucially depends on estimates of read-count variability. These estimates are typically based on statistical models such as the negative binomial distribution, which is employed by the tools edgeR, DESeq and cuffdiff. Until now, the validity of these models has usually been tested on either low-replicate RNA-seq data or simulations. Results: A 48-replicate RNA-seq experiment in yeast was performed and data tested against theoretical models. The observed gene read counts were consistent with both log-normal and negative binomial distributions, while the mean-variance relation followed the line of constant dispersion parameter of ∼0.01. The high-replicate data also allowed for strict quality control and screening of ‘bad’ replicates, which can drastically affect the gene read-count distribution. Availability and implementation: RNA-seq data have been submitted to ENA archive with project ID PRJEB5348. Contact: g.j.barton@dundee.ac.uk PMID:26206307
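The quoted mean-variance relation, Var = μ + φμ² with a roughly constant dispersion φ ≈ 0.01, can be checked on simulated 48-replicate counts (a sketch, not the PRJEB5348 analysis pipeline):

```python
# Simulate negative binomial gene counts across 48 replicates and recover
# the dispersion phi from the method-of-moments estimate.
import numpy as np

rng = np.random.default_rng(3)
phi = 0.01
mus = np.geomspace(10, 1e4, 50)               # hypothetical per-gene mean counts

for mu in mus[::10]:
    r = 1.0 / phi                             # NB "size": var = mu + mu^2/r
    p = r / (r + mu)
    counts = rng.negative_binomial(r, p, size=48)    # 48 replicates
    est_phi = (counts.var(ddof=1) - counts.mean()) / counts.mean() ** 2
    print(f"mu={mu:8.1f}  estimated phi={est_phi: .4f}")
```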
Martian crater counts on Elysium Mons
NASA Technical Reports Server (NTRS)
Mcbride, Kathleen; Barlow, Nadine G.
1990-01-01
Without returned samples from the Martian surface, relative age chronologies and stratigraphic relationships provide the best information for determining the ages of geomorphic features and surface regions. Crater size-frequency distributions of six recently mapped geological units of Elysium Mons were measured to establish their relative ages. Most of the craters on Elysium Mons and the adjacent plains units are between 500 and 1000 meters in diameter. However, only craters 1 km in diameter or larger were used, because of the inadequate spatial resolution of some of the Viking images and to reduce the probability of counting secondary craters. The six geologic units include all of the Elysium Mons construct and a portion of the plains units west of the volcano. The surface area of the units studied is approximately 128,000 sq km. Four of the geologic units were used to create crater distribution curves. There are no craters larger than 1 km within the Elysium Mons caldera. Craters that lacked raised rims, were irregularly shaped, or were arranged in a linear pattern were assumed to be endogenic in origin and not counted. A crater frequency distribution analysis is presented.
Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza
2018-02-01
Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
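The essence of DDR is replacing a dictionary-hit count with a cosine similarity between averaged word vectors. A toy sketch with fabricated 3-dimensional "embeddings" (real use would load pretrained vectors):

```python
# Toy Distributed Dictionary Representation: score text by cosine similarity
# to the mean vector of a concept dictionary, instead of counting hits.
import numpy as np

embeddings = {                        # hypothetical word vectors
    "harm":   np.array([0.9, 0.1, 0.0]),
    "hurt":   np.array([0.8, 0.2, 0.1]),
    "injury": np.array([0.7, 0.3, 0.0]),
    "picnic": np.array([0.0, 0.9, 0.4]),
}

def rep(words):
    """Average vector for a word list (dictionary or document)."""
    return np.mean([embeddings[w] for w in words if w in embeddings], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

harm_dict = rep(["harm", "hurt", "injury"])
print(cosine(harm_dict, rep(["injury"])))    # high: semantically close
print(cosine(harm_dict, rep(["picnic"])))    # lower: unrelated
```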
Landis, Sarah; Suruki, Robert; Maskell, Joe; Bonar, Kerina; Hilton, Emma; Compton, Chris
2018-03-20
Blood eosinophil count may be a useful biomarker for predicting response to inhaled corticosteroids and exacerbation risk in chronic obstructive pulmonary disease (COPD) patients. The optimal cut point for categorizing blood eosinophil counts in these contexts remains unclear. We aimed to determine the distribution of blood eosinophil count in COPD patients and matched non-COPD controls, and to describe demographic and clinical characteristics at different cut points. We identified COPD patients within the UK Clinical Practice Research Database aged ≥40 years with a FEV1/FVC < 0.7, and ≥1 blood eosinophil count recorded during stable disease between January 1, 2010 and December 31, 2012. COPD patients were matched on age, sex, and smoking status to non-COPD controls. Using all blood eosinophil counts recorded during a 12-month period, COPD patients were categorized as "always above," "fluctuating above and below," and "never above" cut points of 100, 150, and 300 cells/μL. The geometric mean blood eosinophil count was statistically significantly higher in COPD patients versus matched controls (196.6 cells/µL vs. 182.1 cells/µL; mean difference 8%, 95% CI: 6.8, 9.2), and in COPD patients with versus without a history of asthma (205.0 cells/µL vs. 192.2 cells/µL; mean difference 6.7%, 95% CI: 4.9, 8.5). About half of COPD patients had all blood eosinophil counts above 150 cells/μL; this persistent higher-eosinophil phenotype was associated with being male, higher body mass index, and a history of asthma. In conclusion, COPD patients demonstrated higher blood eosinophil counts than non-COPD controls, although there was substantial overlap in the distributions. COPD patients with a history of asthma had significantly higher blood eosinophil counts versus those without.
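The "always above / fluctuating / never above" categorization is straightforward to express; a sketch with hypothetical per-patient counts and the 150 cells/μL cut point (the strict-inequality convention here is an assumption):

```python
# Categorize patients by where their repeated eosinophil counts fall
# relative to a cut point, as described above. Data are invented.
import pandas as pd

df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 3, 3, 3],
    "eos":     [210, 180, 340, 90, 160, 70, 110, 140],   # cells/uL
})

def categorize(counts, cut=150):
    if (counts > cut).all():
        return "always above"
    if (counts > cut).any():
        return "fluctuating above and below"
    return "never above"

print(df.groupby("patient")["eos"].apply(categorize))
```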
Schein, Stan; Ahmad, Kareem M
2006-11-01
A rod transmits absorption of a single photon by what appears to be a small reduction in the small number of quanta of neurotransmitter (Qcount) that it releases within the integration period (approximately 0.1 s) of a rod bipolar dendrite. Due to the quantal and stochastic nature of release, discrete distributions of Qcount for darkness versus one isomerization of rhodopsin (R*) overlap. We suggested that release must be regular to narrow these distributions, reduce overlap, reduce the rate of false positives, and increase transmission efficiency (the fraction of R* events that are identified as light). Unsurprisingly, higher quantal release rates (Qrates) yield higher efficiencies. Focusing here on the effect of small changes in Qrate, we find that a slightly higher Qrate yields greatly reduced efficiency, due to a necessarily fixed quantal-count threshold. To stabilize efficiency in the face of drift in Qrate, the dendrite needs to regulate the biochemical realization of its quantal-count threshold with respect to its Qcount. These considerations reveal the mathematical role of calcium-based negative feedback and suggest a helpful role for spontaneous R*. In addition, to stabilize efficiency in the face of drift in degree of regularity, efficiency should be approximately 50%, similar to measurements.
De, Rajat K.
2015-01-01
Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which is associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides a new dimension for the detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth-of-coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as two-dimensional geometrical points. A key aspect of detecting the regions with CNVs is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using a convex hull algorithm on the geometrical representation of the read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on a multiple-sample analysis approach, resulting in a low false discovery rate with high precision. PMID:26291322
Sinha, Rituparna; Samaddar, Sandip; De, Rajat K
2015-01-01
Copy number variation (CNV) is a form of structural alteration in the mammalian DNA sequence, which is associated with many complex neurological diseases as well as cancer. The development of next generation sequencing (NGS) technology provides a new dimension for the detection of genomic locations with copy number variations. Here we develop an algorithm for detecting CNVs, which is based on depth-of-coverage data generated by NGS technology. In this work, we have used a novel way to represent the read count data as two-dimensional geometrical points. A key aspect of detecting the regions with CNVs is to devise a proper segmentation algorithm that will distinguish the genomic locations having a significant difference in read count data. We have designed a new segmentation approach in this context, using a convex hull algorithm on the geometrical representation of the read count data. To our knowledge, most algorithms have used a single distribution model of read count data, but here in our approach, we have considered the read count data to follow two different distribution models independently, which adds to the robustness of detection of CNVs. In addition, our algorithm calls CNVs based on a multiple-sample analysis approach, resulting in a low false discovery rate with high precision.
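A fragment illustrating the geometric ingredient (not the authors' implementation): genomic windows represented as 2-D read-count points, with the convex hull of a presumed-normal subset used to flag outlying windows as CNV candidates.

```python
# Convex hull on 2-D read-count points: windows outside the hull of the
# "normal" cloud are flagged. Counts and labels are simulated.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(4)
normal = rng.poisson(100, size=(500, 2)).astype(float)   # (test, control) counts, diploid
gain = np.column_stack([rng.poisson(160, 20),            # copy-gain windows: test inflated
                        rng.poisson(100, 20)]).astype(float)

hull = ConvexHull(normal)
tri = Delaunay(normal[hull.vertices])                    # for fast point-in-hull tests

candidates = gain[tri.find_simplex(gain) < 0]            # points outside the hull
print(len(candidates), "windows flagged as CNV candidates")
```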
Wyoming Kids Count in Wyoming Factbook, 1999.
ERIC Educational Resources Information Center
Wyoming Children's Action Alliance, Cheyenne.
This Kids Count factbook details statewide trends in the well-being of Wyoming's children. Following an overview of key indicators and data sources, the factbook documents trends by county for 20 indicators, including the following: (1) poverty and population; (2) welfare reform; (3) certified day care facilities; (4) births; (5) infant deaths;…
Hagen, Nils T.
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
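Harmonic counting as described has a direct closed form: the i-th of N coauthors receives (1/i)/(1 + 1/2 + … + 1/N) of one credit, so the credits on each paper sum to one.

```python
# Harmonic authorship credit: rank-weighted, normalized per paper.
def harmonic_credit(rank: int, n_authors: int) -> float:
    denom = sum(1.0 / j for j in range(1, n_authors + 1))
    return (1.0 / rank) / denom

# Example: a 4-author paper (denominator = 1 + 1/2 + 1/3 + 1/4 = 25/12).
print([round(harmonic_credit(i, 4), 3) for i in range(1, 5)])
# -> [0.48, 0.24, 0.16, 0.12]; sums to 1, unlike full or equal counting.
```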
Design, development and manufacture of a breadboard radio frequency mass gauging system
NASA Technical Reports Server (NTRS)
1975-01-01
The feasibility of the RF gauging mode-counting technique was demonstrated for gauging liquid hydrogen and liquid oxygen under all attitude conditions. With LH2, it was also demonstrated under dynamic fluid conditions, in which the fluid assumes ever-changing positions within the tank, that the RF gauging technique on the average provides a very good indication of mass. It is significant that the distribution of the mode count data at each fill level during dynamic LH2 and LOX orientation testing does approach a statistical normal distribution. Multiple space-diversity probes provide better coupling to the resonant modes than utilization of a single probe element. The variable sweep rate generator technique provides a more uniform mode-versus-time distribution for processing.
Abong', George Ooko
2018-01-01
Limited information exists on the status of hygiene and probable sources of microbial contamination in Orange Fleshed Sweet Potato (OFSP) puree processing. The current study is aimed at determining the level of compliance to Good Manufacturing Practices (GMPs), hygiene, and microbial quality in OFSP puree processing plant in Kenya. Intensive observation and interviews using a structured GMPs checklist, environmental sampling, and microbial analysis by standard microbiological methods were used in data collection. The results indicated low level of compliance to GMPs with an overall compliance score of 58%. Microbial counts on food equipment surfaces, installations, and personnel hands and in packaged OFSP puree were above the recommended microbial safety and quality legal limits. Steaming significantly (P < 0.05) reduced microbial load in OFSP cooked roots but the counts significantly (P < 0.05) increased in the puree due to postprocessing contamination. Total counts, yeasts and molds, Enterobacteriaceae, total coliforms, and E. coli and S. aureus counts in OFSP puree were 8.0, 4.0, 6.6, 5.8, 4.8, and 5.9 log10 cfu/g, respectively. In conclusion, equipment surfaces, personnel hands, and processing water were major sources of contamination in OFSP puree processing and handling. Plant hygiene inspection, environmental monitoring, and food safety trainings are recommended to improve hygiene, microbial quality, and safety of OFSP puree. PMID:29808161
Malavi, Derick Nyabera; Muzhingi, Tawanda; Abong', George Ooko
2018-01-01
Limited information exists on the status of hygiene and probable sources of microbial contamination in Orange Fleshed Sweet Potato (OFSP) puree processing. The current study is aimed at determining the level of compliance to Good Manufacturing Practices (GMPs), hygiene, and microbial quality in OFSP puree processing plant in Kenya. Intensive observation and interviews using a structured GMPs checklist, environmental sampling, and microbial analysis by standard microbiological methods were used in data collection. The results indicated low level of compliance to GMPs with an overall compliance score of 58%. Microbial counts on food equipment surfaces, installations, and personnel hands and in packaged OFSP puree were above the recommended microbial safety and quality legal limits. Steaming significantly (P < 0.05) reduced microbial load in OFSP cooked roots but the counts significantly (P < 0.05) increased in the puree due to postprocessing contamination. Total counts, yeasts and molds, Enterobacteriaceae, total coliforms, and E. coli and S. aureus counts in OFSP puree were 8.0, 4.0, 6.6, 5.8, 4.8, and 5.9 log10 cfu/g, respectively. In conclusion, equipment surfaces, personnel hands, and processing water were major sources of contamination in OFSP puree processing and handling. Plant hygiene inspection, environmental monitoring, and food safety trainings are recommended to improve hygiene, microbial quality, and safety of OFSP puree.
Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, J E; Fratoni, M; Kramer, K J
A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-sectional uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-sectional uncertainty estimates to gauge the validity of the analysis. All cross-sectional uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.
Removing cosmic-ray hits from multiorbit HST Wide Field Camera images
NASA Technical Reports Server (NTRS)
Windhorst, Rogier A.; Franklin, Barbara E.; Neuschaefer, Lyman W.
1994-01-01
We present an optimized algorithm that removes cosmic rays ('CRs') from multiorbit Hubble Space Telescope (HST) Wide Field/Planetary Camera ('WF/PC') images. It computes the image noise in every iteration from the WF/PC CCD equation, which includes all known sources of random and systematic calibration errors. We test this algorithm on WF/PC stacks of 2-12 orbits as a function of the number of available orbits and the formal Poissonian sigma-clipping level. We find that the algorithm needs ≥4 WF/PC exposures to locate the minimal sky signal (which is noticeably affected by CRs), with an optimal clipping level at 2-2.5× σ_Poisson. We analyze the CR flux detected on multiorbit 'CR stacks', which are constructed by subtracting the best CR-filtered images from the unfiltered 8-12 orbit average. We use an automated object finder to determine the surface density of CRs as a function of the apparent magnitude (or ADU flux) they would have generated in the images had they not been removed. The power-law slope of the CR 'counts' (γ ≈ 0.6 for N(m) ∝ m^γ) is steeper than that of the faint galaxy counts down to V ≈ 28 mag. The CR counts show a drop-off between 28 ≲ V ≲ 30 mag (the latter is our formal 2σ point-source sensitivity without spherical aberration). This prevents the CR sky integral from diverging, and is likely due to a real cutoff in the CR energy distribution below ≈11 ADU per orbit. The integral CR surface density is ≲10⁸ per square degree, and their sky signal is V ≈ 25.5-27.0 mag/sq. arcsec, or 3%-13% of our NEP sky background (V = 23.3 mag/sq. arcsec), and well above the EBL integral of the deepest galaxy counts (B_J ≈ 28.0 mag/sq. arcsec). We conclude that faint CRs will always contribute to the sky signal in the deepest WF/PC images. Since WFPC2 has approximately 2.7× lower read noise and a thicker CCD, this will result in more CR detections than in WF/PC, potentially affecting approximately 10%-20% of the pixels in multiorbit WFPC2 data cubes.
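A schematic version of the clipping step (not the authors' pipeline, which uses the full WF/PC CCD noise equation): flag pixels more than k·σ above the per-pixel minimal sky across the stack, then average the survivors. The noise proxy below is a deliberately crude Poissonian stand-in:

```python
# Sigma-clipped stacking to reject cosmic rays across >= 4 exposures.
import numpy as np

def cr_clean_stack(stack, k=2.5):
    """stack: (n_exposures, ny, nx) array; returns a CR-cleaned mean image."""
    sky = stack.min(axis=0)                   # minimal sky signal per pixel
    sigma = np.sqrt(np.maximum(sky, 1.0))     # crude Poissonian noise proxy
    mask = stack > sky + k * sigma            # flag CR-contaminated values
    clean = np.where(mask, np.nan, stack)
    return np.nanmean(clean, axis=0)

rng = np.random.default_rng(5)
imgs = rng.poisson(50, size=(6, 64, 64)).astype(float)
imgs[2, 10, 10] += 500.0                      # inject a fake cosmic ray
print(cr_clean_stack(imgs)[10, 10])           # back near the sky level
```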
NASA Astrophysics Data System (ADS)
Wagner, Roland; Schmedemann, Nico; Neukum, Gerhard; Werner, Stephanie C.; Ivanov, Boris A.; Stephan, Katrin; Jaumann, Ralf; Palumbo, Pasquale
2014-11-01
Crater distributions and the origin of potential impactors on the Galilean satellites have been an issue of controversial debate. In this work, we review the current knowledge of the cratering record on Ganymede and Callisto and present strategies for further studies using images from ESA’s JUICE mission to Jupiter. Crater distributions in densely cratered units on these two satellites show a complex shape between 20 m and 200 km crater diameter, similar to lunar highland distributions, implying impacts of members of a collisionally evolved projectile family. Also, the complex shape predominantly indicates production distributions. No evidence for apex-antapex asymmetries in crater frequency was found; therefore, either (a) the majority of projectiles preferentially impacted from planetocentric orbits, or (b) the satellites were rotating non-synchronously during a time of heavy bombardment. The currently available imaging data are insufficient to investigate in detail significant changes in the shape of crater distributions with time. Clusters of secondary craters are well mappable and excluded from crater counts; a lack of sufficient image coverage at high resolution, however, in many cases impedes the identification of source craters. ESA’s future JUICE mission will study Ganymede, the first icy satellite in the outer Solar System to be observed from orbit, under stable viewing conditions. Measurements of crater distributions can be carried out based on global geologic mapping at the highest spatial resolutions (tens of meters down to 3 m/pxl).
Very deep IRAS survey - constraints on the evolution of starburst galaxies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacking, P.; Houck, J.R.; Condon, J.J.
1987-05-01
Counts of sources (primarily starburst galaxies) from a deep 60-micron IRAS survey published by Hacking and Houck (1987) are compared with four evolutionary models. The counts below 100 mJy are higher than expected if no evolution has taken place out to a redshift of approximately 0.2. Redshift measurements of the survey sources should be able to distinguish between luminosity-evolution and density-evolution models and detect as little as a 20 percent brightening or increase in density of infrared sources per billion years (H₀ = 100 km/s per Mpc). Starburst galaxies cannot account for the reported 100-micron background without extreme evolution at high redshifts.
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fuse the data between 3D vision sensors and radiological sensors in hopes of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate being inversely proportional to the square of the distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
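The inverse-square dependence that links the two sensor modalities also suffices for a toy source-localization fit: given detector positions from the vision system and their count rates, solve for source position and strength by least squares (all numbers below are invented):

```python
# Fit source position and strength from count rates via the 1/r^2 law.
import numpy as np
from scipy.optimize import least_squares

detectors = np.array([[0.0, 0.0, 1.0],
                      [2.0, 0.0, 1.0],
                      [0.0, 2.0, 1.0],
                      [2.0, 2.0, 1.0]])          # positions from the vision system (m)
true_src, true_amp = np.array([1.3, 0.6, 0.0]), 5e4   # hypothetical ground truth
rates = true_amp / np.sum((detectors - true_src) ** 2, axis=1)

def residuals(params):
    src, amp = params[:3], params[3]
    pred = amp / np.sum((detectors - src) ** 2, axis=1)
    return pred - rates

fit = least_squares(residuals, x0=[0.5, 0.5, 0.5, 1e4])
print("recovered source position:", fit.x[:3])
```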
Lower white blood cell counts in elite athletes training for highly aerobic sports.
Horn, P L; Pyne, D B; Hopkins, W G; Barnes, C J
2010-11-01
White cell counts at rest might be lower in athletes participating in selected endurance-type sports. Here, we analysed blood tests of elite athletes collected over a 10-year period. Reference ranges were established for 14 female and 14 male sports involving 3,679 samples from 937 females and 4,654 samples from 1,310 males. Total white blood cell counts and counts of neutrophils, lymphocytes and monocytes were quantified. Each sport was scaled (1-5) for its perceived metabolic stress (aerobic-anaerobic) and mechanical stress (concentric-eccentric) by 13 sports physiologists. Substantially lower total white cell and neutrophil counts were observed in aerobic sports of cycling and triathlon (~16% of test results below the normal reference range) compared with team or skill-based sports such as water polo, cricket and volleyball. Mechanical stress of sports had less effect on the distribution of cell counts. The lower white cell counts in athletes in aerobic sports probably represent an adaptive response, not underlying pathology.
The SCUBA-2 Cosmology Legacy Survey: 850 μm maps, catalogues and number counts
NASA Astrophysics Data System (ADS)
Geach, J. E.; Dunlop, J. S.; Halpern, M.; Smail, Ian; van der Werf, P.; Alexander, D. M.; Almaini, O.; Aretxaga, I.; Arumugam, V.; Asboth, V.; Banerji, M.; Beanlands, J.; Best, P. N.; Blain, A. W.; Birkinshaw, M.; Chapin, E. L.; Chapman, S. C.; Chen, C.-C.; Chrysostomou, A.; Clarke, C.; Clements, D. L.; Conselice, C.; Coppin, K. E. K.; Cowley, W. I.; Danielson, A. L. R.; Eales, S.; Edge, A. C.; Farrah, D.; Gibb, A.; Harrison, C. M.; Hine, N. K.; Hughes, D.; Ivison, R. J.; Jarvis, M.; Jenness, T.; Jones, S. F.; Karim, A.; Koprowski, M.; Knudsen, K. K.; Lacey, C. G.; Mackenzie, T.; Marsden, G.; McAlpine, K.; McMahon, R.; Meijerink, R.; Michałowski, M. J.; Oliver, S. J.; Page, M. J.; Peacock, J. A.; Rigopoulou, D.; Robson, E. I.; Roseboom, I.; Rotermund, K.; Scott, Douglas; Serjeant, S.; Simpson, C.; Simpson, J. M.; Smith, D. J. B.; Spaans, M.; Stanley, F.; Stevens, J. A.; Swinbank, A. M.; Targett, T.; Thomson, A. P.; Valiante, E.; Wake, D. A.; Webb, T. M. A.; Willott, C.; Zavala, J. A.; Zemcov, M.
2017-02-01
We present a catalogue of ~3000 submillimetre sources detected (≥3.5σ) at 850 μm over ~5 deg² surveyed as part of the James Clerk Maxwell Telescope (JCMT) SCUBA-2 Cosmology Legacy Survey (S2CLS). This is the largest survey of its kind at 850 μm, increasing the sample size of 850 μm selected submillimetre galaxies by an order of magnitude. The wide 850 μm survey component of S2CLS covers the extragalactic fields: UKIDSS-UDS, COSMOS, Akari-NEP, Extended Groth Strip, Lockman Hole North, SSA22 and GOODS-North. The average 1σ depth of S2CLS is 1.2 mJy beam⁻¹, approaching the SCUBA-2 850 μm confusion limit, which we determine to be σc ≈ 0.8 mJy beam⁻¹. We measure the 850 μm number counts, reducing the Poisson errors on the differential counts to approximately 4 per cent at S₈₅₀ ≈ 3 mJy. With several independent fields, we investigate field-to-field variance, finding that the number counts on 0.5°-1° scales are generally within 50 per cent of the S2CLS mean for S₈₅₀ > 3 mJy, with scatter consistent with the Poisson and estimated cosmic variance uncertainties, although there is a marginal (2σ) density enhancement in GOODS-North. The observed counts are in reasonable agreement with recent phenomenological and semi-analytic models, although determining the shape of the faint-end slope (S₈₅₀ < 3 mJy) remains a key test. The large solid angle of S2CLS allows us to measure the bright-end counts: at S₈₅₀ > 10 mJy there are approximately 10 sources per square degree, and we detect the distinctive up-turn in the number counts indicative of the detection of local sources of 850 μm emission, and strongly lensed high-redshift galaxies. All calibrated maps and the catalogue are made publicly available.
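Turning a catalogue into differential number counts with Poisson error bars is a generic computation; a sketch with mock fluxes (not the S2CLS reduction):

```python
# Differential number counts dN/dS per unit area with sqrt(N) Poisson errors.
import numpy as np

rng = np.random.default_rng(6)
fluxes = rng.pareto(1.5, size=3000) + 1.0     # mock 850-um fluxes in mJy
area_deg2 = 5.0                               # surveyed solid angle

edges = np.geomspace(1, 30, 12)
counts, _ = np.histogram(fluxes, bins=edges)
dS = np.diff(edges)
dNdS = counts / (dS * area_deg2)              # deg^-2 mJy^-1
err = np.sqrt(counts) / (dS * area_deg2)      # Poisson uncertainty

for lo, hi, n, e in zip(edges[:-1], edges[1:], dNdS, err):
    print(f"{lo:5.1f}-{hi:5.1f} mJy: {n:8.1f} +/- {e:6.1f} deg^-2 mJy^-1")
```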
Hallas, Gary; Monis, Paul
2015-01-01
The enumeration of bacteria using plate-based counts is a core technique used by food and water microbiology testing laboratories. However, manual counting of bacterial colonies is both time and labour intensive, can vary between operators and also requires manual entry of results into laboratory information management systems, which can be a source of data entry error. An alternative is to use automated digital colony counters, but there is a lack of peer-reviewed validation data to allow incorporation into standards. We compared the performance of digital counting technology (ProtoCOL3) against manual counting using criteria defined in internationally recognized standard methods. Digital colony counting provided a robust, standardized system suitable for adoption in a commercial testing environment. The digital technology has several advantages:
• Improved measurement of uncertainty by using a standard and consistent counting methodology with less operator error.
• Efficiency for labour and time (reduced cost).
• Elimination of manual entry of data onto LIMS.
• Faster result reporting to customers.
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2014-09-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.
Experimental Study for Automatic Colony Counting System Based on Image Processing
NASA Astrophysics Data System (ADS)
Fang, Junlong; Li, Wenzhe; Wang, Guoxin
Colony counting in many colony experiments is at present performed manually, a method that is difficult to execute quickly and accurately. A new automatic colony counting system was developed. Making use of image-processing technology, a study was made of the feasibility of objectively distinguishing white bacterial colonies from clear plates according to the RGB color theory. An optimal chromatic value was obtained based upon extensive experiments on the distribution of chromatic values. It has been proved that the method greatly improves the accuracy and efficiency of colony counting and that the counting result is not affected by the inoculation, shape, or size of the colonies. It is revealed that automatic detection of colony quantity using image-processing technology could be an effective approach.
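A toy rendering of the approach: threshold pixels on colour, then count connected components. A real system would add calibration, illumination correction and size filtering; the "optimal chromatic value" test below is a placeholder assumption:

```python
# Colour-threshold-and-label colony counting on a synthetic plate image.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
img = np.zeros((100, 100, 3))                    # synthetic plate (RGB in [0, 1])
for cy, cx in rng.integers(10, 90, size=(12, 2)):
    yy, xx = np.ogrid[:100, :100]
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 9] = [0.9, 0.9, 0.85]  # whitish colonies

# Placeholder "chromatic value" rule: keep bright, near-neutral pixels.
mask = (img.min(axis=2) > 0.7) & (np.ptp(img, axis=2) < 0.1)
labels, n_colonies = ndimage.label(mask)
print("colonies counted:", n_colonies)           # touching colonies would merge
```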
NASA Astrophysics Data System (ADS)
Englander, J. G.; Brodrick, P. G.; Brandt, A. R.
2015-12-01
Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, or inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5× difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities using high-resolution spatial imagery. This method can help estimate venting and fugitive emissions sources from regions where reporting of oilfield equipment is incomplete or non-existent. This work will utilize high resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km² with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.
Gouge, Brian; Ries, Francis J; Dowlatabadi, Hadi
2010-09-15
Macroscale emissions modeling approaches have been widely applied in impact assessments of mobile source emissions. However, these approaches poorly characterize the spatial distribution of emissions and have been shown to underestimate emissions of some pollutants. To quantify the implications of these limitations on exposure assessments, CO, NOx, and HC emissions from diesel transit buses were estimated at 50 m intervals along a bus rapid transit route using a microscale emissions modeling approach. The impacted population around the route was estimated using census, pedestrian count and transit ridership data. Emissions exhibited significant spatial variability. In intervals near major intersections and bus stops, emissions were 1.6-3.0 times higher than average. The coincidence of these emission hot spots and peaks in pedestrian populations resulted in a 20-40% increase in exposure compared to estimates that assumed homogeneous spatial distributions of emissions and/or populations along the route. An additional 19-30% increase in exposure resulted from the underestimate of CO and NOx emissions by macroscale modeling approaches. The results of this study indicate that macroscale modeling approaches underestimate exposure due to poor characterization of the influence of vehicle activity on the spatial distribution of emissions and total emissions.
An astrophysics data program investigation of cluster evolution
NASA Technical Reports Server (NTRS)
Kellogg, Edwin M.
1990-01-01
A preliminary status report is given on studies using the Einstein X-ray observations of distant clusters of galaxies that are also candidates for gravitational lenses. The studies will determine the location and surface brightness distribution of the X-ray emission from clusters associated with selected gravitational lenses. The X-ray emission comes from hot gas that traces out the total gravitational potential in the cluster, so its distribution is approximately the same as the mass distribution causing gravitational lensing. Core radii and X-ray virial masses can be computed for several of the brighter Einstein sources, and preliminary results are presented on A2218. Preliminary status is also reported on a study of the optical data from 0024+16. A provisional value of 1800 to 2200 km/s for the equivalent velocity dispersion is obtained. The ultimate objective is to extract the mass of the gravitational lens, and perhaps more detailed information on the distribution of matter as warranted. A survey of the Einstein archive shows that the clusters A520, A1704, 3C295, A2397, A1722, SC5029-247, A3186 and A370 have enough X-ray counts observed to warrant more detailed optical observations of arcs for comparison. Mass estimates for these clusters can therefore be obtained from three independent sources: the length scale (core radius) that characterizes the density dropoff of the X-ray emitting hot gas away from its center, the velocity dispersion of the galaxies moving in the cluster potential, and gravitational bending of light by the total cluster mass. This study will allow the comparison of these three techniques and ultimately improve the knowledge of cluster masses.
Karulin, Alexey Y.; Karacsony, Kinga; Zhang, Wenji; Targoni, Oleg S.; Moldovan, Ioana; Dittrich, Marcus; Sundararaman, Srividya; Lehmann, Paul V.
2015-01-01
Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots, the decision on setting the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If the spot sizes follow a known statistical distribution, precise predictions on minimal and maximal spot sizes belonging to a given T cell population can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed a log-normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs. PMID:25612115
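The practical consequence — statistically justified counting gates — follows directly from a log-normal fit; a sketch on simulated spot sizes (not the donor data):

```python
# Fit a log-normal to spot sizes and derive counting thresholds from
# its percentiles.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
spot_sizes = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # hypothetical, in um^2

shape, loc, scale = stats.lognorm.fit(spot_sizes, floc=0)
lo, hi = stats.lognorm.ppf([0.005, 0.995], shape, loc=loc, scale=scale)
print(f"count spots between {lo:.0f} and {hi:.0f} (covers 99% of single-cell spots)")
```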
A model of the 8-25 micron point source infrared sky
NASA Technical Reports Server (NTRS)
Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.
1992-01-01
We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns), as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.
2013-01-01
Background High-throughput RNA sequencing (RNA-seq) offers unprecedented power to capture the real dynamics of gene expression. Experimental designs with extensive biological replication present a unique opportunity to exploit this feature and distinguish expression profiles with higher resolution. RNA-seq data analysis methods so far have been mostly applied to data sets with few replicates and their default settings try to provide the best performance under this constraint. These methods are based on two well-known count data distributions: the Poisson and the negative binomial. The way to properly calibrate them with large RNA-seq data sets is not trivial for the non-expert bioinformatics user. Results Here we show that expression profiles produced by extensively-replicated RNA-seq experiments lead to a rich diversity of count data distributions beyond the Poisson and the negative binomial, such as Poisson-Inverse Gaussian or Pólya-Aeppli, which can be captured by a more general family of count data distributions called the Poisson-Tweedie. The flexibility of the Poisson-Tweedie family enables a direct fitting of emerging features of large expression profiles, such as heavy-tails or zero-inflation, without the need to alter a single configuration parameter. We provide a software package for R called tweeDEseq implementing a new test for differential expression based on the Poisson-Tweedie family. Using simulations on synthetic and real RNA-seq data we show that tweeDEseq yields P-values that are equally or more accurate than competing methods under different configuration parameters. By surveying the tiny fraction of sex-specific gene expression changes in human lymphoblastoid cell lines, we also show that tweeDEseq accurately detects differentially expressed genes in a real large RNA-seq data set with improved performance and reproducibility over the previously compared methodologies. Finally, we compared the results with those obtained from microarrays in order to check for reproducibility. Conclusions RNA-seq data with many replicates leads to a handful of count data distributions which can be accurately estimated with the statistical model illustrated in this paper. This method provides a better fit to the underlying biological variability; this may be critical when comparing groups of RNA-seq samples with markedly different count data distributions. The tweeDEseq package forms part of the Bioconductor project and it is available for download at http://www.bioconductor.org. PMID:23965047
Spillover Compensation in the Presence of Respiratory Motion Embedded in SPECT Perfusion Data
NASA Astrophysics Data System (ADS)
Pretorius, P. Hendrik; King, Michael A.
2008-02-01
Spillover from adjacent significant accumulations of extra-cardiac activity decreases the diagnostic accuracy of SPECT perfusion imaging, especially in the inferior/septal cardiac region. One method of compensating for the spillover at some location outside of a structure is to estimate it as the counts blurred into this location when a template (3D model) of the structure undergoes simulated imaging followed by reconstruction. The objective of this study was to determine what impact uncorrected respiratory motion has on such spillover compensation of extra-cardiac activity in the right coronary artery (RCA) territory, and whether it is possible to use manual segmentation to define the extra-cardiac activity template(s) used in spillover correction. Two separate MCAT phantoms (128³ matrices) were simulated to represent the source and attenuation distributions of patients with and without respiratory motion. For each phantom the heart was modeled: 1) with a normal perfusion pattern and 2) with an RCA defect equal to 50% of the normal myocardium count level. After Monte Carlo simulation of 64×64×120 projections with appropriate noise, data were reconstructed using the rescaled block iterative (RBI) algorithm with 30 subsets and 5 iterations with compensation for attenuation, scatter and resolution. A 3D Gaussian post-filter with a sigma of 0.476 cm was used to suppress noise. Manual segmentation of the liver in filtered emission slices was used to create 3D binary templates. The true liver distribution (with and without respiratory motion included) was also used as binary templates. These templates were projected using a ray-driven projector simulating the imaging system with the exclusion of Compton scatter and reconstructed using the same protocol as for the emission data, excluding scatter compensation. Reconstructed templates were scaled using reconstructed emission count levels from the liver, and the spillover was subtracted outside the template. It was evident from the polar maps that the manually segmented template reconstructions were unable to remove all the spillover originating in the liver from the inferior wall. This was especially noticeable when a perfusion defect was present. Templates based on the true liver distribution appreciably improved spillover correction. Thus the emerging combined SPECT/CT technology may play a vital role in identifying and segmenting extra-cardiac structures more reliably, thereby facilitating spillover correction. This study also indicates that compensation for respiratory motion might play an important role in spillover compensation.
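Stripped to its essentials, the template method is: blur a binary mask of the extra-cardiac structure with the system response, scale the blurred template to the measured counts inside the structure, and subtract the scaled template everywhere outside it. The sketch below is a heavily simplified 1D version of that idea: it substitutes a Gaussian blur for the ray-driven projection and RBI reconstruction used in the study, and all shapes and count levels are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)

n = 128
x = np.arange(n)
heart = (x > 50) & (x < 76)             # region of interest
liver = (x >= 80) & (x < 110)           # binary template of the structure

# "True" activity: hot liver next to the heart; blur + Poisson noise
# stands in for imaging.
truth = 5.0 * heart + 20.0 * liver
measured = rng.poisson(gaussian_filter1d(truth, sigma=4.0)).astype(float)

# "Project and reconstruct" the template (here: the same Gaussian blur).
template = gaussian_filter1d(liver.astype(float), sigma=4.0)

# Scale the blurred template to the measured counts inside the liver.
scale = measured[liver].sum() / template[liver].sum()

# Subtract the estimated spillover only outside the template.
corrected = measured.copy()
corrected[~liver] -= scale * template[~liver]

# The heart mean should move back toward the true level of 5.
print("mean counts in heart before:", measured[heart].mean().round(2))
print("mean counts in heart after: ", corrected[heart].mean().round(2))
```

The study's finding maps directly onto this sketch: if the binary mask (manual segmentation) or the blur model (uncorrected respiratory motion) is wrong, the scaled template under- or overestimates the spillover it subtracts.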
Garza-Gisholt, Eduardo; Hemmi, Jan M; Hart, Nathan S; Collin, Shaun P
2014-01-01
Topographic maps that illustrate variations in the density of different neuronal sub-types across the retina are valuable tools for understanding the adaptive significance of retinal specialisations in different species of vertebrates. To date, such maps have been created from raw count data that have been subjected to only limited analysis (linear interpolation) and, in many cases, have been presented as iso-density contour maps with contour lines that have been smoothed 'by eye'. With the use of a stereological approach to counting neuronal distributions, a more rigorous approach to analysing the count data is warranted and potentially provides a more accurate representation of the neuron distribution pattern. Moreover, a formal spatial analysis of retinal topography permits a more robust comparison of topographic maps within and between species. In this paper, we present a new R-script for analysing the topography of retinal neurons and compare methods of interpolating and smoothing count data for the construction of topographic maps. We compare four methods for spatial analysis of cell count data: Akima interpolation, thin plate spline interpolation, thin plate spline smoothing and Gaussian kernel smoothing. The use of interpolation 'respects' the observed data and simply calculates the intermediate values required to create iso-density contour maps. Interpolation preserves more of the data but, consequently, includes outliers, sampling errors and/or other experimental artefacts. In contrast, smoothing the data reduces the 'noise' caused by artefacts and permits a clearer representation of the dominant, 'real' distribution. This is particularly useful where cell density gradients are shallow and small variations in local density may dramatically influence the perceived spatial pattern of neuronal topography. The thin plate spline and the Gaussian kernel methods both produce similar retinal topography maps, but the smoothing parameters used may affect the outcome.
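Of the four methods compared, Gaussian kernel smoothing is the simplest to state: the density estimate at each map point is a Gaussian-weighted average of the sampled counts. A minimal numpy sketch follows; it illustrates the technique rather than reproducing the authors' R-script, the sampling locations and counts are synthetic, and the bandwidth sigma is exactly the kind of smoothing parameter the abstract warns can affect the outcome.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scattered samples: (x, y) positions and cell counts with a
# density peak near (7, 5) plus Poisson noise.
pts = rng.uniform(0, 10, size=(200, 2))
lam = 50 + 30 * np.exp(-((pts[:, 0] - 7) ** 2 + (pts[:, 1] - 5) ** 2) / 4)
counts = rng.poisson(lam).astype(float)

def gaussian_kernel_smooth(grid_xy, pts, values, sigma):
    """Gaussian-weighted average of scattered values at each grid point."""
    d2 = ((grid_xy[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Evaluate on a regular grid, as needed for an iso-density contour map.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = gaussian_kernel_smooth(grid, pts, counts, sigma=1.0).reshape(gx.shape)
print("smoothed density range:", density.min().round(1), density.max().round(1))
```

Unlike interpolation, this estimator does not pass through the observed values, which is precisely how it suppresses outliers and sampling noise.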
Rehbein, S; Lindner, T; Kollmannsberger, M; Winter, R; Visser, M
1997-06-01
The distribution of Chabertia (Ch.) ovina, Oesophagostomum (O.) venulosum and Trichuris spp. within the large intestine of naturally infected sheep was evaluated in relation to worm counts and the presence of nematodes of other species or genera. The large intestine was divided into 4 sections. More than 75% of Ch. ovina were found within the disk-like section of the colon, independently of worm count and the presence of nematodes of other species. O. venulosum and Trichuris spp. preferred the caecum and the first section of the colon up to the beginning of the disk-like section. For both, the share of worms recovered from the first section of the colon increased with higher worm counts. The simultaneous presence of O. venulosum and Trichuris spp. had a significantly negative influence on the share of Trichuris spp. isolated from the caecum.
A note on the tissue star dose in personnel radiation monitoring in space
NASA Technical Reports Server (NTRS)
Schaefer, H. J.
1978-01-01
Secondaries from nuclear interactions of high-energy primaries in the body tissues themselves contribute substantially to the astronaut's radiation exposure in space. The so-called tissue star dose is assessed from the prong number distribution of disintegration stars in emulsion. Prong counts of 1,000 emulsion stars from the Apollo-Soyuz mission reported earlier were re-evaluated. The original scores were divided into sets of 250, 500, 750, and 1,000 emulsion stars and the respective prong number distributions established. The statistical error of the gelatin star number for the four consecutively larger sets was found to vary, on the 67 percent confidence level, from ±25 percent for the count of 250 emulsion stars to ±11 percent for 1,000 stars. Seen in the context of the other limitations of the experimental design, the lowest effort, prong-counting 250 stars, appears entirely appropriate.
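The quoted spread is roughly what 1/sqrt(N) Poisson counting statistics predict. A back-of-the-envelope check under that assumption (the re-evaluation itself used the actual prong scores; the gelatin-star fraction f below is an illustrative placeholder, chosen near 7% because that value reproduces the quoted endpoints):

```python
import numpy as np

# Fractional 1-sigma counting error of a star class making up a
# fraction f of the sample, assuming pure Poisson statistics.
f = 0.07  # assumed gelatin-star fraction -- illustrative only

for n_stars in (250, 500, 750, 1000):
    n_class = f * n_stars
    frac_err = 1.0 / np.sqrt(n_class)   # sigma(N)/N for Poisson counts
    print(f"{n_stars:5d} stars: +/- {100 * frac_err:.0f}%")
```

This prints roughly ±24% for 250 stars down to ±12% for 1,000 stars, consistent with the reported ±25% to ±11% range.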
Two-Part and Related Regression Models for Longitudinal Data
Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.
2017-01-01
Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906
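For the count-data case named above, a hurdle model couples a Bernoulli "any response at all?" part with a zero-truncated count part for the positives. The review presents no code, so the following is a minimal simulation sketch under standard hurdle-Poisson assumptions, with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Part 1: binary response -- does the subject clear the hurdle?
p_positive = 0.4
positive = rng.random(n) < p_positive

# Part 2: zero-truncated Poisson for subjects that cleared the hurdle.
lam = 3.0
counts = np.zeros(n, dtype=int)
idx = np.flatnonzero(positive)
draws = rng.poisson(lam, size=idx.size)
while np.any(draws == 0):               # redraw zeros -> zero truncation
    zeros = draws == 0
    draws[zeros] = rng.poisson(lam, size=zeros.sum())
counts[idx] = draws

print("P(Y=0) empirical:", np.mean(counts == 0))       # ~ 1 - p_positive
print("mean of positives:", counts[counts > 0].mean())
# Zero-truncated Poisson mean: lam / (1 - exp(-lam))
print("theoretical positive mean:", lam / (1 - np.exp(-lam)))
```

In the longitudinal setting the review addresses, each part additionally carries covariates and (possibly correlated) random effects; the two-part structure itself is unchanged.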
The Lambert-Beer law in time domain form and its application.
Mosorov, Volodymyr
2017-10-01
The majority of current radioisotope gauges measure intensity over a chosen sampling time interval using a detector. Such an approach has several disadvantages: the temporal resolution of the gauge is fixed, and the accuracy of the measurements is not the same at different count rates. One solution is a stronger radioactive source, but this conflicts with the ALARA (As Low As Reasonably Achievable) principle. The article therefore presents an alternative approach based on a modified Lambert-Beer law. The basis of the approach is the registration of time intervals instead of the registration of counts. This increases the temporal resolution of a gauge without requiring a stronger radioactive source, and the accuracy of the measurements does not depend on count rate.
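The core of the time-domain form is that for Poisson-distributed pulses the inter-arrival times are exponential with mean 1/rate, so registering intervals recovers the intensity, and the attenuation then follows from the Lambert-Beer law, I = I0 exp(-mu x). A minimal sketch under those standard assumptions (not the paper's specific gauge model; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Lambert-Beer: count rate behind an absorber of thickness x.
I0, mu, x = 5000.0, 0.2, 4.0   # illustrative: counts/s, 1/cm, cm
rate = I0 * np.exp(-mu * x)

# Time-domain registration: record N inter-pulse intervals instead of
# counting pulses in a fixed window.  For Poisson arrivals the
# intervals are exponential with mean 1/rate.
n_intervals = 2000
intervals = rng.exponential(1.0 / rate, size=n_intervals)

rate_hat = 1.0 / intervals.mean()
mu_hat = -np.log(rate_hat / I0) / x

print(f"true rate {rate:.1f} c/s, estimated {rate_hat:.1f} c/s")
print(f"true mu {mu:.3f} 1/cm, estimated {mu_hat:.3f} 1/cm")
```

Because the measurement ends after a fixed number of intervals rather than a fixed time, the relative precision (here set by n_intervals) is the same at any count rate, which is the accuracy property claimed above.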
The Hopkins Ultraviolet Telescope - Performance and calibration during the Astro-1 mission
NASA Technical Reports Server (NTRS)
Davidsen, Arthur F.; Long, Knox S.; Durrance, Samuel T.; Blair, William P.; Bowers, Charles W.; Conard, Steven J.; Feldman, Paul D.; Ferguson, Henry C.; Fountain, Glen H.; Kimble, Randy A.
1992-01-01
Results are reported of spectrophotometric observations, made with the Hopkins Ultraviolet Telescope (HUT), of 77 astronomical sources throughout the far-UV (912-1850 A) at a resolution of about 3 A, and, for a small number of sources, in the extreme UV (415-912 A) beyond the Lyman limit at a resolution of about 1.5 A. The HUT instrument and its performance in orbit are described. A HUT observation of the DA white dwarf G191-B2B is presented, and the photometric calibration curve for the instrument is derived from a comparison of the observation with a model stellar atmosphere. The sensitivity reaches a maximum at 1050 A, where 1 photon/sq cm s A yields 9.5 counts/s A, and remains within a factor of 2 of this value from 912 to 1600 A. The instrumental dark count measured on orbit was less than 0.001 counts/s A.
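The calibration step described here amounts to forming the ratio of the observed count-rate spectrum of the standard star to the model-atmosphere flux, giving counts per unit incident flux at each wavelength. A schematic numpy version with entirely synthetic spectra (the real derivation used the G191-B2B observation against a model stellar atmosphere; every curve below is invented):

```python
import numpy as np

rng = np.random.default_rng(5)

# Wavelength grid over the far-UV range (Angstroms).
wl = np.linspace(912, 1850, 470)

# Invented model flux of the standard star (photons / cm^2 / s / A).
model_flux = 2.0 * (wl / 1000.0) ** -1.5

# An invented instrument response peaking near 1050 A, times the model
# flux, plus Poisson noise, stands in for the observed count spectrum.
true_response = 9.5 * np.exp(-(((wl - 1050.0) / 350.0) ** 2))
observed = rng.poisson(true_response * model_flux * 100.0) / 100.0

# Calibration curve: (counts/s/A) per (photon / cm^2 / s / A).
calibration = observed / model_flux
print(f"peak sensitivity ~ {calibration.max():.1f} counts/s/A "
      f"at {wl[calibration.argmax()]:.0f} A")
```

The recovered curve peaks near 1050 A at roughly 9.5 counts/s/A by construction, mirroring the sensitivity figures quoted in the abstract.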
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
One of the most valuable characteristics of the PCA is the high count rates (100,000 c/s) it can record, and the resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle-1 work on Sco X-1 has shown that performing high count rate observations is very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
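Why the count rate matters: for Poisson-dominated data, the significance of a weak periodic modulation of fractional amplitude a grows roughly as a times the square root of the total counts, so a tenfold brighter source buys about sqrt(10) more sensitivity per unit exposure. The toy demonstration below rests only on that standard statistics argument and carries no RXTE/PCA specifics; all rates, frequencies, and exposure values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

def detection_sigma(rate, frac_amp=0.01, t_exp=1000.0, dt=0.01):
    """Generate a weakly modulated Poisson light curve and return the
    approximate significance of the sinusoid amplitude recovered at the
    (assumed known) signal frequency."""
    t = np.arange(0, t_exp, dt)
    freq = 5.0  # Hz, assumed known signal frequency
    lam = rate * dt * (1.0 + frac_amp * np.sin(2 * np.pi * freq * t))
    counts = rng.poisson(lam)
    # Fourier amplitude at the signal frequency...
    phase = 2 * np.pi * freq * t
    amp = 2.0 * np.abs(np.sum(counts * np.exp(-1j * phase))) / counts.sum()
    # ...against the approximate Poisson noise level on that amplitude.
    sigma_amp = np.sqrt(2.0 / counts.sum())
    return amp / sigma_amp

for rate in (1_000.0, 10_000.0, 100_000.0):
    print(f"{rate:>9.0f} c/s: 1% modulation detected at "
          f"{detection_sigma(rate):.1f} sigma")
```

A 1% modulation that is marginal at 1,000 c/s becomes overwhelming at 100,000 c/s in the same exposure, which is the sensitivity argument the proposal makes.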
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission of accepted Cycle 2-7 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and in its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission of accepted Cycle 2-8 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and in its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission of accepted Cycle 2-9 proposal. The PCA is unique in the high count rates (~100,000 c/s) it can record, and in its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission of accepted Cycle 2-5 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and in its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission of accepted Cycle 2&3 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and in its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle 1-3 work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.