NASA Technical Reports Server (NTRS)
Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.;
2012-01-01
We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg⁻². By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg⁻², the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg⁻² and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.
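As an illustration of how cumulative number counts are typically assembled from a flux-limited catalogue, the sketch below implements the standard 1/Ω_max weighting. This is not the Bayesian machinery of the paper; the sensitivity curve and flux grid in the example are hypothetical.

import numpy as np

def cumulative_counts(fluxes, area_at_flux, grid):
    """Cumulative number counts N(>S) in deg^-2.

    fluxes       : array of detected source fluxes (erg cm^-2 s^-1)
    area_at_flux : callable giving the solid angle (deg^2) over which a source
                   of a given flux is detectable (the survey sensitivity curve)
    grid         : flux values S at which to evaluate N(>S)
    """
    weights = 1.0 / area_at_flux(fluxes)          # 1/Omega_max per source
    return np.array([weights[fluxes > S].sum() for S in grid])

# toy usage with a hypothetical flat 0.1 deg^2 sensitivity curve
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = 1e-17 / rng.power(0.5, size=500)          # toy power-law flux sample
    area = lambda f: np.full_like(f, 0.1)         # hypothetical constant coverage
    grid = np.logspace(-17, -15, 10)
    print(cumulative_counts(S, area, grid))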
ERIC Educational Resources Information Center
Le Corre, Mathieu; Carey, Susan
2007-01-01
Since the publication of [Gelman, R., & Gallistel, C. R. (1978). "The child's understanding of number." Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present…
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.; Kohno, K.; Aretxaga, I.; Austermann, J. E.; Ezawa, H.; Hughes, D. H.; Ikarashi, S.; Iono, D.; Kawabe, R.; Matsuo, H.; Matsuura, S.; Nakanishi, K.; Oshima, T.; Perera, T.; Scott, K. S.; Shirahata, M.; Takeuchi, T. T.; Tamura, Y.; Tanaka, K.; Tosaki, T.; Wilson, G. W.; Yun, M. S.
2010-10-01
We present number counts and clustering properties of millimeter-bright galaxies uncovered by the AzTEC camera mounted on the Atacama Submillimeter Telescope Experiment (ASTE). We surveyed the AKARI Deep Field South (ADF-S), the Subaru/XMM-Newton Deep Field (SXDF), and the SSA22 field, each over an area of ~0.25 deg² with an rms noise level of ~0.4-1.0 mJy. We constructed differential and cumulative number counts, which provide currently the tightest constraints on the faint end. Integration of the best-fit number counts in the ADF-S finds that the contribution of 1.1 mm sources with fluxes ≥1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with ≥1 mJy using the best-fit number counts in the ADF-S and find that it is lower by a factor of about 5-10 than that derived from UV/optically selected galaxies at z ~ 2-3. The average mass of dark halos hosting bright 1.1 mm sources is calculated to be 10¹³-10¹⁴ M⊙. Comparison of the correlation lengths of 1.1 mm sources with other populations and with a bias evolution model suggests that dark halos hosting bright 1.1 mm sources evolve into present-day cluster systems and that the 1.1 mm sources residing in these dark halos evolve into massive elliptical galaxies located at the centers of clusters.
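The CIB fraction quoted above follows from integrating S·dN/dS over the flux range covered by the counts. A minimal sketch of that calculation, assuming an illustrative Schechter-like count model; the parameter values below are placeholders, not the best-fit values of the paper.

import numpy as np
from scipy.integrate import quad

def cib_contribution(dnds, s_min, s_max=50.0):
    """Integrate S * dN/dS between s_min and s_max (mJy) to get the surface
    brightness contributed by sources in that flux range
    (mJy per deg^2 for dN/dS in deg^-2 mJy^-1)."""
    return quad(lambda s: s * dnds(s), s_min, s_max)[0]

# hypothetical Schechter-like 1.1 mm count model (illustrative parameters only)
phi_star, s_star, alpha = 2500.0, 1.6, -2.0   # deg^-2 mJy^-1, mJy, faint-end slope
dnds = lambda s: phi_star * (s / s_star) ** alpha * np.exp(-s / s_star)

bright = cib_contribution(dnds, 1.0)      # sources >= 1 mJy
total  = cib_contribution(dnds, 0.01)     # extrapolated down to 0.01 mJy
print(f"fraction of CIB from >=1 mJy sources: {bright / total:.2f}")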
Inconsistencies in authoritative national paediatric workforce data sources.
Allen, Amy R; Doherty, Richard; Hilton, Andrew M; Freed, Gary L
2017-12-01
Objective National health workforce data are used in workforce projections, policy and planning. If data to measure the current effective clinical medical workforce are not consistent, accurate and reliable, policy options pursued may not be aligned with Australia's actual needs. The aim of the present study was to identify any inconsistencies and contradictions in the numerical count of paediatric specialists in Australia, and discuss issues related to the accuracy of collection and analysis of medical workforce data. Methods This study compared respected national data sources regarding the number of medical practitioners in eight fields of paediatric speciality medical (non-surgical) practice. It also counted the number of doctors listed on the websites of speciality paediatric hospitals and clinics as practicing in these eight fields. Results Counts of medical practitioners varied markedly for all specialties across the data sources examined. In some fields examined, the range of variability across data sources exceeded 450%. Conclusions The national datasets currently available from federal and speciality sources do not provide consistent or reliable counts of the number of medical practitioners. The lack of an adequate baseline for the workforce prevents accurate predictions of future needs to provide the best possible care of children in Australia. What is known about the topic? Various national data sources contain counts of the number of medical practitioners in Australia. These data are used in health workforce projections, policy and planning. What does this paper add? The present study found that the current data sources do not provide consistent or reliable counts of the number of practitioners in eight selected fields of paediatric speciality practice. There are several potential issues in the way workforce data are collected or analysed that cause the variation between sources to occur. What are the implications for practitioners? Without accurate data on which to base decision making, policy options may not be aligned with the actual needs of children with various medical needs, in various geographic areas or the nation as a whole.
A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.; Ishii, Takako T.
2004-03-01
Source confusion has been a long-standing problem in astronomical history. In previous formulations of the confusion problem, sources were assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of point field theory, we derive general analytic formulae for the confusion problem with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on a realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.
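For reference, the classical confusion-noise estimate for unclustered power-law counts (the homogeneous baseline that the generalized formulae above extend) can be written in a few lines. A sketch in the spirit of the classical treatment; the count normalisation, slope and beam size in the example are illustrative only.

import numpy as np

def confusion_noise(k, gamma, theta_fwhm, q=5.0):
    """Classical confusion noise for power-law differential counts
    dN/dS = k * S**(-gamma) (sources per steradian per flux unit, 1 < gamma < 3),
    observed with a Gaussian beam of FWHM theta_fwhm (radians).
    Sources brighter than q * sigma_c are assumed to be removed."""
    omega_b = np.pi * theta_fwhm**2 / (4.0 * np.log(2.0))   # beam solid angle
    omega_e = omega_b / (gamma - 1.0)                        # effective beam area
    return ((q**(3.0 - gamma) / (3.0 - gamma)) ** (1.0 / (gamma - 1.0))
            * (k * omega_e) ** (1.0 / (gamma - 1.0)))

# toy usage: illustrative counts observed with an 8 arcsec beam
print(confusion_noise(k=5.0e3, gamma=2.2, theta_fwhm=np.radians(8.0 / 3600.0)))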
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
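A numerical sketch of the Bayesian calculation for the low-count, background-dominated regime discussed above: with a flat prior truncated at zero, the posterior for the source intensity s given N total counts and a known mean background b is proportional to exp(-(s+b))(s+b)^N/N!, and a one-sided credible limit follows by integration. This illustrates the approach only; the paper tabulates shortest intervals rather than the simple upper limit computed here.

import numpy as np
from scipy.stats import poisson
from scipy.integrate import quad
from scipy.optimize import brentq

def bayesian_upper_limit(n_obs, b, cl=0.90, s_max=100.0):
    """Bayesian upper limit on the source counts s, given n_obs total counts
    and a known mean background b, using a flat prior truncated at s >= 0.
    Posterior density: f(s) proportional to exp(-(s+b)) * (s+b)**n_obs / n_obs!"""
    post = lambda s: poisson.pmf(n_obs, s + b)
    norm = quad(post, 0.0, s_max)[0]
    cdf = lambda s: quad(post, 0.0, s)[0] / norm - cl
    return brentq(cdf, 0.0, s_max)

# e.g. 3 counts observed with 2.5 expected background counts
print(bayesian_upper_limit(3, 2.5))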
The SWIFT AGN and Cluster Survey. I. Number Counts of AGNs and Galaxy Clusters
NASA Astrophysics Data System (ADS)
Dai, Xinyu; Griffin, Rhiannon D.; Kochanek, Christopher S.; Nugent, Jenna M.; Bregman, Joel N.
2015-05-01
The Swift active galactic nucleus (AGN) and Cluster Survey (SACS) uses 125 deg² of Swift X-ray Telescope serendipitous fields with variable depths surrounding γ-ray bursts to provide a medium depth (4 × 10⁻¹⁵ erg cm⁻² s⁻¹) and area survey filling the gap between deep, narrow Chandra/XMM-Newton surveys and wide, shallow ROSAT surveys. Here, we present a catalog of 22,563 point sources and 442 extended sources and examine the number counts of the AGN and galaxy cluster populations. SACS provides excellent constraints on the AGN number counts at the bright end with negligible uncertainties due to cosmic variance, and these constraints are consistent with previous measurements. We use Wide-field Infrared Survey Explorer mid-infrared (MIR) colors to classify the sources. For AGNs we can roughly separate the point sources into MIR-red and MIR-blue AGNs, finding roughly equal numbers of each type in the soft X-ray band (0.5-2 keV), but fewer MIR-blue sources in the hard X-ray band (2-8 keV). The cluster number counts, with 5% uncertainties from cosmic variance, are also consistent with previous surveys but span a much larger continuous flux range. Deep optical or IR follow-up observations of this cluster sample will significantly increase the number of higher-redshift (z > 0.5) X-ray-selected clusters.
The 2-24 μm source counts from the AKARI North Ecliptic Pole survey
NASA Astrophysics Data System (ADS)
Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.
2014-11-01
We present herein galaxy number counts in nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from the NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg², respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and the difference between observed and intrinsic magnitudes were corrected for by Monte Carlo simulation. Stellar counts were subtracted by using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit: 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with a Euclidean universe, while on the faint side the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the waveband.
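The completeness correction mentioned above is usually derived from an injection-recovery Monte Carlo. A minimal sketch of that generic technique, not the AKARI pipeline itself; the extract callable and matching radius are placeholders for a real source-extraction routine.

import numpy as np

def completeness_curve(image, psf, extract, flux_grid, n_trials=200, match_rad=3.0, rng=None):
    """Detection completeness from an injection-recovery Monte Carlo.

    Artificial point sources of known flux are added to the real map one at a
    time, the source extractor is re-run, and the recovered fraction is
    recorded. `extract(map2d)` is assumed to return (x, y, flux) tuples and
    `psf` is a small 2-D kernel centred on its own array; both are placeholders.
    """
    rng = rng or np.random.default_rng()
    ny, nx = image.shape
    ky, kx = psf.shape
    curve = []
    for flux in flux_grid:
        hits = 0
        for _ in range(n_trials):
            x = rng.integers(kx, nx - kx)          # injection position (pixels)
            y = rng.integers(ky, ny - ky)
            test = image.copy()
            test[y - ky // 2:y - ky // 2 + ky,
                 x - kx // 2:x - kx // 2 + kx] += flux * psf
            if any(np.hypot(dx - x, dy - y) < match_rad for dx, dy, _ in extract(test)):
                hits += 1
        curve.append(hits / n_trials)
    return np.array(curve)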
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed-frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times the interval count can be counted down to zero before reaching a total of sixty times the frequency of the fixed-frequency source (the factor of sixty converts the result to beats per minute).
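In other words, the displayed rate is 60·f divided by the number of clock cycles in one beat-to-beat interval. A software stand-in for the hardware counting scheme; the example values are illustrative.

def heart_rate_bpm(interval_count, clock_hz):
    """Heart rate implied by the count of fixed-frequency clock cycles between
    two successive beats: rate = 60 * f / interval_count.  The repeated
    subtraction loop mirrors the count-down-to-zero scheme described above."""
    total, rate = 60 * clock_hz, 0
    while total >= interval_count:      # count the interval down to zero repeatedly
        total -= interval_count
        rate += 1
    return rate

print(heart_rate_bpm(interval_count=800, clock_hz=1000))   # 0.8 s between beats -> 75 bpm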
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting-type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of which θ maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ≈ 2.51 times the squared PSF width σ_PSF². While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
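For context, Eq. (17) of Li & Ma (1983) referred to above gives the detection significance from the on- and off-region counts; a compact implementation follows. The example numbers are arbitrary, and in the high-count Gaussian-PSF limit the optimum quoted above corresponds to an on-source radius θ ≈ 1.58 σ_PSF.

import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance from Eq. (17) of Li & Ma (1983), where alpha is
    the ratio of on-source to off-source exposure (both counts must be > 0)."""
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

# example: excess of 30 counts over an expected background of 100 in the on-region
print(li_ma_significance(n_on=130, n_off=1000, alpha=0.1))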
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.
2011-11-01
We present results of a 1.1 mm deep survey of the AKARI Deep Field South (ADF-S) with AzTEC mounted on the Atacama Submillimetre Telescope Experiment (ASTE). We obtained a map of 0.25 deg² area with an rms noise level of 0.32-0.71 mJy. This is one of the deepest and widest maps thus far at millimetre and submillimetre wavelengths. We uncovered 198 sources with a significance of 3.5-15.6σ, providing the largest catalog of 1.1 mm sources in a contiguous region. Most of the sources are not detected in the far-infrared bands of the AKARI satellite, suggesting that they are mostly at z ≥ 1.5 given the detection limits. We construct differential and cumulative number counts in the ADF-S, the Subaru/XMM-Newton Deep Field (SXDF), and the SSA 22 field surveyed by AzTEC/ASTE, which provide currently the tightest constraints on the faint end. Integration of the differential number counts in the ADF-S finds that the contribution of 1.1 mm sources with ≥1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with ≥1 mJy using the differential number counts and find that it is lower by a factor of about 5-10 than those derived from UV/optically selected galaxies at z ~ 2-3. Clustering analyses of AzTEC sources in the ADF-S and the SXDF find that bright (>3 mJy) AzTEC sources are more strongly clustered than faint (<3 mJy) AzTEC sources, and the average mass of dark halos hosting bright AzTEC sources is calculated to be 10¹³-10¹⁴ M⊙. Comparison of the correlation lengths of AzTEC sources with other populations and with a bias evolution model suggests that dark halos hosting bright AzTEC sources evolve into present-day cluster systems and that the AzTEC sources residing in these dark halos evolve into massive elliptical galaxies located at the centers of clusters.
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.
2013-02-01
We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg² (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at frequencies of 353 GHz and higher (or 217 GHz and lower), the counts are dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra cross. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and to models taking into account both radio and infrared galaxies and covering a large range of flux densities. We derive the multi-frequency Euclidean level - the plateau in the normalised differential counts at high flux density - and compare it to WMAP, Spitzer and IRAS results. The submillimetre number counts are not well reproduced by current evolution models of dusty galaxies, whereas the millimetre part appears reasonably well fitted by the most recent model for synchrotron-dominated sources. Finally, we provide estimates of the local luminosity density of dusty galaxies, providing the first such measurements at 545 and 857 GHz. Appendices are available in electronic form at http://www.aanda.org
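The "Euclidean level" referred to above is the plateau of S^2.5 dN/dS at high flux density. A minimal sketch of how that normalised differential count is built from a flux-density list; the survey area, flux values and binning in the example are placeholders.

import numpy as np

def euclidean_normalised_counts(fluxes, area_deg2, bin_edges):
    """Differential number counts normalised to the Euclidean expectation,
    S^2.5 dN/dS (Jy^1.5 sr^-1), from a list of flux densities (Jy) detected
    over a survey area given in square degrees."""
    area_sr = area_deg2 * (np.pi / 180.0) ** 2
    n, _ = np.histogram(fluxes, bins=bin_edges)
    ds = np.diff(bin_edges)
    s_mid = np.sqrt(bin_edges[:-1] * bin_edges[1:])    # geometric bin centres
    dnds = n / ds / area_sr
    return s_mid, s_mid ** 2.5 * dnds

# toy usage: a hypothetical flux-density list over 12 800 deg^2
s_mid, counts = euclidean_normalised_counts(
    np.random.default_rng(1).uniform(0.3, 10.0, 300), 12800.0,
    np.logspace(np.log10(0.3), 1.0, 8))
print(counts)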
DC KIDS COUNT e-Databook Indicators
ERIC Educational Resources Information Center
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for the point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
General relativistic corrections in density-shear correlations
NASA Astrophysics Data System (ADS)
Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena
2018-06-01
We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of a counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue here that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.
NASA Astrophysics Data System (ADS)
Scott, K. S.; Yun, M. S.; Wilson, G. W.; Austermann, J. E.; Aguilar, E.; Aretxaga, I.; Ezawa, H.; Ferrusca, D.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Giavalisco, M.; Kawabe, R.; Kohno, K.; Mauskopf, P. D.; Oshima, T.; Perera, T. A.; Rand, J.; Tamura, Y.; Tosaki, T.; Velazquez, M.; Williams, C. C.; Zeballos, M.
2010-07-01
We present the first results from a confusion-limited map of the Great Observatories Origins Deep Survey-South (GOODS-S) taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment. We imaged a field to a 1σ depth of 0.48-0.73 mJy beam⁻¹, making this one of the deepest blank-field surveys at mm wavelengths ever achieved. Although by traditional standards our GOODS-S map is extremely confused due to a sea of faint underlying sources, we demonstrate through simulations that our source identification and number counts analyses are robust, and the techniques discussed in this paper are relevant for other deeply confused surveys. We find a total of 41 dusty starburst galaxies with signal-to-noise ratios S/N ≥ 3.5 within this uniformly covered region, where only two are expected to be false detections, and an additional seven robust source candidates located in the noisier (1σ ~ 1 mJy beam⁻¹) outer region of the map. We derive the 1.1 mm number counts from this field using two different methods: a fluctuation or "P(D)" analysis and a semi-Bayesian technique, and find that both methods give consistent results. Our data are well fit by a Schechter function model. Given the depth of this survey, we put the first tight constraints on the 1.1 mm number counts at S_1.1mm = 0.5 mJy, and we find evidence that the faint-end number counts from various SCUBA surveys towards lensing clusters are biased high. In contrast to the 870 μm survey of this field with the LABOCA camera, we find no apparent underdensity of sources compared to previous surveys at 1.1 mm: the estimates of the number counts of SMGs at flux densities >1 mJy determined here are consistent with those measured from the AzTEC/SHADES survey. Additionally, we find a significant number of SMGs not identified in the LABOCA catalogue. We find that, in contrast to observations at λ ≤ 500 μm, MIPS 24 μm sources do not resolve the total energy density in the cosmic infrared background at 1.1 mm, demonstrating that a population of z ≳ 3 dust-obscured galaxies that are unaccounted for at these shorter wavelengths potentially contributes a large fraction (~2/3) of the infrared background at 1.1 mm.
Emission Features and Source Counts of Galaxies in Mid-Infrared
NASA Technical Reports Server (NTRS)
Xu, C.; Hacking, P. B.; Fang, F.; Shupe, D. L.; Lonsdale, C. J.; Lu, N. Y.; Helou, G.; Stacey, G. J.; Ashby, M. L. N.
1998-01-01
In this work we incorporate the newest ISO results on the mid-infrared spectral-energy-distributions (MIR SEDs) of galaxies into models for the number counts and redshift distributions of MIR surveys.
NASA Technical Reports Server (NTRS)
Elvis, Martin; Plummer, David; Schachter, Jonathan; Fabbiano, G.
1992-01-01
A catalog of 819 sources detected in the Einstein IPC Slew Survey of the X-ray sky is presented; 313 of the sources were not previously known as X-ray sources. Typical count rates are 0.1 IPC count/s, roughly equivalent to a flux of 3 × 10⁻¹² erg cm⁻² s⁻¹. The sources have positional uncertainties of 1.2 arcmin (90 percent confidence) radius, based on a subset of 452 sources identified with previously known pointlike X-ray sources (i.e., extent less than 3 arcmin). Identifications based on a number of existing catalogs of X-ray and optical objects are proposed for 637 of the sources, 78 percent of the survey (within a 3-arcmin error radius), including 133 identifications of new X-ray sources. A public identification data base for the Slew Survey sources will be maintained at CfA, and contributions to this data base are invited.
The Hawaii SCUBA-2 Lensing Cluster Survey: Number Counts and Submillimeter Flux Ratios
NASA Astrophysics Data System (ADS)
Hsu, Li-Yen; Cowie, Lennox L.; Chen, Chian-Chou; Barger, Amy J.; Wang, Wei-Hao
2016-09-01
We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10¹² L⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10¹² L⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
Reply-frequency interference/jamming detector
NASA Astrophysics Data System (ADS)
Bishop, Walton B.
1995-01-01
Received IFF reply-frequency signals are examined to determine whether they are being interfered with by enemy sources, and an indication of the extent of detected interference is provided. The number of correct replies received from selected range bins surrounding and including the central bin, in which a target leading edge is first declared, is counted and compared with the count of the number of friend-accept decisions made based on replies from the selected range bins. The level of interference is then indicated by the ratio between the two counts.
NASA Technical Reports Server (NTRS)
Siemiginowska, Aneta
2001-01-01
The predicted counts for the ASCA observation were much higher than the actually observed counts for the quasar. However, there are three weak hard X-ray sources in the GIS field. We are adding them to the source counts in the modeling of the hard X-ray background. The work is in progress. We have published a paper in ApJ on the luminosity function and quasar evolution. Based on the theory described in this paper, we predict the number of sources and their contribution to the X-ray background at different redshifts. These model predictions will be compared to the observed data in the final paper.
Explaining Variability: Numerical Representations in 4- to 8-Year-Old Children
ERIC Educational Resources Information Center
Friso-van den Bos, Ilona; Kolkman, Meijke E.; Kroesbergen, Evelyn H.; Leseman, Paul P. M.
2014-01-01
The present study aims to examine relations between number representations and various sources of individual differences within early stages of development of number representations. The mental number line has been found to develop from a logarithmic to a more linear representation. Sources under investigation are counting skills and executive…
Deep 3 GHz number counts from a P(D) fluctuation analysis
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrate that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We find the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam⁻¹, and the corresponding radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we are also able to constrain the peak of the Euclidean-normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
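The P(D) approach infers the count model from the histogram of map pixel values rather than from individually detected sources. A brute-force illustration of where that histogram comes from, assuming a hypothetical power-law count model and a Gaussian beam; the analytic and MCMC machinery of the paper is not reproduced here.

import numpy as np

def simulate_pd(dnds, s_min, s_max, beam_fwhm_pix, npix=1024, rng=None):
    """Brute-force P(D): populate a noiseless map with point sources drawn
    from a differential count model dN/dS (sources per pixel per flux unit),
    convolve with a unit-peak Gaussian beam, and histogram the pixel values."""
    rng = rng or np.random.default_rng()
    s = np.logspace(np.log10(s_min), np.log10(s_max), 400)
    mean_per_bin = dnds(s) * np.gradient(s) * npix**2       # expected sources per flux bin
    img = np.zeros((npix, npix))
    for flux, mu in zip(s, mean_per_bin):
        n = rng.poisson(mu)
        xs, ys = rng.integers(0, npix, n), rng.integers(0, npix, n)
        np.add.at(img, (ys, xs), flux)
    # convolve with a Gaussian beam of unit peak (point-source response) via FFT
    sigma = beam_fwhm_pix / 2.355
    ky, kx = np.meshgrid(np.fft.fftfreq(npix), np.fft.fftfreq(npix), indexing="ij")
    beam_ft = 2.0 * np.pi * sigma**2 * np.exp(-2.0 * np.pi**2 * sigma**2 * (kx**2 + ky**2))
    img = np.fft.ifft2(np.fft.fft2(img) * beam_ft).real
    return np.histogram(img, bins=200)

# toy model: dN/dS = 5e-3 * S^-1.7 sources per pixel per uJy (illustrative only)
hist, edges = simulate_pd(lambda s: 5e-3 * s**-1.7, 0.05, 100.0, beam_fwhm_pix=4.0)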
NASA Technical Reports Server (NTRS)
Broadaway, Susan C.; Barton, Stephanie A.; Pyle, Barry H.
2003-01-01
The nucleic acid stain SYBR Green I was evaluated for use with solid-phase laser cytometry to obtain total bacterial cell counts from several water sources with small bacterial numbers. Results were obtained within 30 min and exceeded or equaled counts on R2A agar plates incubated for 14 days at room temperature.
High-resolution SMA imaging of bright submillimetre sources from the SCUBA-2 Cosmology Legacy Survey
NASA Astrophysics Data System (ADS)
Hill, Ryley; Chapman, Scott C.; Scott, Douglas; Petitpas, Glen; Smail, Ian; Chapin, Edward L.; Gurwell, Mark A.; Perry, Ryan; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Dunlop, James S.; Farrah, Duncan; Fazio, Giovanni G.; Geach, James E.; Howson, Paul; Ivison, R. J.; Lacaille, Kevin; Michałowski, Michał J.; Simpson, James M.; Swinbank, A. M.; van der Werf, Paul P.; Wilner, David J.
2018-06-01
We have used the Submillimeter Array (SMA) at 860 μm to observe the brightest sources in the Submillimeter Common User Bolometer Array-2 (SCUBA-2) Cosmology Legacy Survey (S2CLS). The goal of this survey is to exploit the large field of the S2CLS along with the resolution and sensitivity of the SMA to construct a large sample of these rare sources and to study their statistical properties. We have targeted 70 of the brightest single-dish SCUBA-2 850 μm sources down to S850 ≈ 8 mJy, achieving an average synthesized beam of 2.4 arcsec and an average rms of σ860 = 1.5 mJy beam⁻¹ in our primary-beam-corrected maps. We searched our SMA maps for 4σ peaks, corresponding to S860 ≳ 6 mJy sources, and detected 62 galaxies, including three pairs. We include in our study 35 archival observations, bringing our sample size to 105 bright single-dish submillimetre sources with interferometric follow-up. We compute the cumulative and differential number counts, finding them to overlap with previous single-dish survey number counts within the uncertainties, although our cumulative number count is systematically lower than the parent S2CLS cumulative number count by 14 ± 6 per cent between 11 and 15 mJy. We estimate the probability that a ≳10 mJy single-dish submillimetre source resolves into two or more galaxies with similar flux densities to be less than 15 per cent. Assuming the remaining 85 per cent of the targets are ultraluminous starburst galaxies between z = 2 and 3, we find a likely volume density of ≳400 M⊙ yr⁻¹ sources to be ~3^{+0.7}_{-0.6} × 10⁻⁷ Mpc⁻³. We show that the descendants of these galaxies could be ≳4 × 10¹¹ M⊙ local quiescent galaxies, and that about 10 per cent of their total stellar mass would have formed during these short bursts of star formation.
NASA Astrophysics Data System (ADS)
Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.
2011-09-01
We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented. Appendices are available in electronic form at http://www.aanda.org
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
D, Wang John; M, Solo-Gabriele Helena; M, Abdelzaher Amir; E, Fleming Lora
2010-01-01
Enterococci are used nationwide as a water quality indicator at marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals in order to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent a larger source of enterococci than humans and birds. PMID:20381094
9C spectral-index distributions and source-count estimates from 15 to 93 GHz - a re-assessment
NASA Astrophysics Data System (ADS)
Waldram, E. M.; Bolton, R. C.; Riley, J. M.; Pooley, G. G.
2018-01-01
In an earlier paper (2007), we used follow-up observations of a sample of sources from the 9C survey at 15.2 GHz to derive a set of spectral-index distributions up to a frequency of 90 GHz. These were based on simultaneous measurements made at 15.2 GHz with the Ryle telescope and at 22 and 43 GHz with the Karl G. Jansky Very Large Array (VLA). We used these distributions to make empirical estimates of source counts at 22, 30, 43, 70 and 90 GHz. In a later paper (2013), we took data at 15.7 GHz from the Arcminute Microkelvin Imager (AMI) and data at 93.2 GHz from the Combined Array for Research in Millimetre-wave Astronomy (CARMA) and estimated the source count at 93.2 GHz. In this paper, we re-examine the data used in both papers and now believe that the VLA flux densities we measured at 43 GHz were significantly in error, being on average only about 70 per cent of their correct values. Here, we present strong evidence for this conclusion and discuss the effect on the source-count estimates made in the 2007 paper. The source-count prediction in the 2013 paper is also revised. We make comparisons with spectral-index distributions and source counts from other telescopes, in particular with a recent deep 95 GHz source count measured by the South Pole Telescope. We investigate reasons for the problem of the low VLA 43-GHz values and find a number of possible contributory factors, but none is sufficient on its own to account for such a large deficit.
NASA Astrophysics Data System (ADS)
Béthermin, Matthieu; Wu, Hao-Yi; Lagache, Guilaine; Davidzon, Iary; Ponthieu, Nicolas; Cousin, Morgane; Wang, Lingyu; Doré, Olivier; Daddi, Emanuele; Lapi, Andrea
2017-11-01
Follow-up observations at high angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are blends of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg² simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking these resolution effects into account, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of 2, and that the redshift distributions are biased towards low redshifts. We also show that clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only 60% of the Herschel flux density, but that this number will rise to 95% for future millimeter surveys on 30 m-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of the color and flux density distributions. Our simulation, called the Simulated Infrared Dusty Extragalactic Sky (SIDES), is publicly available at http://cesam.lam.fr/sides.
The SCUBA-2 Cosmology Legacy Survey: 850 μm maps, catalogues and number counts
NASA Astrophysics Data System (ADS)
Geach, J. E.; Dunlop, J. S.; Halpern, M.; Smail, Ian; van der Werf, P.; Alexander, D. M.; Almaini, O.; Aretxaga, I.; Arumugam, V.; Asboth, V.; Banerji, M.; Beanlands, J.; Best, P. N.; Blain, A. W.; Birkinshaw, M.; Chapin, E. L.; Chapman, S. C.; Chen, C.-C.; Chrysostomou, A.; Clarke, C.; Clements, D. L.; Conselice, C.; Coppin, K. E. K.; Cowley, W. I.; Danielson, A. L. R.; Eales, S.; Edge, A. C.; Farrah, D.; Gibb, A.; Harrison, C. M.; Hine, N. K.; Hughes, D.; Ivison, R. J.; Jarvis, M.; Jenness, T.; Jones, S. F.; Karim, A.; Koprowski, M.; Knudsen, K. K.; Lacey, C. G.; Mackenzie, T.; Marsden, G.; McAlpine, K.; McMahon, R.; Meijerink, R.; Michałowski, M. J.; Oliver, S. J.; Page, M. J.; Peacock, J. A.; Rigopoulou, D.; Robson, E. I.; Roseboom, I.; Rotermund, K.; Scott, Douglas; Serjeant, S.; Simpson, C.; Simpson, J. M.; Smith, D. J. B.; Spaans, M.; Stanley, F.; Stevens, J. A.; Swinbank, A. M.; Targett, T.; Thomson, A. P.; Valiante, E.; Wake, D. A.; Webb, T. M. A.; Willott, C.; Zavala, J. A.; Zemcov, M.
2017-02-01
We present a catalogue of ˜3000 submillimetre sources detected (≥3.5σ) at 850 μm over ˜5 deg2 surveyed as part of the James Clerk Maxwell Telescope (JCMT) SCUBA-2 Cosmology Legacy Survey (S2CLS). This is the largest survey of its kind at 850 μm, increasing the sample size of 850 μm selected submillimetre galaxies by an order of magnitude. The wide 850 μm survey component of S2CLS covers the extragalactic fields: UKIDSS-UDS, COSMOS, Akari-NEP, Extended Groth Strip, Lockman Hole North, SSA22 and GOODS-North. The average 1σ depth of S2CLS is 1.2 mJy beam-1, approaching the SCUBA-2 850 μm confusion limit, which we determine to be σc ≈ 0.8 mJy beam-1. We measure the 850 μm number counts, reducing the Poisson errors on the differential counts to approximately 4 per cent at S850 ≈ 3 mJy. With several independent fields, we investigate field-to-field variance, finding that the number counts on 0.5°-1° scales are generally within 50 per cent of the S2CLS mean for S850 > 3 mJy, with scatter consistent with the Poisson and estimated cosmic variance uncertainties, although there is a marginal (2σ) density enhancement in GOODS-North. The observed counts are in reasonable agreement with recent phenomenological and semi-analytic models, although determining the shape of the faint-end slope (S850 < 3 mJy) remains a key test. The large solid angle of S2CLS allows us to measure the bright-end counts: at S850 > 10 mJy there are approximately 10 sources per square degree, and we detect the distinctive up-turn in the number counts indicative of the detection of local sources of 850 μm emission, and strongly lensed high-redshift galaxies. All calibrated maps and the catalogue are made publicly available.
A New Method for Calculating Counts in Cells
NASA Astrophysics Data System (ADS)
Szapudi, István
1998-04-01
In the near future, a new generation of CCD-based galaxy surveys will enable high-precision determination of the N-point correlation functions. The resulting information will help to resolve the ambiguities associated with two-point correlation functions, thus constraining theories of structure formation, biasing, and Gaussianity of initial conditions independently of the value of Ω. As one of the most successful methods of extracting the amplitude of higher order correlations is based on measuring the distribution of counts in cells, this work presents an advanced way of measuring it with unprecedented accuracy. Szapudi & Colombi identified the main sources of theoretical errors in extracting counts in cells from galaxy catalogs. One of these sources, termed measurement error, stems from the fact that conventional methods use a finite number of sampling cells to estimate counts in cells. This effect can be circumvented by using an infinite number of cells. This paper presents an algorithm that in practice achieves this goal; that is, it is equivalent to throwing an infinite number of sampling cells in finite time. The errors associated with sampling cells are completely eliminated by this procedure, which will be essential for the accurate analysis of future surveys.
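For contrast, the conventional finite-sampling estimator whose measurement error the algorithm above eliminates looks roughly like this. A naive sketch only; positions are assumed to be an (N, 3) array of galaxy coordinates, and the cell shape and numbers are illustrative.

import numpy as np

def counts_in_cells(positions, box_size, cell_size, n_cells=10000, rng=None):
    """Naive counts-in-cells estimator: drop a finite number of randomly placed
    cubic cells into the box and histogram the occupation numbers, returning
    the estimated probabilities P(N)."""
    rng = rng or np.random.default_rng()
    lows = rng.uniform(0.0, box_size - cell_size, size=(n_cells, 3))
    counts = np.empty(n_cells, dtype=int)
    for i, lo in enumerate(lows):
        inside = np.all((positions >= lo) & (positions < lo + cell_size), axis=1)
        counts[i] = inside.sum()
    return np.bincount(counts, minlength=counts.max() + 1) / n_cells

# toy usage: Poisson-distributed points in a 100 Mpc box, sampled with 10 Mpc cells
pts = np.random.default_rng(3).uniform(0.0, 100.0, size=(20000, 3))
print(counts_in_cells(pts, box_size=100.0, cell_size=10.0, n_cells=2000))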
Le Corre, Mathieu; Carey, Susan
2007-11-01
Since the publication of [Gelman, R., & Gallistel, C. R. (1978). The child's understanding of number. Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present experiments explore proposals according to which the verbal counting principles are acquired by mapping numerals in the count list onto systems of numerical representation for which there is evidence in infancy, namely, analog magnitudes, parallel individuation, and set-based quantification. By asking 3- and 4-year-olds to estimate the number of elements in sets without counting, we investigate whether the numerals that are assigned cardinal meaning as part of the acquisition process display the signatures of what we call "enriched parallel individuation" (which combines properties of parallel individuation and of set-based quantification) or analog magnitudes. Two experiments demonstrate that while "one" to "four" are mapped onto core representations of small sets prior to the acquisition of the counting principles, numerals beyond "four" are only mapped onto analog magnitudes about six months after the acquisition of the counting principles. Moreover, we show that children's numerical estimates of sets from 1 to 4 elements fail to show the signature of numeral use based on analog magnitudes - namely, scalar variability. We conclude that, while representations of small sets provided by parallel individuation, enriched by the resources of set-based quantification are recruited in the acquisition process to provide the first numerical meanings for "one" to "four", analog magnitudes play no role in this process.
NASA Astrophysics Data System (ADS)
Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia
2017-09-01
Photon-counting computed tomography (PCCT) uses a photon-counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by evaluating the accuracy of derived image sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and of CT numbers in virtual monoenergetic images (VMI). Vials containing iodine at five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different size patients. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which the iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on the known iodine concentration and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate CT numbers in the VMIs (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of EID-based dual-source, dual-energy scanners.
The Herschel-ATLAS: Extragalatic Number Counts from 250 to 500 Microns
NASA Technical Reports Server (NTRS)
Clements, D. L.; Rigby, E.; Maddox, S.; Dunne, L.; Mortier, A.; Amblard, A.; Auld, R.; Bonfield, D.; Cooray, A.; Dariush, A.;
2010-01-01
Aims. The Herschel-ATLAS survey (H-ATLAS) will be the largest area survey to be undertaken by the Herschel Space Observatory. It will cover 550 sq. deg. of extragalactic sky at wavelengths of 100, 160, 250, 350 and 500 microns when completed, reaching flux limits (5σ) from 32 to 145 mJy. We here present galaxy number counts obtained for SPIRE observations of the first ~14 sq. deg. observed at 250, 350 and 500 μm. Methods. Number counts are a fundamental tool in constraining models of galaxy evolution. We use source catalogs extracted from the H-ATLAS maps as the basis for such an analysis. Correction factors for completeness and flux boosting are derived by applying our extraction method to model catalogs and then applied to the raw observational counts. Results. We find a steep rise in the number counts at flux levels of 100-200 mJy in all three SPIRE bands, consistent with results from BLAST. The counts are compared to a range of galaxy evolution models. None of the current models is an ideal fit to the data, but all ascribe the steep rise to a population of luminous, rapidly evolving dusty galaxies at moderate to high redshift.
Probing cluster potentials through gravitational lensing of background X-ray sources
NASA Technical Reports Server (NTRS)
Refregier, A.; Loeb, A.
1996-01-01
The gravitational lensing effect of a foreground galaxy cluster on the number count statistics of background X-ray sources was examined. The lensing produces a deficit in the number of resolved sources in a ring close to the critical radius of the cluster. The cluster lens can be used as a natural telescope to study the faint end of the (log N)-(log S) relation for the sources which account for the X-ray background.
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric between zero and unity, which correspond respectively to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases as the photon-number uncertainty decreases. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and the classifier's performance.
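A rough empirical analogue of the fidelity metric described above can be computed directly from an image pair: estimate the mutual information from a joint histogram and normalise by the source entropy. This histogram-based sketch stands in for the Markov random field derivation of the paper, and the toy source image and photon-count rate below are arbitrary.

import numpy as np

def pci_fidelity(source, photon_counted, bins=32):
    """Empirical fidelity metric: mutual information between the source image
    and its photon-counted version, normalised by the source entropy so the
    result lies between 0 and 1."""
    h2, _, _ = np.histogram2d(source.ravel(), photon_counted.ravel(), bins=bins)
    pxy = h2 / h2.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    return mi / hx

# toy demonstration: photon-starved Poisson detection of a smooth random source
rng = np.random.default_rng(2)
src = rng.gamma(2.0, 1.0, size=(256, 256))
counted = rng.poisson(0.5 * src)            # roughly one photon per pixel on average
print(pci_fidelity(src, counted))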
Gravitational wave source counts at high redshift and in models with extra dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es
2016-07-01
Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high redshift regime, and we discuss the complications of applying this methodology to high redshift sources. We also allow for models with compactified extra dimensions like in the Kaluza-Klein model. Furthermore, we also consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher order corrections of the source counts in terms of the signal-to-noise ratio S/N. We then forecast the sensitivity of future observations in constraining GW physics but also the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise with a number of sources ranging from 10 to 500, as expected from future detectors. We find that with 500 events it will be possible to constrain the present matter density parameter Ω_m,0 to within a few percent, with the precision growing fast with the number of events. In the case of extra dimensions, we find that, depending on the degeneracies of the model, 500 events may be enough to provide stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.
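For orientation, the sketch below simulates a mock catalogue of events whose cumulative counts follow the simple Euclidean scaling N(>S/N) proportional to (S/N)^-3 and recovers the slope; it is only the zeroth-order baseline that the paper's cosmological and extra-dimensional corrections modify, and all parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)

# For sources distributed uniformly in (local, Euclidean) volume with S/N
# falling as 1/distance, counts above threshold scale as N(>rho) ~ rho^-3,
# i.e. dN/drho ~ rho^-4.  We draw 500 mock events from that law and recover
# the slope; the paper's cosmological and extra-dimension effects are not modelled.
rho_min, n_events = 8.0, 500
u = rng.uniform(size=n_events)
rho = rho_min * (1 - u) ** (-1.0 / 3.0)      # inverse CDF of N(>rho) ~ rho^-3

# Estimate the cumulative slope with a simple least-squares fit in log space.
rho_sorted = np.sort(rho)
n_above = np.arange(n_events, 0, -1)
slope, _ = np.polyfit(np.log(rho_sorted), np.log(n_above), 1)
print(f"recovered slope of N(>rho): {slope:.2f} (expected -3 for Euclidean counts)")
```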
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector ``ribs,'' strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or fewer may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it to various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms that of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
Radio astronomy aspects of the NASA SETI Sky Survey
NASA Technical Reports Server (NTRS)
Klein, Michael J.
1986-01-01
The application of SETI data to radio astronomy is studied. The number of continuum radio sources in the 1-10 GHz region to be counted and cataloged is predicted. The radio luminosity functions for steep and flat spectrum sources at 2, 8, and 22 GHz are derived using the model of Peacock and Gull (1981). The relation between source number and flux density is analyzed and the sensitivity of the system is evaluated.
Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations
NASA Astrophysics Data System (ADS)
Lonsdale, Carol J.; Hacking, Perry B.
1989-04-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.
Preverbal and verbal counting and computation.
Gallistel, C R; Gelman, R
1992-08-01
We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.
Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.
2006-01-01
Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
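The error structure described (binomial detection of true redds plus a Poisson number of false identifications) can be simulated directly; in the sketch below the true redd number and false-identification rate are assumptions, while the detection probability of 0.83 is the mean reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each true redd is detected with probability p (binomial), and observers also
# add a Poisson number of falsely identified redds.
true_redds = 120          # assumed true number of redds in a stream reach
p_detect = 0.83           # mean detection probability reported in the abstract
false_id_rate = 6         # assumed mean number of false identifications per survey

n_surveys = 10_000
observed = rng.binomial(true_redds, p_detect, n_surveys) + rng.poisson(false_id_rate, n_surveys)

lo, hi = np.percentile(observed, [2.5, 97.5])
print(f"observed counts: mean {observed.mean():.0f}, 95% interval [{lo:.0f}, {hi:.0f}]")
```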
ERIC Educational Resources Information Center
White, Bruce
2017-01-01
Studies of data-driven deselection overwhelmingly emphasise the importance of circulation counts and date-of-last-use in the weeding process. When applied to research collections, however, this approach fails to take account of highly influential and significant titles that have not been of interest to large numbers of borrowers but that have been…
The first Extreme Ultraviolet Explorer source catalog
NASA Technical Reports Server (NTRS)
Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.
1994-01-01
The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.
Extending pure luminosity evolution models into the mid-infrared, far-infrared and submillimetre
NASA Astrophysics Data System (ADS)
Hill, Michael D.; Shanks, Tom
2011-07-01
Simple pure luminosity evolution (PLE) models, in which galaxies brighten at high redshift due to increased star formation rates (SFRs), are known to provide a good fit to the colours and number counts of galaxies throughout the optical and near-infrared. We show that optically defined PLE models, where dust reradiates absorbed optical light into infrared spectra composed of local galaxy templates, fit galaxy counts and colours out to 8 μm and to at least z≈ 2.5. At 24-70 μm, the model is able to reproduce the observed source counts with reasonable success if 16 per cent of spiral galaxies show an excess in mid-IR flux due to a warmer dust component and a higher SFR, in line with observations of local starburst galaxies. There remains an underprediction of the number of faint-flux, high-z sources at 24 μm, so we explore how the evolution may be altered to correct this. At 160 μm and longer wavelengths, the model fails, with our model of normal galaxies accounting for only a few percent of sources in these bands. However, we show that a PLE model of obscured AGN, which we have previously shown to give a good fit to observations at 850 μm, also provides a reasonable fit to the Herschel/BLAST number counts and redshift distributions at 250-500 μm. In the context of a ΛCDM cosmology, an AGN contribution at 250-870 μm would remove the need to invoke a top-heavy IMF for high-redshift starburst galaxies.
Reading the World through Very Large Numbers
ERIC Educational Resources Information Center
Greer, Brian; Mukhopadhyay, Swapna
2010-01-01
One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…
Automatic measurements and computations for radiochemical analyses
Rosholt, J.N.; Dooley, J.R.
1960-01-01
In natural radioactive sources the most important radioactive daughter products useful for geochemical studies are protactinium-231, the alpha-emitting thorium isotopes, and the radium isotopes. To resolve the abundances of these thorium and radium isotopes by their characteristic decay and growth patterns, a large number of repeated alpha activity measurements on the two chemically separated elements were made over extended periods of time. Alpha scintillation counting with automatic measurements and sample changing is used to obtain the basic count data. Generation of the required theoretical decay and growth functions, varying with time, and the least squares solution of the overdetermined simultaneous count rate equations are done with a digital computer. Examples of the complex count rate equations which may be solved and results of a natural sample containing four ??-emitting isotopes of thorium are illustrated. These methods facilitate the determination of the radioactive sources on the large scale required for many geochemical investigations.
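The least-squares step described, solving an overdetermined set of count-rate equations built from known decay functions, can be sketched as follows; the half-lives, abundances, and counting schedule are illustrative, not those of the thorium and radium isotopes analyzed.

```python
import numpy as np

# The measured alpha count rate at each counting time is modelled as a sum of
# known decay functions, one per isotope, with unknown initial contributions.
t = np.linspace(0, 30, 60)                      # days of repeated counting (assumed)
half_lives = np.array([3.6, 11.4, 18.7])        # assumed half-lives (days)
lam = np.log(2) / half_lives
design = np.exp(-np.outer(t, lam))              # decay functions, one column per isotope

true_abundance = np.array([50.0, 20.0, 80.0])   # assumed initial count-rate contributions
rng = np.random.default_rng(3)
measured = rng.poisson(design @ true_abundance) # counting noise

# Least-squares solution of the overdetermined system design @ x = measured.
estimate, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(estimate)
```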
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of effective interaction depth (EID) principle for parametric normalization of full energy peak efficiencies at different counting positions, originally for quasi-point sources, has been extended to bulky sources (within ∅30 mm×40 mm) with arbitrary matrices. It is also proved that the EID function for quasi-point source can be directly used for cylindrical bulky sources (within ∅30 mm×40 mm) with the geometric center as effective point source for low atomic number (Z) and low density (D) media and high energy γ-rays. It is also found that in general EID for bulky sources is dependent upon Z and D of the medium and the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations.
NASA Technical Reports Server (NTRS)
Turner, J. W. (Inventor)
1973-01-01
A measurement system is described for providing an indication of a varying physical quantity represented by or converted to a variable frequency signal. Timing pulses are obtained marking the duration of a fixed number, or set, of cycles of the sampled signal and these timing pulses are employed to control the period of counting of cycles of a higher fixed and known frequency source. The counts of cycles obtained from the fixed frequency source provide a precise measurement of the average frequency of each set of cycles sampled, and thus successive discrete values of the quantity being measured. The frequency of the known frequency source is made such that each measurement is presented as a direct digital representation of the quantity measured.
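A minimal sketch of the measurement principle, with an assumed reference frequency and set length, is given below; the invention's hardware details (gating, registers, digital output) are not modeled.

```python
# Count cycles of a stable reference oscillator during a fixed number of cycles
# of the unknown signal, then convert the reference count to an average frequency.
F_REF_HZ = 10_000_000        # assumed known reference frequency
CYCLES_PER_SET = 100         # fixed number of sampled-signal cycles per measurement

def average_frequency(reference_count: int) -> float:
    """Average frequency of the sampled signal over one set of cycles.

    The gate stays open for CYCLES_PER_SET periods of the unknown signal,
    i.e. for a time CYCLES_PER_SET / f_signal, during which the reference
    oscillator produces reference_count cycles:
        reference_count = F_REF_HZ * CYCLES_PER_SET / f_signal
    """
    return F_REF_HZ * CYCLES_PER_SET / reference_count

# Example: 2,500,000 reference cycles counted during 100 signal cycles -> 400 Hz.
print(average_frequency(2_500_000))
```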
Multisite study of particle number concentrations in urban air.
Harrison, Roy M; Jones, Alan M
2005-08-15
Particle number concentration data are reported from a total of eight urban site locations in the United Kingdom. Of these, six are central urban background sites, while one is an urban street canyon (Marylebone Road) and another is influenced by both a motorway and a steelworks (Port Talbot). The concentrations are generally of a similar order to those reported in the literature, although higher than those in some of the other studies. Highest concentrations are at the Marylebone Road site and lowest are at the Port Talbot site. The central urban background locations lie somewhere between with concentrations typically around 20 000 cm(-3). A seasonal pattern affects all sites, with highest concentrations in the winter months and lowest concentrations in the summer. Data from all sites show a diurnal variation with a morning rush hour peak typical of an anthropogenic pollutant. When the dilution effects of windspeed are accounted for, the data show little directionality at the central urban background sites indicating the influence of sources from all directions as might be expected if the major source were road traffic. At the London Marylebone Road site there is high directionality driven by the air circulation in the street canyon, and at the Port Talbot site different diurnal patterns are seen for particle number count and PM10 influenced by emissions from road traffic (particle number count) and the steelworks (PM10) and local meteorological factors. Hourly particle number concentrations are generally only weakly correlated to NO(x) and PM10, with the former showing a slightly closer relationship. Correlations between daily average particle number count and PM10 were also weak. Episodes of high PM10 concentration in summer typically show low particle number concentrations consistent with transport of accumulation mode secondary aerosol, while winter episodes are frequently associated with high PM10 and particle number count arising from poor dispersion of local primary emissions.
The Herschel Virgo Cluster Survey. XVII. SPIRE point-source catalogs and number counts
NASA Astrophysics Data System (ADS)
Pappalardo, Ciro; Bendo, George J.; Bianchi, Simone; Hunt, Leslie; Zibetti, Stefano; Corbelli, Edvige; di Serego Alighieri, Sperello; Grossi, Marco; Davies, Jonathan; Baes, Maarten; De Looze, Ilse; Fritz, Jacopo; Pohlen, Michael; Smith, Matthew W. L.; Verstappen, Joris; Boquien, Médéric; Boselli, Alessandro; Cortese, Luca; Hughes, Thomas; Viaene, Sebastien; Bizzocchi, Luca; Clemens, Marcel
2015-01-01
Aims: We present three independent catalogs of point-sources extracted from SPIRE images at 250, 350, and 500 μm, acquired with the Herschel Space Observatory as a part of the Herschel Virgo Cluster Survey (HeViCS). The catalogs have been cross-correlated to consistently extract the photometry at SPIRE wavelengths for each object. Methods: Sources have been detected using an iterative loop. The source positions are determined by estimating the likelihood to be a real source for each peak on the maps, according to the criterion defined in the sourceExtractorSussextractor task. The flux densities are estimated using the sourceExtractorTimeline, a timeline-based point source fitter that also determines the fitting procedure with the width of the Gaussian that best reproduces the source considered. Afterwards, each source is subtracted from the maps, removing a Gaussian function in every position with the full width at half maximum equal to that estimated in sourceExtractorTimeline. This procedure improves the robustness of our algorithm in terms of source identification. We calculate the completeness and the flux accuracy by injecting artificial sources in the timeline and estimate the reliability of the catalog using a permutation method. Results: The HeViCS catalogs contain about 52 000, 42 200, and 18 700 sources selected at 250, 350, and 500 μm above 3σ and are ~75%, 62%, and 50% complete at flux densities of 20 mJy at 250, 350, 500 μm, respectively. We then measured source number counts at 250, 350, and 500 μm and compare them with previous data and semi-analytical models. We also cross-correlated the catalogs with the Sloan Digital Sky Survey to investigate the redshift distribution of the nearby sources. From this cross-correlation, we select ~2000 sources with reliable fluxes and a high signal-to-noise ratio, finding an average redshift z ~ 0.3 ± 0.22 and 0.25 (16-84 percentile). Conclusions: The number counts at 250, 350, and 500 μm show an increase in the slope below 200 mJy, indicating a strong evolution in the number density of galaxies at these fluxes. In general, models tend to overpredict the counts at brighter flux densities, underlining the importance of studying the Rayleigh-Jeans part of the spectral energy distribution to refine the theoretical recipes of the models. Our iterative method for source identification allowed the detection of a family of 500 μm sources that are not foreground objects belonging to Virgo and not found in other catalogs. Herschel is an ESA space observatory with science instruments provided by a European-led principal investigator consortia and with an important participation from NASA. The 250, 350, 500 μm, and the total catalogs are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/573/A129
van Sighem, Ard; Sabin, Caroline A.; Phillips, Andrew N.
2015-01-01
Background: It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). Methods: The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms which lead to presentation for care, and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. Results: For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150–199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived in people with undiagnosed HIV with CD4 count 150–199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29–100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. Conclusions: The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations, as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART. PMID:25768925
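The worked example in the abstract can be reproduced in a few lines; the Poisson resampling used here for the confidence interval is one plausible reading of the "simple simulation method" mentioned, not necessarily the exact procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Worked example from the abstract: 13 symptomatic HIV diagnoses in one year
# in the CD4 150-199 cells/mm3 stratum, and a CD4-specific symptom rate of
# 0.216 per person-year, giving 13 / 0.216 ~ 60 undiagnosed person-years.
diagnoses = 13
symptom_rate = 0.216
point_estimate = diagnoses / symptom_rate

# Simple simulation interval: treat the observed number of diagnoses as Poisson
# and propagate that uncertainty through the estimator.
simulated = rng.poisson(diagnoses, 100_000) / symptom_rate
lo, hi = np.percentile(simulated, [2.5, 97.5])
print(f"estimate {point_estimate:.0f}, approx 95% CI [{lo:.0f}, {hi:.0f}]")
```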
NASA Astrophysics Data System (ADS)
Aretxaga, I.; Wilson, G. W.; Aguilar, E.; Alberts, S.; Scott, K. S.; Scoville, N.; Yun, M. S.; Austermann, J.; Downes, T. P.; Ezawa, H.; Hatsukade, B.; Hughes, D. H.; Kawabe, R.; Kohno, K.; Oshima, T.; Perera, T. A.; Tamura, Y.; Zeballos, M.
2011-08-01
We present a 0.72 deg2 contiguous 1.1-mm survey in the central area of the Cosmological Evolution Survey field carried out to a 1σ ≈ 1.26 mJy beam^-1 depth with the AzTEC camera mounted on the 10-m Atacama Submillimeter Telescope Experiment. We have uncovered 189 candidate sources at a signal-to-noise ratio (S/N) ≥ 3.5, out of which 129, with S/N ≥ 4, can be considered to have little chance of being spurious (≲2 per cent). We present the number counts derived with this survey, which show a significant excess of sources when compared to the number counts derived from the ~0.5 deg2 area sampled at similar depths in the Submillimetre Common-User Bolometer Array (SCUBA) HAlf Degree Extragalactic Survey (SHADES). They are, however, consistent with those derived from fields that were considered too small to characterize the overall blank-field population. We identify differences to be more significant in the S_1.1mm ≳ 5 mJy regime, and demonstrate that these excesses in number counts are related to the areas where galaxies at redshifts z ≲ 1.1 are more densely clustered. The positions of optical-infrared galaxies in the redshift interval 0.6 ≲ z ≲ 0.75 are the ones that show the strongest correlation with the positions of the 1.1-mm bright population (S_1.1mm ≳ 5 mJy), a result which does not depend exclusively on the presence of rich clusters within the survey sampled area. The most likely explanation for the observed excess in number counts at 1.1-mm is galaxy-galaxy and galaxy-group lensing at moderate amplification levels, which increases in amplitude as one samples larger and larger flux densities. This effect should also be detectable in other high-redshift populations.
Support of selected X-ray studies to be performed using data from the Uhuru (SAS-A) satellite
NASA Technical Reports Server (NTRS)
Garmire, G. P.
1976-01-01
A new measurement of the diffuse X-ray emission sets more stringent upper limits on the fluctuations of the background and on the number counts of X-ray sources with |b| > 20 deg than previous measurements. A random sample of background data from the Uhuru satellite gives a relative fluctuation in excess of statistics of 2.0% between 2.4 and 6.9 keV. The hypothesis that the relative fluctuation exceeds 2.9% can be rejected at the 90% confidence level. No discernible energy dependence is evident in the fluctuations in the pulse height data, when separated into three energy channels of nearly equal width from 1.8 to 10.0 keV. The probability distribution of fluctuations was convolved with the photon noise and cosmic ray background deviation (obtained from the earth-viewing data) to yield the differential source count distribution for high latitude sources. Results imply that a maximum of 160 sources could lie between 1.7 and 5.1 x 10(sup -11) ergs/sq cm/sec (1-3 Uhuru counts).
Singh, Bismark; Meyers, Lauren Ancel
2017-05-08
We provide a methodology for estimating counts of single-year-of-age live-births, fetal-losses, abortions, and pregnant women from aggregated age-group counts. As a case study, we estimate counts for the 254 counties of Texas for the year 2010. We use interpolation to estimate counts of live-births, fetal-losses, and abortions by women of each single-year-of-age for all Texas counties. We then use these counts to estimate the numbers of pregnant women for each single-year-of-age, which were previously available only in aggregate. To support public health policy and planning, we provide single-year-of-age estimates of live-births, fetal-losses, abortions, and pregnant women for all Texas counties in the year 2010, as well as the estimation method source code.
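One simple way to carry out the kind of disaggregation described is to interpolate the cumulative age-group counts to single years of age; the sketch below uses linear interpolation and invented group counts, whereas the paper's interpolation scheme and the Texas data may differ.

```python
import numpy as np

# Disaggregate age-group counts into single-year-of-age counts by interpolating
# the cumulative distribution; the group boundaries and counts are illustrative.
group_edges = np.array([15, 20, 25, 30, 35, 40, 45])         # age-group boundaries
group_counts = np.array([400, 1800, 2600, 2100, 1100, 250])  # births per group (assumed)

# Cumulative counts at group edges, interpolated to integer ages.
cum_at_edges = np.concatenate([[0], np.cumsum(group_counts)])
single_ages = np.arange(group_edges[0], group_edges[-1] + 1)
cum_interp = np.interp(single_ages, group_edges, cum_at_edges)

single_year_counts = np.diff(cum_interp)   # estimated births at each single age
print(dict(zip(single_ages[:-1], np.round(single_year_counts, 1))))
```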
Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.
2013-01-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
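The data-generating process described (a hurdle model for abundance with beta-binomial detection) can be illustrated with a short simulation; all parameter values below are assumptions for illustration, not estimates from the manatee surveys.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hurdle model for abundance (extra zeros from rarity) with beta-binomial
# detection (extra-binomial variation among repeat surveys).
n_sites, n_visits = 200, 4
psi = 0.3                     # probability a site is occupied at all (hurdle)
lam = 5.0                     # mean abundance at occupied sites
a, b = 8.0, 12.0              # beta-binomial detection: mean p = a/(a+b) = 0.4

occupied = rng.random(n_sites) < psi
# Shifted Poisson as a crude stand-in for a zero-truncated Poisson at occupied sites.
abundance = np.where(occupied, rng.poisson(lam, n_sites) + 1, 0)

# For each site and visit, detection probability is drawn from a Beta
# distribution, and the count of detected individuals is Binomial.
p = rng.beta(a, b, size=(n_sites, n_visits))
counts = rng.binomial(abundance[:, None], p)
print(counts[:5])
```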
NASA Astrophysics Data System (ADS)
Boutet, J.; Debourdeau, M.; Laidevant, A.; Hervé, L.; Dinten, J.-M.
2010-02-01
Finding a way to combine ultrasound and fluorescence optical imaging on an endorectal probe may improve early detection of prostate cancer. A trans-rectal probe adapted to fluorescence diffuse optical tomography measurements was developed by our team. This probe is based on a pulsed NIR laser source, an optical fiber network and a time-resolved detection system. A reconstruction algorithm was used to help locate and quantify fluorescent prostate tumors. In this study, two different kinds of time-resolved detectors are compared: a High Rate Imaging system (HRI) and a photon counting system. The HRI is based on an intensified multichannel plate and a CCD camera. The temporal resolution is obtained through gating of the HRI. Despite a low temporal resolution (300 ps), this system allows simultaneous acquisition of the signal from a large number of detection fibers. In the photon counting setup, 4 photomultipliers are connected to a Time Correlated Single Photon Counting (TCSPC) board, providing a better temporal resolution (0.1 ps) at the expense of a limited number of detection fibers (4). Finally, we show that the limited number of detection fibers of the photon counting setup is enough for good localization and dramatically shortens the overall acquisition time. The photon counting approach is then validated through the localization of fluorescent inclusions in a prostate-mimicking phantom.
Pile-up corrections in laser-driven pulsed X-ray sources
NASA Astrophysics Data System (ADS)
Hernández, G.; Fernández, F.
2018-06-01
A formalism for treating the pile-up produced in solid-state detectors by laser-driven pulsed X-ray sources has been developed. It allows the direct use of X-ray spectroscopy without artificially decreasing the number of counts in the detector, assuming the duration of a pulse is much shorter than the detector response time and that the loss of counts from the energy window of the detector can be modeled or neglected. Experimental application shows that allowing a small amount of pile-up and subsequently correcting for it improves the signal-to-noise ratio, which can be more beneficial than the strict single-hit condition usually imposed on these detectors.
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
AzTEC half square degree survey of the SHADES fields - I. Maps, catalogues and source counts
NASA Astrophysics Data System (ADS)
Austermann, J. E.; Dunlop, J. S.; Perera, T. A.; Scott, K. S.; Wilson, G. W.; Aretxaga, I.; Hughes, D. H.; Almaini, O.; Chapin, E. L.; Chapman, S. C.; Cirasuolo, M.; Clements, D. L.; Coppin, K. E. K.; Dunne, L.; Dye, S.; Eales, S. A.; Egami, E.; Farrah, D.; Ferrusca, D.; Flynn, S.; Haig, D.; Halpern, M.; Ibar, E.; Ivison, R. J.; van Kampen, E.; Kang, Y.; Kim, S.; Lacey, C.; Lowenthal, J. D.; Mauskopf, P. D.; McLure, R. J.; Mortier, A. M. J.; Negrello, M.; Oliver, S.; Peacock, J. A.; Pope, A.; Rawlings, S.; Rieke, G.; Roseboom, I.; Rowan-Robinson, M.; Scott, D.; Serjeant, S.; Smail, I.; Swinbank, A. M.; Stevens, J. A.; Velazquez, M.; Wagg, J.; Yun, M. S.
2010-01-01
We present the first results from the largest deep extragalactic mm-wavelength survey undertaken to date. These results are derived from maps covering over 0.7deg2, made at λ = 1.1mm, using the AzTEC continuum camera mounted on the James Clerk Maxwell Telescope. The maps were made in the two fields originally targeted at λ = 850μm with the Submillimetre Common-User Bolometer Array (SCUBA) in the SCUBA Half-Degree Extragalactic Survey (SHADES) project, namely the Lockman Hole East (mapped to a depth of 0.9-1.3 mJy rms) and the Subaru/XMM-Newton Deep Field (mapped to a depth of 1.0-1.7 mJy rms). The wealth of existing and forthcoming deep multifrequency data in these two fields will allow the bright mm source population revealed by these new wide-area 1.1mm images to be explored in detail in subsequent papers. Here, we present the maps themselves, a catalogue of 114 high-significance submillimetre galaxy detections, and a thorough statistical analysis leading to the most robust determination to date of the 1.1mm source number counts. These new maps, covering an area nearly three times greater than the SCUBA SHADES maps, currently provide the largest sample of cosmological volumes of the high-redshift Universe in the mm or sub-mm. Through careful comparison, we find that both the Cosmic Evolution Survey (COSMOS) and the Great Observatories Origins Deep Survey (GOODS) North fields, also imaged with AzTEC, contain an excess of mm sources over the new 1.1mm source-count baseline established here. In particular, our new AzTEC/SHADES results indicate that very luminous high-redshift dust enshrouded starbursts (S1.1mm > 3mJy) are 25-50 per cent less common than would have been inferred from these smaller surveys, thus highlighting the potential roles of cosmic variance and clustering in such measurements. We compare number count predictions from recent models of the evolving mm/sub-mm source population to these sub-mm bright galaxy surveys, which provide important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation, and find that all available models overpredict the number of bright submillimetre galaxies found in this survey.
Comparison of culture and qPCR methods in detection of mycobacteria from drinking waters.
Räsänen, Noora H J; Rintala, Helena; Miettinen, Ilkka T; Torvinen, Eila
2013-04-01
Environmental mycobacteria are common bacteria in man-made water systems and may cause infections and hypersensitivity pneumonitis via exposure to water. We compared a generally used cultivation method and a quantitative polymerase chain reaction (qPCR) method to detect mycobacteria in 3 types of drinking waters: surface water, ozone-treated surface water, and groundwater. There was a correlation between the numbers of mycobacteria obtained by cultivation and qPCR methods, but the ratio of the counts obtained by the 2 methods varied among the types of water. The qPCR counts in the drinking waters produced from surface or groundwater were 5 to 34 times higher than culturable counts. In ozone-treated surface waters, both methods gave similar counts. The ozone-treated drinking waters had the highest concentration of assimilable organic carbon, which may explain the good culturability. In warm tap waters, qPCR gave 43 times higher counts than cultivation, but both qPCR counts and culturable counts were lower than those in the drinking waters collected from the same sites. The TaqMan qPCR method is a rapid and sensitive tool for total quantitation of mycobacteria in different types of clean waters. The raw water source and treatments affect both culturability and total numbers of mycobacteria in drinking waters.
The Herschel-ATLAS data release 1 - I. Maps, catalogues and number counts
NASA Astrophysics Data System (ADS)
Valiante, E.; Smith, M. W. L.; Eales, S.; Maddox, S. J.; Ibar, E.; Hopwood, R.; Dunne, L.; Cigan, P. J.; Dye, S.; Pascale, E.; Rigby, E. E.; Bourne, N.; Furlanetto, C.; Ivison, R. J.
2016-11-01
We present the first major data release of the largest single key-project in area carried out in open time with the Herschel Space Observatory. The Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS) is a survey of 600 deg2 in five photometric bands - 100, 160, 250, 350 and 500 μm - with the Photoconductor Array Camera and Spectrometer and Spectral and Photometric Imaging Receiver (SPIRE) cameras. In this paper and the companion Paper II, we present the survey of three fields on the celestial equator, covering a total area of 161.6 deg2 and previously observed in the Galaxy and Mass Assembly (GAMA) spectroscopic survey. This paper describes the Herschel images and catalogues of the sources detected on the SPIRE 250 μm images. The 1σ noise for source detection, including both confusion and instrumental noise, is 7.4, 9.4 and 10.2 mJy at 250, 350 and 500 μm. Our catalogue includes 120 230 sources in total, with 113 995, 46 209 and 11 011 sources detected at >4σ at 250, 350 and 500 μm. The catalogue contains detections at >3σ at 100 and 160 μm for 4650 and 5685 sources, and the typical noise at these wavelengths is 44 and 49 mJy. We include estimates of the completeness of the survey and of the effects of flux bias and also describe a novel method for determining the true source counts. The H-ATLAS source counts are very similar to the source counts from the deeper HerMES survey at 250 and 350 μm, with a small difference at 500 μm. Appendix A provides a quick start in using the released data sets, including instructions and cautions on how to use them.
Multiparameter linear least-squares fitting to Poisson data one count at a time
NASA Technical Reports Server (NTRS)
Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.
1995-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n(sub i) of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w(sub i) in the weighted LLSQ method when the square root of n(sub i), instead of the square root of bar-n(sub i), is used to approximate the uncertainties sigma(sub i) in the data, where bar-n(sub i) = E(n(sub i)) is the expected value of n(sub i). We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
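The key claim, that the correct Poisson least-squares equations coincide with the exact Poisson maximum-likelihood equations, can be illustrated with a small iteratively reweighted least-squares fit in which the weights come from the model-predicted (not observed) counts; the two-component spectral model below is invented for illustration.

```python
import numpy as np

# Fit a multicomponent linear model to Poisson counts by iterating weighted
# least squares with weights 1/mu, where mu is the current model prediction;
# the fixed point of this iteration satisfies the exact Poisson ML equations.
rng = np.random.default_rng(5)
channels = np.arange(50)
source_shape = np.exp(-0.5 * ((channels - 25) / 3.0) ** 2)   # a line profile
background_shape = np.ones_like(channels, dtype=float)        # flat background
A = np.column_stack([source_shape, background_shape])

true_rates = np.array([4.0, 0.8])             # counts per channel per component
counts = rng.poisson(A @ true_rates)           # many channels have 0-2 counts

x = np.array([1.0, 1.0])
for _ in range(50):
    mu = np.clip(A @ x, 1e-9, None)
    w = 1.0 / mu
    lhs = A.T @ (w[:, None] * A)
    rhs = A.T @ (w * counts)
    x = np.linalg.solve(lhs, rhs)
print(x)   # estimates of the two component rates
```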
The Hopkins Ultraviolet Telescope - Performance and calibration during the Astro-1 mission
NASA Technical Reports Server (NTRS)
Davidsen, Arthur F.; Long, Knox S.; Durrance, Samuel T.; Blair, William P.; Bowers, Charles W.; Conard, Steven J.; Feldman, Paul D.; Ferguson, Henry C.; Fountain, Glen H.; Kimble, Randy A.
1992-01-01
Results are reported of spectrophotometric observations, made with the Hopkins Ultraviolet Telescope (HUT), of 77 astronomical sources throughout the far-UV (912-1850 A) at a resolution of about 3 A, and, for a small number of sources, in the extreme UV (415-912 A) beyond the Lyman limit at a resolution of about 1.5 A. The HUT instrument and its performance in orbit are described. A HUT observation of the DA white dwarf G191-B2B is presented, and the photometric calibration curve for the instrument is derived from a comparison of the observation with a model stellar atmosphere. The sensitivity reaches a maximum at 1050 A, where 1 photon/sq cm s A yields 9.5 counts/s A, and remains within a factor of 2 of this value from 912 to 1600 A. The instrumental dark count measured on orbit was less than 0.001 counts/s A.
"Quality Counts" and the Chance-for-Success Index
ERIC Educational Resources Information Center
Raymond, Margaret
2010-01-01
From the moment of birth, Americans have a fascination with seeing how they measure up. They are a nation obsessed with the story told in numbers. The quality of public schools has been measured in innumerable ways, and stakeholders may draw on any number of sources for rankings to support a particular agenda. Each winter, "Education…
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
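A sketch of the kind of statistic the Neyman-Pearson lemma yields in this setting is given below: for a known point-spread function P, assumed source flux F, and uniform background B, the log-likelihood ratio reduces to cross-correlating the counts image with ln(1 + F P / B). This is a simplified illustration (uniform background, known flux), not the paper's full treatment or its MATLAB implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def poisson_matched_filter(counts, psf, flux, background):
    """Per-pixel detection score: cross-correlation of the counts image with
    ln(1 + flux * psf / background), the data-dependent part of the Poisson
    log-likelihood ratio for a point source of known flux on a flat background."""
    kernel = np.log1p(flux * psf / background)
    # Cross-correlation = convolution with the flipped kernel.
    return fftconvolve(counts, kernel[::-1, ::-1], mode="same")

# Toy example: one faint point source on a low-count background.
rng = np.random.default_rng(2)
g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
background = 0.1                                  # counts per pixel (assumed)
image_rate = np.full((128, 128), background)
image_rate[64 - 7:64 + 8, 80 - 7:80 + 8] += 20 * psf   # a ~20-count source
counts = rng.poisson(image_rate)

score = poisson_matched_filter(counts, psf, flux=20.0, background=background)
print(np.unravel_index(np.argmax(score), score.shape))  # peaks near (64, 80)
```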
Hagen, Nils T.
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
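The basic harmonic allocation formula, credit (1/i) normalized by the harmonic sum over all coauthors, is easy to state in code; the sketch below covers only this core formula, not the extended byline decoding discussed in the paper.

```python
def harmonic_credit(n_authors: int) -> list[float]:
    """Harmonic allocation of one publication credit among n_authors:
    the i-th listed author receives (1/i) / (1 + 1/2 + ... + 1/n)."""
    norm = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / norm for i in range(1, n_authors + 1)]

# Example: four coauthors; credits sum to 1 and decline with authorship rank.
print([round(c, 3) for c in harmonic_credit(4)])   # [0.48, 0.24, 0.16, 0.12]
```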
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size from analysis of particle diffusion through a time series of micrographs and particle count from a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test whether image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median illuminated-pixel area of 4 pixels^2 results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count.
VizieR Online Data Catalog: Sample of faint X-ray pulsators (Israel+, 2016)
NASA Astrophysics Data System (ADS)
Israel, G. L.; Esposito, P.; Rodriguez Castillo, G. A.; Sidoli, L.
2018-04-01
As of 2015 December 31, we extracted about 430000 time series from sources with more than 10 counts (after background subtraction); ~190000 of them have more than 50 counts and their PSDs were searched for significant peaks. At the time of writing, the total number of searched Fourier frequencies was about 4.3x10^9. After a detailed screening, we obtained a final sample of 41 (42) new X-ray pulsators (signals), which are listed in Table 1. (1 data file).
Brooksbank, W.A. Jr.; Leddicotte, G.W.; Strain, J.E.; Hendon, H.H. Jr.
1961-11-14
A means was developed for continuously computing and indicating the isotopic assay of a process solution and for automatically controlling the process output of isotope separation equipment to provide a continuous output of the desired isotopic ratio. A counter tube is surrounded with a sample to be analyzed so that the tube is exactly in the center of the sample. A source of fast neutrons is provided and is spaced from the sample. The neutrons from the source are thermalized by causing them to pass through a neutron moderator, and the neutrons are allowed to diffuse radially through the sample to actuate the counter. A reference counter in a known sample of pure solvent is also actuated by the thermal neutrons from the neutron source. The number of neutrons which actuate the detectors is a function of the concentration of the elements in solution and their neutron absorption cross sections. The pulses produced by the detectors responsive to each neutron passing therethrough are amplified and counted. The respective times required to accumulate a selected number of counts are measured by associated timing devices. The concentration of a particular element in solution may be determined by utilizing the following relation: T2/T1 = BCR, where B is a constant proportional to the absorption cross sections, T2 is the time of count collection for the unknown solution, T1 is the time of count collection for the pure solvent, R is the isotopic ratio, and C is the molar concentration of the element to be determined. Knowing the slope constant B for any element, when the chemical concentration is known, the isotopic concentration may be readily determined, and conversely, when the isotopic ratio is known, the chemical concentration may be determined. (AEC)
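The quoted assay relation T2/T1 = BCR can be applied directly once the slope constant B is known; the values below are invented for illustration.

```python
# Given the slope constant B, the count-collection time for the pure solvent (T1),
# and the time for the sample (T2), either C (molar concentration) or R (isotopic
# ratio) follows from T2/T1 = B*C*R when the other quantity is known.
def isotopic_ratio(t2: float, t1: float, b: float, concentration: float) -> float:
    return (t2 / t1) / (b * concentration)

def concentration(t2: float, t1: float, b: float, ratio: float) -> float:
    return (t2 / t1) / (b * ratio)

B = 0.85                 # assumed slope constant for the element
T1, T2 = 120.0, 310.0    # assumed seconds to accumulate the preset count
print(isotopic_ratio(T2, T1, B, concentration=2.0))
print(concentration(T2, T1, B, ratio=1.5))
```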
Connectivity algorithm with depth first search (DFS) on simple graphs
NASA Astrophysics Data System (ADS)
Riansanti, O.; Ihsan, M.; Suhaimi, D.
2018-01-01
This paper discusses an algorithm to detect connectivity of a simple graph using Depth First Search (DFS). The DFS implementation in this paper differs from other research in how it counts the number of visited vertices. The algorithm derives s from the number of vertices and the number of vertices visited: it visits the source vertex, followed by its adjacent vertices, until the last vertex adjacent to a previously visited vertex has been reached. Any simple graph is connected if s equals 0 and disconnected if s is greater than 0. The complexity of the algorithm is O(n^2).
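A minimal sketch of the connectivity test is given below: run DFS from a source vertex and take s as the number of vertices left unvisited; the adjacency-list representation and example graph are illustrative choices, not necessarily those of the paper.

```python
def unvisited_after_dfs(adjacency: dict) -> int:
    """Run DFS from one source vertex and return s, the number of vertices
    never reached; the graph is connected exactly when s == 0."""
    vertices = list(adjacency)
    visited = set()
    stack = [vertices[0]]            # source vertex
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        stack.extend(u for u in adjacency[v] if u not in visited)
    return len(vertices) - len(visited)

graph = {1: [2], 2: [1, 3], 3: [2], 4: []}   # vertex 4 is isolated
s = unvisited_after_dfs(graph)
print("connected" if s == 0 else f"disconnected (s = {s})")
```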
Space and power efficient hybrid counters array
Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY
2009-05-12
A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to a lower order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating a value of a corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provide an instantaneous measure of number of events received.
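A software sketch of the counting scheme described, with low-order bits in fast counters and high-order bits spilled into a memory array, is given below; the counter width and the overflow-driven update are assumptions standing in for the patent's control device.

```python
class HybridCounterArray:
    """Low-order bits live in small fast counters; high-order bits live in a
    memory array and are updated when a low-order counter overflows."""
    LOW_BITS = 12                      # assumed width of the fast counter devices

    def __init__(self, n_counters: int):
        self.low = [0] * n_counters            # fast, narrow counters (lower bits)
        self.high = [0] * n_counters           # memory array (higher-order bits)

    def count_event(self, i: int) -> None:
        self.low[i] += 1
        if self.low[i] == (1 << self.LOW_BITS):   # overflow: spill into memory
            self.high[i] += 1
            self.low[i] = 0

    def value(self, i: int) -> int:
        """Instantaneous event count: combination of high and low parts."""
        return (self.high[i] << self.LOW_BITS) | self.low[i]

counters = HybridCounterArray(n_counters=4)
for _ in range(5000):
    counters.count_event(1)
print(counters.value(1))   # 5000
```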
Space and power efficient hybrid counters array
Gara, Alan G.; Salapura, Valentina
2010-03-30
A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to the lower-order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing the higher-order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating of the corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, the combination of the first and second count values provides an instantaneous measure of the number of events received.
Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts
NASA Technical Reports Server (NTRS)
Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.
2010-01-01
We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg. (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ~ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically-confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV=23 and FUV=23.5, and has a limiting depth at NUV=24.5 and FUV=25.0 which corresponds to a star formation rate of 10(exp -3) solar mass yr(sup -1) at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here; conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV approximately 23 and FUV approximately 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV=22-23.5 and NUV=21.5-23 relative to previous GALEX measurements, which is not attributed to cluster member galaxies. Our galaxy counts are a better match to deeper UV counts measured with HST.
Emery, R J; Valizadeh, F; Kennedy, V; Shelton, A J
2005-07-01
Sources of radiation are used in a variety of modern work settings, including industrial, medical, research, and agricultural applications. Although regulatory controls exist to limit radiation exposures in these different settings, instances of radiation doses in excess of acceptable limits (referred to as overexposures) do occur. A unique study examined overexposure events in Texas over a 45-y period from 1956 to 2001. The primary purpose of the study was to characterize the factors associated with overexposure events. As part of this characterization, an interesting trend in the number of overexposures by year was observed, but not completely explained. The data revealed a dramatic increase in the number of overexposure events, followed by three apparent phases of decline. These declines are of particular interest because, while the increase and subsequent decrease in overexposures occurred, the number of permits to possess radiation sources in Texas generally increased over the same time period. This study focused on the identification of the factors that led to the trends in overexposure events. Data describing the reported overexposure events in Texas from 1970 to 2000 were obtained from the Texas Department of Health Bureau of Radiation Control (TDH BRC) and entered into a computerized database. With the assistance of senior members of the TDH BRC, the three primary factors influencing the number of overexposures were identified. These included domestic oil and gas exploration and production from 1970 to 2000, wherein sources of radiation are employed in various operations; the establishment of a training and certification requirement for industrial radiographers during the period of 1986 to 1988; and modification of the applicable regulations between 1992 and 1994. The generally accepted indicator of oil and gas exploration and production activity, known as "rig count," is the measure of the number of active oil and gas exploration and production platforms at any given time. Rig count is a parameter of particular interest in Texas because the state's economy is significantly tied to the market value of this important natural resource. The rig count parameter was shown to have a strong correlation with overexposure events (Pearson correlation coefficient of 0.82, p < 0.0001). Interestingly, the sources causing the overexposures indicate that the events stem primarily not from the oil and gas exploration activity itself, but rather from support activities in the form of industrial radiographic procedures. The number of overexposure events was also determined to be influenced by the imposition of the training requirement for radiographers and the modification of the applicable regulations (e.g., the elimination of the quarterly dose limit). The relative magnitude of these influences, however, was far overshadowed by the identified predominant predictor of rig count. The determination of rig count as the significant influencing factor in overexposure events is useful in possibly recognizing the potential for future occurrences of the same nature. This assessment also serves to highlight an apparent significant public health success story, as the number of overexposures per radioactive material licensee is shown to have declined significantly over the 30-y period of study. The factors contributing to this phenomenon are described to serve as a model for use in other settings.
J-Plus: Morphological Classification Of Compact And Extended Sources By Pdf Analysis
NASA Astrophysics Data System (ADS)
López-Sanjuan, C.; Vázquez-Ramió, H.; Varela, J.; Spinoso, D.; Cristóbal-Hornillos, D.; Viironen, K.; Muniesa, D.; J-PLUS Collaboration
2017-10-01
We present a morphological classification of J-PLUS EDR sources into compact (i.e. stars) and extended (i.e. galaxies). The classification is based on Bayesian modelling of the concentration distribution, including observational errors and magnitude + sky position priors. We provide the star / galaxy probability of each source computed from the gri images. The comparison with the SDSS number counts supports our classification up to r ~ 21. The 31.7 deg² analysed comprises 150k stars and 101k galaxies.
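A schematic of this kind of concentration-based Bayesian star/galaxy separation; the Gaussian forms of the compact and extended concentration distributions and the magnitude-dependent prior below are illustrative assumptions, not the J-PLUS model:

```python
import numpy as np
from scipy.stats import norm

def star_probability(concentration, r_mag):
    """Posterior probability that a source is compact (a star), given its
    concentration and r-band magnitude, for assumed Gaussian populations."""
    # Assumed concentration distributions for the two classes.
    like_star = norm.pdf(concentration, loc=0.0, scale=0.05)
    like_gal = norm.pdf(concentration, loc=0.6, scale=0.25)
    # Assumed magnitude prior: stars dominate at bright, galaxies at faint magnitudes.
    prior_star = 1.0 / (1.0 + np.exp(r_mag - 20.0))
    num = like_star * prior_star
    return num / (num + like_gal * (1.0 - prior_star))

print(star_probability(concentration=0.05, r_mag=17.5))  # close to 1 (star-like)
print(star_probability(concentration=0.70, r_mag=21.0))  # close to 0 (galaxy-like)
```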
Harrison, F. A.; Aird, J.; Civano, F.; ...
2016-11-07
Here, we present the 3–8 keV and 8–24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%–39% of the X-ray background in the 8–24 keV band, directly identifying AGNs with obscuring columns up to $\sim 10^{25}\,\mathrm{cm}^{-2}$. In the softer 3–8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range $5\times 10^{-15} \lesssim S(3\text{–}8\,\mathrm{keV})/\mathrm{erg\,s^{-1}\,cm^{-2}} \lesssim 10^{-12}$ probed by NuSTAR. In the hard 8–24 keV band NuSTAR probes fluxes over the range $2\times 10^{-14} \lesssim S(8\text{–}24\,\mathrm{keV})/\mathrm{erg\,s^{-1}\,cm^{-2}} \lesssim 10^{-12}$, a factor ~100 fainter than previous measurements. The 8–24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15–55 keV number counts measured at higher fluxes ($S(15\text{–}55\,\mathrm{keV}) \gtrsim 10^{-11}\,\mathrm{erg\,s^{-1}\,cm^{-2}}$), reflecting the evolution of the AGN population between the Swift/BAT local ($z < 0.1$) sample and NuSTAR's $z \sim 1$ sample. CXB synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.
X-ray imaging with sub-micron resolution using large-area photon counting detectors Timepix
NASA Astrophysics Data System (ADS)
Dudak, J.; Karch, J.; Holcova, K.; Zemlicka, J.
2017-12-01
As X-ray micro-CT has become a popular tool for scientific purposes, a number of commercially available CT systems have emerged on the market. Micro-CT systems have, therefore, become widely accessible and the number of research laboratories using them constantly increases. However, even when CT scans with a spatial resolution of several micrometers can be performed routinely, data acquisition with sub-micron precision remains a complicated task. Issues come mostly from the prolongation of scan time inevitably connected with the use of nano-focus X-ray sources. Long exposure times increase the noise level in the CT projections. Furthermore, at sub-micron resolution even effects like source-spot drift, rotation-stage wobble or thermal expansion become significant and can negatively affect the data. The use of dark-current-free photon counting detectors as X-ray cameras for such applications can limit the issue of increased image noise in the data; however, the mechanical stability of the whole system still remains a problem and has to be considered. In this work we evaluate the performance of a micro-CT system equipped with a nano-focus X-ray tube and a large-area photon counting detector Timepix for scans with an effective pixel size below one micrometer.
NASA Technical Reports Server (NTRS)
Eby, P. B.
1978-01-01
The construction of a clock based on the beta decay process is proposed to test for any violations of the strong equivalence principle by the weak interaction, by determining whether the weak interaction coupling constant beta is spatially constant or whether it is a function of gravitational potential (U). The clock can be constructed by simply counting the beta disintegrations of some suitable source. The total number of counts is to be taken as a measure of elapsed time. The accuracy of the clock is limited by the statistical fluctuations in the number of counts, N, which are equal to the square root of N. Increasing N gives a corresponding increase in accuracy. A source based on the electron capture process can be used so as to avoid low energy electron discrimination problems. Solid state and gaseous detectors are being considered. While the accuracy of this type of beta decay clock is much less than that of clocks based on the electromagnetic interaction, there is a corresponding lack of knowledge of the behavior of beta as a function of gravitational potential. No predictions from nonmetric theories as to variations in beta are available as yet, but they may occur at the U/c(sup 2) level.
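A small illustration of the stated statistical limit (the decay rate chosen below is hypothetical): if elapsed time is inferred from the accumulated number of decays N, the fractional timing uncertainty scales as sqrt(N)/N = 1/sqrt(N).

```python
import math

def clock_fractional_uncertainty(rate_hz, elapsed_s):
    """Fractional timing uncertainty of a decay-counting clock limited only by
    Poisson fluctuations: sigma_N / N = sqrt(N) / N = 1 / sqrt(N)."""
    expected_counts = rate_hz * elapsed_s
    return 1.0 / math.sqrt(expected_counts)

# Hypothetical source: 1e6 registered decays per second, counted for one day.
print(clock_fractional_uncertainty(rate_hz=1.0e6, elapsed_s=86400.0))  # ~3.4e-6
```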
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2016-06-01
This report describes different methodologies to calculate the effective neutron multiplication factor of subcritical assemblies by processing the neutron detector signals using MATLAB scripts. The subcritical assembly can be driven either by a spontaneous fission neutron source (e.g. californium) or by a neutron source generated from the interactions of accelerated particles with target materials. In the latter case, when the particle accelerator operates in a pulsed mode, the signals are typically stored into two files. One file contains the times when neutron reactions occur and the other contains the times when the neutron pulses start. In both files, the time is given by an integer representing the number of time bins since the start of the counting. These signal files are used to construct the neutron count distribution from a single neutron pulse. The built-in functions of MATLAB are used to calculate the effective neutron multiplication factor through the application of the prompt decay fitting or the area method to the neutron count distribution. If the subcritical assembly is driven by a spontaneous fission neutron source, then the effective multiplication factor can be evaluated either using the prompt neutron decay constant obtained from Rossi or Feynman distributions or the Modified Source Multiplication (MSM) method.
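As a hedged illustration of one of the methodologies mentioned (written here in Python rather than MATLAB, with hypothetical time-bin data and a simple three-parameter model), the prompt decay constant can be estimated by histogramming detection times relative to the preceding pulse start and fitting an exponential:

```python
import numpy as np
from scipy.optimize import curve_fit

def prompt_decay_constant(reaction_bins, pulse_start_bins, n_hist_bins=200):
    """Fit A*exp(-alpha*t) + C to the neutron count distribution built from a
    pulsed-source measurement; times are integer time-bin indices."""
    pulse_start_bins = np.sort(np.asarray(pulse_start_bins))
    reaction_bins = np.asarray(reaction_bins)
    # Time of each detected neutron relative to the most recent pulse start.
    idx = np.searchsorted(pulse_start_bins, reaction_bins, side="right") - 1
    rel = reaction_bins[idx >= 0] - pulse_start_bins[idx[idx >= 0]]
    counts, edges = np.histogram(rel, bins=n_hist_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = lambda t, a, alpha, c: a * np.exp(-alpha * t) + c
    popt, _ = curve_fit(model, centers, counts,
                        p0=(counts.max(), 1.0 / centers.mean(), counts.min()))
    return popt[1]   # prompt decay constant alpha (per time bin)
```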
Properties and Expected Number Counts of Active Galactic Nuclei and Their Hosts in the Far-infrared
NASA Astrophysics Data System (ADS)
Draper, A. R.; Ballantyne, D. R.
2011-03-01
Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far-infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nucleus (AGN) hosts. The FIR is also an important spectral region for observing AGNs which are heavily enshrouded by dust, such as Compton thick (CT) AGNs. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGNs in the FIR. Expected differential number counts of AGNs and their host galaxies are calculated in the Herschel bands. The expected contribution of AGNs and their hosts to the cosmic infrared background (CIRB) and the infrared luminosity density are also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at ~500 μm are an excellent tool in determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. The contribution of CT AGNs to the bright end differential number counts and to the bright source infrared luminosity density is a good test of AGN evolution models where quasars are triggered by major mergers.
NASA Astrophysics Data System (ADS)
Béthermin, M.; Dole, H.; Beelen, A.; Aussel, H.
2010-03-01
Aims: We aim to place stronger lower limits on the cosmic infrared background (CIB) brightness at 24 μm, 70 μm and 160 μm and to measure the extragalactic number counts at these wavelengths in a homogeneous way from various surveys. Methods: Using Spitzer legacy data over 53.6 deg2 of various depths, we build catalogs with the same extraction method at each wavelength. Completeness and photometric accuracy are estimated with Monte-Carlo simulations. Number count uncertainties are estimated with a counts-in-cells moment method to take galaxy clustering into account. Furthermore, we use a stacking analysis to estimate number counts of sources not detected at 70 μm and 160 μm. This method is validated by simulations. The integration of the number counts gives new CIB lower limits. Results: Number counts reach 35 μJy, 3.5 mJy and 40 mJy at 24 μm, 70 μm, and 160 μm, respectively. We reach deeper flux densities of 0.38 mJy at 70 μm and 3.1 mJy at 160 μm with a stacking analysis. We confirm the number count turnover at 24 μm and 70 μm, and observe it for the first time at 160 μm at about 20 mJy, together with a power-law behavior below 10 mJy. These mid- and far-infrared counts: 1) are homogeneously built by combining fields of different depths and sizes, providing a legacy over about three orders of magnitude in flux density; 2) are the deepest to date at 70 μm and 160 μm; 3) agree with previously published results in the common measured flux density range; 4) globally agree with the Lagache et al. (2004) model, except at 160 μm, where the model slightly overestimates the counts around 20 and 200 mJy. Conclusions: These counts are integrated to estimate new CIB firm lower limits of 2.29 +0.09/-0.09 nW m^-2 sr^-1, 5.4 +0.4/-0.4 nW m^-2 sr^-1, and 8.9 +1.1/-1.1 nW m^-2 sr^-1 at 24 μm, 70 μm, and 160 μm, respectively, and extrapolated to give new estimates of the CIB due to galaxies of 2.86 +0.19/-0.16 nW m^-2 sr^-1, 6.6 +0.7/-0.6 nW m^-2 sr^-1, and 14.6 +7.1/-2.9 nW m^-2 sr^-1, respectively. Products (point spread function, counts, CIB contributions, software) are publicly available for download at
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
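A short illustration of the Poisson assumption discussed above (the count values are hypothetical): the relative uncertainty of a raw count N is estimated as sqrt(N)/N, and averaging n independent profiles reduces it by a further factor of sqrt(n).

```python
import math

def poisson_relative_uncertainty(counts, n_averaged=1):
    """Normalized (relative) uncertainty of an x-ray count under the Poisson
    assumption, optionally averaged over n independent repetitions."""
    return 1.0 / math.sqrt(counts * n_averaged)

print(poisson_relative_uncertainty(4000))                 # ~1.6% for a single point
print(poisson_relative_uncertainty(4000, n_averaged=9))   # ~0.5% after averaging 9 profiles
```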
Counting of oligomers in sequences generated by Markov chains for DNA motif discovery.
Shan, Gao; Zheng, Wei-Mou
2009-02-01
By means of the technique of the imbedded Markov chain, an efficient algorithm is proposed to exactly calculate the first and second moments of word counts and the probability for a word to occur at least once in random texts generated by a Markov chain. A generating function is introduced directly from the imbedded Markov chain to derive asymptotic approximations for the problem. Two Z-scores, one based on the number of sequences with hits and the other on the total number of word hits in a set of sequences, are examined for discovery of motifs in a set of promoter sequences extracted from the A. thaliana genome. Source code is available at http://www.itp.ac.cn/zheng/oligo.c.
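The analytic machinery (embedded Markov chain, generating functions) is not reproduced here, but a Monte Carlo sketch of the quantity being modelled, namely the distribution of word counts in sequences generated by a first-order Markov chain, looks like this (the transition matrix and motif are illustrative assumptions):

```python
import numpy as np

def simulate_word_counts(word, length, n_seq, trans, alphabet="ACGT", seed=0):
    """Estimate mean and variance of the number of (overlapping) occurrences of
    `word` in random sequences generated by a first-order Markov chain."""
    rng = np.random.default_rng(seed)
    counts = np.empty(n_seq, dtype=int)
    for s in range(n_seq):
        seq = [rng.integers(len(alphabet))]            # uniform start state
        for _ in range(length - 1):
            seq.append(rng.choice(len(alphabet), p=trans[seq[-1]]))
        text = "".join(alphabet[i] for i in seq)
        counts[s] = sum(text.startswith(word, i) for i in range(len(text)))
    return counts.mean(), counts.var()

# Illustrative transition matrix (rows sum to 1) and motif.
P = np.full((4, 4), 0.25)
print(simulate_word_counts("TATA", length=500, n_seq=2000, trans=P))
```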
Key issues in the quality assurance of the One Number Census.
Diamond, Ian; Abbott, Owen; Jackson, Neil
2003-01-01
As part of the 2001 Census, the One Number Census project estimated and adjusted the Census database for underenumeration. The highly innovative One Number Census, and the Quality Assurance process it encompassed, also ensured that robust results could be obtained for each local authority area. This article examines some of the issues and analyses that were undertaken as part of that assessment of the 2001 Census population counts for England and Wales. The article first highlights the key issues surrounding the implementation of the 2001 Census fieldwork. It then explores the 2001 Census results through a series of demographic analyses to illustrate the sorts of issues investigated during the One Number Census Quality Assurance process itself. These analyses look at the patterns contained within the results, and comparisons with key alternative sources of population counts. Overall, these in-depth analyses and investigations lend further credence to the plausibility of the One Number Census results.
Population Census of a Large Common Tern Colony with a Small Unmanned Aircraft
Chabot, Dominique; Craik, Shawn R.; Bird, David M.
2015-01-01
Small unmanned aircraft systems (UAS) may be useful for conducting high-precision, low-disturbance waterbird surveys, but limited data exist on their effectiveness. We evaluated the capacity of a small UAS to census a large (>6,000 nests) coastal Common tern (Sterna hirundo) colony for which ground surveys are particularly disruptive and time-consuming. We compared aerial photographic tern counts to ground nest counts in 45 plots (5-m radius) throughout the colony at three intervals over a nine-day period in order to identify sources of variation and establish a coefficient to estimate nest numbers from UAS surveys. We also compared a full colony ground count to full counts from two UAS surveys conducted the following day. Finally, we compared colony disturbance levels over the course of UAS flights to matched control periods. Linear regressions between aerial and ground counts in plots had very strong correlations in all three comparison periods (R² = 0.972–0.989, P < 0.001) and regression coefficients ranged from 0.928–0.977 terns/nest. Full colony aerial counts were 93.6% and 94.0%, respectively, of the ground count. Varying visibility of terns with ground cover, weather conditions and image quality, and changing nest attendance rates throughout incubation were likely sources of variation in aerial detection rates. Optimally timed UAS surveys of Common tern colonies following our method should yield population estimates in the 93–96% range of ground counts. Although the terns were initially disturbed by the UAS flying overhead, they rapidly habituated to it. Overall, we found no evidence of sustained disturbance to the colony by the UAS. We encourage colonial waterbird researchers and managers to consider taking advantage of this burgeoning technology. PMID:25874997
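A minimal sketch of the calibration described (the plot data below are placeholders, not the study's measurements, and a through-origin fit is used for simplicity): regress aerial photographic counts on ground nest counts across plots to obtain a terns-per-nest coefficient, then convert a full-colony aerial count into an estimated nest total.

```python
import numpy as np

def terns_per_nest(ground_nests, aerial_terns):
    """Least-squares slope (through the origin) of aerial tern counts on ground
    nest counts, i.e. the calibration coefficient in terns per nest."""
    ground_nests = np.asarray(ground_nests, dtype=float)
    aerial_terns = np.asarray(aerial_terns, dtype=float)
    return (ground_nests @ aerial_terns) / (ground_nests @ ground_nests)

# Hypothetical plot data (the study used 45 plots; 5 shown here).
nests = [12, 30, 7, 22, 18]
terns = [11, 29, 6, 20, 17]
coef = terns_per_nest(nests, terns)
print(round(coef, 3))      # calibration coefficient, ~0.94 terns per nest here
print(round(6200 / coef))  # nest estimate from a full-colony aerial count of 6,200 terns
```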
Extension Scripts to caffe for Running COWC Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundhenk, T. Nathan; Konjevod, Goran; Sakla, Wesam A.
2016-07-14
These are scripts in Python which extend the functionality of the open source software Caffe and allow it to scan an overhead scene of images and detect or count cars from that scene. It returns the number of cars or the location of cars as a marked scene image.
AIRBORNE MICROORGANISMS IN BROILER PROCESSING PLANTS.
KOTULA, A W; KINNER, J A
1964-05-01
Concentrations of total aerobic bacteria, molds, yeasts, coliforms, enterococci, and psychrophiles were determined in the air of two poultry processing plants with Andersen samplers and a mobile power supply. Total aerobic bacterial counts were highest in the dressing room, with diminishing numbers in the shackling, eviscerating, and holding rooms, when sampling was carried out during plant operation. The average counts per ft(3) of air in these four rooms were 2,200; 560; 230; and 62, respectively. (Each value is the average of 36 observations.) The number of organisms increased in the shackling and dressing rooms once processing was begun. Average total aerobic bacterial counts increased from 70 to 870 to 3,000 in the shackling room and from 310 to 4,900 to 7,000 in the dressing room when sampling was carried out at 5:00 am (before plant operations), 9:00 am, and 2:00 pm, respectively. (Each value is the mean of 12 observations.) Airborne molds might originate from a source other than the poultry being processed.
Airborne Microorganisms in Broiler Processing Plants
Kotula, Anthony W.; Kinner, Jack A.
1964-01-01
Concentrations of total aerobic bacteria, molds, yeasts, coliforms, enterococci, and psychrophiles were determined in the air of two poultry processing plants with Andersen samplers and a mobile power supply. Total aerobic bacterial counts were highest in the dressing room, with diminishing numbers in the shackling, eviscerating, and holding rooms, when sampling was carried out during plant operation. The average counts per ft3 of air in these four rooms were 2,200; 560; 230; and 62, respectively. (Each value is the average of 36 observations.) The number of organisms increased in the shackling and dressing rooms once processing was begun. Average total aerobic bacterial counts increased from 70 to 870 to 3,000 in the shackling room and from 310 to 4,900 to 7,000 in the dressing room when sampling was carried out at 5:00 am (before plant operations), 9:00 am, and 2:00 pm, respectively. (Each value is the mean of 12 observations.) Airborne molds might originate from a source other than the poultry being processed. PMID:14170951
Plausible Boosting of Millimeter-Galaxies in the COSMOS Field by Intervening Large-Scale Structure
NASA Astrophysics Data System (ADS)
Aretxaga, I.; Wilson, G. W.; Aguilar, E.; Alberts, S.; Scott, K. S.; Scoville, N.; Yun, M. S.; Austermann, J.; Downes, T. D.; Ezawa, H.; Hatsukade, B.; Hughes, D. H.; Kawabe, R.; Kohno, K.; Oshima, T.; Perera, T. A.; Tamura, Y.; Zeballos, M.
2011-10-01
The 0.72 sq. deg. contiguous 1.1mm survey in the central area of the COSMOS field, carried out to a 1σ ≈ 1.26 mJy beam^-1 depth with the AzTEC camera mounted on the 10m Atacama Submillimeter Telescope Experiment (ASTE), shows number counts with a significant excess of sources when compared to the number counts derived from the ~0.5 sq. deg. area sampled at similar depths in the Scuba HAlf Degree Extragalactic Survey (SHADES, Austermann et al. 2010). They are, however, consistent with those derived from fields that were considered too small to characterize the overall blank-field population. We identify differences to be more significant in the S1.1mm ≳ 5 mJy regime, and demonstrate that these excesses in number counts are related to the areas where galaxies at redshifts ≲ 1.1 are more densely clustered. The positions of optical-IR galaxies in the redshift interval 0.6 ≲ z ≲ 0.75 are the ones that show the strongest correlation with the positions of the 1.1mm bright population (S1.1mm ≳ 5 mJy), a result which does not depend exclusively on the presence of rich clusters within the survey sampled area. The most likely explanation for the observed excess in number counts at 1.1mm is galaxy-galaxy and galaxy-group lensing at moderate amplification levels, that increases in amplitude as one samples larger and larger flux densities.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
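A toy version of the described procedure (the response matrix below is a made-up inverse-square geometric model, not the one used in the paper): forward-model the counts as a response matrix acting on the source vector, then recover the sources by pseudo-inverting that matrix.

```python
import numpy as np

def recover_sources(count_map, response):
    """Solve counts = response @ sources for the source activities using the
    Moore-Penrose pseudo-inverse (least-squares solution)."""
    return np.linalg.pinv(response) @ count_map

# Simple 1/r^2 response between 3 grid positions and 3 detector positions
# (illustrative geometry only).
src_pos = np.array([0.0, 1.0, 2.0])
det_pos = np.array([0.2, 1.1, 1.9])
R = 1.0 / ((det_pos[:, None] - src_pos[None, :])**2 + 0.05**2)

true_sources = np.array([100.0, 0.0, 40.0])     # source activities (e.g. kBq)
counts = R @ true_sources                       # idealized noise-free count map
print(np.round(recover_sources(counts, R), 1))  # recovers [100., 0., 40.]
```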
Gruskin, Sofia; Coull, Brent A.
2017-01-01
Background Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to “legal intervention” in accordance with the International Classification of Diseases–10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. Methods and findings We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture–recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS only, 487 reported in both lists, and an estimated 44 (95% CI: 31, 62) not reported in either source. The NVSS documented 44.9% (95% CI: 44.2%, 45.4%) of the total number of deaths, and The Counted documented 93.1% (95% CI: 91.7%, 94.2%). In a multivariable mixed-effects logistic model that controlled for all individual- and county-level covariates, decedents injured by non-firearm mechanisms had higher odds of underreporting in the NVSS than those injured by firearms (odds ratio [OR]: 68.2; 95% CI: 15.7, 297.5; p < 0.01), and underreporting was also more likely outside of the highest-income-quintile counties (OR for the lowest versus highest income quintile: 10.1; 95% CI: 2.4, 42.8; p < 0.01). There was no statistically significant difference in the odds of underreporting in the NVSS for deaths certified by coroners compared to medical examiners, and the odds of underreporting did not vary by race/ethnicity. One limitation of our analyses is that we were unable to examine the characteristics of cases that were unreported in The Counted. Conclusions The media-based source, The Counted, reported a considerably higher proportion of law-enforcement-related deaths than the NVSS, which failed to report a majority of these incidents. For the NVSS, rates of underreporting were higher in lower income counties and for decedents killed by non-firearm mechanisms. There was no evidence suggesting that underreporting varied by death investigator type (medical examiner versus coroner) or race/ethnicity. 
PMID:29016598
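The total of 1,166 deaths quoted above comes from a capture-recapture analysis of the two lists; a minimal two-list (Lincoln-Petersen / Chapman) version of that calculation, using the overlap counts reported in the abstract, reproduces the headline estimate, although the published analysis may have used a more elaborate model:

```python
def chapman_estimate(n_list1, n_list2, n_both):
    """Chapman's nearly unbiased two-list capture-recapture estimator of the
    total population size."""
    return (n_list1 + 1) * (n_list2 + 1) / (n_both + 1) - 1

nvss_total = 36 + 487       # deaths found in the NVSS (NVSS-only + both lists)
counted_total = 599 + 487   # deaths found in The Counted (Counted-only + both lists)
both = 487

print(round(chapman_estimate(nvss_total, counted_total, both)))   # ~1166
```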
Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy
2017-10-01
Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to "legal intervention" in accordance with the International Classification of Diseases-10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture-recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS only, 487 reported in both lists, and an estimated 44 (95% CI: 31, 62) not reported in either source. The NVSS documented 44.9% (95% CI: 44.2%, 45.4%) of the total number of deaths, and The Counted documented 93.1% (95% CI: 91.7%, 94.2%). In a multivariable mixed-effects logistic model that controlled for all individual- and county-level covariates, decedents injured by non-firearm mechanisms had higher odds of underreporting in the NVSS than those injured by firearms (odds ratio [OR]: 68.2; 95% CI: 15.7, 297.5; p < 0.01), and underreporting was also more likely outside of the highest-income-quintile counties (OR for the lowest versus highest income quintile: 10.1; 95% CI: 2.4, 42.8; p < 0.01). There was no statistically significant difference in the odds of underreporting in the NVSS for deaths certified by coroners compared to medical examiners, and the odds of underreporting did not vary by race/ethnicity. One limitation of our analyses is that we were unable to examine the characteristics of cases that were unreported in The Counted. The media-based source, The Counted, reported a considerably higher proportion of law-enforcement-related deaths than the NVSS, which failed to report a majority of these incidents. For the NVSS, rates of underreporting were higher in lower income counties and for decedents killed by non-firearm mechanisms. There was no evidence suggesting that underreporting varied by death investigator type (medical examiner versus coroner) or race/ethnicity.
Mukerjee, Shaibal; Norris, Gary A; Smith, Luther A; Noble, Christopher A; Neas, Lucas M; Ozkaynak, A Halûk; Gonzales, Melissa
2004-04-15
The relationship between continuous measurements of volatile organic compound sources and particle number was evaluated at a Photochemical Assessment Monitoring Station Network (PAMS) site located near the U.S.-Mexico Border in central El Paso, TX. Sources of volatile organic compounds (VOCs) were investigated using the multivariate receptor model UNMIX and the effective variance least squares receptor model known as Chemical Mass Balance (CMB, Version 8.0). As expected from PAMS measurements, overall findings from data screening as well as both receptor models confirmed that mobile sources were the major source of VOCs. Comparison of hourly source contribution estimates (SCEs) from the two receptor models revealed significant differences in motor vehicle exhaust and evaporative gasoline contributions. However, the motor vehicle exhaust contributions were highly correlated with each other. Motor vehicle exhaust was also correlated with the ultrafine and accumulation mode particle count, which suggests that motor vehicle exhaust is a source of these particles at the measurement site. Wind sector analyses were performed using the SCE and pollutant data to assess the source locations of VOCs, particle count, and criteria pollutants. Results from this study have application to source apportionment studies and mobile source emission control strategies that are ongoing in this airshed.
Lensing corrections to features in the angular two-point correlation function and power spectrum
DOE Office of Scientific and Technical Information (OSTI.GOV)
LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam
2008-01-15
It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ), and the angular power spectrum C_l, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with that of the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift, and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_l occur for measurements at high redshifts (z ≳ 1.5) and low multipole moment (l ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ≈ 1.5, and by z ≈ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲ 1% and the width is typically increased by ≈ 10%. Shifts of ≳ 0.5% and broadening ≳ 20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data is fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.
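For orientation, in the standard weak-lensing limit (a textbook relation, not stated in the abstract) the count modulation behind these corrections is N_obs ≈ N_true · μ^(2.5s-1), so the fractional change in counts is approximately (5s-2)·κ for convergence κ, which is why (5s-2)/b appears above as the control parameter. A quick numerical check of that linearization, with illustrative values of s and κ:

```python
def count_modulation(mu, slope_s):
    """Exact magnification-bias factor N_obs/N_true = mu**(2.5*s - 1)."""
    return mu ** (2.5 * slope_s - 1.0)

def count_modulation_linear(kappa, slope_s):
    """Weak-lensing linearization: 1 + (5*s - 2)*kappa, using mu ~ 1 + 2*kappa."""
    return 1.0 + (5.0 * slope_s - 2.0) * kappa

s, kappa = 0.8, 0.01          # illustrative number-count slope and convergence
mu = 1.0 + 2.0 * kappa
print(count_modulation(mu, s))            # ~1.0200
print(count_modulation_linear(kappa, s))  # 1.02
```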
An Exploratory Analysis of Waterfront Force Protection Measures Using Simulation
2002-03-01
Appendix B of the report presents design-point data tables (Tables 16-18), listing for each design point the breach count, leaker count, and mean number of available PBs; the tabulated values are not reproduced here.
How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology
NASA Astrophysics Data System (ADS)
Ekers, Ron
2012-09-01
In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension, with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving, and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions, but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171keV gamma-ray followed by a single 245keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10uCi to 5mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences albeit at the expense of true coincidences. A timing window range of 200–500ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
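The abstract does not spell out the NECR formula; a common definition for a scatter-free, in-air measurement is NECR = T²/(T + R), with T the true and R the random coincidence rate, the latter estimated from the timing window and the singles rates. A hedged sketch of that bookkeeping, with all rates below invented for illustration:

```python
def random_rate(singles_1, singles_2, window_s):
    """Standard estimate of the random-coincidence rate: R = 2 * tau * S1 * S2."""
    return 2.0 * window_s * singles_1 * singles_2

def necr(true_rate, rand_rate):
    """Noise-equivalent count rate for a scatter-free measurement."""
    return true_rate**2 / (true_rate + rand_rate)

tau = 350e-9                 # 350 ns coincidence window (within the quoted 200-500 ns optimum)
s171, s245 = 4.0e4, 3.0e4    # hypothetical singles rates for the two gamma lines (cps)
T = 1.2e3                    # hypothetical true coincidence rate (cps)
R = random_rate(s171, s245, tau)
print(R, necr(T, R))
```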
ISO deep far-infrared survey in the Lockman Hole
NASA Astrophysics Data System (ADS)
Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.
1999-03-01
Two 44 arcmin x 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program, combined with PIA, works well to correct for the drift in the detector responsivity. The number density of 175 μm sources is 3-10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.
Denwood, M J; Love, S; Innocent, G T; Matthews, L; McKendrick, I J; Hillary, N; Smith, A; Reid, S W J
2012-08-13
The faecal egg count (FEC) is the most widely used means of quantifying the nematode burden of horses, and is frequently used in clinical practice to inform treatment and prevention. The statistical process underlying the FEC is complex, comprising a Poisson counting error process for each sample, compounded with an underlying continuous distribution of means between samples. Being able to quantify the sources of variability contributing to this distribution of means is a necessary step towards providing estimates of statistical power for future FEC and FECRT studies, and may help to improve the usefulness of the FEC technique by identifying and minimising unwanted sources of variability. Obtaining such estimates require a hierarchical statistical model coupled with repeated FEC observations from a single animal over a short period of time. Here, we use this approach to provide the first comparative estimate of multiple sources of within-horse FEC variability. The results demonstrate that a substantial proportion of the observed variation in FEC between horses occurs as a result of variation in FEC within an animal, with the major sources being aggregation of eggs within faeces and variation in egg concentration between faecal piles. The McMaster procedure itself is associated with a comparatively small coefficient of variation, and is therefore highly repeatable when a sufficiently large number of eggs are observed to reduce the error associated with the counting process. We conclude that the variation between samples taken from the same animal is substantial, but can be reduced through the use of larger homogenised faecal samples. Estimates are provided for the coefficient of variation (cv) associated with each within animal source of variability in observed FEC, allowing the usefulness of individual FEC to be quantified, and providing a basis for future FEC and FECRT studies. Copyright © 2012 Elsevier B.V. All rights reserved.
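A small simulation of the hierarchical structure described (all parameter values and the counting sensitivity below are invented): the between-sample variation in mean egg concentration is drawn from a gamma distribution, and the raw McMaster count for each sample is then a Poisson draw, so the observed coefficient of variation combines the two sources.

```python
import numpy as np

def simulate_fec(n_samples, mean_epg, cv_between, eggs_counted_per_epg=0.02, seed=1):
    """Simulate faecal egg counts: gamma-distributed sample means (between-sample
    variability) compounded with Poisson counting error; returns observed CV."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv_between**2                                     # gamma shape for target CV
    sample_means = rng.gamma(shape, mean_epg / shape, n_samples)    # true eggs per gram per sample
    counted = rng.poisson(sample_means * eggs_counted_per_epg)      # raw counts under the microscope
    epg_estimates = counted / eggs_counted_per_epg
    return epg_estimates.std() / epg_estimates.mean()

print(simulate_fec(n_samples=10_000, mean_epg=500, cv_between=0.4))
```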
Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H.; Pulvermüller, Friedemann
2012-01-01
The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. PMID:22133748
NASA Technical Reports Server (NTRS)
Harrison, F. A.; Aird, J.; Civano, F.; Lansbury, G.; Mullaney, J. R.; Ballentyne, D. R.; Alexander, D. M.; Stern, D.; Ajello, M.; Barret, D.;
2016-01-01
We present the 3-8 kiloelectronvolts and 8-24 kiloelectronvolts number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33 percent-39 percent of the X-ray background in the 8-24 kiloelectronvolts band, directly identifying AGNs with obscuring columns up to approximately 10 (exp 25) per square centimeter. In the softer 3-8 kiloelectronvolts band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 times 10 (exp -15) less than or approximately equal to S (3-8 kiloelectronvolts) divided by ergs per second per square centimeter less than or approximately equal to 10 (exp -12) probed by NuSTAR. In the hard 8-24 kiloelectronvolts band NuSTAR probes fluxes over the range 2 times 10 (exp -14) less than or approximately equal to S (8-24 kiloelectronvolts) divided by ergs per second per square centimeter less than or approximately equal to 10 (exp -12), a factor of approximately 100 fainter than previous measurements. The 8-24 kiloelectronvolts number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15-55 kiloelectronvolts number counts measured at higher fluxes (S (15-55 kiloelectronvolts) greater than or approximately equal to 10 (exp -11) ergs per second per square centimeter), reflecting the evolution of the AGN population between the Swift/BAT local (redshift is less than 0.1) sample and NuSTAR's redshift approximately equal to 1 sample. CXB (Cosmic X-ray Background) synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.
Recovery and diversity of heterotrophic bacteria from chlorinated drinking waters.
Maki, J S; LaCroix, S J; Hopkins, B S; Staley, J T
1986-01-01
Heterotrophic bacteria were enumerated from the Seattle drinking water catchment basins and distribution system. The highest bacterial recoveries were obtained by using a very dilute medium containing 0.01% peptone as the primary carbon source. Other factors favoring high recovery were the use of incubation temperatures close to that of the habitat and an extended incubation (28 days or longer provided the highest counts). Total bacterial counts were determined by using acridine orange staining. With one exception, all acridine orange counts in chlorinated samples were lower than those in prechlorinated reservoir water, indicating that chlorination often reduces the number of acridine orange-detectable bacteria. Source waters had higher diversity index values than did samples examined following chlorination and storage in reservoirs. Shannon index values based upon colony morphology were in excess of 4.0 for prechlorinated source waters, whereas the values for final chlorinated tap waters were lower than 2.9. It is not known whether the reduction in diversity was due solely to chlorination or in part to other factors in the water treatment and distribution system. Based upon the results of this investigation, we provide a list of recommendations for changes in the procedures used for the enumeration of heterotrophic bacteria from drinking waters. PMID:3524453
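The Shannon diversity index quoted above is H = -Σ p_i·log(p_i) over the proportions of each colony morphotype; values above 4 imply many comparably abundant types. The abstract does not state whether natural logs or bits were used, so the base is selectable in the sketch below, and the colony counts are made up:

```python
import math

def shannon_index(counts, base=math.e):
    """Shannon diversity index H = -sum(p_i * log(p_i)) for a list of counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p, base) for p in props)

# Hypothetical colony-morphotype counts from one water sample.
morphotype_counts = [40, 25, 15, 10, 5, 3, 2]
print(round(shannon_index(morphotype_counts), 2))          # natural-log H
print(round(shannon_index(morphotype_counts, base=2), 2))  # in bits
```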
NASA Astrophysics Data System (ADS)
Englander, J. G.; Brodrick, P. G.; Brandt, A. R.
2015-12-01
Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, and inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities using high-resolution spatial imagery. This method can help estimate venting and fugitive emission sources in regions where reporting of oilfield equipment is incomplete or non-existent. This work utilizes high-resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km2 with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.
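As a rough, hedged illustration of the classification step (the features, labels, and classifier choice below are entirely hypothetical; the study compares several algorithms and tunes segmentation settings, which is not reproduced here), a supervised classifier can be trained on labelled image chips and evaluated on held-out data, with the scene-level activity count taken as the number of positively classified chips:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-chip features (e.g. mean band values, texture statistics)
# and labels: 1 = contains a well pad, 0 = background.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Activity count for this held-out "scene": number of chips classified as well pads.
print(int(clf.predict(X_test).sum()))
```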
Quantum non-demolition phonon counter with a hybrid optomechanical system
NASA Astrophysics Data System (ADS)
Song, Qiao; Zhang, KeYe; Dong, Ying; Zhang, WeiPing
2018-05-01
A phonon counting scheme based on the control of polaritons in an optomechanical system is proposed. This approach permits us to measure the number of phonons in a quantum non-demolition (QND) manner for arbitrary modes, not limited by the frequency-matching condition of usual photon-phonon scattering detections. The performance of the counter on phonon number transfer and quantum state transfer is analyzed and simulated numerically, taking into account all relevant sources of noise.
Air & Space Power Journal. Volume 27, Number 2, March-April 2013
2013-04-01
Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H; Pulvermüller, Friedemann
2012-02-15
The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. Copyright © 2011 Elsevier Inc. All rights reserved.
Rainard, P; Ducelliez, M; Poutrel, B
1990-01-01
Quarter foremilk samples were taken at 2-3 weekly intervals for several years in an experimental herd comprising about 45 cows. The samples were submitted to bacteriological analysis and somatic cell counting. The most prevalent quarter infections from 1982 to 1988 were by coagulase-negative staphylococci (15-20% of all the quarters sampled). Most of these (75.6%) persisted until drying-off. Dry cow therapy eliminated 86.5% of these infections. Comparison of udder quarters within cows, involving 775 samples from pairs of non-infected quarters and quarters infected by coagulase-negative staphylococci, yielded geometric means of somatic cell counts of 210,000 and 420,000 cells/ml, respectively. The correlation (r = 0.87) between the herd bulk milk somatic cell count (SCC) and its estimation from the quarter milk somatic cell count performed on the same day allowed us to evaluate the contribution of the different categories of quarters, according to their infection status, to the herd bulk milk SCC. Quarters infected by a major pathogen (8.5% of samples) gave rise to 46.6% of the total number of cells, while quarters infected by coagulase-negative staphylococci (17.8% of samples) gave rise to 18.1%. Although coagulase-negative staphylococci represented only a secondary source of somatic cells as compared to major pathogens, they were not a negligible source considering the threshold of 300,000 somatic cells advocated for herd milk of good quality.
Spectroscopic micro-tomography of metallic-organic composites by means of photon-counting detectors
NASA Astrophysics Data System (ADS)
Pichotka, M.; Jakubek, J.; Vavrik, D.
2015-12-01
The presumed capabilities of photon-counting detectors have aroused major expectations in several fields of research. In the field of nuclear imaging, ample benefits over standard detectors are expected from photon-counting devices, first of all a very high contrast, as has by now been verified in numerous experiments. The spectroscopic capabilities of photon-counting detectors further allow material decomposition in computed tomography and therefore inherently adequate beam-hardening correction. For these reasons, measurement setups featuring standard X-ray tubes combined with photon-counting detectors constitute a possible replacement for the much more cost-intensive tomographic setups at synchrotron light sources. The actual application of photon-counting detectors in radiographic setups in recent years has been impeded by a number of practical issues, above all by restrictions in detector size. Currently, two tomographic setups in the Czech Republic feature photon-counting large-area detectors (LAD) fabricated in Prague. The employed large-area hybrid pixel-detector assemblies [1], consisting of 10×10 / 10×5 Timepix devices, have surface areas of 143×143 mm2 and 143×71.5 mm2, respectively, suitable for micro-tomographic applications. In the near future, LAD devices featuring the Medipix3 readout chip as well as heavy sensors (CdTe, GaAs) will become available. Data analysis is performed with a number of in-house software tools, including iterative multi-energy volume reconstruction. In this paper, tomographic analysis of metallic-organic composites is employed to illustrate the capabilities of our technology. In addition to successful material decomposition by spectroscopic tomography, we present a method to suppress metal artefacts under certain conditions.
Shahsavari, Esmaeil; Aburto-Medina, Arturo; Taha, Mohamed; Ball, Andrew S
2016-01-01
Polycyclic aromatic hydrocarbons (PAHs) are major pollutants globally, and because of their carcinogenic and mutagenic properties their clean-up is paramount. Bioremediation, i.e. using PAH-degrading microorganisms (mainly bacteria) to degrade the pollutants, represents a cheap and effective method. These PAH degraders harbor functional genes which help the microorganisms use PAHs as a source of food and energy. Most probable number (MPN) and plate counting methods are widely used for counting PAH degraders; however, as culture-based methods only count a small fraction (<1%) of the microorganisms capable of carrying out PAH degradation, the use of culture-independent methodologies is desirable. • This protocol presents a robust, rapid and sensitive qPCR method for the quantification of the functional genes involved in the degradation of PAHs in soil samples. • This protocol enables us to screen a vast number of PAH-contaminated soil samples in a few hours. • This protocol provides valuable information about the natural attenuation potential of contaminated soil and can be used to monitor the bioremediation process.
Reconstructing the metric of the local Universe from number counts observations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vallejo, Sergio Andres; Romano, Antonio Enea, E-mail: antonio.enea.romano@cern.ch
Number counts observations available with new surveys such as the Euclid mission will be an important source of information about the metric of the Universe. We compute the low-redshift expansion for the energy density and the density contrast using an exact spherically symmetric solution in the presence of a cosmological constant. At low redshift the expansion is more precise than the linear perturbation theory prediction. We then use the local expansion to reconstruct the metric from the monopole of the density contrast. We test the inversion method using numerical calculations and find good agreement within the regime of validity of the redshift expansion. The method could be applied to observational data to reconstruct the metric of the local Universe with a level of precision higher than the one achievable using perturbation theory.
MANTA--an open-source, high density electrophysiology recording suite for MATLAB.
Englitz, B; David, S V; Sorenson, M D; Shamma, S A
2013-01-01
The distributed nature of nervous systems makes it necessary to record from a large number of sites in order to decipher the neural code, whether single cell, local field potential (LFP), micro-electrocorticograms (μECoG), electroencephalographic (EEG), magnetoencephalographic (MEG) or in vitro micro-electrode array (MEA) data are considered. High channel-count recordings also optimize the yield of a preparation and the efficiency of time invested by the researcher. Currently, data acquisition (DAQ) systems with high channel counts (>100) can be purchased from a limited number of companies at considerable prices. These systems are typically closed-source and thus prohibit custom extensions or improvements by end users. We have developed MANTA, an open-source MATLAB-based DAQ system, as an alternative to existing options. MANTA combines high channel counts (up to 1440 channels/PC), usage of analog or digital headstages, low per channel cost (<$90/channel), feature-rich display and filtering, a user-friendly interface, and a modular design permitting easy addition of new features. MANTA is licensed under the GPL and free of charge. The system has been tested by daily use in multiple setups for >1 year, recording reliably from 128 channels. It offers a growing list of features, including integrated spike sorting, PSTH and CSD display and fully customizable electrode array geometry (including 3D arrays), some of which are not available in commercial systems. MANTA runs on a typical PC and communicates via TCP/IP and can thus be easily integrated with existing stimulus generation/control systems in a lab at a fraction of the cost of commercial systems. With modern neuroscience developing rapidly, MANTA provides a flexible platform that can be rapidly adapted to the needs of new analyses and questions. Being open-source, the development of MANTA can outpace commercial solutions in functionality, while maintaining a low price-point.
Modeling the evolution of infrared galaxies: a parametric backward evolution model
NASA Astrophysics Data System (ADS)
Béthermin, M.; Dole, H.; Lagache, G.; Le Borgne, D.; Penin, A.
2011-05-01
Aims: We attempt to model infrared galaxy evolution in as simple a way as possible and to reproduce statistical properties such as the number counts between 15 μm and 1.1 mm, the luminosity functions, and the redshift distributions. We then use the fitted model to interpret observations from Spitzer, AKARI, BLAST, LABOCA, AzTEC, SPT, and Herschel, and make predictions for Planck and future experiments such as CCAT or SPICA. Methods: The model uses an evolution in density and luminosity of the luminosity function, parametrized by broken power laws with two breaks at redshifts ~0.9 and 2, and contains the two populations of the Lagache model: normal and starburst galaxies. We also take into account the effect of the strong lensing of high-redshift sub-millimeter galaxies. This effect is significant in the sub-mm and mm range near 50 mJy. The model has 13 free parameters and eight additional calibration parameters. We fit the parameters to the IRAS, Spitzer, Herschel, and AzTEC measurements with a Monte Carlo Markov chain. Results: The model adjusted to deep counts at key wavelengths reproduces the counts from mid-infrared to millimeter wavelengths, as well as the mid-infrared luminosity functions. We discuss the contribution to both the cosmic infrared background (CIB) and the infrared luminosity density of the different populations. We also estimate the effect of lensing on the number counts, and discuss the discovery by the South Pole Telescope (SPT) of a very bright population lying at high redshift. We predict the contribution of the lensed sources to the Planck number counts, the confusion level for future missions using a P(D) formalism, and the Universe's opacity to TeV photons caused by the CIB. Material of the model (software, tables and predictions) is available online.
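For readers who want a concrete picture of this kind of parametrization, the sketch below evolves a luminosity-function normalization and characteristic luminosity with redshift using broken power laws with breaks near z ≈ 0.9 and 2. It only illustrates the functional form described in the abstract: the function names, slopes, and normalizations are placeholders, not the paper's fitted parameters.

```python
import numpy as np

def broken_power_law(z, p1, p2, p3, z1=0.9, z2=2.0):
    """(1+z)^p evolution whose slope changes at the two break redshifts."""
    if z < z1:
        return (1.0 + z) ** p1
    f1 = (1.0 + z1) ** p1
    if z < z2:
        return f1 * ((1.0 + z) / (1.0 + z1)) ** p2
    f2 = f1 * ((1.0 + z2) / (1.0 + z1)) ** p2
    return f2 * ((1.0 + z) / (1.0 + z2)) ** p3

def evolved_lf_params(z, phi0=1e-3, L0=1e10,
                      p_phi=(0.8, -0.2, -1.0), p_L=(3.0, 1.0, 0.0)):
    """Density (phi*) and luminosity (L*) evolution with illustrative slopes."""
    return phi0 * broken_power_law(z, *p_phi), L0 * broken_power_law(z, *p_L)

print(evolved_lf_params(1.5))
```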
Bacteriological quality of drinking water in Nyala, South Darfur, Sudan.
Abdelrahman, Amira Ahmed; Eltahir, Yassir Mohammed
2011-04-01
The objective of this study was to determine bacterial contamination of drinking water in Nyala city, South Darfur, Sudan, with special reference to the internally displaced people (IDP) camps. Two hundred and forty water samples from different sites and sources, including boreholes, hand pumps, dug wells, water points, water reservoirs and household storage containers, were collected in 2009. The most probable number method was used to detect and count total coliform, faecal coliform and faecal enterococci. Results revealed that the three indicator bacteria were abundant in all sources except water points. The percentages of samples with indicator bacteria counts above the permissible limits for drinking water were 46.4% for total coliform, 45.2% for faecal coliform and 25.4% for faecal enterococci, and the highest indicator bacteria count observed was 1,600 U/100 ml water. Enteric bacteria isolated were Escherichia coli (22.5%), Enterococcus faecalis (20.42%), Klebsiella (15.00%), Citrobacter (2.1%) and Enterobacter (3.33%). The highest contamination of water sources was observed in household storage containers (20%) followed by boreholes (11.25%), reservoirs (6.24%), hand pumps (5.42%) and dug wells (2.49%). Contamination varied from season to season, with the highest level in autumn (18.33%) followed by winter (13.75%) and summer (13.32%). All sources of water in IDP camps except water points were contaminated. The data suggest the importance of greater attention to household contamination, environmental sanitation control and raising awareness about water contamination.
Reference analysis of the signal + background model in counting experiments
NASA Astrophysics Data System (ADS)
Casadei, D.
2012-01-01
The model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches for rare or exotic events in the presence of a background source, for example in the searches performed by high-energy physics experiments. Under the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
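As a minimal numerical companion to this abstract, the sketch below evaluates the posterior for the signal yield s on a grid, given a known background expectation b. The flat prior is only a placeholder for the reference prior derived in the paper; the observed count and background value are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

def signal_posterior(n_obs, b, s_grid, prior=None):
    """Posterior weights for the signal yield s on a grid, given n_obs
    observed counts and a known background expectation b (weights sum to one)."""
    if prior is None:
        prior = np.ones_like(s_grid)                 # placeholder for the reference prior
    post = poisson.pmf(n_obs, s_grid + b) * prior    # Poisson(n_obs | s + b) * prior
    return post / post.sum()

s = np.linspace(0.0, 20.0, 2001)
w = signal_posterior(n_obs=7, b=3.2, s_grid=s)
print("posterior mean of the signal:", np.sum(s * w))
```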
Political Considerations in Developing Measures of Effectiveness for Strategic Defense.
1986-03-01
Cerwonka, Robert J. Master's Thesis, 1986, 79 pages. This thesis develops a Measure of Effectiveness (MOE) for strategic defenses from open-source political literature. First, an examination of the doctrines...
Estimating nest detection probabilities for white-winged dove nest transects in Tamaulipas, Mexico
Nichols, J.D.; Tomlinson, R.E.; Waggerman, G.
1986-01-01
Nest transects in nesting colonies provide one source of information on White-winged Dove (Zenaida asiatica asiatica) population status and reproduction. Nests are counted along transects using standardized field methods each year in Texas and northeastern Mexico by personnel associated with Mexico's Office of Flora and Fauna, the Texas Parks and Wildlife Department, and the U.S. Fish and Wildlife Service. Nest counts on transects are combined with information on the size of nesting colonies to estimate total numbers of nests in sampled colonies. Historically, these estimates have been based on the actual nest counts on transects and thus have required the assumption that all nests lying within transect boundaries are detected (seen) with a probability of one. Our objectives were to test the hypothesis that nest detection probability is one and, if rejected, to estimate this probability.
Far-Ultraviolet Number Counts of Field Galaxies
NASA Technical Reports Server (NTRS)
Voyer, Elysse N.; Gardner, Jonathan P.; Teplitz, Harry I.; Siana, Brian D.; deMello, Duilia F.
2010-01-01
The number counts of far-ultraviolet (FUV) galaxies as a function of magnitude provide a direct statistical measure of the density and evolution of star-forming galaxies. We report on the results of measurements of the rest-frame FUV number counts computed from data of several fields, including the Hubble Ultra Deep Field, the Hubble Deep Field North, and the GOODS-North and -South fields. These data were obtained with the Solar Blind Channel of the Advanced Camera for Surveys on the Hubble Space Telescope. The number counts cover an AB magnitude range from 20 to 29 magnitudes, covering a total area of 15.9 arcmin2. We show that the number counts are lower than those in previous studies using smaller areas. The differences in the counts are likely the result of cosmic variance; our new data cover more area and more lines of sight than the previous studies. The slope of our number counts connects well with local FUV counts, and they show good agreement with recent semi-analytical models based on dark matter "merger trees".
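The sketch below shows the straightforward way differential number counts of this kind are tabulated: sources are binned by magnitude and the counts are normalized by bin width and survey area. The magnitude range and the 15.9 arcmin2 area are taken from the abstract, but the source list itself is random, purely for illustration.

```python
import numpy as np

def differential_counts(mags, area_arcmin2, bin_width=0.5, m_min=20.0, m_max=29.0):
    """Differential number counts per magnitude per arcmin^2."""
    bins = np.arange(m_min, m_max + bin_width, bin_width)
    n, edges = np.histogram(mags, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, n / (bin_width * area_arcmin2)

rng = np.random.default_rng(0)
fake_mags = rng.uniform(20.0, 29.0, 500)          # stand-in for a real catalog
m, counts = differential_counts(fake_mags, area_arcmin2=15.9)
print(list(zip(m[:3], counts[:3])))
```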
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, S.E. Jr.; Chung, K.T.
Anaerobic bacteria were isolated from deep subsurface sediment samples taken at study sites in Idaho (INEL) and Washington (HR) by culturing on dilute and concentrated media. Morphologically distinct colonies were purified, and their responses to 21 selected physiological tests were determined. Although the number of isolates was small (18 INEL, 27 HR), some general patterns could be determined. Most strains could utilize all the carbon sources; however, glycerol and melezitose utilization was positive for 50% or less of the HR isolates. Catalase activity (27.78% at INEL, 74.07% at HR) and tryptophan metabolism (11.12% at INEL, 40.74% at HR) were significantly different between the two study sites. MPN and viable counts indicate that sediments near the water table yield the greatest numbers of anaerobes. Deeper sediments also appear to be more selective, with the greatest number of viable counts on low-nutrient media. Likewise, only strictly obligate anaerobes were found in the deepest sediment samples. Selective media indicated the presence of methanogens, acetogens, and sulfate reducers at only the HR site.
2016-01-01
Regal Fritillary (Speyeria idalia) primarily inhabits prairie, a native grassland of central North America, and occurs rarely in nonprairie grasslands further east. This butterfly has experienced widespread decline and marked range contraction. We analyze Regal Fritillary incidence and abundance during 1977–2014 in 4th of July Butterfly Counts, an annual census of butterflies in North America. Volunteers count within the same 24 km diameter circle each year. Only 6% of counts in range reported a Regal, while 18% of counts in core range in the Midwest and Great Plains did. 99.9% of Regal individuals occurred in core range. Only four circles east of core range reported this species, and only during the first half of the study period. All individuals reported west of its main range occurred in two circles in Colorado in the second half of the study. The number of counts per year and survey effort per count increased during the study. During 1991–2014, >31 counts occurred per year in core Regal range, compared to 0–23 during 1975–1990. During 1991–2014, all measures of Regal presence and abundance declined, most of them significantly. These results agree with other sources that Regal Fritillary has contracted its range and declined in abundance.
Walker, J T; Jhutty, A; Parks, S; Willis, C; Copley, V; Turton, J F; Hoffman, P N; Bennett, A M
2014-01-01
In December 2011 and early 2012, four neonates died from Pseudomonas aeruginosa bacteraemia in hospitals in Northern Ireland. The aim was to assess whether P. aeruginosa was associated with the neonatal unit taps and whether waterborne isolates were consistent with patient isolates. Thirty taps and eight flow straighteners from the relevant hospitals were categorized and dismantled into 494 components, which were assessed for aerobic colony and P. aeruginosa counts using non-selective and selective agars. P. aeruginosa isolates were typed by variable number tandem repeat (VNTR) analysis. Selected tap components were subjected to epifluorescence and scanning electron microscopy to visualize biofilm. The highest P. aeruginosa counts were from the flow straighteners, metal support collars and the tap bodies surrounding these two components. Complex flow straighteners had a significantly higher P. aeruginosa count than other types of flow straighteners (P < 0.05). The highest aerobic colony counts were associated with integrated mixers and solenoids (P < 0.05), but there was not a strong correlation (r = 0.33) between the aerobic colony counts and P. aeruginosa counts. Representative P. aeruginosa tap isolates from two hospital neonatal units had VNTR profiles consistent with strains from the tap water and infected neonates. P. aeruginosa was predominantly found in biofilms in flow straighteners and associated components in the tap outlets and was a possible source of the infections observed. Healthcare providers should be aware that water outlets can be a source of P. aeruginosa contamination and should take steps to reduce such contamination, monitor it and have strategies to minimize risk to susceptible patients. Copyright © 2013 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Li, Gang; Xu, Jiayun; Zhang, Jie
2015-01-01
Neutron radiation protection is an important research area because of the strong radiation biological effect of a neutron field. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. an Am-Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field produced by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with the count rates measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield with different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
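To make the extrapolation idea concrete, the toy sketch below fits an exponential relation between a 3He count rate and a dose rate from a few calibration pairs and then evaluates it at a count rate too low for a commercial dosimeter to resolve. All numbers are invented; the paper's actual relation comes from its measurements and Geant4 simulations.

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = A * exp(k * x) by a log-linear least-squares fit."""
    k, ln_a = np.polyfit(x, np.log(y), 1)
    return np.exp(ln_a), k

count_rate = np.array([50.0, 120.0, 260.0, 500.0])   # cps, hypothetical calibration
dose_rate = np.array([0.8, 1.9, 4.5, 10.0])          # uSv/h, hypothetical calibration
a, k = fit_exponential(count_rate, dose_rate)

print("extrapolated dose rate at 20 cps:", a * np.exp(k * 20.0), "uSv/h")
```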
Chiesa, María E; Rosenberg, Carolina E; Fink, Nilda E; Salibián, Alfredo
2006-04-01
Lead is a multiple-source pollutant, well known for its toxicity, posing great risk both to the environment and to human health. The main target organs of lead are the hematopoietic, nervous, and renal systems; there are also reports in support of its impairment effects on the reproductive and immune systems. It is well known that most of the metal is accumulated in the blood cells and that many of the deleterious effects are related to its circulating concentrations. These adverse effects have been described not only in humans but also in a number of other vertebrates such as fish and birds. The purpose of the present work was to evaluate the effects of weekly administration of sublethal Pb (as acetate, 50 mg x kg(-1)) during 6 weeks on the profile of the serum proteins and blood cell counts of the adult South American toad, Bufo arenarum (Anura: Bufonidae). The electrophoretic patterns of serum proteins revealed the presence of four fractions; the metal provoked a significant decrease in both total proteins and the albumin fraction, while among the globulin fractions, G3 was augmented. These findings may be related to the impact of lead on the toads' hepatic cells and immune system. The number of total red blood cells (RBC) showed a tendency to decrease after the injections of the metal, whereas the number of white blood cells (WBC) increased significantly; the differential leukocyte counts showed a statistically significant increase in the absolute number and in the relative percentage of blast-like cells. The decrease in RBC was attributed to the negative impact of the metal on hemoglobin synthesis. The increase in WBC counts may be interpreted as a consequence of the induction of proliferation of pluripotential hematopoietic cells.
Number-counts slope estimation in the presence of Poisson noise
NASA Technical Reports Server (NTRS)
Schmitt, Juergen H. M. M.; Maccacaro, Tommaso
1986-01-01
We consider the slope determination of a power-law number-flux relationship in the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
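The toy Monte Carlo below illustrates the effect the paper corrects for, though it is not the paper's estimator: fluxes drawn from a power law are "observed" through Poisson photon counting, and the naive unbinned maximum-likelihood slope computed from the noisy fluxes near the detection threshold is biased relative to the slope recovered from the true fluxes. The slope, exposure, and thresholds are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def mle_slope(fluxes, s_min):
    """Unbinned maximum-likelihood slope for dN/dS ~ S^-alpha above s_min."""
    s = np.asarray(fluxes)
    s = s[s >= s_min]
    return 1.0 + s.size / np.sum(np.log(s / s_min))

alpha, s_min, exposure = 2.5, 1.0, 25.0
u = rng.random(20000)
true_flux = s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))     # power-law sample

# photon-limited measurement: observed flux = Poisson(expected counts) / exposure
obs_flux = rng.poisson(true_flux * exposure) / exposure

print("slope from true fluxes :", mle_slope(true_flux, s_min))
print("slope from noisy fluxes:", mle_slope(obs_flux, 2.0 * s_min))
```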
Reducing gain shifts in photomultiplier tubes
Cohn, Charles E.
1976-01-01
A means is provided for reducing gain shifts in multiplier tubes due to varying event count rates. It includes means for limiting the number of cascaded, active dynodes of the multiplier tube to a predetermined number, with the last of the predetermined number of dynodes serving as the output terminal of the tube. This output is applied to an amplifier to make up for the gain sacrificed by not utilizing all available active stages of the tube. Further reduction is obtained by illuminating the predetermined number of dynodes with a light source of such intensity that the noise appearing at the output dynode associated with the illumination is negligible.
1.5-μm band polarization entangled photon-pair source with variable Bell states.
Arahira, Shin; Kishimoto, Tadashi; Murai, Hitoshi
2012-04-23
In this paper we report a polarization-entangled photon-pair source in the 1.5-μm band which can generate arbitrary entangled states, including the four maximally entangled states (Bell states), by using cascaded second-order optical nonlinearities (second-harmonic generation and the subsequent spontaneous parametric down-conversion) in a periodically poled LiNbO(3) (PPLN) ridge-waveguide device. Exchange among the Bell states was achieved by using an optical phase bias compensator (OPBC) in a Sagnac loop interferometer and a half-wave plate outside the loop for polarization conversion. The performance of the photon-pair source was evaluated quantitatively through experiments on two-photon interference, quantum state tomography, and a test of the violation of Bell's inequality. We observed visibilities as high as 96%, fidelities of 97%, and an S parameter of 2.71 in the Clauser-Horne-Shimony-Holt (CHSH) inequality. The experimental values, including the peak coincidence counts in the two-photon interference (approximately 170 counts per second), remained almost unchanged despite the exchange among the Bell states. They were also in good agreement with theoretical expectations based on the mean number of photon pairs under test (0.04 per pulse). More detailed experimental studies on the dependence on the mean number of photon pairs revealed that the quantum states were well described as a Werner state. © 2012 Optical Society of America
VizieR Online Data Catalog: Fermi-LAT flaring gamma-ray sources from FAVA (Ackermann+, 2013)
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Albert, A.; Allafort, A.; Antolini, E.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Bottacini, E.; Bouvier, A.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caraveo, P. A.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiang, J.; Chiaro, G.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; Dalton, M.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Dermer, C. D.; di Venere, L.; Drell, P. S.; Drlica-Wagner, A.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Hewitt, J.; Hill, A. B.; Horan, D.; Hou, X.; Hughes, R. E.; Inoue, Y.; Jackson, M. S.; Jogler, T.; Johannesson, G.; Johnson, W. N.; Kamae, T.; Kataoka, J.; Kawano, T.; Knodlseder, J.; Kuss, M.; Lande, J.; Larsson, S.; Latronico, L.; Lemoine-Goumard, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mayer, M.; Mazziotta, M. N.; McEnery, J. E.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nemmen, R.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Paneque, D.; Panetta, J. H.; Perkins, J. S.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Porter, T. A.; Raino, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Romoli, C.; Roth, M.; Sanchez-Conde, M.; Scargle, J. D.; Schulz, A.; Sgro, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Suson, D. J.; Takahashi, H.; Takeuchi, Y.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Tronconi, V.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.
2015-01-01
We applied FAVA (Fermi All-sky Variability Analysis) to the first 47 months of Fermi/LAT observations (2008 August 4 to 2012 July 16 UTC), in weekly time intervals. The total number of weeks is 206. We considered two ranges of gamma-ray energy, E>100MeV and E>800MeV, to increase the sensitivity for spectrally soft and hard flares, respectively. We generate measured and expected count maps with a resolution of 0.25deg2 per pixel. We found LAT counterparts for 192 of the 215 FAVA sources. Most of the associated sources, 177, are AGNs. (2 data files).
Joehanes, Roby; Johnson, Andrew D.; Barb, Jennifer J.; Raghavachari, Nalini; Liu, Poching; Woodhouse, Kimberly A.; O'Donnell, Christopher J.; Munson, Peter J.
2012-01-01
Despite a growing number of reports of gene expression analysis from blood-derived RNA sources, there have been few systematic comparisons of various RNA sources in transcriptomic analysis or for biomarker discovery in the context of cardiovascular disease (CVD). As a pilot study of the Systems Approach to Biomarker Research (SABRe) in CVD Initiative, this investigation used Affymetrix Exon arrays to characterize gene expression of three blood-derived RNA sources: lymphoblastoid cell lines (LCL), whole blood using PAXgene tubes (PAX), and peripheral blood mononuclear cells (PBMC). Their performance was compared in relation to identifying transcript associations with sex and CVD risk factors, such as age, high-density lipoprotein, and smoking status, and the differential blood cell count. We also identified a set of exons that vary substantially between participants, but consistently in each RNA source. Such exons are thus stable phenotypes of the participant and may potentially become useful fingerprinting biomarkers. In agreement with previous studies, we found that each of the RNA sources is distinct. Unlike PAX and PBMC, LCL gene expression showed little association with the differential blood count. LCL, however, was able to detect two genes related to smoking status. PAX and PBMC identified Y-chromosome probe sets similarly and slightly better than LCL.
NASA Astrophysics Data System (ADS)
Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.
2017-12-01
We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using an optimized data analysis toolkit developed specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8 and 24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ono, Yoshiaki; Ouchi, Masami; Momose, Rieko
2014-11-01
We present the statistics of faint submillimeter/millimeter galaxies (SMGs) and serendipitous detections of a submillimeter/millimeter line emitter (SLE) with no multi-wavelength continuum counterpart revealed by deep ALMA observations. We identify faint SMGs with flux densities of 0.1-1.0 mJy in the deep Band-6 and Band-7 maps of 10 independent fields that reduce cosmic variance effects. The differential number counts at 1.2 mm are found to increase with decreasing flux density down to 0.1 mJy. Our number counts indicate that the faint (0.1-1.0 mJy, or SFR_IR ∼ 30-300 M_☉ yr^-1) SMGs contribute nearly half of the extragalactic background light (EBL), while the remaining half of the EBL is mostly contributed by very faint sources with flux densities of <0.1 mJy (SFR_IR ≲ 30 M_☉ yr^-1). We conduct a counts-in-cells analysis with the multifield ALMA data for the faint SMGs, and obtain a coarse estimate of the galaxy bias, b_g < 4. The galaxy bias suggests that the dark halo masses of the faint SMGs are ≲ 7 × 10^12 M_☉, which is smaller than those of bright (>1 mJy) SMGs, but consistent with abundant high-z star-forming populations, such as sBzKs, LBGs, and LAEs. Finally, we report the serendipitous detection of SLE-1, which has no continuum counterparts in our 1.2 mm-band or multi-wavelength images, including ultra-deep HST/WFC3 and Spitzer data. The SLE has a significant line at 249.9 GHz with a signal-to-noise ratio of 7.1. If the SLE is not a spurious source caused by unknown systematic noise of ALMA, the strong upper limits from our multi-wavelength data suggest that the SLE would be a faint galaxy at z ≳ 6.
Statistical inference of static analysis rules
NASA Technical Reports Server (NTRS)
Engler, Dawson Richards (Inventor)
2009-01-01
Various apparatus and methods are disclosed for identifying errors in program code. For each correctness rule, the respective numbers of observances of the rule by the different code instances that relate to it are counted in the program code, so that each code instance has an associated counted number of observances of the correctness rule. Also counted are the respective numbers of violations of the correctness rule by the different code instances that relate to it, so that each code instance has an associated counted number of violations. A respective likelihood of validity is then determined for each code instance as a function of the counted number of observances and the counted number of violations. The likelihood of validity indicates the relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of the violated correctness rule.
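The snippet below is an illustrative stand-in for the kind of ranking the abstract describes: each candidate rule gets a score that grows with its number of observances and shrinks with its violations (here a simple z-test-style statistic), and violations can then be reported in order of that score. The rule names, the error-rate parameter and the scoring formula are assumptions made for the example, not the patented method itself.

```python
import math
from collections import namedtuple

Evidence = namedtuple("Evidence", "observances violations")

def validity_score(ev, base_error_rate=0.1):
    """Higher scores mean the rule is more likely a real, required rule."""
    n = ev.observances + ev.violations
    if n == 0:
        return 0.0
    observed_rate = ev.observances / n
    return (observed_rate - (1.0 - base_error_rate)) * math.sqrt(n)

rules = {
    "lock(a) must be followed by unlock(a)": Evidence(observances=120, violations=3),
    "free(p) must not be called twice":      Evidence(observances=5, violations=4),
}
for name, ev in sorted(rules.items(), key=lambda kv: -validity_score(kv[1])):
    print(f"{validity_score(ev):7.2f}  {name}")
```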
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
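A minimal single-energy-window version of the time-interval idea is sketched below: the log-likelihood ratio of "source plus background" against "background only" is computed from the exponential inter-arrival times of a Poisson counting process. The rates and sample sizes are invented, and the multi-energy, online aspects of the proposed system are not represented.

```python
import numpy as np

def log_likelihood_ratio(intervals, rate_bkg, rate_src):
    """LLR of (background + source) vs background-only for exponential
    inter-arrival times t_i, using the pdf lambda * exp(-lambda * t)."""
    t = np.asarray(intervals)
    lam0, lam1 = rate_bkg, rate_bkg + rate_src
    return np.sum(np.log(lam1) - lam1 * t) - np.sum(np.log(lam0) - lam0 * t)

rng = np.random.default_rng(2)
bkg_only = rng.exponential(1.0 / 5.0, size=200)   # 5 counts/s background
with_src = rng.exponential(1.0 / 7.0, size=200)   # background plus a 2 counts/s source

print("LLR, background only:", log_likelihood_ratio(bkg_only, 5.0, 2.0))
print("LLR, source present :", log_likelihood_ratio(with_src, 5.0, 2.0))
```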
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in extracting subtle features of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal's instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the two improved fractal box-counting dimensions formed a multi-dimensional eigenvector of subtle signal features, which was used for radiation source signal recognition with the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm better extracts the subtle distribution characteristics of the signal under different reconstructed phase spaces and achieves a better recognition effect with good real-time performance.
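For orientation, the sketch below computes a plain (traditional) box-counting dimension of a one-dimensional signal by covering its (index, amplitude) graph with boxes of shrinking size and fitting the log-log slope. It is only the baseline notion the paper improves upon; the "dual improved generalized" variant and the grey relation classifier are not reproduced here.

```python
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Slope of log N(eps) vs log(1/eps) for boxes covering the signal graph."""
    x = (signal - signal.min()) / (np.ptp(signal) + 1e-12)   # amplitudes in [0, 1]
    n = len(x)
    counts, inv_sizes = [], []
    for k in scales:                                         # k boxes per axis
        boxes = {(int(i * k / n), min(int(v * k), k - 1)) for i, v in enumerate(x)}
        counts.append(len(boxes))
        inv_sizes.append(k)
    slope, _ = np.polyfit(np.log(inv_sizes), np.log(counts), 1)
    return slope

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 4096)
print(box_counting_dimension(np.sin(40 * np.pi * t) + 0.2 * rng.standard_normal(4096)))
```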
Extending the Lincoln-Petersen estimator for multiple identifications in one source.
Köse, T; Orman, M; Ikiz, F; Baksh, M F; Gallagher, J; Böhning, D
2014-10-30
The Lincoln-Petersen estimator is one of the most popular estimators used in capture-recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and not identified by either of the two sources, respectively. However, f00 is unobserved, so the 2 × 2 table is incomplete and the Lincoln-Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln-Petersen's, in terms of bias and efficiency. It is also possible to test the homogeneity assumption, which is not testable in the Lincoln-Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey. Copyright © 2014 John Wiley & Sons, Ltd.
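For reference, the classic two-source Lincoln-Petersen estimate that the paper generalizes can be written in a few lines; the truncated-Poisson extension itself is not reproduced here, and the cell counts below are made up.

```python
def lincoln_petersen(f11, f10, f01):
    """Estimated population size from the observed cells of the 2 x 2 table
    (f00 unobserved): N_hat = n1 * n2 / f11."""
    n1 = f11 + f10      # units identified by source 1
    n2 = f11 + f01      # units identified by source 2
    return n1 * n2 / f11

# toy table: 60 units seen by both sources, 40 only by the first, 30 only by the second
n_hat = lincoln_petersen(60, 40, 30)
print(n_hat, "estimated total; f00 estimate:", n_hat - (60 + 40 + 30))
```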
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished with analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows one to obtain from the measured detector counts both the dead-time value and the real detector counts.
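For the simple constant-rate case mentioned at the start of the abstract, the standard non-paralyzable dead-time correction is a one-line formula, sketched below; the pulsed-source case is exactly where this formula fails and the backward extrapolation method is needed instead. The rate and dead-time values are illustrative.

```python
def dead_time_correct(measured_rate, dead_time):
    """Non-paralyzable dead-time correction for a constant count rate:
    true_rate = measured_rate / (1 - measured_rate * dead_time)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# e.g. 5e4 counts/s measured with a 2-microsecond dead time -> ~5.56e4 counts/s true
print(dead_time_correct(5.0e4, 2.0e-6))
```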
Kłębukowska, Lucyna; Zadernowska, Anna; Chajęcka-Wierzchowska, Wioleta
2015-03-01
Garlic is valued for its flavoring and is used in a wide variety of foods. In food technology, fresh garlic is not used; instead its processed forms, most often dried and lyophilized, are utilized. The quality and safety of the final product largely depend on their microbiological quality. This research provides information about the effect of garlic preservation methods and of the microbiological contamination of garlic used as a spice on the quality of garlic mayonnaise sauce. The authors decided to undertake these studies following a report from one of the manufacturers of garlic sauces on product defects which originated in the dried garlic used in the production process. Samples of garlic (n = 26) were examined using standard cultural methods (counts of fungi, lactic acid bacteria (LAB), spore-producing Bacillus sp., and the presence of anaerobic saccharolytic and proteolytic clostridia), the automated TEMPO system (total viable count, Enterobacteriaceae), and an immunoenzymatic method using VIDAS tests (occurrence of Salmonella sp. and Listeria monocytogenes). The total viable count ranged from 3.51 to 6.85 log CFU/g. Enterobacteriaceae were detected in only one sample. Comparably low values were recorded for fungi (1.30 to 3.47 log CFU/g). The number of LAB ranged from 2.34 to 5.49 log CFU/g. Clostridium sp. were detected in 22 samples. Salmonella sp. and Listeria monocytogenes were not detected. It was found that garlic, regardless of the preservation procedure, might be a source of contamination of garlic mayonnaise sauce, especially with lactic acid bacteria and Clostridium sp. spores.
Africa Counts: Number and Pattern in African Culture.
ERIC Educational Resources Information Center
Zaslavsky, Claudia
This document describes the contributions of African peoples to the science of mathematics. The development of a number system is seen as related to need. Names of numbers, time reckoning, gesture counting, and counting materials are examined. Mystical beliefs about numbers and special meanings in pattern are presented. Reproductions of patterns,…
NASA Technical Reports Server (NTRS)
Hooke, A. J.
1979-01-01
A set of standard telemetry protocols for downlink data flow, facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time, is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system thus formats data on a message rather than a word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
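To make the packet idea tangible, the sketch below packs a toy source-packet header containing a sequence number and a total-length field in front of an instrument payload. The field names, widths, and byte order are placeholders chosen for the example; they are not the NEEDS or later CCSDS formats.

```python
import struct

# Illustrative header layouts only; field sizes are assumptions for this example.
SOURCE_PACKET_HEADER = struct.Struct(">HHI")    # source id, packet sequence number, packet length
TRANSPORT_FRAME_HEADER = struct.Struct(">HIB")  # spacecraft id, frame sequence count, option flags

def make_source_packet(source_id, sequence_number, payload):
    header = SOURCE_PACKET_HEADER.pack(source_id, sequence_number, len(payload))
    return header + payload

packet = make_source_packet(source_id=7, sequence_number=42, payload=b"\x01\x02\x03")
print(len(packet), packet.hex())
```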
Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Yu, Zhicong; Leng, Shuai; Jorgensen, Steven M; Li, Zhoubo; Gutjahr, Ralf; Chen, Baiyu; Halaweish, Ahmed F; Kappler, Steffen; Yu, Lifeng; Ritman, Erik L; McCollough, Cynthia H
2016-02-21
This study evaluated the conventional imaging performance of a research whole-body photon-counting CT system and investigated its feasibility for imaging using clinically realistic levels of x-ray photon flux. This research system was built on the platform of a 2nd generation dual-source CT system: one source coupled to an energy integrating detector (EID) and the other coupled to a photon-counting detector (PCD). Phantom studies were conducted to measure CT number accuracy and uniformity for water, CT number energy dependency for high-Z materials, spatial resolution, noise, and contrast-to-noise ratio. The results from the EID and PCD subsystems were compared. The impact of high photon flux, such as pulse pile-up, was assessed by studying the noise-to-tube-current relationship using a neonate water phantom and high x-ray photon flux. Finally, clinical feasibility of the PCD subsystem was investigated using anthropomorphic phantoms, a cadaveric head, and a whole-body cadaver, which were scanned at dose levels equivalent to or higher than those used clinically. Phantom measurements demonstrated that the PCD subsystem provided comparable image quality to the EID subsystem, except that the PCD subsystem provided slightly better longitudinal spatial resolution and about 25% improvement in contrast-to-noise ratio for iodine. The impact of high photon flux was found to be negligible for the PCD subsystem: only subtle high-flux effects were noticed for tube currents higher than 300 mA in images of the neonate water phantom. Results of the anthropomorphic phantom and cadaver scans demonstrated comparable image quality between the EID and PCD subsystems. There were no noticeable ring, streaking, or cupping/capping artifacts in the PCD images. In addition, the PCD subsystem provided spectral information. Our experiments demonstrated that the research whole-body photon-counting CT system is capable of providing clinical image quality at clinically realistic levels of x-ray photon flux.
Apparatus and method for compensating for clock drift in downhole drilling components
Hall, David R [Provo, UT; Pixton, David S [Lehi, UT; Johnson, Monte L [Orem, UT; Bartholomew, David B [Springville, UT; Hall, Jr., H. Tracy
2007-08-07
A precise downhole clock that compensates for drift includes a prescaler configured to receive electrical pulses from an oscillator. The prescaler is configured to output a series of clock pulses. The prescaler outputs each clock pulse after counting a preloaded number of electrical pulses from the oscillator. The prescaler is operably connected to a compensator module for adjusting the number loaded into the prescaler. By adjusting the number that is loaded into the prescaler, the timing may be advanced or retarded to more accurately synchronize the clock pulses with a reference time source. The compensator module is controlled by a counter-based trigger module configured to trigger the compensator module to load a value into the prescaler. Finally, a time-base logic module is configured to calculate the drift of the downhole clock by comparing the time of the downhole clock with a reference time source.
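The arithmetic behind retarding or advancing the clock by changing the preload can be sketched in a few lines; the numbers and the simple rounding rule below are illustrative, not the patented circuit's behaviour.

```python
def adjust_preload(nominal_preload, drift_seconds, osc_freq_hz, interval_seconds):
    """Return a new prescaler preload that cancels the drift measured against
    a reference time source over the last interval (positive drift = clock fast)."""
    error_ticks = drift_seconds * osc_freq_hz                        # oscillator ticks of error
    clock_pulses = interval_seconds * osc_freq_hz / nominal_preload  # clock pulses in the interval
    correction = round(error_ticks / clock_pulses)                   # extra ticks needed per pulse
    return nominal_preload + correction

# 10 MHz oscillator, preload 10000 (1 kHz clock), clock ran 30 ms fast over 100 s
print(adjust_preload(10000, 30e-3, 10e6, 100.0))   # -> 10003
```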
Counting statistics for genetic switches based on effective interaction approximation
NASA Astrophysics Data System (ADS)
Ohkubo, Jun
2012-09-01
The applicability of counting statistics to a system with an infinite number of states is investigated. Counting statistics has been studied extensively for systems with a finite number of states. While the scheme can in principle be used to count specific transitions in a system with an infinite number of states, the resulting equations are in general not closed. A simple genetic switch can be described by a master equation with an infinite number of states, and we use counting statistics to count the number of transitions from the inactive to the active state of the gene. To avoid the non-closed equations, an effective interaction approximation is employed. As a result, it is shown that the switching problem can be treated approximately as a simple two-state model, which immediately indicates that the switching obeys non-Poisson statistics.
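A direct way to see the non-Poisson character of the transition counts is to simulate the effective two-state (inactive/active) switch and count activations, as in the hedged sketch below; the rates are arbitrary, and the full infinite-state master equation and effective interaction approximation of the paper are not reproduced.

```python
import numpy as np

def count_activations(k_on, k_off, t_max, rng):
    """Gillespie simulation of a two-state switch; counts inactive -> active
    transitions in [0, t_max]."""
    t, state, n_on = 0.0, 0, 0
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)
        if t > t_max:
            return n_on
        if state == 0:
            n_on += 1
        state = 1 - state

rng = np.random.default_rng(4)
samples = np.array([count_activations(0.5, 2.0, 200.0, rng) for _ in range(2000)])
print("mean:", samples.mean(), "Fano factor:", samples.var() / samples.mean())
# a Fano factor below 1 signals sub-Poissonian (non-Poisson) counting statistics
```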
NASA Astrophysics Data System (ADS)
Li, Zheng; Guan, Jun; Yang, Xudong; Lin, Chao-Hsin
2014-06-01
Airborne particles are an important type of air pollutant in aircraft cabins. Finding the sources of particles is conducive to taking appropriate measures to remove them. In this study, measurements of the concentration and size distribution of particles larger than 0.3 μm (PM>0.3) were made on nine short-haul flights from September 2012 to March 2013. Particle counts in supply air and breathing zone air were both obtained. Results indicate that the number concentrations of particles ranged from 3.6 × 10^2 counts L^-1 to 1.2 × 10^5 counts L^-1 in supply air and breathing zone air, and they first decreased and then increased in general during the flight duration. Peaks of particle concentration were found at the climbing, descending, and cruising phases in several flights. The percentages of particle concentration in the breathing zone contributed by the bleed air (originating from outside) and cabin interior sources were calculated. The bleed air ratios, outside airflow rates and total airflow rates were calculated by using carbon dioxide as a ventilation tracer in five of the nine flights. The calculated results indicate that PM>0.3 in the breathing zone mainly came from unfiltered bleed air, especially for particle sizes from 0.3 to 2.0 μm; for particles larger than 2.0 μm, contributions from the bleed air and the cabin interior were both important. The results would be useful for developing better cabin air quality control strategies.
Preschool children master the logic of number word meanings.
Lipton, Jennifer S; Spelke, Elizabeth S
2006-01-01
Although children take over a year to learn the meanings of the first three number words, they eventually master the logic of counting and the meanings of all the words in their count list. Here, we ask whether children's knowledge applies to number words beyond those they have mastered: Does a child who can only count to 20 infer that number words above 'twenty' refer to exact cardinal values? Three experiments provide evidence for this understanding in preschool children. Before beginning formal education or gaining counting skill, children possess a productive symbolic system for representing number.
AMI-LA observations of the SuperCLASS supercluster
NASA Astrophysics Data System (ADS)
Riseley, C. J.; Grainge, K. J. B.; Perrott, Y. C.; Scaife, A. M. M.; Battye, R. A.; Beswick, R. J.; Birkinshaw, M.; Brown, M. L.; Casey, C. M.; Demetroullas, C.; Hales, C. A.; Harrison, I.; Hung, C.-L.; Jackson, N. J.; Muxlow, T.; Watson, B.; Cantwell, T. M.; Carey, S. H.; Elwood, P. J.; Hickish, J.; Jin, T. Z.; Razavi-Ghods, N.; Scott, P. F.; Titterington, D. J.
2018-03-01
We present a deep survey of the Super-Cluster Assisted Shear Survey (SuperCLASS) supercluster - a region of sky known to contain five Abell clusters at redshift z ˜ 0.2 - performed using the Arcminute Microkelvin Imager (AMI) Large Array (LA) at 15.5 GHz. Our survey covers an area of approximately 0.9 deg2. We achieve a nominal sensitivity of 32.0 μJy beam-1 towards the field centre, finding 80 sources above a 5σ threshold. We derive the radio colour-colour distribution for sources common to three surveys that cover the field and identify three sources with strongly curved spectra - a high-frequency-peaked source and two GHz-peaked-spectrum sources. The differential source count (i) agrees well with previous deep radio source counts, (ii) exhibits no evidence of an emerging population of star-forming galaxies, down to a limit of 0.24 mJy, and (iii) disagrees with some models of the 15 GHz source population. However, our source count is in agreement with recent work that provides an analytical correction to the source count from the Square Kilometre Array Design Study (SKADS) Simulated Sky, supporting the suggestion that this discrepancy is caused by an abundance of flat-spectrum galaxy cores as yet not included in source population models.
Ultrabright femtosecond source of biphotons based on a spatial mode inverter.
Jarutis, Vygandas; Juodkazis, Saulius; Mizeikis, Vygantas; Sasaki, Keiji; Misawa, Hiroaki
2005-02-01
A method of enhancing the efficiency of entangled biphoton sources based on a type II femtosecond spontaneous parametric downconversion (SPDC) process is proposed and implemented experimentally. Enhancement is obtained by mode inversion of one of the SPDC output beams, which allows the beams to overlap completely, thus maximizing the number of SPDC photon pairs with optimum spatiotemporal overlap. By use of this method, biphoton count rates as high as 16 kHz from a single 0.5-mm-long beta-barium borate crystal pumped by second-harmonic radiation from a Ti:sapphire laser were obtained.
Quantitative NDA of isotopic neutron sources.
Lakosi, L; Nguyen, C T; Bagi, J
2005-01-01
A non-destructive method for assaying transuranic neutron sources was developed, using a combination of gamma-spectrometry and a neutron correlation technique. The source strength or actinide content of a number of PuBe, AmBe, AmLi, (244)Cm, and (252)Cf sources was assessed, both as a safety issue and with respect to combating illicit trafficking. A passive neutron coincidence collar was designed with (3)He counters embedded in a polyethylene moderator (lined with Cd) surrounding the sources to be measured. The electronics consist of independent channels of pulse amplifiers and discriminators as well as a shift register for coincidence counting. The neutron output of the sources was determined by gross neutron counting, and the actinide content was derived by adopting specific spontaneous fission and (alpha,n) reaction yields of individual isotopes from the literature. Identification of an unknown source type and its constituents can be made by gamma-spectrometry. The coincidences are due to spontaneous fission in the case of Cm and Cf sources, while they are mostly due to neutron-induced fission of the Pu isotopes (i.e. self-multiplication) and the (9)Be(n,2n)(8)Be reaction in Be-containing sources. Recording the coincidence rate offers a potential for calibration, exploiting a correlation between the Pu amount and the coincidence-to-total ratio. The method and the equipment were tested in an in-field demonstration exercise, with the participation of national public authorities and foreign observers. Seizure of an illicitly transported PuBe source was simulated in the exercise, and the Pu content of the source was determined. It is expected that the method could be used for identification and assay of illicit, found, or undocumented neutron sources.
Optical Communications With A Geiger Mode APD Array
2016-02-09
spurious fires from numerous sources, including crosstalk from other detectors in the same array. Additionally, after a successful detection, the... be combined into arrays with large numbers of detectors, allowing for scaling of dynamic range with relatively little overhead on space and power... overall higher rate of dark counts than a single detector, this is more than compensated for by the extra detectors. A sufficiently large APD array could
Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper
1993-01-01
To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.
Estimate of main local sources to ambient ultrafine particle number concentrations in an urban area
NASA Astrophysics Data System (ADS)
Rahman, Md Mahmudur; Mazaheri, Mandana; Clifford, Sam; Morawska, Lidia
2017-09-01
Quantifying and apportioning the contribution of a range of sources to ultrafine particles (UFPs, D < 100 nm) is a challenge due to the complex nature of urban environments. Although vehicular emissions have long been considered one of the major sources of ultrafine particles in urban areas, the contribution of other major urban sources is not yet fully understood. This paper aims to determine and quantify the contribution of local ground traffic, nucleated particle (NP) formation and distant non-traffic (e.g. airport, oil refineries, and seaport) sources to the total ambient particle number concentration (PNC) in a busy, inner-city area in Brisbane, Australia using Bayesian statistical modelling and other exploratory tools. The Bayesian model was trained on the PNC data from days on which NP formation was known not to have occurred, hourly traffic counts, solar radiation data, and a smooth daily trend. The model was applied to apportion and quantify the contribution of NP formation and of local traffic and non-traffic sources to UFPs. The data analysis incorporated long-term measured time-series of total PNC (D ≥ 6 nm), particle number size distributions (PSD, D = 8 to 400 nm), PM2.5, PM10, NOx, CO, meteorological parameters and traffic counts at a stationary monitoring site. The developed Bayesian model showed reliable predictive performance in quantifying the contribution of NP formation events to UFPs (up to 4 × 10⁴ particles cm⁻³), with significant day-to-day variability. The model identified potential NP formation and non-formation days based on PNC data and quantified the sources' contribution to UFPs. Exploratory statistical analyses show that total mean PNC during the middle of the day was up to 32% higher than during peak morning and evening traffic periods, which was associated with NP formation events. The majority of UFPs measured during the peak traffic and NP formation periods were between 30-100 nm and smaller than 30 nm, respectively. To date, this is the first application of a Bayesian model to apportion the contribution of different sources to UFPs, and therefore the importance of this study lies not only in its modelling outcomes but in demonstrating the applicability and advantages of this statistical approach to air pollution studies.
VizieR Online Data Catalog: ROSAT detected quasars. I. (Brinkmann+ 1997)
NASA Astrophysics Data System (ADS)
Brinkmann, W.; Yuan, W.
1996-09-01
We have compiled a sample of all quasars with measured radio emission from the Veron-Cetty - Veron catalogue (1993, VV93).
Singh, Nishtha; Singh, Udaiveer; Singh, Dimple; Daya, Mangal; Singh, Virendra
2017-01-01
Environmental pollens are known to cause exacerbation of symptoms in patients with allergic rhinitis (AR) and asthma. During pollen months, the number of patients visiting hospitals has been shown to increase in some studies. However, in India, such studies are lacking. Therefore, we aimed to study pollen counts and their correlation with the number of new patients attending Asthma Bhawan over 2 years. Aerobiological sampling was done using a Burkard 24 h spore trap system. The site selected for the entrapment of the air spora was the building of Asthma Bhawan situated at Vidhyadhar Nagar, Jaipur. New patients presenting with respiratory allergy such as AR or asthma were recruited into the study. Skin prick tests (SPTs) were carried out in these patients after obtaining consent. Monthly pollen counts of trees, weeds, and grasses were correlated with the number of new patients. A pollen calendar was prepared for 2 years. The average annual pollen count during 2011 and 2012 was 14,460.5. In the analysis, 37 types of species or families were identified. Pollen counts showed two seasonal peaks, during March-April and from August to October. January and June showed the lowest pollen counts in both years. The average monthly count of grass pollens showed a significant correlation with the number of new patients (r = 0.59); however, monthly pollen counts of trees and weeds did not. Correlating individual pollen counts with SPT positivity to the same pollen showed a significant association for Chenopodium album only. It can be concluded that there were two peaks of pollen count in a year, during March-April and August-October, and that average monthly pollen counts of grass were significantly correlated with the number of hospital visits of new patients.
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting the resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
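For orientation, the sketch below counts two of the simplest unconstrained tree spaces in this setting: the number of rooted binary labelled tree topologies, (2n-3)!!, and the number of fully ranked (time-ordered) rooted binary trees, the product of C(k,2) for k = 2..n. This is only a baseline illustration, not the authors' polynomial algorithms for constraint trees or trees with sampled ancestors:

```python
from math import comb

def rooted_topologies(n):
    """Rooted binary labelled tree topologies on n tips: (2n-3)!! for n >= 2."""
    count = 1
    for k in range(3, 2 * n - 2, 2):      # product 3 * 5 * ... * (2n-3)
        count *= k
    return count

def ranked_trees(n):
    """Fully ranked rooted binary trees (labelled histories): prod_{k=2}^{n} C(k, 2)."""
    count = 1
    for k in range(2, n + 1):
        count *= comb(k, 2)
    return count

for n in (3, 5, 10):
    print(n, rooted_topologies(n), ranked_trees(n))
```

Both counts grow super-exponentially, which is why explicit counting formulas or algorithms, rather than enumeration, are needed when specifying priors over tree space.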
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wold, Isak G. B.; Barger, Amy J.; Owen, Frazer N.
We present 1.4 GHz catalogs for the cluster fields A370 and A2390 observed with the Very Large Array. These are two of the deepest radio images of cluster fields ever taken. The A370 image covers an area of 40' × 40' with a synthesized beam of ~1.7'' and a noise level of ~5.7 μJy near the field center. The A2390 image covers an area of 34' × 34' with a synthesized beam of ~1.4'' and a noise level of ~5.6 μJy near the field center. We catalog 200 redshifts for the A370 field. We construct differential number counts for the central regions (radius < 16') of both clusters. We find that the faint (S_1.4GHz < 3 mJy) counts of A370 are roughly consistent with the highest blank field number counts, while the faint number counts of A2390 are roughly consistent with the lowest blank field number counts. Our analyses indicate that the number counts are primarily from field radio galaxies. We suggest that the disagreement of our number counts can be largely attributed to cosmic variance.
CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng
2014-12-01
Discovering motifs in protein-protein interaction networks is becoming a major challenge in computational biology, since the distribution of the number of network motifs can reveal significant systemic differences among species. However, this task can be computationally expensive because it involves graph isomorphism detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.
NASA Astrophysics Data System (ADS)
Zavala, J. A.; Aretxaga, I.; Geach, J. E.; Hughes, D. H.; Birkinshaw, M.; Chapin, E.; Chapman, S.; Chen, Chian-Chou; Clements, D. L.; Dunlop, J. S.; Farrah, D.; Ivison, R. J.; Jenness, T.; Michałowski, M. J.; Robson, E. I.; Scott, Douglas; Simpson, J.; Spaans, M.; van der Werf, P.
2017-01-01
We present deep observations at 450 and 850 μm in the Extended Groth Strip field taken with the SCUBA-2 camera mounted on the James Clerk Maxwell Telescope as part of the deep SCUBA-2 Cosmology Legacy Survey (S2CLS), achieving a central instrumental depth of σ450 = 1.2 mJy beam-1 and σ850 = 0.2 mJy beam-1. We detect 57 sources at 450 μm and 90 at 850 μm with signal-to-noise ratio >3.5 over ˜70 arcmin2. From these detections, we derive the number counts at flux densities S450 > 4.0 mJy and S850 > 0.9 mJy, which represent the deepest number counts at these wavelengths derived using directly extracted sources from only blank-field observations with a single-dish telescope. Our measurements smoothly connect the gap between previous shallower blank-field single-dish observations and deep interferometric ALMA results. We estimate the contribution of our SCUBA-2 detected galaxies to the cosmic infrared background (CIB), as well as the contribution of 24 μm-selected galaxies through a stacking technique, which add a total of 0.26 ± 0.03 and 0.07 ± 0.01 MJy sr-1, at 450 and 850 μm, respectively. These surface brightnesses correspond to 60 ± 20 and 50 ± 20 per cent of the total CIB measurements, where the errors are dominated by those of the total CIB. Using the photometric redshifts of the 24 μm-selected sample and the redshift distributions of the submillimetre galaxies, we find that the redshift distribution of the recovered CIB is different at each wavelength, with a peak at z ˜ 1 for 450 μm and at z ˜ 2 for 850 μm, consistent with previous observations and theoretical models.
Chapter 11: Web-based Tools - VO Region Inventory Service
NASA Astrophysics Data System (ADS)
Good, J. C.
As the size and number of datasets available through the VO grows, it becomes increasingly critical to have services that aid in locating and characterizing data pertinent to a particular scientific problem. At the same time, this same increase makes that goal more and more difficult to achieve. With a small number of datasets, it is feasible to simply retrieve the data itself (as the NVO DataScope service does). At intermediate scales, "count" DBMS searches (searches of the actual datasets which return record counts rather than full data subsets) sent to each data provider will work. However, neither of these approaches scales as the number of datasets expands into the hundreds or thousands. Dealing with the same problem internally, IRSA developed a compact and extremely fast scheme for determining source counts for positional catalogs (and in some cases image metadata) over arbitrarily large regions for multiple catalogs in a fraction of a second. To show applicability to the VO in general, this service has been extended with indices for all 4000+ catalogs in CDS Vizier (essentially all published catalogs and source tables). In this chapter, we briefly describe the architecture of this service, and then describe how it can be used in a distributed system to retrieve rapid inventories of all VO holdings in a way that places an insignificant load on any data supplier. Further, we show how this tool can be used in conjunction with VO Registries and catalog services to zero in on those datasets that are appropriate to the user's needs. The initial implementation of this service consolidates custom binary index file structures (external to any DBMS and therefore portable) at a single site to minimize search times and implements the search interface as a simple CGI program. However, the architecture is amenable to distribution. The next phase of development will focus on metadata harvesting from data archives through a standard program interface and on distribution of the search processing across multiple service providers for redundancy and parallelization.
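As a toy illustration of this kind of pre-computed count index (purely hypothetical: it uses a flat RA/Dec grid in NumPy rather than IRSA's actual binary index format, and the 0.25-degree cell size is an assumption), positions are binned once and region counts are then answered from the grid without touching the catalogue again:

```python
import numpy as np

N_RA, N_DEC = 1440, 720                      # 0.25-degree cells (assumed resolution)

def build_index(ra_deg, dec_deg):
    """Bin catalogue positions once into a coarse 2-D count grid."""
    counts, _, _ = np.histogram2d(ra_deg, dec_deg, bins=[N_RA, N_DEC],
                                  range=[[0.0, 360.0], [-90.0, 90.0]])
    return counts.astype(np.int64)

def count_in_box(index, ra_min, ra_max, dec_min, dec_max):
    """Approximate source count in a cell-aligned RA/Dec box, using the index alone."""
    i0 = int(ra_min / 360.0 * N_RA); i1 = int(np.ceil(ra_max / 360.0 * N_RA))
    j0 = int((dec_min + 90.0) / 180.0 * N_DEC); j1 = int(np.ceil((dec_max + 90.0) / 180.0 * N_DEC))
    return int(index[i0:i1, j0:j1].sum())

# Usage with a fake catalogue of 100,000 randomly placed sources.
rng = np.random.default_rng(1)
index = build_index(rng.uniform(0, 360, 100_000), rng.uniform(-90, 90, 100_000))
print(count_in_box(index, 150.0, 160.0, -5.0, 5.0))
```

The design point is that the query cost depends only on the grid resolution, not on the number of catalogue rows, which is what makes sub-second inventories over thousands of catalogues plausible.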
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass ( 240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste however may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory—which will be indicated as Time Interval Analysis (TIA)—is complementary to Time Correlation Analysis (TCA) theories which were developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
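For readers unfamiliar with Rossi-alpha distributions, the sketch below builds a one-dimensional Rossi-alpha histogram from a list of pulse time stamps: every pulse acts as a trigger, and the time differences to all later pulses inside a fixed window are histogrammed. It is a generic illustration only (the window length, bin count, and the synthetic Poisson pulse train are assumptions, and it does not reproduce the paper's two-dimensional triplet-triggered analysis or its factorial-moment expressions):

```python
import numpy as np

def rossi_alpha(times, window=512e-6, n_bins=128):
    """One-dimensional Rossi-alpha histogram of time differences after each trigger pulse."""
    times = np.sort(np.asarray(times))
    edges = np.linspace(0.0, window, n_bins + 1)
    hist = np.zeros(n_bins)
    for i, t0 in enumerate(times):
        j = np.searchsorted(times, t0 + window, side="right")
        hist += np.histogram(times[i + 1:j] - t0, bins=edges)[0]
    return edges, hist

# Usage with a synthetic uncorrelated (Poisson) pulse train at roughly 10 kcps.
rng = np.random.default_rng(0)
train = np.cumsum(rng.exponential(1e-4, size=20_000))
edges, hist = rossi_alpha(train)
print(hist[:5])
```

For a purely random source the histogram is flat (accidentals only); correlated fission chains add a decaying excess at short time differences, which is the feature that coincidence counting exploits.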
Weichenthal, Scott; Dufresne, André; Infante-Rivard, Claire; Joseph, Lawrence
2008-03-01
School classrooms are potentially important micro-environments for childhood exposures owing to the large amount of time children spend in these locations. While a number of airborne contaminants may be present in schools, to date few studies have examined ultrafine particle (0.02-1 microm) (UFP) levels in classrooms. In this study, our objective was to characterize UFP counts (cm(-3)) in classrooms during the winter months and to develop a model to predict such exposures based on ambient weather conditions and outdoor UFPs, as well as classroom characteristics such as size, temperature, relative humidity, and carbon dioxide levels. In total, UFP count data were collected on 60 occasions in 37 occupied classrooms at one elementary school and one secondary school in Pembroke, Ontario. On average, outdoor UFP levels exceeded indoor measures by 8989 cm(-3) (95% confidence interval (CI): 6382, 11596), and classroom UFP counts were similar at both schools with a combined average of 5017 cm(-3) (95% CI: 4300, 5734). Of the variables examined only wind speed and outdoor UFPs were important determinants of classrooms UFP levels. Specifically, each 10 km/h increase in wind speed corresponded to an 1873 cm(-3) (95% CI: 825, 2920) decrease in classroom UFP counts, and each 10000 cm(-3) increase in outdoor UFPs corresponded to a 1550 cm(-3) (95% CI: 930, 2171) increase in classroom UFP levels. However, high correlations between these two predictors meant that the independent effects of wind speed and outdoor UFPs could not be separated in multivariable models, and only outdoor UFP counts were included in the final predictive model. To evaluate model performance, classroom UFP counts were collected for 8 days at two new schools and compared to predicted values based on outdoor UFP measures. A moderate correlation was observed between measured and predicted classroom UFP counts (r=0.63) for both schools combined, but this relationship was not valid on days in which a strong indoor UFP source (electric kitchen stove) was active in schools. In general, our findings suggest that reasonable estimates of classroom UFP counts may be obtained from outdoor UFP data but that the accuracy of such estimates are limited in the presence of indoor UFP sources.
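The final predictive model described above is essentially a one-predictor linear fit of classroom counts on outdoor counts. A minimal sketch of that form is shown below; the numbers are invented for the example and the fitted coefficients are not those reported in the study:

```python
import numpy as np

# Hypothetical paired measurements (counts per cm^3), for illustration only.
outdoor = np.array([20000.0, 35000.0, 12000.0, 50000.0, 28000.0])
indoor = np.array([4200.0, 6500.0, 3100.0, 8800.0, 5400.0])

slope, intercept = np.polyfit(outdoor, indoor, 1)   # least-squares straight line
predicted = slope * outdoor + intercept
r = np.corrcoef(indoor, predicted)[0, 1]
print(f"indoor ~ {intercept:.0f} + {slope:.3f} * outdoor   (r = {r:.2f})")
```

As the abstract notes, such a fit is only trustworthy when no strong indoor source is active, since the model carries no term for indoor emissions.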
Graczyk, Thaddeus K; Sunderland, Deirdre; Awantang, Grace N; Mashinski, Yessika; Lucy, Frances E; Graczyk, Zofi; Chomicz, Lidia; Breysse, Patrick N
2010-04-01
During summer months, samples of marine beach water were tested weekly for human waterborne pathogens in association with high and low bather numbers during weekends and weekdays, respectively. The numbers of bathers on weekends were significantly higher than on weekdays (P < 0.001), and this was associated with a significant (P < 0.04) increase in water turbidity. The proportion of water samples containing Cryptosporidium parvum, Giardia duodenalis, and Enterocytozoon bieneusi was significantly higher (P < 0.03) on weekends than on weekdays, and significantly (P < 0.01) correlated with enterococci counts. The concentration of all three waterborne pathogens was significantly correlated with bather density (P < 0.01). The study demonstrated that: (a) human pathogens were present in beach water on days deemed acceptable for bathing according to fecal bacterial standards; (b) enterococci count was a good indicator for the presence of Cryptosporidium, Giardia, and microsporidian spores in recreational marine beach water; (c) water should be tested for enterococci during times when bather numbers are high; (d) re-suspension of bottom sediments by bathers caused elevated levels of enterococci and waterborne parasites, thus bathers themselves can create a non-point source for water contamination; and (e) exposure to recreational bathing waters can play a role in epidemiology of microsporidiosis. In order to protect public health, it is recommended to: (a) prevent diapered children from entering beach water; (b) introduce bather number limits to recreational areas; (c) advise people with gastroenteritis to avoid bathing; and (d) use showers prior to and after bathing.
Bacteriological quality of drinking water from source to household in Ibadan, Nigeria.
Oloruntoba, E O; Sridhar, M K C
2007-06-01
The bacteriological quality of drinking water from well, spring, borehole, and tap sources and that stored in containers by urban households in Ibadan was assessed during wet and dry seasons. The MPN technique was used to detect and enumerate coliforms in water samples. Results showed that the majority of households relied on wells, which were found to be the most contaminated of all the sources. At the household level, water quality deteriorated significantly after collection and storage as a result of poor handling. Furthermore, there was significant seasonal variation in E. coli count at the source (P=0.013) and household (P=0.001) levels. The study concludes that there is a need to improve the microbial quality of drinking water at the source and household levels through hygiene education and the provision of simple, acceptable, low-cost treatment methods.
Kassa, Lea; Young, Sera L.; Travis, Alexander J.
2018-01-01
Objective: To investigate the association between livestock ownership and dietary diversity, animal-source food consumption, height-for-age z-score, and stunting among children living in wildlife "buffer zones" of Zambia's Luangwa Valley using a novel livestock typology approach. Methods: We conducted a cross-sectional study of 838 children aged 6-36 months. Households were categorized into typologies based on the types and numbers of animals owned, ranging from no livestock to large numbers of mixed livestock. We used multilevel mixed-effects linear and logistic regression to examine the association between livestock typologies and four nutrition-related outcomes of interest. Results were compared with analyses using more common binary and count measures of livestock ownership. Results: No measure of livestock ownership was significantly associated with children's odds of animal-source food consumption, child height-for-age z-score, or stunting odds. Livestock ownership Type 2 (having a small number of poultry) was surprisingly associated with decreased child dietary diversity (β = -0.477; p<0.01) relative to owning no livestock. Similarly, in comparison models, chicken ownership was negatively associated with dietary diversity (β = -0.320; p<0.01), but increasing numbers of chickens were positively associated with dietary diversity (β = 0.022; p<0.01). Notably, neither child dietary diversity nor animal-source food consumption was significantly associated with height, perhaps due to unusually high prevalences of morbidities. Conclusions: Our novel typologies methodology allowed for an efficient and a more in-depth examination of the differential impact of livestock ownership patterns compared to typical binary or count measures of livestock ownership. We found that these patterns were not positively associated with child nutrition outcomes in this context. Development and conservation programs focusing on livestock must carefully consider the complex, context-specific relationship between livestock ownership and nutrition outcomes, including how livestock are utilized by the target population, when attempting to use livestock as a means of improving child nutrition. PMID:29408920
Dumas, Sarah E; Kassa, Lea; Young, Sera L; Travis, Alexander J
2018-01-01
To investigate the association between livestock ownership and dietary diversity, animal-source food consumption, height-for-age z-score, and stunting among children living in wildlife "buffer zones" of Zambia's Luangwa Valley using a novel livestock typology approach. We conducted a cross-sectional study of 838 children aged 6-36 months. Households were categorized into typologies based on the types and numbers of animals owned, ranging from no livestock to large numbers of mixed livestock. We used multilevel mixed-effects linear and logistic regression to examine the association between livestock typologies and four nutrition-related outcomes of interest. Results were compared with analyses using more common binary and count measures of livestock ownership. No measure of livestock ownership was significantly associated with children's odds of animal-source food consumption, child height-for-age z-score, or stunting odds. Livestock ownership Type 2 (having a small number of poultry) was surprisingly associated with decreased child dietary diversity (β = -0.477; p<0.01) relative to owning no livestock. Similarly, in comparison models, chicken ownership was negatively associated with dietary diversity (β = -0.320; p<0.01), but increasing numbers of chickens were positively associated with dietary diversity (β = 0.022; p<0.01). Notably, neither child dietary diversity nor animal-source food consumption was significantly associated with height, perhaps due to unusually high prevalences of morbidities. Our novel typologies methodology allowed for an efficient and a more in-depth examination of the differential impact of livestock ownership patterns compared to typical binary or count measures of livestock ownership. We found that these patterns were not positively associated with child nutrition outcomes in this context. Development and conservation programs focusing on livestock must carefully consider the complex, context-specific relationship between livestock ownership and nutrition outcomes-including how livestock are utilized by the target population-when attempting to use livestock as a means of improving child nutrition.
Ananias, Karla Rubia; de Melo, Adriane Alexandre Machado; de Moura, Celso José
2013-01-01
The development of mold of environmental origin in honey affects its quality and leads to its deterioration, so yeasts and molds counts have been used as an important indicator of hygiene levels during its processing, transportation and storage. The aim of this study was to evaluate the levels of yeasts and molds contamination and their correlation with moisture and acidity levels in Apis mellifera L. honey from central Brazil. In 20% of the samples, the yeasts and molds counts exceeded the limit established by legislation for the marketing of honey in the MERCOSUR, while 42.8% and 5.7% presented above-standard acidity and moisture levels, respectively. Although samples showed yeasts and molds counts over 1.0 × 10² UFC.g⁻¹, there was no correlation between moisture content and the number of microorganisms, since, in part of the samples with above-standard counts, the moisture level was below 20%. In some samples the acidity level was higher than that established by legislation, but only one sample presented a yeasts and molds count above the limit established by MERCOSUR, which would suggest the influence of the floral source on this parameter. In general, of the 35 samples analyzed, the quality was considered inadequate in 45.7% of cases. PMID:24516434
Mapping the acquisition of the number word sequence in the first year of school
NASA Astrophysics Data System (ADS)
Gould, Peter
2017-03-01
Learning to count and to produce the correct sequence of number words in English is not a simple process. In NSW government schools taking part in Early Action for Success, over 800 students in each of the first 3 years of school were assessed every 5 weeks over the school year to determine the highest correct oral count they could produce. Rather than displaying a steady increase in the accurate sequence of the number words produced, the kindergarten data reported here identified clear, substantial hurdles in the acquisition of the counting sequence. The large-scale, longitudinal data also provided evidence of learning to count through the teens being facilitated by the semi-regular structure of the number words in English. Instead of occurring as hurdles to starting the next counting sequence, number words corresponding to some multiples of ten (10, 20 and 100) acted as if they were rest points. These rest points appear to be artefacts of how the counting sequence is acquired.
Walker, R.S.; Novare, A.J.; Nichols, J.D.
2000-01-01
Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step for any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that p is the same for all times or places being compared (p_i = p for all i). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption that p_i = p, or to incorporate the estimation of p into the study design.
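The key relation here, E[C] = p × N, implies the simple estimator N̂ = C / p̂ once p has been calibrated. The sketch below is illustrative only; the delta-method variance assumes a binomial count and an independently estimated p̂, which is an assumption added for the example, not part of the original text:

```python
def abundance_estimate(count, p_hat, var_p_hat=0.0):
    """Estimate abundance as count / p_hat, with a crude delta-method standard error."""
    n_hat = count / p_hat
    # Binomial variation in the count plus uncertainty in the estimated detection probability.
    var_n = count * (1.0 - p_hat) / p_hat**2 + (count**2 / p_hat**4) * var_p_hat
    return n_hat, var_n**0.5

print(abundance_estimate(count=42, p_hat=0.3, var_p_hat=0.01**2))   # hypothetical numbers
```

The point of the abstract is precisely that comparisons which skip this step implicitly assume p is identical across times or places, which is rarely true.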
Tanaka, Junko; Tanaka, Masahiro
2010-01-01
The purpose of this study was to investigate the relationship between the number of missing teeth (MT) and the status of oral environmental factors (the stimulated salivary flow rate, buffering capacity, and the counts of mutans streptococci, lactobacilli, and Candida) in the elderly. The subjects were 64 elderly subjects with fixed prostheses and 49 who wore removable partial dentures, all aged over 65 years. One-way ANOVA was used to test for overall differences in the number of MT across the five oral environmental factors. Significant differences were observed in the lactobacilli counts for different numbers of MT. The number of MT increased with an increase in the lactobacilli counts in the removable-denture group. In conclusion, for patients wearing removable dentures, an increasing number of MT was associated with an increase in the lactobacilli counts in saliva. For patients with crowns and fixed partial dentures, the number of MT was not significantly affected by salivary mutans streptococci, lactobacilli, and Candida counts.
Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach which involves using a set of 252Cf sources of wide emission rate, it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather, varies systematically with gate width.
Estimating the effective system dead time parameter for correlated neutron counting
NASA Astrophysics Data System (ADS)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.
2017-11-01
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach which involves using a set of 252Cf sources of wide emission rate, it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather, varies systematically with gate width.
Point counts from clustered populations: Lessons from an experiment with Hawaiian crows
Hayward, G.D.; Kepler, C.B.; Scott, J.M.
1991-01-01
We designed an experiment to identify factors contributing most to error in counts of Hawaiian Crow or Alala (Corvus hawaiiensis) groups that are detected aurally. Seven observers failed to detect calling Alala on 197 of 361 3-min point counts on four transects extending from cages with captive Alala. A detection curve describing the relation between frequency of flock detection and distance typified the distribution expected in transect or point counts. Failure to detect calling Alala was affected most by distance, observer, and Alala calling frequency. The number of individual Alala calling was not important in detection rate. Estimates of the number of Alala calling (flock size) were biased and imprecise: average difference between number of Alala calling and number heard was 3.24 (± 0.277). Distance, observer, number of Alala calling, and Alala calling frequency all contributed to errors in estimates of group size (P < 0.0001). Multiple regression suggested that number of Alala calling contributed most to errors. These results suggest that well-designed point counts may be used to estimate the number of Alala flocks but cast doubt on attempts to estimate flock size when individuals are counted aurally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
High-security communication by coherence modulation at the photon-counting level.
Rhodes, William T; Boughanmi, Abdellatif; Moreno, Yezid Torres
2016-05-20
We show that key-specified interferometer path-length difference modulation (often referred to as coherence modulation), operating in the photon-counting regime with a broadband source, can provide a quantifiably high level of physics-guaranteed security for binary signal transmission. Each signal bit is associated with many photocounts, perhaps numbering in the thousands. Of great importance, the presence of an eavesdropper can be quickly detected. We first review the operation of key-specified coherence modulation at high light levels, illustrating by means of an example its lack of security against attack. We then show, using the same example, that, through the reduction of light intensities to photon-counting levels, a high level of security can be attained. A particular attack on the system is analyzed to demonstrate the quantifiability of the scheme's security, and various remaining research issues are discussed. A potential weakness of the scheme lies in a possible vulnerability to light amplification by an attacker.
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse-problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is finally made by subjective thresholding. In practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach to source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented by utilizing information theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and hence is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
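One widely used information-theoretic criterion of the kind referred to here is the Wax-Kailath MDL estimator, which picks the model order minimising a likelihood term built from the smallest covariance eigenvalues plus a complexity penalty. The sketch below illustrates that standard criterion, not the authors' own implementation, and the example eigenvalues are invented:

```python
import numpy as np

def mdl_source_count(eigvals, n_snapshots):
    """Wax-Kailath MDL: choose k minimising -N(p-k)*log(geo_mean/arith_mean of the
    (p-k) smallest eigenvalues) + 0.5*k*(2p-k)*log(N)."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p, N = lam.size, n_snapshots
    scores = []
    for k in range(p):
        tail = lam[k:]
        geo = np.exp(np.mean(np.log(tail)))      # geometric mean of noise eigenvalues
        arith = np.mean(tail)                    # arithmetic mean of noise eigenvalues
        scores.append(-N * (p - k) * np.log(geo / arith) + 0.5 * k * (2 * p - k) * np.log(N))
    return int(np.argmin(scores))

# Three strong eigenvalues above an approximately flat noise floor -> expect 3.
print(mdl_source_count([9.0, 7.5, 5.0, 1.02, 0.98, 1.01, 0.99, 1.0], n_snapshots=500))
```

In practice the eigenvalues come from the sample covariance of the sensor-array data, and the estimated signal-subspace dimension is then passed on to localisation stages such as MUSIC or beamforming.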
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
Towards a census of high-redshift dusty galaxies with Herschel. A selection of "500 μm-risers"
NASA Astrophysics Data System (ADS)
Donevski, D.; Buat, V.; Boone, F.; Pappalardo, C.; Bethermin, M.; Schreiber, C.; Mazyed, F.; Alvarez-Marquez, J.; Duivenvoorden, S.
2018-06-01
Context. Over the last decade a large number of dusty star-forming galaxies has been discovered up to redshift z = 2 - 3 and recent studies have attempted to push the highly confused Herschel SPIRE surveys beyond that distance. To search for z ≥ 4 galaxies they often consider the sources with fluxes rising from 250 μm to 500 μm (so-called "500 μm-risers"). Herschel surveys offer a unique opportunity to efficiently select a large number of these rare objects, and thus gain insight into the prodigious star-forming activity that takes place in the very distant Universe. Aims: We aim to implement a novel method to obtain a statistical sample of 500 μm-risers and fully evaluate our selection inspecting different models of galaxy evolution. Methods: We consider one of the largest and deepest Herschel surveys, the Herschel Virgo Cluster Survey. We develop a novel selection algorithm which links the source extraction and spectral energy distribution fitting. To fully quantify selection biases we make end-to-end simulations including clustering and lensing. Results: We select 133 500 μm-risers over 55 deg2, imposing the criteria: S500 > S350 > S250, S250 > 13.2 mJy and S500 > 30 mJy. Differential number counts are in fairly good agreement with models, displaying a better match than other existing samples. The estimated fraction of strongly lensed sources is 24 (+6/-5)% based on models. Conclusions: We present the faintest sample of 500 μm-risers down to S250 = 13.2 mJy. We show that noise and strong lensing have an important impact on measured counts and redshift distribution of selected sources. We estimate the flux-corrected star formation rate density at 4 < z < 5 with the 500 μm-risers and find it to be close to the total value measured in far-infrared. This indicates that colour selection is not a limiting effect to search for the most massive, dusty z > 4 sources.
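The quoted selection reduces to a simple flux and colour cut. A minimal sketch follows (the function name and the column handling are assumptions; fluxes are in mJy):

```python
def is_500um_riser(s250, s350, s500):
    """Apply the selection quoted above: a rising SED plus both flux cuts (fluxes in mJy)."""
    return (s500 > s350 > s250) and (s250 > 13.2) and (s500 > 30.0)

print(is_500um_riser(15.0, 22.0, 35.0))   # True: rising and above both cuts
print(is_500um_riser(40.0, 30.0, 35.0))   # False: SED is not rising from 250 to 500 um
```

The paper's point is that what such a cut actually selects depends strongly on noise and lensing, which is why the end-to-end simulations are needed.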
NASA Astrophysics Data System (ADS)
Steadman, Roger; Herrmann, Christoph; Livne, Amir
2017-08-01
Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to offer the possibility of a large area coverage detector, and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms-noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tile-able on three of its sides. The pixel topology consists of a two stage amplifier (CSA and Shaper) and a number of test features allowing to thoroughly characterize the ASIC without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing to acquire 5 spectrally distinct measurements simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAXI2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.
Radio Sources Toward Galaxy Clusters at 30 GHz
NASA Technical Reports Server (NTRS)
Coble, K.; Bonamente, M.; Carlstrom, J. E.; Dawson, K.; Hasler, N.; Holzapfel, W.; Joy, M.; LaRoque, S.; Marrone, D. P.; Reese, E. D.
2007-01-01
Extra-galactic radio sources are a significant contaminant in cosmic microwave background and Sunyaev-Zeldovich effect experiments. Deep interferometric observations with the BIMA and OVRO arrays are used to characterize the spatial, spectral, and flux distributions of radio sources toward massive galaxy clusters at 28.5 GHz. We compute counts of mJy source fluxes from 89 fields centered on known massive galaxy clusters and 8 non-cluster fields. We find that source counts in the inner regions of the cluster fields (within 0.5 arcmin of the cluster center) are a factor of 8.9 (+4.2 to -3.8) times higher than counts in the outer regions of the cluster fields (radius greater than 0.5 arcmin). Counts in the outer regions of the cluster fields are in turn a factor of 3.3 (+4.1 -1.8) greater than those in the non-cluster fields. Counts in the non-cluster fields are consistent with extrapolations from the results of other surveys. We compute spectral indices of mJy sources in cluster fields between 1.4 and 28.5 GHz and find a mean spectral index of alpha = 0.66 with an rms dispersion of 0.36, where flux S varies as nu^(-alpha). The distribution is skewed, with a median spectral index of 0.72 and 25th and 75th percentiles of 0.51 and 0.92, respectively. This is steeper than the spectral indices of stronger field sources measured by other surveys.
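With the convention S ∝ nu^(-alpha) used above, the two-point spectral index between 1.4 and 28.5 GHz follows directly from the flux ratio. A small sketch (the example fluxes are invented):

```python
import numpy as np

def spectral_index(s_low, s_high, nu_low=1.4, nu_high=28.5):
    """Two-point spectral index alpha with S proportional to nu**(-alpha); frequencies in GHz."""
    return np.log(s_low / s_high) / np.log(nu_high / nu_low)

print(round(spectral_index(s_low=10.0, s_high=1.4), 2))   # ~0.65, a fairly steep spectrum
```

A positive alpha therefore means the source is fainter at 28.5 GHz than at 1.4 GHz, consistent with the steep median index quoted above.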
Increasing X-Ray Brightness of HBL Source 1ES 1727+502
NASA Astrophysics Data System (ADS)
Kapanadze, Bidzina
2017-02-01
The nearby TeV-detected HBL object 1ES 1727+502 (1Zw 187, z=0.055) has been targeted 111 times by the X-ray Telescope (XRT) onboard Swift since 2010 April 2. During this monitoring, the 0.3-10 keV count rate varied by a factor of 17.4 (see http://www.swift.psu.edu/monitoring/source.php?source=QSOB1727+502), and the source showed prolonged X-ray flaring activity during 2015 March - 2016 February, revealed mainly via Target of Opportunity observations performed in the framework of our requests of different urgencies (Request Numbers 6571, 6606, 6717, 6809, 6927, 7322, 7355, 7379, 7390, 7404, 7430, 7441, 7516, 7565; see Kapanadze et al. 2015, ATel #8224, #7342).
Okal, Michael N; Lindh, Jenny M; Torr, Steve J; Masinde, Elizabeth; Orindi, Benedict; Lindsay, Steve W; Fillinger, Ulrike
2015-06-20
Choice egg-count bioassays are a popular tool for analysing oviposition substrate preferences of gravid mosquitoes. This study aimed at improving the design of two-choice experiments for measuring oviposition substrate preferences of the malaria vector Anopheles gambiae sensu lato, a mosquito that lays single eggs. In order to achieve high egg-laying success of female An. gambiae sensu stricto (s.s.) and Anopheles arabiensis mosquitoes in experiments, four factors were evaluated: (1) the time provided for mating; (2) the impact of cage size, mosquito age and female body size on insemination; (3) the peak oviposition time; and, (4) the host sources of blood meal. Choice bioassays, with one mosquito released in each cage containing two oviposition cups both with the same oviposition substrate (100 ml water), were used to measure and adjust for egg-laying characteristics of the species. Based on these characteristics an improved design for the egg-count bioassay is proposed. High oviposition rates [84%, 95% confidence interval (CI) 77-89%] were achieved when 300 male and 300 blood-fed female An. gambiae s.s. were held together in a cage for 4 days. The chances of oviposition dropped (odds ratio 0.30; 95% CI 0.14-0.66) when the human host source of blood meal was substituted with a rabbit, but egg numbers per female were not affected. The number of eggs laid by individual mosquitoes was overdispersed (median = 52 eggs, interquartile range 1-214) and the numbers of eggs laid differed widely between replicates, leading to a highly heterogeneous variance between groups and/or rounds of experiments. Moreover, one-third of mosquitoes laid eggs unequally in the two cups with similar substrates, giving the illusion of choice. Sample size estimations illustrate that it takes 165 individual mosquitoes to power bioassays sufficiently (power = 0.8, p = 0.05) to detect a 15% shift in comparative preferences between two treatments. Two-choice egg-count bioassays with Anopheles are best done with a two-tier design that (1) implements a parallel series of experiments with mosquitoes given a choice of two identical substrates and (2) uses a single mosquito in each test cage, rather than groups of mosquitoes, to assess the preference for a test or control solution. This approach, with sufficient replication, lowers the risk of detecting pseudo-preferences.
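As a rough cross-check of the quoted sample size, a standard two-proportion power calculation gives a similar order of magnitude. This is not the authors' exact procedure (which must also allow for overdispersed egg counts and oviposition failure), and the 50% baseline preference below is an assumption.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed: 50% baseline choice vs 65% under treatment, i.e. a 15% shift.
effect = proportion_effectsize(0.50, 0.65)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                          power=0.80, ratio=1.0)
print(f"~{n_per_arm:.0f} mosquitoes per arm, ~{2 * n_per_arm:.0f} in total")
# Roughly 170 in total, in the same range as the 165 quoted above.
```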
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
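As a generic illustration of automated counting (not the Cell Colony Edge macro or the CellProfiler pipeline themselves), open-source image libraries can count colonies by thresholding, cleaning, and labelling connected components. The file name and size cutoff below are placeholders.

```python
from skimage import filters, io, measure, morphology

img = io.imread("colony_plate.png", as_gray=True)           # placeholder image path
mask = img > filters.threshold_otsu(img)                    # invert (<) if colonies are darker
mask = morphology.remove_small_objects(mask, min_size=50)   # assumed debris cutoff (pixels)
labels = measure.label(mask)

print(f"colony count: {labels.max()}")
for region in measure.regionprops(labels)[:5]:
    print(f"  area={region.area} px, centroid={region.centroid}")
```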
X-ray detection of Nova Del 2013 with Swift
NASA Astrophysics Data System (ADS)
Castro-Tirado, Alberto J.; Martin-Carrillo, Antonio; Hanlon, Lorraine
2013-08-01
Continuous X-ray monitoring by Swift of Nova Del 2013 (see CBET #3628) shows an increase of X-ray emission at the source location compared to previous observations (ATel #5283, ATel #5305) during a 3.9 ksec observation at UT 2013-08-22 12:05. With the XRT instrument operating in windowed timing mode, 744 counts were extracted from a 50-pixel-long source region and 324 counts from a similar box for a background region, resulting in a 13-sigma detection with a net count rate of 0.11±0.008 counts/sec.
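The quoted numbers can be reproduced with simple counting statistics, assuming equal-size source and background regions and Gaussian error propagation (a minimal sketch, not the full XRT analysis):

```python
import math

src, bkg, exposure = 744, 324, 3900.0   # source counts, background counts, seconds
net = src - bkg
rate = net / exposure
rate_err = math.sqrt(src + bkg) / exposure
significance = net / math.sqrt(src + bkg)

print(f"net rate = {rate:.2f} +/- {rate_err:.3f} counts/s")  # ~0.11 +/- 0.008
print(f"detection significance ~ {significance:.0f} sigma")  # ~13 sigma
```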
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
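As a loose illustration of spreading fission-chain counts in time (the patent's analytically computed chain distributions are not reproduced here), the sketch below samples hypothetical chain multiplicities and spreads each chain's detected neutrons over time with an assumed exponential die-away; every distribution and parameter is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_event_times(n_chains=1000, chain_rate=50.0, die_away=1e-4, p_detect=0.1):
    """Sorted detection times (s) from randomly started, hypothetical fission chains."""
    starts = np.cumsum(rng.exponential(1.0 / chain_rate, n_chains))
    times = []
    for t0 in starts:
        multiplicity = rng.geometric(0.4)               # assumed chain-size law
        detected = rng.random(multiplicity) < p_detect  # assumed detection efficiency
        delays = rng.exponential(die_away, multiplicity)
        times.extend(t0 + delays[detected])
    return np.sort(np.array(times))

events = simulate_event_times()
counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], 1e-3))
print(f"{events.size} detected events, mean counts per 1 ms gate = {counts.mean():.3f}")
```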
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
NASA Astrophysics Data System (ADS)
Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger
2018-04-01
An event-counting method using a two-microchannel-plate stack in a low-energy electron point projection microscope is implemented. A detector spatial resolution of 15 μm, i.e., the distance between first-neighbor microchannels, is demonstrated. This leads to a sevenfold improvement in microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.
Dynamics of microorganism populations in recirculating nutrient solutions
NASA Technical Reports Server (NTRS)
Strayer, R. F.
1994-01-01
This overview covers the basic microbial ecology of recirculating hydroponic solutions. Examples from NASA and Soviet CELSS tests and the commercial hydroponic industry will be used. The sources of microorganisms in nutrient solutions include air, water, seeds, plant containers and plumbing, biological vectors, and personnel. Microbial fates include growth, death, and emigration. Important microbial habitats within nutrient delivery systems are root surfaces, hardware surfaces (biofilms), and solution suspension. Numbers of bacteria on root surfaces usually exceed those from the other habitats by several orders of magnitude. Gram negative bacteria dominate the microflora with fungal counts usually much lower. Trends typically show a decrease in counts with increasing time unless stressed plants increase root exudates. Important microbial activities include carbon mineralization and nitrogen transformations. Important detrimental interactions include competition with plants, and human and plant pathogenesis.
High virus-to-cell ratios indicate ongoing production of viruses in deep subsurface sediments.
Engelhardt, Tim; Kallmeyer, Jens; Cypionka, Heribert; Engelen, Bert
2014-07-01
Marine sediments cover two-thirds of our planet and harbor huge numbers of living prokaryotes. Long-term survival of indigenous microorganisms within the deep subsurface is still enigmatic, as sources of organic carbon are vanishingly small. To better understand controlling factors of microbial life, we have analyzed viral abundance within a comprehensive set of globally distributed subsurface sediments. Phages were detected by electron microscopy in deep (320 m below seafloor), ancient (~14 Ma old) and the most oligotrophic subsurface sediments of the world's oceans (South Pacific Gyre (SPG)). The numbers of viruses (10^4-10^9 cm^-3, counted by epifluorescence microscopy) generally decreased with sediment depth, but always exceeded the total cell counts. The enormous numbers of viruses indicate their impact as a controlling factor for prokaryotic mortality in the marine deep biosphere. The virus-to-cell ratios increased in deeper and more oligotrophic layers, exhibiting values of up to 225 in the deep subsurface of the SPG. High numbers of phages might be due to adsorption onto the sediment matrix and diminished degradation by exoenzymes. However, even in the oldest sediments, microbial communities are capable of maintaining viral populations, indicating ongoing viral production; thus, viruses provide an independent indicator for microbial life in the marine deep biosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aima, M; Viscariello, N; Patton, T
Purpose: The aim of this work is to propose a method to optimize radioactive source localization (RSL) for non-palpable breast cancer surgery. RSL is commonly used as a guiding technique during surgery for excision of non-palpable tumors. A collimated hand-held detector is used to localize radioactive sources implanted in tumors. Incisions made by the surgeon are based on maximum observed detector counts, and tumors are subsequently resected based on an arbitrary estimate of the counts expected at the surgical margin boundary. This work focuses on building a framework to predict detector counts expected throughout the procedure to improve surgical margins. Methods: A gamma detection system called the Neoprobe GDS was used for this work. The probe consists of a cesium zinc telluride crystal and a collimator. For this work, an I-125 Best Medical model 2301 source was used. The source was placed in three different phantoms: a PMMA phantom, a Breast (25% glandular tissue/75% adipose tissue) phantom, and a Breast (75-25) phantom, with a backscatter thickness of 6 cm. Counts detected by the probe were recorded with varying amounts of phantom thickness placed on top of the source. A calibration curve was generated using MATLAB based on the counts recorded for the calibration dataset acquired with the PMMA phantom. Results: The observed detector counts used as the validation set were accurately predicted to within ±3.2%, ±6.9%, and ±8.4% for the PMMA, Breast (75-25), and Breast (25-75) phantoms, respectively. The average difference between predicted and observed counts was -0.4%, 2.4%, and 1.4%, with standard deviations of 1.2%, 1.8%, and 3.4%, for the PMMA, Breast (75-25), and Breast (25-75) phantoms, respectively. Conclusion: The results of this work provide a basis for characterization of a detector used for RSL. Counts were predicted to within ±9% for three different phantoms without the application of a density correction factor.
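A minimal sketch of the calibration idea (the study used MATLAB; the thicknesses and counts below are made-up numbers, and a single-exponential attenuation model is an assumption):

```python
import numpy as np

# Hypothetical calibration data: phantom thickness (cm) above the source vs. counts.
thickness = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
counts = np.array([52000, 41000, 26000, 16500, 10400, 6600])

# Fit ln(counts) = ln(C0) - mu_eff * thickness (effective linear attenuation).
slope, intercept = np.polyfit(thickness, np.log(counts), 1)
mu_eff, c0 = -slope, np.exp(intercept)

def predict_counts(t_cm):
    return c0 * np.exp(-mu_eff * t_cm)

print(f"mu_eff = {mu_eff:.3f} 1/cm")
print(f"predicted counts at 2.5 cm: {predict_counts(2.5):.0f}")
```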
Kirton, Laurence G.; Yusoff, Norma-Rashid
2017-01-01
The Rajah Brooke's Birdwing, Trogonoptera brookiana, is a large, iconic butterfly that is facing heavy commercial exploitation and habitat loss. Males of some subspecies exhibit puddling behavior. A method of conservation monitoring was developed for subspecies albescens in Ulu Geroh, Peninsular Malaysia, where the males consistently puddle in single-species aggregations at stable geothermal springs, reaching well over 300 individuals when the population is at its highest. Digital photography was used to conduct counts of numbers of males puddling. The numbers of birdwings puddling were significantly correlated with counts of birdwings in flight, but were much higher. The numbers puddling during the peak hour were correlated with numbers puddling throughout the day and could be predicted using the numbers puddling at an alternative hour, enabling flexibility in the time of counts. Average counts for three images taken at each puddle at three peak hours between 1400–1600 hours over 2–3 days were used as a monthly population index. The numbers puddling were positively associated with higher relative humidity and brightness during monitoring hours. Monthly counts of birdwings from monitoring of puddles over a period of two years are presented. The minimum effort required for a monitoring program using counts of puddling males is discussed, as well as the potential of using the method to monitor other species of puddling butterflies. PMID:29232405
Phon, Chooi-Khim; Kirton, Laurence G; Norma-Rashid, Yusoff
2017-01-01
The Rajah Brooke's Birdwing, Trogonoptera brookiana, is a large, iconic butterfly that is facing heavy commercial exploitation and habitat loss. Males of some subspecies exhibit puddling behavior. A method of conservation monitoring was developed for subspecies albescens in Ulu Geroh, Peninsular Malaysia, where the males consistently puddle in single-species aggregations at stable geothermal springs, reaching well over 300 individuals when the population is at its highest. Digital photography was used to conduct counts of numbers of males puddling. The numbers of birdwings puddling were significantly correlated with counts of birdwings in flight, but were much higher. The numbers puddling during the peak hour were correlated with numbers puddling throughout the day and could be predicted using the numbers puddling at an alternative hour, enabling flexibility in the time of counts. Average counts for three images taken at each puddle at three peak hours between 1400-1600 hours over 2-3 days were used as a monthly population index. The numbers puddling were positively associated with higher relative humidity and brightness during monitoring hours. Monthly counts of birdwings from monitoring of puddles over a period of two years are presented. The minimum effort required for a monitoring program using counts of puddling males is discussed, as well as the potential of using the method to monitor other species of puddling butterflies.
Reliability of the Most-Probable-Number Technique for Enumerating Rhizobia in Tropical Soils †
Woomer, Paul L.; Singleton, Paul W.; Bohlool, B. Ben
1988-01-01
We used six rhizobium-legume systems to test the reliability of the most-probable-number (MPN) technique for enumerating rhizobia introduced into 14 sites representing four soil orders. The range-of-transition values (the number of dilution steps between the first not-entirely-positive and the last not-entirely-negative growth units) were compared for each species and for each soil. The probability that the observed data were significantly different from theoretical values varied with the species. The acceptability of MPN codes (P > 0.99) was the highest (97 to 99%) with Vicia sativa, Trifolium repens, and Glycine max and lowest (72%) with Leucaena leucocephala. Medicago sativa and Macroptilium atropurpureum yielded 87 and 75% acceptable MPN codes, respectively. The acceptability of the MPN data obtained for a host species was related to rooting habit and time to nodulation. Comparison of data for each soil indicated that, despite large differences in characteristics, the soil was not a major source of variability in the MPN counts. There was no significant interaction of the range of transition of rhizobium-legume plant infection count data between species and site. PMID:16347661
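For context, the MPN itself is the maximum-likelihood estimate of cell density given the pattern of positive and negative growth units across the dilution series. A minimal sketch under the usual Poisson assumptions; the unit numbers, volumes, and scores below are illustrative, not data from this study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative dilution series: inoculum (g soil equivalent) per growth unit,
# number of units, and number of positive units at each dilution step.
volume   = np.array([1e-1, 1e-2, 1e-3, 1e-4])
n_units  = np.array([4, 4, 4, 4])
positive = np.array([4, 3, 1, 0])

def neg_log_likelihood(log_lam):
    lam = np.exp(log_lam)                    # rhizobia per g
    p_pos = 1.0 - np.exp(-lam * volume)      # Poisson: P(unit receives >= 1 cell)
    p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
    ll = positive * np.log(p_pos) + (n_units - positive) * np.log(1.0 - p_pos)
    return -ll.sum()

res = minimize_scalar(neg_log_likelihood, bounds=(-5.0, 15.0), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.0f} rhizobia per g")
```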
Using DNA to test the utility of pellet-group counts as an index of deer counts
T. J. Brinkman; D. K. Person; W. Smith; F. Stuart Chapin; K. McCoy; M. Leonawicz; K. Hundertmark
2013-01-01
Despite widespread use of fecal pellet-group counts as an index of ungulate density, techniques used to convert pellet-group numbers to ungulate numbers rarely are based on counts of known individuals, seldom evaluated across spatial and temporal scales, and precision is infrequently quantified. Using DNA from fecal pellets to identify individual deer, we evaluated the...
Code of Federal Regulations, 2011 CFR
2011-01-01
[Table excerpt: factor grades and absolute limits (AL) by number of 50-count samples (1 through 20, and 21 through 40); footnotes state that AL is the absolute limit permitted in an individual 33-count sample and that the sample size is 33-count.]
Code of Federal Regulations, 2012 CFR
2012-01-01
[Table excerpt: factor grades and absolute limits (AL) by number of 50-count samples (1 through 20, and 21 through 40); footnotes state that AL is the absolute limit permitted in an individual 33-count sample and that the sample size is 33-count.]
78 FR 77487 - Renewal of Agency Information Collection for IDEIA Part B and C Child Count
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... Education Improvement Act (IDEIA) Part B and C Child Count authorized by OMB Control Number 1076-0176. This.... Data OMB Control Number: 1076-0176. Title: IDEIA Part B and Part C Child Count. Brief Description of...
Probing the Cosmological Principle in the counts of radio galaxies at different frequencies
NASA Astrophysics Data System (ADS)
Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.
2018-04-01
According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
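For orientation, the expected kinematic dipole amplitude in source counts follows the Ellis & Baldwin (1984) relation D = [2 + x(1 + α)] v/c, where N(>S) ∝ S^(-x) and S ∝ ν^(-α). The sketch below inverts it for the velocity; the slope and spectral-index values are typical assumptions, not the ones fitted in this paper.

```python
C_KM_S = 299792.458

def dipole_amplitude(v_km_s, x=1.0, alpha=0.75):
    """Kinematic dipole D = [2 + x(1 + alpha)] * v/c (Ellis & Baldwin 1984)."""
    return (2.0 + x * (1.0 + alpha)) * v_km_s / C_KM_S

def velocity_from_dipole(d, x=1.0, alpha=0.75):
    return d * C_KM_S / (2.0 + x * (1.0 + alpha))

print(f"expected D for the CMB velocity (~370 km/s): {dipole_amplitude(370.0):.4f}")
print(f"a measured D = 0.02 would imply v ~ {velocity_from_dipole(0.02):.0f} km/s")
```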
Bondi Accretion and the Problem of the Missing Isolated Neutron Stars
NASA Technical Reports Server (NTRS)
Perna, Rosalba; Narayan, Ramesh; Rybicki, George; Stella, Luigi; Treves, Aldo
2003-01-01
A large number of neutron stars (NSs), approximately 10^9, populate the Galaxy, but only a tiny fraction of them is observable during the short radio pulsar lifetime. The majority of these isolated NSs, too cold to be detectable by their own thermal emission, should be visible in X-rays as a result of accretion from the interstellar medium. The ROSAT All-Sky Survey has, however, shown that such accreting isolated NSs are very elusive: only a few tentative candidates have been identified, contrary to theoretical predictions that up to several thousand should be seen. We suggest that the fundamental reason for this discrepancy lies in the use of the standard Bondi formula to estimate the accretion rates. We compute the expected source counts using updated estimates of the pulsar velocity distribution, realistic hydrogen atmosphere spectra, and a modified expression for the Bondi accretion rate, as suggested by recent MHD simulations and supported by direct observations in the case of accretion around supermassive black holes in nearby galaxies and in our own. We find that, whereas the inclusion of atmospheric spectra partly compensates for the reduction in the counts due to the higher mean velocities of the new distribution, the modified Bondi formula dramatically suppresses the source counts. The new predictions are consistent with a null detection at the ROSAT sensitivity.
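To make the velocity dependence explicit, the classical Bondi-Hoyle rate scales as Mdot ≈ 4π(GM)²ρ/(v² + c_s²)^(3/2), so higher pulsar velocities and any additional suppression of the rate both cut the accretion luminosity strongly. A minimal sketch with assumed ISM and neutron-star parameters; the suppression factor is purely illustrative and is not the modified expression used in the paper.

```python
import math

G    = 6.674e-8        # gravitational constant, cgs
M_NS = 1.4 * 1.989e33  # neutron star mass, g
R_NS = 1.0e6           # neutron star radius, cm
M_H  = 1.67e-24        # hydrogen mass, g

def bondi_mdot(n_H, v_kms, cs_kms=10.0):
    """Classical Bondi-Hoyle accretion rate (g/s) for ISM number density n_H (cm^-3)."""
    rho = n_H * M_H
    v2_cgs = (v_kms**2 + cs_kms**2) * 1e10        # (cm/s)^2
    return 4.0 * math.pi * (G * M_NS)**2 * rho / v2_cgs**1.5

def accretion_luminosity(mdot, suppression=1.0):
    """L = G M mdot / R, optionally reduced by an assumed suppression factor."""
    return suppression * G * M_NS * mdot / R_NS

for v in (100.0, 400.0):   # slow vs fast neutron star, km/s
    mdot = bondi_mdot(n_H=1.0, v_kms=v)
    print(f"v={v:4.0f} km/s  Mdot={mdot:.2e} g/s  L={accretion_luminosity(mdot):.2e} erg/s  "
          f"L(x0.01)={accretion_luminosity(mdot, 0.01):.2e} erg/s")
```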
CrossTalk: The Journal of Defense Software Engineering. Volume 24, Number 4, July/August 2011
2011-07-01
Project Management Tool (SSPMT), JASMINE, and ALADDIN, respectively [11, 12]. SSPMT is a web-based Six Sigma project management supporting tool...PSP/TSP data gathered from JASMINE and ALADDIN, SSPMT performs each step of DMAIC and provides analytic results. JASMINE and ALADDIN are web-based...done by using JASMINE. JASMINE collects an individual developer's work product information such as Source Lines of Code (SLOC), fault counts, and
ERIC Educational Resources Information Center
Jara-Ettinger, Julian; Piantadosi, Steve; Spelke, Elizabeth S.; Levy, Roger; Gibson, Edward
2017-01-01
To master the natural number system, children must understand both the concepts that number words capture and the counting procedure by which they are applied. These two types of knowledge develop in childhood, but their connection is poorly understood. Here we explore the relationship between the mastery of counting and the mastery of exact…
Higher order relativistic galaxy number counts: dominating terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Jeppe Trøst; Durrer, Ruth, E-mail: Jeppe.Trost@nbi.dk, E-mail: Ruth.Durrer@unige.ch
2017-03-01
We review the number counts to second order concentrating on the terms which dominate on sub horizon scales. We re-derive the result for these terms and compare it with the different versions found in the literature. We generalize our derivation to higher order terms, especially the third order number counts which are needed to compute the 1-loop contribution to the power spectrum.
High-redshift radio galaxies and divergence from the CMB dipole
NASA Astrophysics Data System (ADS)
Colin, Jacques; Mohayaee, Roya; Rameez, Mohamed; Sarkar, Subir
2017-10-01
Previous studies have found our velocity in the rest frame of radio galaxies at high redshift to be much larger than that inferred from the dipole anisotropy of the cosmic microwave background. We construct a full sky catalogue, NVSUMSS, by merging the NRAO VLA Sky Survey and the Sydney University Molonglo Sky Survey catalogues and removing local sources by various means including cross-correlating with the 2MASS Redshift Survey catalogue. We take into account both aberration and Doppler boost to deduce our velocity from the hemispheric number count asymmetry, as well as via a three-dimensional linear estimator. Both its magnitude and direction depend on cuts made to the catalogue, e.g. on the lowest source flux; however these effects are small. From the hemispheric number count asymmetry we obtain a velocity of 1729 ± 187 km s-1, i.e. about four times larger than that obtained from the cosmic microwave background dipole, but close in direction, towards RA = 149° ± 2°, Dec. = -17° ± 12°. With the three-dimensional estimator, the derived velocity is 1355 ± 174 km s-1 towards RA = 141° ± 11°, Dec. = -9° ± 10°. We assess the statistical significance of these results by comparison with catalogues of random distributions, finding it to be 2.81σ (99.75 per cent confidence).
The degree of bacterial contamination while performing spine surgery.
Ahn, Dong Ki; Park, Hoon Seok; Kim, Tae Woo; Yang, Jong Hwa; Boo, Kyung Hwan; Kim, In Ja; Lee, Hye Jin
2013-03-01
Prospective experimental study. To evaluate bacterial contamination during surgery. The participants in surgery and the ventilation system have been identified as the most significant sources of contamination. Two pairs of air culture plates (blood agar for G(+) bacteria and MacConkey agar for G(-) bacteria) were placed at 3 different locations in a conventional operating room while spine surgeries were performed: in the surgical field, under the airflow of the local air conditioner, and along the pathway to the door. One pair of culture plates was retrieved after one hour and the other pair after 3 hours. The cultured bacteria were identified and the number of colonies was counted. No G(-) bacteria were identified. G(+) bacteria grew on all 90 air culture blood agar plates. The colony count of the one-hour group was 14.5±5.4 in the surgical field, 11.3±6.6 under the local air conditioner, and 13.1±8.7 at the pathway to the door. There was no difference among the 3 locations. The colony count of the 3-hour group was 46.4±19.5, 30.3±12.9, and 39.7±15.2, respectively. It was higher in the surgical field than under the air conditioner (p=0.03). The number of colonies of the one-hour group was 13.0±7.0 and of the 3-hour group was 38.8±17.1. There was a positive correlation between time and the number of colonies (r=0.76, p=0.000). The conventional operating room was contaminated by G(+) bacteria. The degree of contamination was highest in the surgical field. The number of bacteria increased in direct proportion to time.
The Degree of Bacterial Contamination While Performing Spine Surgery
Ahn, Dong Ki; Park, Hoon Seok; Yang, Jong Hwa; Boo, Kyung Hwan; Kim, In Ja; Lee, Hye Jin
2013-01-01
Study Design Prospective experimental study. Purpose To evaluate bacterial contamination during surgery. Overview of Literature The participants in surgery and the ventilation system have been identified as the most significant sources of contamination. Methods Two pairs of air culture plates (blood agar for G(+) bacteria and MacConkey agar for G(-) bacteria) were placed at 3 different locations in a conventional operating room while spine surgeries were performed: in the surgical field, under the airflow of the local air conditioner, and along the pathway to the door. One pair of culture plates was retrieved after one hour and the other pair after 3 hours. The cultured bacteria were identified and the number of colonies was counted. Results No G(-) bacteria were identified. G(+) bacteria grew on all 90 air culture blood agar plates. The colony count of the one-hour group was 14.5±5.4 in the surgical field, 11.3±6.6 under the local air conditioner, and 13.1±8.7 at the pathway to the door. There was no difference among the 3 locations. The colony count of the 3-hour group was 46.4±19.5, 30.3±12.9, and 39.7±15.2, respectively. It was higher in the surgical field than under the air conditioner (p=0.03). The number of colonies of the one-hour group was 13.0±7.0 and of the 3-hour group was 38.8±17.1. There was a positive correlation between time and the number of colonies (r=0.76, p=0.000). Conclusions The conventional operating room was contaminated by G(+) bacteria. The degree of contamination was highest in the surgical field. The number of bacteria increased in direct proportion to time. PMID:23508998
2008-01-01
Objective To determine if citation counts at two years could be predicted for clinical articles that pass basic criteria for critical appraisal using data within three weeks of publication from external sources and an online article rating service. Design Retrospective cohort study. Setting Online rating service, Canada. Participants 1274 articles from 105 journals published from January to June 2005, randomly divided into a 60:40 split to provide derivation and validation datasets. Main outcome measures 20 article and journal features, including ratings of clinical relevance and newsworthiness, routinely collected by the McMaster online rating of evidence system, compared with citation counts at two years. Results The derivation analysis showed that the regression equation accounted for 60% of the variation (R2=0.60, 95% confidence interval 0.538 to 0.629). This model applied to the validation dataset gave a similar prediction (R2=0.56, 0.476 to 0.596, shrinkage 0.04; shrinkage measures how well the derived equation matches data from the validation dataset). Cited articles in the top half and top third were predicted with 83% and 61% sensitivity and 72% and 82% specificity. Higher citations were predicted by indexing in numerous databases; number of authors; abstraction in synoptic journals; clinical relevance scores; number of cited references; and original, multicentred, and therapy articles from journals with a greater proportion of articles abstracted. Conclusion Citation counts can be reliably predicted at two years using data within three weeks of publication. PMID:18292132
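A minimal sketch of the derivation/validation logic on synthetic data (the rating service's actual predictors and model are not reproduced here): fit a regression on a 60:40 split and report R² and shrinkage.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1274, 20))                                  # 20 article/journal features
y = X @ rng.normal(size=20) + rng.normal(scale=2.0, size=1274)   # synthetic citation outcome

X_dev, X_val, y_dev, y_val = train_test_split(X, y, train_size=0.6, random_state=1)
model = LinearRegression().fit(X_dev, y_dev)
r2_dev = r2_score(y_dev, model.predict(X_dev))
r2_val = r2_score(y_val, model.predict(X_val))
print(f"derivation R2 = {r2_dev:.2f}, validation R2 = {r2_val:.2f}, "
      f"shrinkage = {r2_dev - r2_val:.2f}")
```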
Domark, Jeffrey D.; Hatton, John F.; Benison, Roxanne P.; Hildebolt, Charles F.
2014-01-01
Introduction The purpose of this study was to compare digital periapical and cone beam computed tomography (CBCT) images to determine the number of canals in the mesiobuccal root (MB) of maxillary molars and to compare these counts to micro CT (μCT), which was also used to determine canal configuration. Methods Digital periapical (RVG 6100), CBCT (9000 3D) and μCT images (the reference standard) were obtained of 18 hemi-maxillas. With periapical and CBCT images, 2 endodontists independently counted the number of canals in each molar and repeated counts 2 weeks later. Teeth were extracted, scanned with μCT, and 2 additional endodontists, by consensus, determined the number and configuration of canals. The Friedman test was used to test for differences. Results In mesiobuccal roots, 2 canals were present in 100% (13/13) of maxillary first and 57% (8/14) second molars, and 69% (9/13) and 100% (8/8) of these exited as two or more foramina. There was no difference in canal counts for original and repeat reads by the two observers with periapicals (P = 0.06) and with CBCT (P = 0.88) and no difference when CBCT counts were compared with μCT counts (P = 0.52); however, when periapical counts were compared with μCT counts there was a significant difference (P = 0.04). Conclusions For cadaver maxillary molars, μCT canal counts were significantly different from digital periapical radiograph counts but not different from Carestream 9000 3D CBCT counts. PMID:23791260
Prevalence of plagiarism among medical students.
Bilić-Zulle, Lidija; Frković, Vedran; Turk, Tamara; Azman, Josip; Petrovecki, Mladen
2005-02-01
To determine the prevalence of plagiarism among medical students in writing essays. During two academic years, 198 second-year medical students attending a Medical Informatics course wrote an essay on one of four offered articles. Two of the source articles were available in an electronic form and two in printed form. Two (one electronic and one paper article) were considered less complex and the other two more complex. The essays were examined using the plagiarism detection software "WCopyfind," which counted the number of matching phrases with six or more words. Plagiarism rate, expressed as the percentage of the plagiarized text, was calculated as the ratio of the absolute number of matching words to the total number of words in the essay. Only 17 (9%) of students did not plagiarize at all and 68 (34%) plagiarized less than 10% of the text. The average plagiarism rate (% of plagiarized text) was 19% (5th-95th percentile = 0-88%). Students who were strictly warned not to plagiarize had a higher total word count in their essays than students who were not warned (P=0.002), but there was no difference between them in the rate of plagiarism. Students with higher grades in the Medical Informatics exam plagiarized less than those with lower grades (P=0.015). Gender, subject source, and complexity had no influence on the plagiarism rate. Plagiarism in writing essays is common among medical students. An explicit warning is not enough to deter students from plagiarism. Detection software can be used to trace and evaluate the rate of plagiarism in written student essays.
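A minimal sketch of the underlying idea, shared strings of six or more words, rather than WCopyfind's actual matching algorithm (which also handles punctuation, imperfect matches, and phrase merging):

```python
import re

def word_six_grams(text):
    words = re.findall(r"[a-z0-9']+", text.lower())
    return words, {tuple(words[i:i + 6]) for i in range(len(words) - 5)}

def plagiarism_rate(essay, source):
    """Percentage of essay words falling inside a 6-word phrase shared with the source."""
    essay_words, essay_grams = word_six_grams(essay)
    _, source_grams = word_six_grams(source)
    shared = essay_grams & source_grams
    matched = [False] * len(essay_words)
    for i in range(len(essay_words) - 5):
        if tuple(essay_words[i:i + 6]) in shared:
            for j in range(i, i + 6):
                matched[j] = True
    return 100.0 * sum(matched) / max(len(essay_words), 1)

essay = "the cell cycle is regulated by cyclins and kinases in all species"
source = "it is known that the cell cycle is regulated by cyclins and kinases here"
print(f"{plagiarism_rate(essay, source):.0f}% of essay words matched")  # 75%
```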
Fazio, B B
1994-04-01
This study examined the counting abilities of preschool children with specific language impairment compared to language-matched and mental-age-matched peers. In order to determine the nature of the difficulties SLI children exhibited in counting, the subjects participated in a series of oral counting tasks and a series of gestural tasks that used an invented counting system based on pointing to body parts. Despite demonstrating knowledge of many of the rules associated with counting, SLI preschool children displayed marked difficulty in counting objects. On oral counting tasks, they showed difficulty with rote counting, displayed a limited repertoire of number terms, and miscounted sets of objects. However, on gestural counting tasks, SLI children's performance was significantly better. These findings suggest that SLI children have a specific difficulty with the rote sequential aspect of learning number words.
A Case Study in Using Explicit Instruction to Teach Young Children Counting Skills
ERIC Educational Resources Information Center
Hinton, Vanessa; Stroizer, Shaunita; Flores, Margaret
2015-01-01
Number sense is one's ability to understand what numbers mean, perform mental mathematics, and look at the world and make comparisons. Researchers show instruction that teaches children how to classify numbers, put numbers in sequence, conserve numbers effectively, and count builds their number sense skills. Targeted instruction that teaches…
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.
2002-01-01
Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
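A minimal sketch of the core correlation step only, not the CIAO WAVDETECT implementation: correlate a counts image with a Marr ("Mexican Hat") kernel at one scale and flag pixels whose coefficients exceed a threshold. The simple mean-plus-sigma threshold is a placeholder for the probability-based thresholds described above.

```python
import numpy as np
from scipy.signal import fftconvolve

def mexican_hat_kernel(sigma, half_size):
    """2-D Marr (Mexican Hat) wavelet, adjusted to have zero total sum."""
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    r2 = (x**2 + y**2) / (2.0 * sigma**2)
    kernel = (1.0 - r2) * np.exp(-r2)
    return kernel - kernel.mean()             # enforce the vanishing zeroth moment

rng = np.random.default_rng(2)
image = rng.poisson(0.2, size=(128, 128)).astype(float)  # flat Poisson background
image[60:63, 60:63] += rng.poisson(15, size=(3, 3))      # one injected point source

coeffs = fftconvolve(image, mexican_hat_kernel(sigma=2.0, half_size=8), mode="same")
threshold = coeffs.mean() + 5.0 * coeffs.std()            # placeholder threshold
print("candidate source pixels:", np.argwhere(coeffs > threshold)[:5])
```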
Guarner, Jeannette; Atuan, Maria Ana; Nix, Barbara; Mishak, Christopher; Vejjajiva, Connie; Curtis, Cheri; Park, Sunita; Mullins, Richard
2010-01-01
Each institution sets specific parameters obtained by automated hematology analyzers to trigger manual counts. We designed a process to decrease the number of manual differential cell counts without impacting patient care. We selected new criteria that prompt manual counts and studied the impact these changes had in 2 days of work and in samples of patients with newly diagnosed leukemia, sickle cell disease, and presence of left shift. By using fewer parameters and expanding our ranges we decreased the number of manual counts by 20%. The parameters that prompted manual counts most frequently were the presence of blast flags and nucleated red blood cells, 2 parameters that were not changed. The parameters that accounted for a decrease in the number of manual counts were the white blood cell count and large unstained cells. Eight of 32 patients with newly diagnosed leukemia did not show blast flags; however, other parameters triggered manual counts. In 47 patients with sickle cell disease, nucleated red cells and red cell variability prompted manual review. Bands were observed in 18% of the specimens and 4% would not have been counted manually with the new criteria, for the latter the mean band count was 2.6%. The process we followed to evaluate hematological parameters that reflex to manual differential cell counts increased efficiency without compromising patient care in our hospital system.
The microbial quality of drinking water in Manonyane community: Maseru District (Lesotho).
Gwimbi, P
2011-09-01
Provision of good quality household drinking water is an important means of improving public health in rural communities, especially in Africa, and is the rationale behind protecting drinking water sources and promoting healthy practices at and around such sources. To examine the microbial content of drinking water from different types of drinking water sources in the Manonyane community of Lesotho. The community's hygienic practices around the water sources are also assessed to establish their contribution to water quality. Water samples from thirty five water sources comprising 22 springs, 6 open wells, 6 boreholes and 1 open reservoir were assessed. Total coliform and Escherichia coli bacteria were analyzed in the water sampled. Results of the tests were compared with the prescribed World Health Organization desirable limits. A household survey and field observations were conducted to assess the hygienic conditions and practices at and around the water sources. Total coliforms were detected in 97% and Escherichia coli in 71% of the water samples. The concentration levels of total coliform and Escherichia coli were above the permissible limits of the World Health Organization drinking water quality guidelines in each case. Protected sources had a significantly lower number of colony-forming units (CFU) per 100 ml of water than unprotected sources (56% versus 95%, p < 0.05). Similarly, in terms of Escherichia coli, protected sources had lower counts (7% versus 40%, p < 0.05) than unprotected sources. Hygiene conditions and practices that appeared to contribute to increased total coliform and Escherichia coli counts included lack of protection of water sources from livestock faeces, laundry practices, and water sources being down slope of pit latrines in some cases. These findings suggest source water protection and good hygiene practices can improve the quality of household drinking water where disinfection is not available. The results also suggest important lines of inquiry and provide support and input for environmental and public health programmes, particularly those related to water and sanitation.
Development of a nematode offspring counting assay for rapid and simple soil toxicity assessment.
Kim, Shin Woong; Moon, Jongmin; Jeong, Seung-Woo; An, Youn-Joo
2018-05-01
Since the introduction of standardized nematode toxicity assays by the American Society for Testing and Materials (ASTM) and International Organization for Standardization (ISO), many studies have reported their use. Given that the currently used standardized nematode toxicity assays have certain limitations, in this study, we examined the use of a novel nematode offspring counting assay for evaluating soil ecotoxicity based on a previous soil-agar isolation method used to recover live adult nematodes. In this new assay, adult Caenorhabditis elegans were exposed to soil using a standardized toxicity assay procedure, and the resulting offspring in test soils attracted by a microbial food source in agar plates were counted. This method differs from previously used assays in terms of its endpoint, namely, the number of nematode offspring. The applicability of the bioassay was demonstrated using metal-spiked soils, which revealed metal concentration-dependent responses, and with 36 field soil samples characterized by different physicochemical properties and containing various metals. Principal component analysis revealed that texture fraction (clay, sand, and silt) and electrical conductivity values were the main factors influencing the nematode offspring counting assay, and these findings warrant further investigation. The nematode offspring counting assay is a rapid and simple process that can provide multi-directional toxicity assessment when used in conjunction with other standard methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kurosaki, Hiromu; Mueller, Rebecca J.; Lambert, Susan B.; ...
2016-07-15
An alternate method of preparing actinide alpha counting sources was developed in place of electrodeposition or lanthanide fluoride micro-precipitation. The method uses lanthanide hydroxide micro-precipitation to avoid the use of hazardous hydrofluoric acid. Lastly, it provides a quicker, simpler, and safer way of preparing actinide alpha counting sources in routine, production-type laboratories that process many samples daily.
State traffic volume systems council estimation process.
DOT National Transportation Integrated Search
2004-10-01
The Kentucky Transportation Cabinet has an immense traffic data collection program that is an essential source for many other programs. The Division of Planning processes traffic volume counts annually. These counts are maintained in the Counts Datab...
MODEL-FREE MULTI-PROBE LENSING RECONSTRUCTION OF CLUSTER MASS PROFILES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umetsu, Keiichi
2013-05-20
Lens magnification by galaxy clusters induces characteristic spatial variations in the number counts of background sources, amplifying their observed fluxes and expanding the area of sky, the net effect of which, known as magnification bias, depends on the intrinsic faint-end slope of the source luminosity function. The bias is strongly negative for red galaxies, dominated by the geometric area distortion, whereas it is mildly positive for blue galaxies, enhancing the blue counts toward the cluster center. We generalize the Bayesian approach of Umetsu et al. for reconstructing projected cluster mass profiles, by incorporating multiple populations of background sources for magnification-bias measurements and combining them with complementary lens-distortion measurements, effectively breaking the mass-sheet degeneracy and improving the statistical precision of cluster mass measurements. The approach can be further extended to include strong-lensing projected mass estimates, thus allowing for non-parametric absolute mass determinations in both the weak and strong regimes. We apply this method to our recent CLASH lensing measurements of MACS J1206.2-0847, and demonstrate how combining multi-probe lensing constraints can improve the reconstruction of cluster mass profiles. This method will also be useful for a stacked lensing analysis, combining all lensing-related effects in the cluster regime, for a definitive determination of the averaged mass profile.
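The sign of the bias follows from the power-law form of the cumulative counts: for N(>S) ∝ S^(-α), a magnification μ changes the observed counts by the factor μ^(α-1), below unity for flat (red-galaxy-like) slopes and above unity for steep (blue-galaxy-like) slopes. A minimal sketch; the slope values are assumed for illustration.

```python
def count_ratio(mu, alpha):
    """Lensed/unlensed cumulative counts for N(>S) ~ S**(-alpha): mu**(alpha - 1)."""
    return mu ** (alpha - 1.0)

for label, alpha in [("flat (red-like) slope", 0.5), ("steep (blue-like) slope", 2.0)]:
    for mu in (1.5, 3.0, 10.0):
        print(f"{label:24s} mu={mu:4.1f}  N_lensed/N_0 = {count_ratio(mu, alpha):.2f}")
```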
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Traffic monitoring on roads requires counting the number of vehicles passing, which is particularly emphasized in highway transportation management and prevention efforts. It is therefore necessary to develop a system that can count the number of vehicles automatically, and video processing methods make this possible. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on grayscale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy was in the evening, at 21.43%. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
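A minimal sketch of a background-subtraction-plus-morphology pipeline of the kind described above, using OpenCV rather than the authors' own implementation; the video path, blob-area cutoff, and subtractor settings are placeholders, and a real counter would also need blob tracking across frames to avoid double counting.

```python
import cv2

cap = cv2.VideoCapture("toll_road.mp4")   # placeholder video path
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
blobs_per_frame = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = subtractor.apply(gray)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)       # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs_per_frame.append(sum(1 for c in contours if cv2.contourArea(c) > 800))

cap.release()
print("peak simultaneous vehicle-sized blobs:", max(blobs_per_frame, default=0))
```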
Sources and magnitude of sampling error in redd counts for bull trout
Jason B. Dunham; Bruce Rieman
2001-01-01
Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...
Number of discernible object colors is a conundrum.
Masaoka, Kenichiro; Berns, Roy S; Fairchild, Mark D; Moghareh Abed, Farhad
2013-02-01
Widely varying estimates of the number of discernible object colors have been made by using various methods over the past 100 years. To clarify the source of the discrepancies in the previous, inconsistent estimates, the number of discernible object colors is estimated over a wide range of color temperatures and illuminance levels using several chromatic adaptation models, color spaces, and color difference limens. Efficient and accurate models are used to compute optimal-color solids and count the number of discernible colors. A comprehensive simulation reveals limitations in the ability of current color appearance models to estimate the number of discernible colors even if the color solid is smaller than the optimal-color solid. The estimates depend on the color appearance model, color space, and color difference limen used. The fundamental problem lies in the von Kries-type chromatic adaptation transforms, which have an unknown effect on the ranking of the number of discernible colors at different color temperatures.
Grainsize evolution and differential comminution in an experimental regolith
NASA Technical Reports Server (NTRS)
Horz, F.; Cintala, M.; See, T.
1984-01-01
The comminution of planetary surfaces by exposure to continuous meteorite bombardment was simulated by impacting the same fragmental gabbro target 200 times. The role of comminution and in situ gardening of planetary regoliths was addressed. Mean grain size continuously decreased with increasing shot number. Initially it decreased linearly with accumulated energy, but at some stage comminution efficiency started to decrease gradually. Point counting techniques, aided by the electron microprobe for mineral identification, were performed on a number of comminution products. Bulk chemical analyses of specific grain size fractions were also carried out. The finest sizes (<10 microns) display generally the strongest enrichment/depletion factors. Similar, if not exactly identical, trends are reported from lunar soils. It is, therefore, not necessarily correct to explain the chemical characteristics of various grain sizes via different admixtures of materials from distant source terrains. Differential comminution of local source rocks may be the dominating factor.
Rogers, Wendy A; Trey, Torsten; Fiatarone Singh, Maria; Bridgett, Madeleine; Bramstedt, Katrina A; Lavee, Jacob
2016-08-01
This response refutes the claim made in a recent article that organs for transplantation in China will no longer be sourced from executed prisoners. We identify ongoing ethical problems due to the lack of transparent data on current numbers of transplants in China; implausible and conflicting claims about voluntary donations; and obfuscation about who counts as a voluntary donor. The big unanswered question in Chinese transplant ethics is the source of organs, and until there is an open and independently audited system in China, legitimate concerns remain about organ harvesting from prisoners of conscience. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, which probably resulted because we incorporated parameter uncertainty in estimated source contributions that has been ignored in the previous studies into the estimation of health effects parameters. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
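The core decomposition behind multivariate receptor modelling writes the ambient data matrix as (daily source contributions) × (source profiles). As a non-Bayesian stand-in for the paper's method (which additionally treats the number of sources, identifiability conditions, and parameter uncertainty), a plain non-negative matrix factorization illustrates the idea on synthetic data:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_days, n_species, n_sources = 365, 15, 3

true_profiles = rng.dirichlet(np.ones(n_species), size=n_sources)      # sources x species
true_contrib = rng.gamma(2.0, 1.0, size=(n_days, n_sources))           # days x sources
ambient = true_contrib @ true_profiles + rng.normal(0.0, 0.01, (n_days, n_species))
ambient = np.clip(ambient, 0.0, None)                                  # concentrations >= 0

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500, random_state=0)
contributions = model.fit_transform(ambient)   # estimated daily source contributions
profiles = model.components_                   # estimated source profiles
print(f"reconstruction error: {model.reconstruction_err_:.3f}")
```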
Taylor, Nicholas J.; Thomas, Nancy E.; Anton-Culver, Hoda; Armstrong, Bruce K.; Begg, Colin B.; Busam, Klaus J.; Cust, Anne E.; Dwyer, Terence; From, Lynn; Gallagher, Richard P.; Gruber, Stephen B.; Nishri, Diane E.; Orlow, Irene; Rosso, Stefano; Venn, Alison J.; Zanetti, Roberto; Berwick, Marianne; Kanetsky, Peter A.
2016-01-01
Although nevus count is an established risk factor for melanoma, relationships between nevus number and patient and tumor characteristics have not been well studied and the influence of nevus count on melanoma-specific survival is equivocal. Using data from the Genes, Environment, and Melanoma (GEM) study, a large population-based study of primary cutaneous melanoma, we evaluated associations between number of nevi and patient features, including sun-sensitivity summarized in a phenotypic index, and tumor characteristics, and we assessed the association of nevus count with melanoma-specific survival. Higher nevus counts were independently and positively associated with male gender and younger age at diagnosis and inversely associated with lentigo maligna histology. We observed a borderline significant trend of poorer melanoma-specific survival with increasing quartile of nevus count, but little or no association between number of nevi and pigmentary phenotypic characteristics or prognostic tumor features. PMID:27101944
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the Poisson distribution parameters obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution that were estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies involving a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
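A minimal sketch of a likelihood-ratio (G-statistic) goodness-of-fit check of the kind described above, on made-up replicate counts rather than the paper's data; the chi-square reference distribution and the neglect of unobserved count categories are simplifying assumptions.

```python
import numpy as np
from scipy.stats import chi2, poisson

# Hypothetical replicate cell counts from a 10-fold dilution targeting a mean of ~1.
counts = np.array([0, 1, 1, 2, 0, 1, 3, 0, 1, 2, 1, 0, 2, 1, 1, 0, 1, 2, 0, 1])

lam = counts.mean()                                  # MLE of the Poisson parameter
values, observed = np.unique(counts, return_counts=True)
expected = poisson.pmf(values, lam) * counts.size    # expected frequency per category

# G statistic vs. the saturated model; df = categories - 1 - 1 fitted parameter.
g = 2.0 * np.sum(observed * np.log(observed / expected))
df = len(values) - 2
print(f"lambda_hat = {lam:.2f}, G = {g:.2f}, df = {df}, p = {chi2.sf(g, df):.2f}")
```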
Unveiling high redshift structures with Planck
NASA Astrophysics Data System (ADS)
Welikala, Niraj
2012-07-01
The Planck satellite, with its large wavelength coverage and all-sky survey, has a unique potential of systematically detecting the brightest and rarest submillimetre sources on the sky. We present an original method based on a combination of Planck and IRAS data which we use to select the most luminous submillimetre high-redshift (z>1-2) cold sources over the sky. The majority of these sources are either individual, strongly lensed galaxies, or represent the combined emission of several submillimetre galaxies within the large beam of Planck. The latter includes, in particular, rapidly growing galaxy groups and clusters. We demonstrate our selection method on the first 5 confirmations that include a newly discovered over-density of 5 submillimetre-bright sources which has been confirmed with Herschel/SPIRE observations and followed up with ground-based observations including VLT/XSHOOTER spectroscopy. Using Planck, we also unveil the nature of 107 high-redshift dusty, lensed submillimetre galaxies that have been previously observed over 940 square degrees by the South Pole Telescope (SPT). We stack these galaxies in the Planck maps, obtaining mean SEDs for both the bright (SPT flux F _{220 GHz} > 20 mJy) and faint (F _{220 GHz} < 20 mJy) galaxy populations. These SEDs and the derived mean redshifts suggest that the bright and faint sources belong to the same population of submillimetre galaxies. Stacking the lensed submillimetre galaxies in Planck also enables us to probe the z~1 environments around the foreground lenses and we obtain estimates of their clustering. Finally, we use the stacks to extrapolate SPT source counts to the Planck HFI frequencies, thereby estimating the contribution of the SPT sources at 220 GHz to the galaxy number counts at 353 and 545 GHz.
The displacement of the sun from the galactic plane using IRAS and faust source counts
NASA Technical Reports Server (NTRS)
Cohen, Martin
1995-01-01
I determine the displacement of the Sun from the Galactic plane by interpreting IRAS point-source counts at 12 and 25 microns in the Galactic polar caps using the latest version of the SKY model for the point-source sky (Cohen 1994). A value of solar zenith = 15.5 +/- 0.7 pc north of the plane provides the best match to the ensemble of useful IRAS data. Shallow K counts in the north Galactic pole are also best fitted by this offset, while limited FAUST far-ultraviolet counts at 1660 A near the same pole favor a value near 14 pc. Combining the many IRAS determinations with the few FAUST values suggests that a value of solar zenith = 15.0 +/- 0.5 pc (internal error only) would satisfy these high-latitude sets of data in both wavelength regimes, within the context of the SKY model.
ERIC Educational Resources Information Center
Spolsky, Bernard; And Others
As part of a study of the feasibility and effect of teaching Navajo children to read their own language first, a word count collected by 22 Navajo adults interviewing over 200 Navajo 6-year-olds was undertaken. This report discusses the word count and the interview texts in terms of (1) number of sentences, (2) number of words, (3) number of…
Time Multiplexed Active Neural Probe with 1356 Parallel Recording Sites
Raducanu, Bogdan C.; Yazicioglu, Refet F.; Lopez, Carolina M.; Putzeys, Jan; Andrei, Alexandru; Rochus, Veronique; Welkenhuysen, Marleen; van Helleputte, Nick; Musa, Silke; Puers, Robert; Kloosterman, Fabian; Van Hoof, Chris; Mitra, Srinjoy
2017-01-01
We present a high-electrode-density and high-channel-count CMOS (complementary metal-oxide-semiconductor) active neural probe containing 1344 neuron-sized recording pixels (20 µm × 20 µm) and 12 reference pixels (20 µm × 80 µm), densely packed on a 50 µm thick, 100 µm wide, and 8 mm long shank. The active electrodes, or pixels, consist of dedicated in-situ circuits for signal source amplification, located directly under each electrode. The probe supports the simultaneous recording of all 1356 electrodes with sufficient signal-to-noise ratio for typical neuroscience applications. For enhanced performance, further noise reduction can be achieved by using half of the electrodes (678). Both of these numbers considerably surpass state-of-the-art active neural probes in both electrode count and number of recording channels. The measured input-referred noise in the action potential band is 12.4 µVrms when using 678 electrodes, with just 3 µW power dissipation per pixel and 45 µW per read-out channel (including data transmission). PMID:29048396
76 FR 54807 - Notice of Proposed Information Collection: IMLS Museum Web Database: MuseumsCount.gov
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-02
...: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library Services, National..., and the general public. Information such as name, address, phone, e-mail, Web site, congressional...: IMLS Museum Web Database, MuseumsCount.gov . OMB Number: To be determined. Agency Number: 3137...
Code of Federal Regulations, 2013 CFR
2013-01-01
... package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts...
Code of Federal Regulations, 2014 CFR
2014-01-01
... package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth inch for counts...
Dynamics of microorganism populations in recirculating nutrient solutions
NASA Technical Reports Server (NTRS)
Strayer, R. F.
1994-01-01
This overview covers the basic microbial ecology of recirculating hydroponic solutions. Examples from NASA and Soviet Controlled Ecological Life Support Systems (CELSS) tests and the commercial hydroponic industry will be used. The sources of microorganisms in nutrient solutions include air, water, seeds, plant containers and plumbing, biological vectors, and personnel. Microbial fates include growth, death, and emigration. Important microbial habitats within nutrient delivery systems are root surfaces, hardware surfaces (biofilms), and solution suspension. Numbers of bacteria on root surfaces usually exceed those from the other habitats by several orders of magnitude. Gram negative bacteria dominate the microflora with fungal counts usually much lower. Trends typically show a decrease in counts with increasing time unless stressed plants increase root exudates. Important microbial activities include carbon mineralization and nitrogen transformations. Important detrimental interactions include competition with plants, and human and plant pathogenesis.
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
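A short simulation illustrates the clipping effect described above; the mean count, threshold, and saturation values are arbitrary assumptions, not the paper's detector parameters.

```python
# A minimal sketch (assumed parameters, not the authors' model) showing how
# clipping a Poisson photon-count distribution at threshold and saturation
# levels reduces the variance and distorts the higher-order moments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mean_counts = 1000                   # hypothetical mean photon count per pixel
threshold, saturation = 950, 1050    # hypothetical detector limits

raw = rng.poisson(mean_counts, size=100_000)
clipped = np.clip(raw, threshold, saturation)

for name, x in [("raw", raw), ("clipped", clipped)]:
    print(f"{name:8s} mean={x.mean():8.1f} var={x.var():8.1f} "
          f"skew={stats.skew(x):+.3f} kurt={stats.kurtosis(x):+.3f}")
```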
Whitcher, R; Page, R D; Cole, P R
2014-06-01
The characteristics of alpha radiation have for decades been demonstrated in UK schools using small sealed (241)Am sources. A small but steady number of schools report a considerable reduction in the alpha count rate detected by an end-window GM detector compared with when the source was new. This cannot be explained by incorrect apparatus or set-up, foil surface contamination, or degradation of the GM detector. The University of Liverpool and CLEAPSS collaborated to research the cause of this performance degradation. The aim was to find what was causing the performance degradation and the ramifications for both the useful and safe service life of the sources. The research shows that these foil sources exhibit greater energy straggling, with a corresponding reduction in spectral peak energy. A likely cause for this increase in straggling is significant diffusion of the metals over time. There was no evidence to suggest the foils have become unsafe, but precautionary checks should be made on old sources.
Activity measurements of 55Fe by two different methods
NASA Astrophysics Data System (ADS)
da Cruz, Paulo A. L.; Iwahara, Akira; da Silva, Carlos J.; Poledna, Roberto; Loureiro, Jamir S.; da Silva, Monica A. L.; Ruzzarin, Anelise
2018-03-01
A calibrated germanium detector and the CIEMAT/NIST liquid scintillation method were used in the standardization of a solution of 55Fe from a BIPM key comparison. Commercial cocktails were used in source preparation for activity measurements with the CIEMAT/NIST method, and measurements were performed in a liquid scintillation counter. In the germanium counting method, standard point sources were prepared to obtain the atomic number versus efficiency curve of the detector, from which the efficiency for the 5.9 keV KX-ray of 55Fe was obtained by interpolation. The activity concentrations obtained were 508.17 ± 3.56 and 509.95 ± 16.20 kBq/g for the CIEMAT/NIST and germanium methods, respectively.
Children's Mappings of Large Number Words to Numerosities
ERIC Educational Resources Information Center
Barth, Hilary; Starr, Ariel; Sullivan, Jessica
2009-01-01
Previous studies have suggested that children's learning of the relation between number words and approximate numerosities depends on their verbal counting ability, and that children exhibit no knowledge of mappings between number words and approximate numerical magnitudes for number words outside their productive verbal counting range. In the…
Comparison of birds detected from roadside and off-road point counts in the Shenandoah National Park
Keller, C.M.E.; Fuller, M.R.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Roadside point counts are generally used for large surveys to increase the number of samples. We examined differences in species detected from roadside versus off-road (200-m and 400-ha) point counts in the Shenandoah National Park. We also compared the list of species detected in the first 3 minutes to those detected in 10 minutes for potential species biases. Results from 81 paired roadside and off-road counts indicated that roadside counts had higher numbers of several edge species but did not have lower numbers of nonedge forest species. More individuals and species were detected from roadside points because of this increase in edge species. Sixty-five percent of the species detected in 10 minutes were recorded in the first 3 minutes.
Search for optical bursts from the gamma ray burst source GBS 0526-66
NASA Astrophysics Data System (ADS)
Seetha, S.; Sreenivasaiah, K. V.; Marar, T. M. K.; Kasturirangan, K.; Rao, U. R.; Bhattacharyya, J. C.
1985-08-01
Attempts were made to detect optical bursts from the gamma-ray burst source GBS 0526-66 during Dec. 31, 1984 to Jan. 2, 1985 and Feb. 23 to Feb. 24, 1985, using the one meter reflector of the Kavalur Observatory. Jan. 1, 1985 coincided with the zero phase of the predicted 164 day period of burst activity from the source (Rothschild and Lingenfelter, 1984). A new optical burst photon counting system with adjustable trigger threshold was used in parallel with a high speed photometer for the observations. The best time resolution was 1 ms and maximum count rate capability was 255,000 counts s(-1). Details of the instrumentation and observational results are presented.
The Atacama Cosmology Telescope: Extragalactic Sources at 148 GHz in the 2008 Survey
NASA Technical Reports Server (NTRS)
Marriage, T. A.; Juin, J. B.; Lin, Y. T.; Marsden, D.; Nolta, M. R.; Partridge, B.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.;
2011-01-01
We report on extragalactic sources detected in a 455 square-degree map of the southern sky made with data at a frequency of 148 GHz from the Atacama Cosmology Telescope 2008 observing season. We provide a catalog of 157 sources with flux densities spanning two orders of magnitude: from 15 mJy to 1500 mJy. Comparison to other catalogs shows that 98% of the ACT detections correspond to sources detected at lower radio frequencies. Three of the sources appear to be associated with the brightest cluster galaxies of low redshift X-ray selected galaxy clusters. Estimates of the radio to mm-wave spectral indices and differential counts of the sources further bolster the hypothesis that they are nearly all radio sources, and that their emission is not dominated by re-emission from warm dust. In a bright (> 50 mJy) 148 GHz-selected sample with complete cross-identifications from the Australia Telescope 20 GHz survey, we observe an average steepening of the spectra between 5, 20, and 148 GHz with median spectral indices of alpha (sub 5-20) = -0.07 +/- 0.06, alpha (sub 20-148) = -0.39 +/- 0.04, and alpha (sub 5-148) = -0.20 +/- 0.03. When the measured spectral indices are taken into account, the 148 GHz differential source counts are consistent with previous measurements at 30 GHz in the context of a source count model dominated by radio sources. Extrapolating with an appropriately rescaled model for the radio source counts, the Poisson contribution to the spatial power spectrum from synchrotron-dominated sources with flux density less than 20 mJy is C(sup Sync) = (2.8 +/- 0.3) x 10(exp -6) micro K(exp 2).
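For reference, the spectral indices quoted above follow the usual convention S ∝ ν^alpha between pairs of frequencies; the sketch below applies that definition to hypothetical flux densities (the values are illustrative, not ACT measurements).

```python
# A minimal sketch of the spectral-index convention used above (S ∝ nu**alpha),
# with hypothetical flux densities for a single synchrotron-dominated source.
import numpy as np

def spectral_index(s1, s2, nu1, nu2):
    """Spectral index alpha between two frequencies, assuming S ∝ nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

s_5, s_20, s_148 = 300.0, 280.0, 120.0   # hypothetical flux densities in mJy
print("alpha(5-20)   =", round(spectral_index(s_5, s_20, 5.0, 20.0), 2))
print("alpha(20-148) =", round(spectral_index(s_20, s_148, 20.0, 148.0), 2))
print("alpha(5-148)  =", round(spectral_index(s_5, s_148, 5.0, 148.0), 2))
```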
Learning from number board games: you learn what you encode.
Laski, Elida V; Siegler, Robert S
2014-03-01
We tested the hypothesis that encoding the numerical-spatial relations in a number board game is a key process in promoting learning from playing such games. Experiment 1 used a microgenetic design to examine the effects on learning of the type of counting procedure that children use. As predicted, having kindergartners count-on from their current number on the board while playing a 0-100 number board game facilitated their encoding of the numerical-spatial relations on the game board and improved their number line estimates, numeral identification, and count-on skill. Playing the same game using the standard count-from-1 procedure led to considerably less learning. Experiment 2 demonstrated that comparable improvement in number line estimation does not occur with practice encoding the numerals 1-100 outside of the context of a number board game. The general importance of aligning learning activities and physical materials with desired mental representations is discussed. PsycINFO Database Record (c) 2014 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Aravena, M.; Decarli, R.; Walter, F.; Da Cunha, E.; Bauer, F. E.; Carilli, C. L.; Daddi, E.; Elbaz, D.; Ivison, R. J.; Riechers, D. A.; Smail, I.; Swinbank, A. M.; Weiss, A.; Anguita, T.; Assef, R. J.; Bell, E.; Bertoldi, F.; Bacon, R.; Bouwens, R.; Cortes, P.; Cox, P.; Gónzalez-López, J.; Hodge, J.; Ibar, E.; Inami, H.; Infante, L.; Karim, A.; Le Le Fèvre, O.; Magnelli, B.; Ota, K.; Popping, G.; Sheth, K.; van der Werf, P.; Wagg, J.
2016-12-01
We present an analysis of a deep (1σ = 13 μJy) cosmological 1.2 mm continuum map based on ASPECS, the ALMA Spectroscopic Survey in the Hubble Ultra Deep Field. In the 1 arcmin2 covered by ASPECS we detect nine sources at > 3.5σ significance at 1.2 mm. Our ALMA-selected sample has a median redshift of z = 1.6 ± 0.4, with only one galaxy detected at z > 2 within the survey area. This value is significantly lower than that found in millimeter samples selected at a higher flux density cutoff and similar frequencies. Most galaxies have specific star formation rates (SFRs) similar to that of main-sequence galaxies at the same epoch, and we find median values of stellar mass and SFR of 4.0 × 10^10 M⊙ and ~40 M⊙ yr^-1, respectively. Using the dust emission as a tracer for the interstellar medium (ISM) mass, we derive depletion times that are typically longer than 300 Myr, and we find molecular gas fractions ranging from ~0.1 to 1.0. As noted by previous studies, these values are lower than those using CO-based ISM estimates by a factor of ~2. The 1 mm number counts (corrected for fidelity and completeness) are in agreement with previous studies that were typically restricted to brighter sources. With our individual detections only, we recover 55% ± 4% of the extragalactic background light (EBL) at 1.2 mm measured by the Planck satellite, and we recover 80% ± 7% of this EBL if we include the bright end of the number counts and additional detections from stacking. The stacked contribution is dominated by galaxies at z ~ 1-2, with stellar masses of (1-3) × 10^10 M⊙. For the first time, we are able to characterize the population of galaxies that dominate the EBL at 1.2 mm.
42 CFR 493.1276 - Standard: Clinical cytogenetics.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of accessioning, cell preparation, photographing or other image reproduction technique, photographic... records that document the following: (1) The media used, reactions observed, number of cells counted, number of cells karyotyped, number of chromosomes counted for each metaphase spread, and the quality of...
Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.
2010-01-01
Background Much previous work on how normal aging affects visual enumeration has been focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a "single glance", without the confounding influence of eye movements. Methodology/Principal Findings We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which performance reaches ≈63% correct on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest group (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, on average the counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, reflecting that the threshold loss can be primarily attributed to increased random errors. We further expanded this interesting finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weichenthal, Scott; Dufresne, Andre; Infante-Rivard, Claire
School classrooms are potentially important micro-environments for childhood exposures owing to the large amount of time children spend in these locations. While a number of airborne contaminants may be present in schools, to date few studies have examined ultrafine particle (0.02-1 {mu}m) (UFP) levels in classrooms. In this study, our objective was to characterize UFP counts (cm{sup -3}) in classrooms during the winter months and to develop a model to predict such exposures based on ambient weather conditions and outdoor UFPs, as well as classroom characteristics such as size, temperature, relative humidity, and carbon dioxide levels. In total, UFP count data were collected on 60 occasions in 37 occupied classrooms at one elementary school and one secondary school in Pembroke, Ontario. On average, outdoor UFP levels exceeded indoor measures by 8989 cm{sup -3} (95% confidence interval (CI): 6382, 11 596), and classroom UFP counts were similar at both schools with a combined average of 5017 cm{sup -3} (95% CI: 4300, 5734). Of the variables examined, only wind speed and outdoor UFPs were important determinants of classroom UFP levels. Specifically, each 10 km/h increase in wind speed corresponded to a 1873 cm{sup -3} (95% CI: 825, 2920) decrease in classroom UFP counts, and each 10 000 cm{sup -3} increase in outdoor UFPs corresponded to a 1550 cm{sup -3} (95% CI: 930, 2171) increase in classroom UFP levels. However, high correlations between these two predictors meant that the independent effects of wind speed and outdoor UFPs could not be separated in multivariable models, and only outdoor UFP counts were included in the final predictive model. To evaluate model performance, classroom UFP counts were collected for 8 days at two new schools and compared to predicted values based on outdoor UFP measures. A moderate correlation was observed between measured and predicted classroom UFP counts (r=0.63) for both schools combined, but this relationship was not valid on days in which a strong indoor UFP source (electric kitchen stove) was active in schools. In general, our findings suggest that reasonable estimates of classroom UFP counts may be obtained from outdoor UFP data, but that the accuracy of such estimates is limited in the presence of indoor UFP sources.
WFIRST: Predicting the number density of Hα-emitting galaxies
NASA Astrophysics Data System (ADS)
Benson, Andrew; Merson, Alex; Wang, Yun; Faisst, Andreas; Masters, Daniel; Kiessling, Alina; Rhodes, Jason
2018-01-01
The WFIRST mission will measure the clustering of Hα-emitting galaxies to help probe the nature of dark energy. Knowledge of the number density of such galaxies is therefore vital for forecasting the precision of these measurements and assessing the scientific impact of the WFIRST mission. In this poster we present predictions from a galaxy formation model, Galacticus, for the cumulative number counts of Hα-emitting galaxies. We couple Galacticus to three different dust attenuation methods and examine the counts using each method. A χ2 minimization approach is used to compare the model counts to observed galaxy counts and calibrate the dust parameters. With these calibrated dust methods, we find that the Hα luminosity function from Galacticus is broadly consistent with observed estimates. Finally, we present forecasts for the redshift distributions and number counts for a WFIRST-like survey. Over a redshift range of 1 ≤ z ≤ 2 and with a blended flux limit of 1×10^-16 erg s^-1 cm^-2, Galacticus predicts that WFIRST would observe a number density of 10,400-15,200 Hα-emitting galaxies per square degree.
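As an illustration of the calibration step described above, the following hedged sketch fits a single dust parameter by χ² minimization against toy observed counts; the model form, parameter, and data are assumptions for demonstration and are not Galacticus or the WFIRST forecasts.

```python
# A minimal, hedged sketch of a chi-square calibration: a toy model of
# cumulative counts with one dust-attenuation parameter is fit to hypothetical
# observed counts. The model form and numbers are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

flux_limits = np.array([1e-16, 2e-16, 5e-16, 1e-15])         # erg s^-1 cm^-2
obs_counts = np.array([12000.0, 7000.0, 2500.0, 800.0])      # per deg^2 (toy)
obs_errors = 0.1 * obs_counts

def model_counts(a_dust):
    """Toy cumulative counts: an unattenuated power law dimmed by a_dust mags."""
    intrinsic = 1.5e4 * (flux_limits / 1e-16) ** -1.2
    return intrinsic * 10 ** (-0.4 * a_dust)

def chi2(a_dust):
    return np.sum(((obs_counts - model_counts(a_dust)) / obs_errors) ** 2)

best = minimize_scalar(chi2, bounds=(0.0, 3.0), method="bounded")
print(f"best-fit attenuation = {best.x:.2f} mag, chi2 = {best.fun:.1f}")
```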
Harding, A.M.A.; Piatt, John F.; Byrd, G.V.; Hatch, Shyla A.; Konyukhov, N.B.; Golubova, E.U.; Williams, J.C.
2005-01-01
It is difficult to survey crevice-nesting seabirds because nest-sites are hard to identify and count, and the number of adult birds attending a colony can be extremely variable within and between days. There is no standardized method for surveying crevice-nesting horned puffins (Fratercula corniculata), and consequently little is known about abundance or changes in their numbers. We examined the variability in colony attendance of horned puffins at 5 breeding colonies in the North Pacific to assess whether variation in count data can be reduced to a level that would allow us to detect changes in the number of birds attending a colony. We used within-year measures of variation in attendance to examine the power to detect a change in numbers between 2 years, and we used measures of among-year variation to examine the power to detect trends over multiple years. Diurnal patterns of attendance differed among colonies, and among-day variation in attendance was generally lowest from mid- to late-incubation to early chick rearing. Within-year variation in water counts was lower than in land counts, and variation was lower using a daily index based on 5 counts per day than it was using 1 count per day. Measures of among-year variation in attendance also were higher for land-based than water-based counts, and they were higher when we used a 10-day survey period than when we used a 30-day period. The use of either 1 or 5 counts a day during the colony-specific diurnal peak of attendance had little influence on levels of among-year variation. Overall, our study suggests that variation in count data may be reduced to a level that allows detection of trends in numbers. However, more studies of interannual variability in horned puffin attendance are needed. Further, the relationship between count data and breeding population size needs more study before the number of birds present at the colony can be used with confidence as an index of population trend.
Early Concepts of Number and Counting
ERIC Educational Resources Information Center
Box, Katherine; Scott, Paul
2004-01-01
Before primitive man had grasped the concept of number, the written word or even speech, he was able to count. This was important for keeping track of food supplies, sending messages, trading between villages and even keeping track of how many animals were in their herd. Counting was done in various ways, but in all cases, the underlying principle…
ERIC Educational Resources Information Center
Shaki, Samuel; Fischer, Martin H.; Gobel, Silke M.
2012-01-01
Western adults associate small numbers with left space and large numbers with right space. Where does this pervasive spatial-numerical association come from? In this study, we first recorded directional counting preferences in adults with different reading experiences (left to right, right to left, mixed, and illiterate) and observed a clear…
Phillips, A C; Jiang, C Q; Thomas, G N; Lin, J M; Yue, X J; Cheng, K K; Jin, Y L; Zhang, W S; Lam, T H
2012-08-01
Cross-sectional associations between white blood cell (WBC) count, lymphocyte and granulocyte numbers, and carotid intima-media thickness (IMT) and brachial-ankle pulse wave velocity (PWV) were examined in a novel older Chinese community sample. A total of 817 men and 760 women from a sub-study of the Guangzhou Biobank Cohort Study had a full blood count measured by an automated hematology analyzer, carotid IMT by B-mode ultrasonography and brachial-ankle PWV by a non-invasive automatic waveform analyzer. Following adjustment for confounders, WBC count (β=0.07, P<0.001) and granulocyte (β=0.07, P<0.001) number were significantly positively related to PWV, but not lymphocyte number. Similarly, WBC count (β=0.08, P=0.03), lymphocyte (β=0.08, P=0.002) and granulocyte (β=0.03, P=0.04) number were significantly positively associated with carotid IMT, but only the association with lymphocyte count survived correction for other cardiovascular risk factors. In conclusion, higher WBC, particularly lymphocyte and granulocyte, count could be used, respectively, as markers of cardiovascular disease risk, measured through indicators of atherosclerosis and arterial stiffness. The associations for WBC count previously observed by others were likely driven by higher granulocytes; an index of systemic inflammation.
Tanaka, Naoaki; Papadelis, Christos; Tamilia, Eleonora; Madsen, Joseph R; Pearl, Phillip L; Stufflebeam, Steven M
2018-04-27
This study evaluates the magnetoencephalographic (MEG) spike population as compared with intracranial electroencephalographic (IEEG) spikes, using a quantitative method based on distributed source analysis. We retrospectively studied eight patients with medically intractable epilepsy who had MEG and subsequent IEEG monitoring. Fifty MEG spikes were analyzed in each patient using the minimum norm estimate. For individual spikes, each vertex in the source space was considered activated when its source amplitude at the peak latency was higher than a threshold, which was set at 50% of the maximum amplitude over all vertices. We mapped the total count of activation at each vertex. We also analyzed 50 IEEG spikes in the same manner over the intracranial electrodes and created the corresponding activation count map. The location of the electrodes was obtained in the MEG source space by coregistering postimplantation computed tomography to MRI. We estimated the MEG- and IEEG-active regions associated with the spike populations using the vertices/electrodes with a count over 25. The activation count maps of MEG spikes demonstrated the localization associated with the spike population through the count values at each vertex. The MEG-active region overlapped with 65 to 85% of the IEEG-active region in our patient group. Mapping the MEG spike population is valid for demonstrating the clustering trend of spikes in patients with epilepsy. In addition, comparing MEG and IEEG spikes quantitatively may be informative for understanding their relationship.
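The activation-count mapping can be summarized in a few lines; the sketch below uses random numbers in place of minimum norm estimates, while the 50%-of-maximum threshold and the count cutoff of 25 follow the description above.

```python
# A minimal sketch of the activation-count mapping described above, using a
# hypothetical (n_spikes x n_vertices) matrix of source amplitudes at each
# spike's peak latency rather than real minimum-norm estimates.
import numpy as np

rng = np.random.default_rng(1)
n_spikes, n_vertices = 50, 5000
amplitudes = rng.random((n_spikes, n_vertices))       # stand-in MNE amplitudes

# A vertex is "activated" by a spike if its amplitude exceeds 50% of that
# spike's maximum amplitude over all vertices.
threshold = 0.5 * amplitudes.max(axis=1, keepdims=True)
activated = amplitudes > threshold

activation_count = activated.sum(axis=0)              # per-vertex count (0-50)
active_region = np.where(activation_count > 25)[0]    # vertices counted >25 times
print(f"{active_region.size} vertices in the spike-defined active region")
```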
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count frequency also improved precision. These results suggest that the current distribution of count timings available in lek count databases such as that of Wyoming (conducted up to 90 minutes after sunrise) can be used to estimate sage-grouse population trends without reducing precision or accuracy relative to trends from counts conducted within 30 minutes of sunrise. However, only 10% of all Wyoming counts in our sample (1995−2014) were conducted 61−90 minutes after sunrise, and further increasing this percentage may still bias trend estimates because of declining lek attendance.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain random samples from the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but allowed a decision to be made with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, because the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
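As one way to picture Bayesian analysis of time-interval data, the sketch below updates a conjugate gamma prior on the pulse rate from simulated inter-pulse intervals and tracks the posterior probability that the rate exceeds a known background; this is a simplified stand-in for the authors' algorithm (which was implemented in R), with assumed rates.

```python
# A minimal, hedged sketch (not the authors' exact algorithm) of Bayesian
# inference on time-interval data: inter-pulse intervals are exponential with
# rate r, and a conjugate Gamma prior on r is updated pulse by pulse. A rising
# posterior probability that r exceeds the known background rate signals a source.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
background_rate = 5.0          # counts per second (assumed known)
source_rate = 10.0             # hypothetical extra rate while a source is present
intervals = rng.exponential(1.0 / (background_rate + source_rate), size=30)

alpha, beta = 1.0, 1.0 / background_rate   # weak prior centred on background
for i, dt in enumerate(intervals, start=1):
    alpha += 1.0               # one pulse observed
    beta += dt                 # total elapsed time
    # Posterior is Gamma(alpha, rate=beta); probability the rate exceeds background:
    p_exceed = stats.gamma.sf(background_rate, alpha, scale=1.0 / beta)
    if i % 10 == 0:
        print(f"after {i:2d} pulses: P(rate > background) = {p_exceed:.3f}")
```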
The association of trail use with weather-related factors on an urban greenway.
Burchfield, Ryan A; Fitzhugh, Eugene C; Bassett, David R
2012-02-01
To study the association between weather-related measures and objectively measured trail use across 3 seasons. Weather has been reported as a barrier to outdoor physical activity (PA), but previous studies have explained only a small amount of the variance in PA using weather-related measures. The dependent variable of this study was trail use measured as mean hourly trail counts by an infrared trail counter located on a greenway. Each trail count represents 1 person breaking the infrared beam of the trail counter. Two sources of weather-related measures were obtained by a site-specific weather station and a public domain weather source. Temperature, relative humidity, and precipitation were significantly correlated with trail counts recorded during daylight hours. More precise hourly weather-related measures explained 42% of the variance in trail counts, regardless of the weather data source with temperature alone explaining 18% of the variance in trail counts. After controlling for all seasonal and weekly factors, every 1°F increase in temperature was associated with an increase of 1.1 trail counts/hr up to 76°F, at which point trail use began to slightly decrease. Weather-related factors have a moderate association with trail use along an urban greenway.
STELLAR X-RAY SOURCES IN THE CHANDRA COSMOS SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, N. J.; Drake, J. J.; Civano, F., E-mail: nwright@cfa.harvard.ed
2010-12-10
We present an analysis of the X-ray properties of a sample of solar- and late-type field stars identified in the Chandra Cosmic Evolution Survey (COSMOS), a deep (160 ks) and wide ({approx}0.9 deg{sup 2}) extragalactic survey. The sample of 60 sources was identified using both morphological and photometric star/galaxy separation methods. We determine X-ray count rates, extract spectra and light curves, and perform spectral fits to determine fluxes and plasma temperatures. Complementary optical and near-IR photometry is also presented and combined with spectroscopy for 48 of the sources to determine spectral types and distances for the sample. We find distances ranging from 30 pc to {approx}12 kpc, including a number of the most distant and highly active stellar X-ray sources ever detected. This stellar sample extends the known coverage of the L{sub X}-distance plane to greater distances and higher luminosities, but we do not detect as many intrinsically faint X-ray sources compared to previous surveys. Overall the sample is typically more luminous than the active Sun, representing the high-luminosity end of the disk and halo X-ray luminosity functions. The halo population appears to include both low-activity spectrally hard sources that may be emitting through thermal bremsstrahlung, as well as a number of highly active sources in close binaries.
The Norma arm region Chandra survey catalog: X-ray populations in the spiral arms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fornasini, Francesca M.; Tomsick, John A.; Bodaghee, Arash
2014-12-01
We present a catalog of 1415 X-ray sources identified in the Norma Arm Region Chandra Survey (NARCS), which covers a 2° × 0.°8 region in the direction of the Norma spiral arm to a depth of ≈20 ks. Of these sources, 1130 are point-like sources detected with ≥3σ confidence in at least one of three energy bands (0.5-10, 0.5-2, and 2-10 keV), five have extended emission, and the remainder are detected at low significance. Since most sources have too few counts to permit individual classification, they are divided into five spectral groups defined by their quantile properties. We analyze stacked spectra of X-ray sources within each group, in conjunction with their fluxes, variability, and infrared counterparts, to identify the dominant populations in our survey. We find that ∼50% of our sources are foreground sources located within 1-2 kpc, which is consistent with expectations from previous surveys. Approximately 20% of sources are likely located in the proximity of the Scutum-Crux and near Norma arms, while 30% are more distant, in the proximity of the far Norma arm or beyond. We argue that a mixture of magnetic and nonmagnetic cataclysmic variables dominates the Scutum-Crux and near Norma arms, while intermediate polars and high-mass stars (isolated or in binaries) dominate the far Norma arm. We also present the cumulative number count distribution for sources in our survey that are detected in the hard energy band. A population of very hard sources in the vicinity of the far Norma arm and active galactic nuclei dominate the hard X-ray emission down to f{sub X} ≈ 10{sup –14} erg cm{sup –2} s{sup –1}, but the distribution curve flattens at fainter fluxes. We find good agreement between the observed distribution and predictions based on other surveys.
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-12-01
The number of motor unit action potentials (MUAPs) appearing in the surface electromyogram (EMG) signal is directly related to motor unit recruitment and firing rates and therefore offers potentially valuable information about the level of activation of the motoneuron pool. In this paper, based on morphological features of the surface MUAPs, we estimate the number of MUAPs present in the surface EMG by counting the negative peaks in the signal. Several signal processing procedures are applied to the surface EMG to facilitate this peak counting process. The MUAP number estimation performance of this approach is first illustrated using surface EMG simulations. Then, by evaluating the peak counting results from EMG records detected by a highly selective surface electrode at different contraction levels of the first dorsal interosseous (FDI) muscle, the utility and limitations of such direct peak counts for MUAP number estimation in the surface EMG are further explored.
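A hedged sketch of the peak-counting idea is shown below on synthetic data: the EMG waveform, filter band, and peak-height criterion are assumptions chosen for illustration rather than the authors' processing pipeline.

```python
# A minimal sketch (assumed filter settings and synthetic data, not the
# authors' pipeline) of counting negative peaks in a surface EMG record as a
# rough estimate of the number of MUAPs present.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 2000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)

# Synthetic surface EMG: a few biphasic MUAP-like transients plus noise.
emg = 0.02 * rng.standard_normal(t.size)
for onset in rng.uniform(0.05, 0.95, size=15):
    idx = (t - onset) / 0.005
    emg += -0.3 * idx * np.exp(-idx**2) * (np.abs(idx) < 4)

# Band-pass filter typical for surface EMG, then count negative peaks that
# stand clearly above the noise floor.
b, a = butter(4, [20, 500], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, emg)
noise_sd = np.std(filtered[: int(0.02 * fs)])          # early quiet segment
peaks, _ = find_peaks(-filtered, height=3 * noise_sd)
print(f"estimated number of MUAP negative peaks: {peaks.size}")
```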
Within-site variability in surveys of wildlife populations
Link, William A.; Barker, Richard J.; Sauer, John R.; Droege, Sam
1994-01-01
Most large-scale surveys of animal populations are based on counts of individuals observed during a sampling period, which are used as indexes to the population. The variability in these indexes not only reflects variability in population sizes among sites but also variability due to the inexactness of the counts. Repeated counts at survey sites can be used to document this additional source of variability and, in some applications, to mitigate its effects. We present models for evaluating the proportion of total variability in counts that is attributable to this within-site variability and apply them in the analysis of data from repeated counts on routes from the North American Breeding Bird Survey. We analyzed data on 98 species, obtaining estimates of these percentages, which ranged from 3.5 to 100% with a mean of 36.25%. For at least 14 of the species, more than half of the variation in counts was attributable to within-site sources. Counts for species with lower average counts had a higher percentage of within-site variability. We discuss the relative cost efficiency of replicating sites or initiating new sites for several objectives, concluding that it is frequently better to initiate new sites than to attempt to replicate existing sites.
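The within-site share of variance can be estimated from repeated counts with standard one-way variance components; the sketch below does this for simulated sites and replicate counts (the data and method-of-moments estimator are illustrative, not the BBS analysis itself).

```python
# A minimal sketch (hypothetical data) of estimating the share of count
# variability that is within-site, using one-way ANOVA method-of-moments
# variance components for repeated counts at each survey site.
import numpy as np

rng = np.random.default_rng(4)
n_sites, n_reps = 30, 5
site_means = rng.gamma(shape=4.0, scale=5.0, size=n_sites)      # true site levels
counts = rng.poisson(site_means[:, None], size=(n_sites, n_reps))

ms_within = counts.var(axis=1, ddof=1).mean()                    # mean square error
ms_between = n_reps * counts.mean(axis=1).var(ddof=1)            # mean square between sites

var_within = ms_within
var_between = max((ms_between - ms_within) / n_reps, 0.0)
share_within = var_within / (var_within + var_between)
print(f"within-site share of total variance: {share_within:.0%}")
```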
AMAS: a fast tool for alignment manipulation and computing of summary statistics.
Borowiec, Marek L
2016-01-01
The amount of data used in phylogenetics has grown explosively in the recent years and many phylogenies are inferred with hundreds or even thousands of loci and many taxa. These modern phylogenomic studies often entail separate analyses of each of the loci in addition to multiple analyses of subsets of genes or concatenated sequences. Computationally efficient tools for handling and computing properties of thousands of single-locus or large concatenated alignments are needed. Here I present AMAS (Alignment Manipulation And Summary), a tool that can be used either as a stand-alone command-line utility or as a Python package. AMAS works on amino acid and nucleotide alignments and combines capabilities of sequence manipulation with a function that calculates basic statistics. The manipulation functions include conversions among popular formats, concatenation, extracting sites and splitting according to a pre-defined partitioning scheme, creation of replicate data sets, and removal of taxa. The statistics calculated include the number of taxa, alignment length, total count of matrix cells, overall number of undetermined characters, percent of missing data, AT and GC contents (for DNA alignments), count and proportion of variable sites, count and proportion of parsimony informative sites, and counts of all characters relevant for a nucleotide or amino acid alphabet. AMAS is particularly suitable for very large alignments with hundreds of taxa and thousands of loci. It is computationally efficient, utilizes parallel processing, and performs better at concatenation than other popular tools. AMAS is a Python 3 program that relies solely on Python's core modules and needs no additional dependencies. AMAS source code and manual can be downloaded from http://github.com/marekborowiec/AMAS/ under GNU General Public License.
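Two of the statistics mentioned above are easy to illustrate without AMAS itself; the sketch below computes percent missing data and the number of variable sites for a toy alignment (the sequences and the set of characters treated as missing are assumptions).

```python
# A minimal, generic sketch (not AMAS itself) of two alignment summary
# statistics: percent missing data and the count of variable sites, computed
# from a toy nucleotide alignment held in memory.
alignment = {
    "taxon_a": "ATGC-ATTGACN",
    "taxon_b": "ATGCCATTGACA",
    "taxon_c": "ATGC-ATCGACA",
}

seqs = list(alignment.values())
length = len(seqs[0])
missing_chars = set("-?NnXx")

total_cells = len(seqs) * length
missing = sum(ch in missing_chars for seq in seqs for ch in seq)

variable_sites = 0
for col in zip(*seqs):
    bases = {ch.upper() for ch in col if ch not in missing_chars}
    if len(bases) > 1:
        variable_sites += 1

print(f"missing data: {100 * missing / total_cells:.1f}%")
print(f"variable sites: {variable_sites} of {length}")
```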
A THUMBNAIL HISTORY OF HETEROTROPHIC PLATE COUNT (HPC) METHODOLOGY IN THE UNITED STATES
Over the past 100 years, the method of determining the number of bacteria in water, foods or other materials has been termed variously as: bacterial plate count, total plate count, total viable plate count, aerobic plate count, standard plate count and more recently, heterotrophi...
Pre-Exposure Prophylaxis YouTube Videos: Content Evaluation
Basch, Corey; Basch, Charles; Kernan, William
2018-01-01
Background Antiretroviral (ARV) medicines reduce the risk of transmitting the HIV virus and are recommended as daily pre-exposure prophylaxis (PrEP) in combination with safer sex practices for HIV-negative individuals at a high risk for infection, but are underused in HIV prevention. Previous literature suggests that YouTube is extensively used to share health information. While pre-exposure prophylaxis (PrEP) is a novel and promising approach to HIV prevention, there is limited understanding of YouTube videos as a source of information on PrEP. Objective The objective of this study was to describe the sources, characteristics, and content of the most widely viewed PrEP YouTube videos published up to October 1, 2016. Methods The keywords “pre-exposure prophylaxis” and “Truvada” were used to find 217 videos with a view count >100. Videos were coded for source, view count, length, number of comments, and selected aspects of content. Videos were also assessed for the most likely target audience. Results The total cumulative number of views was >2.3 million, however, a single Centers for Disease Control and Prevention video accounted for >1.2 million of the total cumulative views. A great majority (181/217, 83.4%) of the videos promoted the use of PrEP, whereas 60.8% (132/217) identified the specific target audience. In contrast, only 35.9% (78/217) of the videos mentioned how to obtain PrEP, whereas less than one third addressed the costs, side effects, and safety aspects relating to PrEP. Medical and academic institutions were the sources of the largest number of videos (66/217, 30.4%), followed by consumers (63/217, 29.0%), community-based organizations (CBO; 48/217, 22.1%), and media (40/217, 18.4%). Videos uploaded by the media sources were more likely to discuss the cost of PrEP (P<.001), whereas the use of PrEP was less likely to be promoted in videos uploaded by individual consumers (P=.002) and more likely to be promoted in videos originated by CBOs (P=.009). The most common target audience for the videos was gay and bisexual men. Conclusions YouTube videos can be used to share reliable PrEP information with individuals. Further research is needed to identify the best practices for using this medium to promote and increase PrEP uptake. PMID:29467119
Cho, Hyo-Min; Barber, William C.; Ding, Huanjun; Iwanczyk, Jan S.; Molloi, Sabee
2014-01-01
Purpose: The possible clinical applications which can be performed using a newly developed detector depend on the detector's characteristic performance in a number of metrics including the dynamic range, resolution, uniformity, and stability. The authors have evaluated a prototype energy-resolved fast photon counting x-ray detector based on a silicon (Si) strip sensor used in an edge-on geometry with an application specific integrated circuit to record the number of x-rays and their energies at high flux and fast frame rates. The investigated detector was integrated with a dedicated breast spectral computed tomography (CT) system to make use of the detector's high spatial and energy resolution and low noise performance under conditions suitable for clinical breast imaging. The aim of this article is to investigate the intrinsic characteristics of the detector, in terms of maximum output count rate, spatial and energy resolution, and noise performance of the imaging system. Methods: The maximum output count rate was obtained with a 50 W x-ray tube with a maximum continuous output of 50 kVp at 1.0 mA. A 109Cd source, with a characteristic x-ray peak at 22 keV from Ag, was used to measure the energy resolution of the detector. The axial plane modulation transfer function (MTF) was measured using a 67 μm diameter tungsten wire. The two-dimensional (2D) noise power spectrum (NPS) was measured using flat field images, and noise equivalent quanta (NEQ) were calculated using the MTF and NPS results. The image quality parameters were studied as a function of various radiation doses and reconstruction filters. The one-dimensional (1D) NPS was used to investigate the effect of electronic noise elimination by varying the minimum energy threshold. Results: A maximum output count rate of 100 million counts per second per square millimeter (cps/mm2) has been obtained (1 million cps per 100 × 100 μm pixel). The electrical noise floor was less than 4 keV. The energy resolution measured with the 22 keV photons from the 109Cd source was less than 9%. A reduction of image noise was shown at all spatial frequencies in the 1D NPS as a result of the elimination of the electronic noise. The spatial resolution was measured to be just above 5 line pairs per mm (lp/mm), with 10% MTF at 5.4 mm−1. The 2D NPS and NEQ show a low noise floor and a linear dependence on dose. The reconstruction filter choice affected both the MTF and NPS results, but had a weak effect on the NEQ. Conclusions: The prototype energy-resolved photon counting Si strip detector can offer superior imaging performance for dedicated breast CT as compared to a conventional energy-integrating detector due to its high output count rate, high spatial and energy resolution, and low noise characteristics, which are essential for spectral breast CT imaging. PMID:25186390
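For context, NEQ is conventionally obtained from the measured curves as NEQ(f) = S² · MTF(f)² / NPS(f); the sketch below applies that relation to placeholder curves, which are illustrative and not the detector measurements reported here.

```python
# A minimal sketch of the standard relation combining MTF and NPS into NEQ:
# NEQ(f) = S^2 * MTF(f)^2 / NPS(f). The curves below are placeholders.
import numpy as np

f = np.linspace(0.1, 5.4, 50)                 # spatial frequency, mm^-1
mean_signal = 1000.0                          # large-area signal level (toy)
mtf = np.exp(-(f / 4.0) ** 2)                 # placeholder MTF curve
nps = 50.0 + 10.0 * np.exp(-f)                # placeholder NPS curve

neq = (mean_signal ** 2) * mtf ** 2 / nps
print("NEQ at 1, 3, 5 mm^-1:",
      [round(float(np.interp(x, f, neq)), 1) for x in (1.0, 3.0, 5.0)])
```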
Dyscalculia: neuroscience and education
Kaufmann, Liane
2010-01-01
Background Developmental dyscalculia is a heterogeneous disorder with largely dissociable performance profiles. Though our current understanding of the neurofunctional foundations of (adult) numerical cognition has increased considerably during the past two decades, there are still many unanswered questions regarding the developmental pathways of numerical cognition. Most studies on developmental dyscalculia are based upon adult calculation models which may not provide an adequate theoretical framework for understanding and investigating developing calculation systems. Furthermore, the applicability of neuroscience research to pedagogy has, so far, been limited. Purpose After providing an overview of current conceptualisations of numerical cognition and developmental dyscalculia, the present paper (1) reviews recent research findings that are suggestive of a neurofunctional link between fingers (finger gnosis, finger-based counting and calculation) and number processing, and (2) takes the latter findings as an example to discuss how neuroscience findings may impact on educational understanding and classroom interventions. Sources of evidence Finger-based number representations and finger-based calculation have deep roots in human ontology and phylogeny. Recently, accumulating empirical evidence supporting the hypothesis of a neurofunctional link between fingers and numbers has emerged from both behavioural and brain imaging studies. Main argument Preliminary but converging research supports the notion that finger gnosis and finger use seem to be related to calculation proficiency in elementary school children. Finger-based counting and calculation may facilitate the establishment of mental number representations (possibly by fostering the mapping from concrete non-symbolic to abstract symbolic number magnitudes), which in turn seem to be the foundations for successful arithmetic achievement. Conclusions Based on the findings illustrated here, it is plausible to assume that finger use might be an important and complementary aid (to more traditional pedagogical methods) to establish mental number representations and/or to facilitate learning to count and calculate. Clearly, future prospective studies are needed to investigate whether the explicit use of fingers in early mathematics teaching might prove to be beneficial for typically developing children and/or might support the mapping from concrete to abstract number representations in children with and without developmental dyscalculia. PMID:21258625
Smith, W.P.; Wiedenfeld, D.A.; Hanel, P.B.; Twedt, D.J.; Ford, R.P.; Cooper, R.J.; Smith, Winston Paul
1993-01-01
To quantify the efficacy of point count sampling in bottomland hardwood forests, we examined the influence of point count duration on corresponding estimates of the number of individuals and species recorded. To accomplish this we conducted a total of 82 point counts 7 May-16 May 1992, distributed among three habitats (Wet, Mesic, Dry) in each of three regions within the lower Mississippi Alluvial Valley (MAV). Each point count consisted of recording the number of individual birds (all species) seen or heard during the initial three minutes and per each minute thereafter for a period totaling ten minutes. In addition, we included 384 point counts recorded during an 8-week period in each of 3 years (1985-1987) among 56 randomly-selected forest patches within the bottomlands of western Tennessee. Each of these point counts consisted of recording the number of individuals (excluding migrating species) during each of four 5-minute intervals for a period totaling 20 minutes. To estimate minimum sample size, we determined sampling variation at each level (region, habitat, and locality) with the 82 point counts from the lower MAV and applied the procedures of Neter and Wasserman (1974:493; Applied linear statistical models). Neither the cumulative number of individuals nor the number of species per sampling interval attained an asymptote after 10 or 20 minutes of sampling. For western Tennessee bottomlands, total individual and species counts relative to point count duration were similar among years and comparable to the pattern observed throughout the lower MAV. Across the MAV, we recorded a total of 1,621 birds distributed among 52 species, with the majority (872/1,621) representing 8 species. More birds were recorded within 25-50 m than in either of the other distance categories. There was significant variation in numbers of individuals and species among point counts. For both, significant differences between region and patch (nested within region) occurred; neither habitat nor the interaction between habitat and region was significant. For α = 0.05 and β = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total birds (MSE = 9.28) and species (MSE = 3.79), respectively; 25 percent of the mean could be achieved with 5 counts per factor level. Corresponding sample sizes required to detect differences for rarer species (e.g., Wood Thrush) were 500; for common species (e.g., Northern Cardinal) this same level of precision could be achieved with 100 counts.
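For readers wanting the arithmetic behind such sample-size statements, a common normal-approximation formula is sketched below with the abstract's MSE values and illustrative detectable differences; this is a generic approximation, not the exact Neter and Wasserman procedure used by the authors, and it will not reproduce their figures exactly.

```python
# A minimal, hedged sketch of the usual normal-approximation sample-size
# formula for detecting a difference d between two factor levels, given a
# mean squared error and the alpha/beta error rates quoted above.
from scipy.stats import norm

def min_samples_per_level(mse, d, alpha=0.05, beta=0.10):
    """Counts per factor level to detect difference d with the given error rates."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return 2 * (z ** 2) * mse / d ** 2

# MSE values from the abstract; the detectable differences are illustrative.
for mse, d in [(9.28, 2.0), (3.79, 1.0)]:
    print(f"MSE={mse}: about {min_samples_per_level(mse, d):.0f} counts per level "
          f"to detect a difference of {d}")
```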
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieira, J. D.; Crawford, T. M.; Switzer, E. R.
2010-08-10
We report the results of an 87 deg{sup 2} point-source survey centered at R.A. 5{sup h}30{sup m}, decl. -55{sup 0} taken with the South Pole Telescope at 1.4 and 2.0 mm wavelengths with arcminute resolution and milli-Jansky depth. Based on the ratio of flux in the two bands, we separate the detected sources into two populations, one consistent with synchrotron emission from active galactic nuclei and the other consistent with thermal emission from dust. We present source counts for each population from 11 to 640 mJy at 1.4 mm and from 4.4 to 800 mJy at 2.0 mm. The 2.0 mm counts are dominated by synchrotron-dominated sources across our reported flux range; the 1.4 mm counts are dominated by synchrotron-dominated sources above {approx}15 mJy and by dust-dominated sources below that flux level. We detect 141 synchrotron-dominated sources and 47 dust-dominated sources at signal-to-noise ratio S/N >4.5 in at least one band. All of the most significantly detected members of the synchrotron-dominated population are associated with sources in previously published radio catalogs. Some of the dust-dominated sources are associated with nearby (z << 1) galaxies whose dust emission is also detected by the Infrared Astronomy Satellite. However, most of the bright, dust-dominated sources have no counterparts in any existing catalogs. We argue that these sources represent the rarest and brightest members of the population commonly referred to as submillimeter galaxies (SMGs). Because these sources are selected at longer wavelengths than in typical SMG surveys, they are expected to have a higher mean redshift distribution and may provide a new window on galaxy formation in the early universe.
NASA Technical Reports Server (NTRS)
Chuang, Hsiao-Chi; Hsiao, Ta-Chih; Wang, Sheng-Hsiang; Tsay, Si-Chee; Lin, Neng-Huei
2016-01-01
Biomass burning (BB) frequently occurs in Southeast Asia (SEA), which significantly affects air quality and could consequently lead to adverse health effects. The aim of this study was to characterize particulate matter (PM) and black carbon (BC) emitted from BB source regions in SEA and their potential for deposition in the alveolar region of human lungs. A 31-day characterization of PM profiles was conducted at the Doi Ang Khang (DAK) meteorology station in northern Thailand in March 2013. Substantial particle numbers (10,147 +/- 5,800 per cubic centimeter) with a geometric mean diameter (GMD) of 114.4 +/- 9.2 nm were found at the study site. The hourly-average mass concentration of PM less than 2.5 microns in aerodynamic diameter (PM2.5) was 78.0 +/- 34.5 micrograms per cubic meter, whereas the black carbon (BC) mass concentration was 4.4 +/- 2.6 micrograms per cubic meter. Notably, high concentrations of nanoparticle surface area (100.5 +/- 54.6 square micrometers per cubic centimeter) emitted from biomass burning can be inhaled into the human alveolar region. Significant correlations with fire counts within different ranges around DAK were found for particle number, the surface area concentration of alveolar deposition, and BC. The fire counts within 100 to 150 km showed the highest Pearson's r for particle number and surface area concentration, suggesting that 12 to 24 hr could be a fair time scale for the initial aging of BB aerosols. In conclusion, biomass burning is an important PM source in SEA, particularly of nanoparticles, which have a high potential to be inhaled into the lung and to interact with alveolar cells, leading to adverse respiratory effects; people living in this region could be at higher risk of PM exposure.
Zani, Carlos L; Carroll, Anthony R
2017-06-23
The discovery of novel and/or new bioactive natural products from biota sources is often confounded by the reisolation of known natural products. Dereplication strategies that involve the analysis of NMR and MS spectroscopic data to infer structural features present in purified natural products, in combination with database searches of these substructures, provide an efficient method to rapidly identify known natural products. Unfortunately this strategy has been hampered by the lack of publicly available and comprehensive natural product databases and open source cheminformatics tools. A new platform, DEREP-NP, has been developed to help solve this problem. DEREP-NP uses the open source cheminformatics program DataWarrior to generate a database containing counts of 65 structural fragments present in 229,358 natural product structures derived from plants, animals, and microorganisms, published before 2013 and freely available in the nonproprietary Universal Natural Products Database (UNPD). By counting the number of times one or more of these structural features occurs in an unknown compound, as deduced from the analysis of its NMR (1H, HSQC, and/or HMBC) and/or MS data, matching structures carrying the same numeric combination of searched structural features can be retrieved from the database. Confirmation that the matching structure is the same compound can then be verified through literature comparison of spectroscopic data. This methodology can be applied to both purified natural products and fractions containing a small number of individual compounds that are often generated as screening libraries. The utility of DEREP-NP has been verified through the analysis of spectra derived from compounds (and fractions containing two or three compounds) isolated from plant, marine invertebrate, and fungal sources. DEREP-NP is freely available at https://github.com/clzani/DEREP-NP and will help to streamline the natural product discovery process.
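The fragment-count matching idea at the heart of this approach is easy to sketch with generic cheminformatics tooling. The example below is a minimal illustration in Python using RDKit rather than the DataWarrior workflow that DEREP-NP actually uses; the three fragment definitions, the toy candidate database, and the SMILES strings are hypothetical placeholders, not the 65 fragments or UNPD entries of the real platform.

```python
# Minimal sketch of fragment-count dereplication (illustrative only).
# DEREP-NP itself counts 65 DataWarrior fragments over the UNPD database;
# here a few SMARTS patterns and a toy database stand in for both.
from rdkit import Chem

FRAGMENTS = {
    "benzene_ring": Chem.MolFromSmarts("c1ccccc1"),
    "carbonyl_C": Chem.MolFromSmarts("[CX3]=O"),
    "ether_or_hydroxyl_O": Chem.MolFromSmarts("[OX2]"),
}

def fragment_counts(mol):
    """Count occurrences of each fragment in a molecule."""
    return {name: len(mol.GetSubstructMatches(patt))
            for name, patt in FRAGMENTS.items()}

# Toy "database" of known natural products (placeholder structures).
database = {
    "vanillin": Chem.MolFromSmiles("O=Cc1ccc(O)c(OC)c1"),
    "coumarin": Chem.MolFromSmiles("O=C1C=Cc2ccccc2O1"),
}
db_counts = {name: fragment_counts(m) for name, m in database.items()}

# Fragment counts deduced for an "unknown" compound from its NMR/MS data
# (here simply recomputed from a structure for demonstration).
unknown_counts = fragment_counts(Chem.MolFromSmiles("O=Cc1ccc(O)c(OC)c1"))

matches = [name for name, counts in db_counts.items() if counts == unknown_counts]
print("Candidate known compounds:", matches)  # -> ['vanillin']
```

In the real workflow the fragment counts for the unknown come from interpreting its NMR and MS spectra, and the candidate list is then confirmed against literature spectroscopic data.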
Sample size and allocation of effort in point count sampling of birds in bottomland hardwood forests
Smith, W.P.; Twedt, D.J.; Cooper, R.J.; Wiedenfeld, D.A.; Hamel, P.B.; Ford, R.P.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect of increasing the number of points or visits by comparing results of 150 four-minute point counts obtained from each of four stands on Delta Experimental Forest (DEF) during May 8-May 21, 1991 and May 30-June 12, 1992. For each stand, we obtained bootstrap estimates of mean cumulative number of species each year from all possible combinations of six points and six visits. ANOVA was used to model cumulative species as a function of number of points visited, number of visits to each point, and interaction of points and visits. There was significant variation in numbers of birds and species between regions and localities (nested within region); neither habitat, nor the interaction between region and habitat, was significant. For α = 0.05 and β = 0.10, minimum sample size estimates (per factor level) varied by orders of magnitude depending upon the observed or specified range of desired detectable difference. For observed regional variation, 20 and 40 point counts were required to accommodate variability in total individuals (MSE = 9.28) and species (MSE = 3.79), respectively, whereas a detectable difference of ≤25 percent of the mean could be achieved with five counts per factor level. Sample size sufficient to detect actual differences of Wood Thrush (Hylocichla mustelina) was >200, whereas the Prothonotary Warbler (Protonotaria citrea) required <10 counts. Differences in mean cumulative species were detected among number of points visited and among number of visits to a point. In the lower MAV, mean cumulative species increased with each added point through five points and with each additional visit through four visits. Although no interaction was detected between number of points and number of visits, when paired reciprocals were compared, more points invariably yielded a significantly greater cumulative number of species than more visits to a point. Still, 36 point counts per stand during each of two breeding seasons detected only 52 percent of the known available species pool in DEF.
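The minimum-sample-size calculations referred to here (counts per factor level needed to detect a specified difference, given a mean squared error) can be approximated with a standard normal-theory power formula. The sketch below is a generic two-group calculation, not necessarily the exact tabled procedure of Neter and Wasserman (1974); the MSE value and the candidate detectable differences are taken from the abstract purely as example inputs.

```python
# Approximate counts per factor level needed to detect a difference `delta`
# between two group means with residual variance MSE, significance alpha and
# power 1 - beta, using n >= 2 * MSE * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2.
# Generic normal-approximation formula, not the exact Neter & Wasserman tables.
from scipy.stats import norm

def min_counts_per_level(mse, delta, alpha=0.05, beta=0.10):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(1 - beta)
    return 2 * mse * z ** 2 / delta ** 2

# Example: MSE = 9.28 (total individuals, from the abstract), several deltas.
for delta in (1.0, 2.0, 3.0):
    print(f"delta = {delta}: n >= {min_counts_per_level(9.28, delta):.1f}")
```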
Active Galactic Nuclei, Host Star Formation, and the Far Infrared
NASA Astrophysics Data System (ADS)
Draper, Aden R.; Ballantyne, D. R.
2011-05-01
Telescopes like Herschel and the Atacama Large Millimeter/submillimeter Array (ALMA) are creating new opportunities to study sources in the far infrared (FIR), a wavelength region dominated by cold dust emission. Probing cold dust in active galaxies allows for study of the star formation history of active galactic nuclei (AGN) hosts. The FIR is also an important spectral region for observing AGN which are heavily enshrouded by dust, such as Compton thick (CT) AGN. By using information from deep X-ray surveys and cosmic X-ray background synthesis models, we compute Cloudy photoionization simulations which are used to predict the spectral energy distribution (SED) of AGN in the FIR. Expected differential number counts of AGN and their host galaxies are calculated in the Herschel bands. The expected contribution of AGN and their hosts to the cosmic infrared background (CIRB) is also computed. Multiple star formation scenarios are investigated using a modified blackbody star formation SED. It is found that FIR observations at 350 and 500 μm are an excellent tool in determining the star formation history of AGN hosts. Additionally, the AGN contribution to the CIRB can be used to determine whether star formation in AGN hosts evolves differently than in normal galaxies. AGN and host differential number counts are dominated by CT AGN in the Herschel-SPIRE bands. Therefore, X-ray stacking of bright SPIRE sources is likely to disclose a large fraction of the CT AGN population.
Simulations of the Far-infrared Sky
NASA Astrophysics Data System (ADS)
Andreani, P.; Lutz, D.; Poglitsch, A.; Genzel, R.
2001-07-01
One of the main tasks of FIRST is to carry out shallow and deep surveys in the far-IR / submm spectral domain with unprecedented sensitivity. Selecting unbiased samples out of deep surveys will be crucial to determine the history of evolving dusty objects, and therefore of star-formation. However, the usual procedures to extract information from a survey, i.e. selection of sources, computing the number counts, the luminosity and the correlation functions, and so on, cannot lead to a fully satisfactory and rigorous determination of the source characteristics. This is especially true in the far-IR, where source identification and redshift determination are difficult. To check the reliability of results, the simulation of a large number of mock surveys is mandatory. This provides information on the observational biases and instrumental effects introduced by the observing procedures and allows one to understand how the different parameters affect source observation and detection. The project we are undertaking consists of (1) simulating the far-IR/submm surveys as PACS (and SPIRE) will observe them, (2) extracting complete mock catalogues from these simulations, (3) selecting high-z candidates in colour-colour diagrams for the foreseen photometric bands, and (4) testing different observing strategies to assess observational biases and understand how the different parameters affect source observation and detection.
Freezing Coherent Field Growth in a Cavity by the Quantum Zeno Effect
NASA Astrophysics Data System (ADS)
Bernu, J.; Deléglise, S.; Sayrin, C.; Kuhr, S.; Dotsenko, I.; Brune, M.; Raimond, J. M.; Haroche, S.
2008-10-01
We have frozen the coherent evolution of a field in a cavity by repeated measurements of its photon number. We use circular Rydberg atoms dispersively coupled to the cavity mode for an absorption-free photon counting. These measurements inhibit the growth of a field injected in the cavity by a classical source. This manifestation of the quantum Zeno effect illustrates the backaction of the photon number determination onto the field phase. The residual growth of the field can be seen as a random walk of its amplitude in the two-dimensional phase space. This experiment sheds light onto the measurement process and opens perspectives for active quantum feedback.
A Hidden Pitfall in the Preparation of Agar Media Undermines Microorganism Cultivability
Tanaka, Tomohiro; Kawasaki, Kosei; Daimon, Serina; Kitagawa, Wataru; Yamamoto, Kyosuke; Tamaki, Hideyuki; Tanaka, Michiko; Nakatsu, Cindy H.
2014-01-01
Microbiologists have been using agar growth medium for over 120 years. It revolutionized microbiology in the 1890s, when microbiologists were seeking effective methods to isolate microorganisms, which led to the successful cultivation of microorganisms as single clones. But there has long been a disparity between total cell counts and cultivable cell counts on plates, often referred to as the “great plate count anomaly,” a phenomenon that remains unsolved. Here, we report that a common practice microbiologists have employed to prepare agar medium has a hidden pitfall: when phosphate was autoclaved together with agar to prepare solid growth media (PT medium), total colony counts were remarkably lower than those grown on agar plates in which phosphate and agar were separately autoclaved and mixed right before solidification (PS medium). We used a pure culture of Gemmatimonas aurantiaca T-27T and three representative sources of environmental samples, soil, sediment, and water, as inocula and compared colony counts between PT and PS agar plates. There were higher numbers of CFU on PS medium than on PT medium using G. aurantiaca or any of the environmental samples. Chemical analysis of PT agar plates suggested that hydrogen peroxide was contributing to growth inhibition. Comparison of 454 pyrosequences of the environmental samples to the isolates revealed that taxa grown on PS medium were more reflective of the original community structure than those grown on PT medium. Moreover, more hitherto-uncultivated microbes grew on PS than on PT medium. PMID:25281372
Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects
ERIC Educational Resources Information Center
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.
2018-01-01
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
On Using a Space Telescope to Detect Weak-lensing Shear
NASA Astrophysics Data System (ADS)
Tung, Nathan; Wright, Edward
2017-11-01
Ignoring redshift dependence, the statistical performance of a weak-lensing survey is set by two numbers: the effective shape noise of the sources, which includes the intrinsic ellipticity dispersion and the measurement noise, and the density of sources that are useful for weak-lensing measurements. In this paper, we provide some general guidance for weak-lensing shear measurements from a “generic” space telescope by looking for the optimum wavelength bands to maximize the galaxy flux signal-to-noise ratio (S/N) and minimize ellipticity measurement error. We also calculate an effective galaxy number per square degree across different wavelength bands, taking into account the density of sources that are useful for weak-lensing measurements and the effective shape noise of sources. Galaxy data collected from the ultra-deep UltraVISTA Ks-selected and R-selected photometric catalogs (Muzzin et al. 2013) are fitted to radially symmetric Sérsic galaxy light profiles. The Sérsic galaxy profiles are then stretched to impose an artificial weak-lensing shear, and then convolved with a pure Airy Disk PSF to simulate imaging of weak gravitationally lensed galaxies from a hypothetical diffraction-limited space telescope. For our model calculations and sets of galaxies, our results show that the peak in the average galaxy flux S/N, the minimum average ellipticity measurement error, and the highest effective galaxy number counts all lie around the K-band near 2.2 μm.
An asymptotic theory of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Envia, Edmane
1992-01-01
A theory for predicting the noise field of supersonic propellers with realistic blade geometries is presented. The theory, which utilizes a large-blade-count approximation, provides an efficient formula for predicting the radiation of sound from all three sources of propeller noise. Comparisons with a full numerical integration indicate that the levels predicted by this formula are quite accurate. Calculations also show that, for high speed propellers, the noise radiated by the Lighthill quadrupole source is rather substantial when compared with the noise radiated by the blade thickness and loading sources. Results from a preliminary application of the theory indicate that the peak noise level generated by a supersonic propeller initially increases with increasing tip helical Mach number, but eventually reaches a plateau and does not increase further. The predicted trend shows qualitative agreement with the experimental observations.
Indirect measurement of three-photon correlation in nonclassical light sources
NASA Astrophysics Data System (ADS)
Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon
2016-06-01
We observe the three-photon correlation in nonclassical light sources by using an indirect measurement scheme based on the dead-time effect of photon-counting detectors. We first develop a general theory which enables us to extract the three-photon correlation from the two-photon correlation of an arbitrary light source measured with detectors with finite dead times. We then confirm the validity of our measurement scheme in experiments done with a cavity-QED microlaser operating with a large intracavity mean photon number exhibiting both sub- and super-Poissonian photon statistics. The experimental results are in good agreement with the theoretical expectation. Our measurement scheme provides an alternative approach for N-photon correlation measurement employing (N-1) detectors and thus a reduced measurement time for a given signal-to-noise ratio, compared to the usual scheme requiring N detectors.
A whole-system approach to x-ray spectroscopy in cargo inspection systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langeveld, Willem G. J.; Gozani, Tsahi; Ryge, Peter
The bremsstrahlung x-ray spectrum used in high-energy, high-intensity x-ray cargo inspection systems is attenuated and modified by the materials in the cargo in a Z-dependent way. Therefore, spectroscopy of the detected x rays yields information about the Z of the x-rayed cargo material. It has previously been shown that such Z-Spectroscopy (Z-SPEC) is possible under certain circumstances. A statistical approach, Z-SCAN (Z-determination by Statistical Count-rate ANalysis), has also been shown to be effective, and it can be used either by itself or in conjunction with Z-SPEC when the x-ray count rate is too high for individual x-ray spectroscopy. Both techniques require fast x-ray detectors and fast digitization electronics. It is desirable (and possible) to combine all techniques, including x-ray imaging of the cargo, in a single detector array, to reduce costs, weight, and overall complexity. In this paper, we take a whole-system approach to x-ray spectroscopy in x-ray cargo inspection systems, and show how the various parts interact with one another. Faster detectors and read-out electronics are beneficial for both techniques. A higher duty-factor x-ray source allows lower instantaneous count rates at the same overall x-ray intensity, improving the range of applicability of Z-SPEC in particular. Using an intensity-modulated advanced x-ray source (IMAXS) allows reducing the x-ray count rate for cargoes with higher transmission, and a stacked-detector approach may help material discrimination for the lowest attenuations. Image processing and segmentation allow derivation of results for entire objects, and subtraction of backgrounds. We discuss R and D performed under a number of different programs, showing progress made in each of the interacting subsystems. We discuss results of studies into faster scintillation detectors, including ZnO, BaF₂, and PbWO₄, as well as suitable photo-detectors, read-out and digitization electronics. We discuss high-duty-factor linear-accelerator x-ray sources and their associated requirements, and how such sources improve spectroscopic techniques. We further discuss how image processing techniques help in correcting for backgrounds and overlapping materials. In sum, we present an integrated picture of how to optimize a cargo inspection system for x-ray spectroscopy.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovard R. Perry; David L. Georgeson
2011-07-01
This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-125 thyroid counting. The source used for the calibration was a DOE-manufactured Am-241/Eu-152 source contained in a 22 ml vial (BEA Am-241/Eu-152 RMC II-1), with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor. This position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1, and validation testing was performed using an I-125 source in a 30 ml vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.
Estimating and comparing microbial diversity in the presence of sequencing errors
Chiu, Chun-Huo
2016-01-01
Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
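The Hill-number measures used throughout this abstract have a compact closed form, ^qD = (Σ_i p_i^q)^{1/(1-q)}, with the q → 1 limit equal to the exponential of Shannon entropy. The sketch below is only the naive plug-in estimator applied to an illustrative abundance vector; it does not include the spurious-singleton correction or the asymptotic and rarefaction/extrapolation estimators that are the paper's actual contribution.

```python
# Plug-in (empirical) Hill numbers of order q from taxon abundance counts.
# q = 0 gives richness, q = 1 the exponential of Shannon entropy,
# q = 2 the inverse Simpson index. No singleton correction is applied here.
import numpy as np

def hill_number(abundances, q):
    counts = np.asarray(abundances, dtype=float)
    p = counts[counts > 0] / counts.sum()      # relative abundances
    if np.isclose(q, 1.0):                     # q -> 1 limit
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

abundances = [120, 80, 40, 10, 5, 1, 1, 1]     # illustrative abundance vector
for q in (0, 1, 2):
    print(f"q = {q}: {hill_number(abundances, q):.2f}")
```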
XMM-Newton 13H deep field - I. X-ray sources
NASA Astrophysics Data System (ADS)
Loaring, N. S.; Dwelly, T.; Page, M. J.; Mason, K.; McHardy, I.; Gunn, K.; Moss, D.; Seymour, N.; Newsam, A. M.; Takata, T.; Sekguchi, K.; Sasseen, T.; Cordova, F.
2005-10-01
We present the results of a deep X-ray survey conducted with XMM-Newton, centred on the UK ROSAT 13H deep field area. This region covers 0.18 deg², and is the first of the two areas covered with XMM-Newton as part of an extensive multiwavelength survey designed to study the nature and evolution of the faint X-ray source population. We have produced detailed Monte Carlo simulations to obtain a quantitative characterization of the source detection procedure and to assess the reliability of the resultant source list. We use the simulations to establish a likelihood threshold, above which we expect less than seven (3 per cent) of our sources to be spurious. We present the final catalogue of 225 sources. Within the central 9 arcmin, 68 per cent of source positions are accurate to 2 arcsec, making optical follow-up relatively straightforward. We construct the N(>S) relation in four energy bands: 0.2-0.5, 0.5-2, 2-5 and 5-10 keV. In all but our highest energy band we find that the source counts can be represented by a double power law with a bright-end slope consistent with the Euclidean case and a break around 10⁻¹⁴ erg cm⁻² s⁻¹. Below this flux, the counts exhibit a flattening. Our source counts reach densities of 700, 1300, 900 and 300 deg⁻² at fluxes of 4.1 × 10⁻¹⁶, 4.5 × 10⁻¹⁶, 1.1 × 10⁻¹⁵ and 5.3 × 10⁻¹⁵ erg cm⁻² s⁻¹ in the 0.2-0.5, 0.5-2, 2-5 and 5-10 keV energy bands, respectively. We have compared our source counts with those in the two Chandra deep fields and the Lockman Hole, and found our source counts to be amongst the highest of these fields in all energy bands. We resolve >51 per cent (>50 per cent) of the X-ray background emission in the 1-2 keV (2-5 keV) energy bands.
HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorat, K.; Subrahmanyan, R.; Saripalli, L.
2013-01-01
The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam⁻¹ rms noise. The images (centered at R.A. 00h35m00s, decl. −67°00'00'' and R.A. 00h59m17s, decl. −67°00'00'', J2000 epoch) cover 8.42 deg² of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area correction, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
Recommended methods for monitoring change in bird populations by counting and capture of migrants
David J. T. Hussell; C. John Ralph
2005-01-01
Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...
Monitoring trends in bird populations: addressing background levels of annual variability in counts
Jared Verner; Kathryn L. Purcell; Jennifer G. Turner
1996-01-01
Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...
NASA Astrophysics Data System (ADS)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.
2018-04-01
The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
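The two-slope behaviour reported here corresponds to a broken power-law differential source-count distribution. The sketch below writes such a model in the simplest form, dN/dS ∝ (S/S_b)^(-γ) with the index switching at the break flux S_b; the break flux and the two indexes quoted in the abstract are used only as illustrative inputs, and the normalization A is arbitrary.

```python
# Broken power-law model for the differential source-count distribution dN/dS.
# Parameter values are taken from the abstract as examples; A is arbitrary.
import numpy as np

def dnds(S, A=1.0, S_break=3.5e-11, gamma_above=2.09, gamma_below=1.07):
    S = np.asarray(S, dtype=float)
    gamma = np.where(S >= S_break, gamma_above, gamma_below)
    return A * (S / S_break) ** (-gamma)       # continuous at S = S_break

fluxes = np.logspace(-12, -9, 7)               # ph cm^-2 s^-1
for S, n in zip(fluxes, dnds(fluxes)):
    print(f"S = {S:.2e}  dN/dS (arbitrary units) = {n:.3e}")
```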
SPITZER 70 AND 160 {mu}m OBSERVATIONS OF THE COSMOS FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayer, D. T.; Huynh, M. T.; Bhattacharya, B.
2009-11-15
We present Spitzer 70 and 160 μm observations of the COSMOS Spitzer survey (S-COSMOS). The data processing techniques are discussed for the publicly released products consisting of images and source catalogs. We present accurate 70 and 160 μm source counts of the COSMOS field and find reasonable agreement with measurements in other fields and with model predictions. The previously reported counts for GOODS-North and the extragalactic First Look Survey are updated with the latest calibration, and counts are measured based on the large-area SWIRE survey to constrain the bright source counts. We measure an extragalactic confusion noise level of σ_c = 9.4 ± 3.3 mJy (q = 5) for the MIPS 160 μm band based on the deep S-COSMOS data and report an updated confusion noise level of σ_c = 0.35 ± 0.15 mJy (q = 5) for the MIPS 70 μm band.
2013 Kids Count in Colorado! Community Matters
ERIC Educational Resources Information Center
Colorado Children's Campaign, 2013
2013-01-01
"Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…
Activity patterns and monitoring numbers of Horned Puffins and Parakeet Auklets
Hatch, Shyla A.
2002-01-01
Nearshore counts of birds on the water and time-lapse photography were used to monitor seasonal activity patterns and interannual variation in numbers of Horned Puffins (Fratercula corniculata) and Parakeet Auklets (Aethia psittacula) at the Semidi Islands, Alaska. The best period for over-water counts was mid egg-laying through hatching in auklets and late prelaying through early hatching in puffins. Daily counts (07.00 h-09.30 h) varied widely, with peak numbers and days with few or no birds present occurring throughout the census period. Variation among annual means in four years amounted to 26% and 72% of total count variation in puffins and auklets, respectively. Time-lapse photography of nesting habitat in early incubation revealed a morning (08.00 h-12.00 h) peak in the number of puffins loitering on study plots. Birds recorded in time-lapse images never comprised more than a third of the estimated breeding population on a plot. Components of variance in the time-lapse study were 29% within hours, 9% among hours (08.00 h-12.00 h), and 62% among days (8-29 June). Variability of over-water and land-based counts is reduced by standardizing the time of day when counts are made, but weather conditions had little influence on either type of count. High interannual variation of population indices implies low power to detect numerical trends in crevice-nesting auklets and puffins.
Using Pinochle to motivate the restricted combinations with repetitions problem
NASA Astrophysics Data System (ADS)
Gorman, Patrick S.; Kunkel, Jeffrey D.; Vasko, Francis J.
2011-07-01
A standard example used in introductory combinatorics courses is to count the number of five-card poker hands possible from a standard deck of 52 distinct cards. A more interesting problem is to count the number of distinct hands possible from a Pinochle deck, in which there are multiple, but obviously limited, copies of each type of card (two copies for single deck, four for double deck). This problem is more interesting because our only concern is to count the number of distinguishable hands that can be dealt. In this note, under various scenarios, we will discuss two combinatorial techniques for counting these hands; namely, the inclusion-exclusion principle and generating functions. We will then show that these Pinochle examples motivate a general counting formula for what Riordan calls 'regular' combinations. Finally, we prove the correctness of this formula using generating functions.
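The generating-function technique mentioned here is easy to make concrete: for a deck with c copies of each of n distinct card types, the number of distinguishable h-card hands is the coefficient of x^h in (1 + x + ... + x^c)^n. The sketch below assumes a single Pinochle deck (24 distinct cards, two copies each) and a 12-card hand as the worked case; for the double-deck game the copies (and the distinct-card count) change accordingly.

```python
# Count distinguishable hands as the coefficient of x^hand_size in
# (1 + x + ... + x^copies)^n_distinct, built by repeated polynomial multiplication.
# Assumed case: single Pinochle deck, 24 distinct cards with 2 copies each.
def distinguishable_hands(n_distinct=24, copies=2, hand_size=12):
    poly = [1]                                   # coefficients, constant term first
    factor = [1] * (copies + 1)                  # 1 + x + ... + x^copies
    for _ in range(n_distinct):
        product = [0] * (len(poly) + copies)
        for i, a in enumerate(poly):
            for j, b in enumerate(factor):
                product[i + j] += a * b
        poly = product
    return poly[hand_size]

print(distinguishable_hands())                   # distinguishable 12-card single-deck hands
```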
NASA Astrophysics Data System (ADS)
Austermann, Jason Edward
One of the primary drivers in the development of large format millimeter detector arrays is the study of sub-millimeter galaxies (SMGs) - a population of very luminous high-redshift dust-obscured starbursts that are widely believed to be the dominant contributor to the Far-Infrared Background (FIB). The characterization of such a population requires the ability to map large patches of the (sub-)millimeter sky to high sensitivity within a feasible amount of time. I present this dissertation on the design, integration, and characterization of the 144-pixel AzTEC millimeter-wave camera and its application to the study of the sub-millimeter galaxy population. In particular, I present an unprecedented characterization of the "blank-field" (fields with no known mass bias) SMG number counts by mapping over 0.5 deg² to 1.1 mm depths of ~1 mJy - a previously unattained depth on these scales. This survey provides the tightest SMG number counts available, particularly for the brightest and rarest SMGs that require large survey areas for a significant number of detections. These counts are compared to the predictions of various models of the evolving mm/sub-mm source population, providing important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation. I also present the results of an AzTEC 0.15 deg² survey of the COSMOS field, which uncovers a significant over-density of bright SMGs that are spatially correlated to foreground mass structures, presumably as a result of gravitational lensing. Finally, I compare the results of the available SMG surveys completed to date and explore the effects of cosmic variance on the interpretation of individual surveys.
An Embodiment Perspective on Number-Space Mapping in 3.5-Year-Old Dutch Children.
van 't Noordende, Jaccoline E; Volman, M Chiel J M; Leseman, Paul P M; Kroesbergen, Evelyn H
2017-01-01
Previous research suggests that block adding, subtracting and counting direction are early forms of number-space mapping. In this study, an embodiment perspective on these skills was taken. Embodiment theory assumes that cognition emerges through sensory-motor interaction with the environment. In line with this assumption, it was investigated if counting and adding/subtracting direction in young children is related to the hand they use during task performance. Forty-eight 3.5-year-old children completed a block adding, subtracting and counting task. They had to add and remove a block from a row of three blocks and count a row of five blocks. Adding, subtracting and counting direction were related to the hand the children used for task performance. Most children who used their right hand added, removed and started counting the blocks at the right side of the row. Most children who used their left hand added, removed and started counting the blocks at the left side of the row. It can be concluded that number-space mapping, as measured by direction of adding, subtracting and counting blocks, in young children is embodied: It is not fixed, but is related to the situation. © 2016 The Authors Infant and Child Development Published by John Wiley & Sons, Ltd.
Integrable optical-fiber source of polarization-entangled photon pairs in the telecom band
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Xiaoying; Liang Chuang; Fook Lee, Kim
We demonstrate an optical-fiber-based source of polarization-entangled photon pairs with improved quality and efficiency, which has been integrated with off-the-shelf telecom components and is, therefore, well suited for quantum communication applications in the 1550-nm telecom band. Polarization entanglement is produced by simultaneously pumping a loop of standard dispersion-shifted fiber with two orthogonally polarized pump pulses, one propagating in the clockwise and the other in the counterclockwise direction. We characterize this source by investigating two-photon interference between the generated signal-idler photon pairs under various conditions. The experimental parameters are carefully optimized to maximize the generated photon-pair correlation and to minimize contamination of the entangled photon pairs from extraneously scattered background photons that are produced by the pump pulses for two reasons: (i) spontaneous Raman scattering causes uncorrelated photons to be emitted in the signal and idler bands and (ii) broadening of the pump-pulse spectrum due to self-phase modulation causes pump photons to leak into the signal and idler bands. We obtain two-photon interference with visibility >90% without subtracting counts caused by the background photons (only dark counts of the detectors are subtracted), when the mean photon number in the signal (idler) channel is about 0.02/pulse, while no interference is observed in direct detection of either the signal or idler photons.
A conceptual guide to detection probability for point counts and other count-based survey methods
D. Archibald McCallum
2005-01-01
Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...
Are the birch trees in Southern England a source of Betula pollen for North London?
Skjøth, C A; Smith, M; Brandt, J; Emberlin, J
2009-01-01
Birch pollen is highly allergenic. Knowledge of daily variations, atmospheric transport and source areas of birch pollen is important for exposure studies and for warnings to the public, especially for large cities such as London. Our results show that broad-leaved forests with high birch tree densities are located to the south and west of London. Bi-hourly Betula pollen concentrations for all the days included in the study, and for all available days with high birch pollen counts (daily average birch pollen counts>80 grains/m3), show that, on average, there is a peak between 1400 hours and 1600 hours. Back-trajectory analysis showed that, on days with high birch pollen counts (n=60), 80% of air masses arriving at the time of peak diurnal birch pollen count approached North London from the south in a 180 degree arc from due east to due west. Detailed investigations of three Betula pollen episodes, with distinctly different diurnal patterns compared to the mean daily cycle, were used to illustrate how night-time maxima (2200-0400 hours) in Betula pollen counts could be the result of transport from distant sources or long transport times caused by slow moving air masses. We conclude that the Betula pollen recorded in North London could originate from sources found to the west and south of the city and not just trees within London itself. Possible sources outside the city include Continental Europe and the Betula trees within the broad-leaved forests of Southern England.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
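The simulation setup described (birds within a fixed count radius w, detected on each of several occasions with half-normal probability in distance) is straightforward to reproduce in outline. The sketch below is a simplified, assumption-laden version: it places birds uniformly over the disc, applies p(r) = g0·exp(-r²/(2σ²)) per occasion, and only contrasts the true number present with the number ever detected; it does not implement any of the nine estimation methods compared in the paper, and the parameter values are illustrative.

```python
# Simulate distance-related heterogeneity in point counts:
# birds uniform over a disc of radius w; per-occasion detection probability
# p(r) = g0 * exp(-r^2 / (2 * sigma^2)); K occasions.
# Reports only the naive "ever detected" count, not the paper's estimators.
import numpy as np

rng = np.random.default_rng(1)

def simulate_once(n_birds=50, w=100.0, sigma=50.0, g0=1.0, K=4):
    r = w * np.sqrt(rng.uniform(size=n_birds))         # uniform over disc area
    p = g0 * np.exp(-r ** 2 / (2 * sigma ** 2))        # per-occasion detection prob
    detected = rng.uniform(size=(K, n_birds)) < p      # K independent occasions
    return detected.any(axis=0).sum()

ever_detected = [simulate_once() for _ in range(1000)]
print(f"true N = 50, mean ever-detected over 4 occasions = {np.mean(ever_detected):.1f}")
```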
Blocki, Anna; Wang, Yingting; Koch, Maria; Goralczyk, Anna; Beyer, Sebastian; Agarwal, Nikita; Lee, Michelle; Moonshi, Shehzahdi; Dewavrin, Jean-Yves; Peh, Priscilla; Schwarz, Herbert; Bhakoo, Kishore; Raghunath, Michael
2015-03-01
Autologous cells hold great potential for personalized cell therapy, reducing immunological and risk of infections. However, low cell counts at harvest with subsequently long expansion times with associated cell function loss currently impede the advancement of autologous cell therapy approaches. Here, we aimed to source clinically relevant numbers of proangiogenic cells from an easy accessible cell source, namely peripheral blood. Using macromolecular crowding (MMC) as a biotechnological platform, we derived a novel cell type from peripheral blood that is generated within 5 days in large numbers (10-40 million cells per 100 ml of blood). This blood-derived angiogenic cell (BDAC) type is of monocytic origin, but exhibits pericyte markers PDGFR-β and NG2 and demonstrates strong angiogenic activity, hitherto ascribed only to MSC-like pericytes. Our findings suggest that BDACs represent an alternative pericyte-like cell population of hematopoietic origin that is involved in promoting early stages of microvasculature formation. As a proof of principle of BDAC efficacy in an ischemic disease model, BDAC injection rescued affected tissues in a murine hind limb ischemia model by accelerating and enhancing revascularization. Derived from a renewable tissue that is easy to collect, BDACs overcome current short-comings of autologous cell therapy, in particular for tissue repair strategies.
Isospectral discrete and quantum graphs with the same flip counts and nodal counts
NASA Astrophysics Data System (ADS)
Juul, Jonas S.; Joyner, Christopher H.
2018-06-01
The existence of non-isomorphic graphs which share the same Laplace spectrum (to be referred to as isospectral graphs) leads naturally to the following question: what additional information is required in order to resolve isospectral graphs? It was suggested by Band, Shapira and Smilansky that this might be achieved by either counting the number of nodal domains or the number of times the eigenfunctions change sign (the so-called flip count) (Band et al 2006 J. Phys. A: Math. Gen. 39 13999–14014; Band and Smilansky 2007 Eur. Phys. J. Spec. Top. 145 171–9). Recent examples of (discrete) isospectral graphs with the same flip count and nodal count have been constructed by Ammann by utilising Godsil–McKay switching (Ammann, private communication). Here, we provide a simple alternative mechanism that produces systematic examples of both discrete and quantum isospectral graphs with the same flip and nodal counts.
Data-based Considerations in Portal Radiation Monitoring of Cargo Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weier, Dennis R.; O'Brien, Robert F.; Ely, James H.
2004-07-01
Radiation portal monitoring of cargo vehicles often includes a configuration of four-panel monitors that record gamma and neutron counts from vehicles transporting cargo. As vehicles pass the portal monitors, they generate a count profile over time that can be compared to the average panel background counts obtained just prior to the time the vehicle entered the area of the monitors. Pacific Northwest National Laboratory has accumulated considerable data regarding such background radiation and vehicle profiles from portal installations, as well as in experimental settings using known sources and cargos. Several considerations have a bearing on how alarm thresholds are set in order to maintain sensitivity to radioactive sources while also controlling to a manageable level the rate of false or nuisance alarms. False alarms are statistical anomalies, while nuisance alarms occur due to the presence of naturally occurring radioactive material (NORM) in cargo, for example, kitty litter. Considerations to be discussed include: • Background radiation suppression due to the shadow shielding from the vehicle. • The impact of the relative placement of the four panels on alarm decision criteria. • Use of plastic scintillators to separate gamma counts into energy windows. • The utility of using ratio criteria for the energy window counts rather than simply using total window counts. • Detection likelihood for these various decision criteria based on computer-simulated injections of sources into vehicle profiles.
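A heavily simplified version of the gross-count alarm decision discussed here can be written as a k-sigma threshold test against the pre-vehicle background, using a Gaussian approximation to Poisson counting statistics; the energy-window ratio criteria mentioned above would be layered on top of such a test. This sketch is generic and is not the PNNL decision algorithm; the sample counts, background rate, and threshold multiplier are hypothetical.

```python
# Simplified gross-count alarm test for one portal-monitor panel:
# alarm if any profile sample exceeds the expected background by k sigma,
# with sigma approximated as sqrt(expected counts). Illustrative only.
import math

def gross_count_alarm(profile_counts, background_cps, sample_seconds, k=5.0):
    expected = background_cps * sample_seconds          # expected counts per sample
    threshold = expected + k * math.sqrt(expected)      # k-sigma decision level
    return max(profile_counts) > threshold

# Hypothetical 0.1 s samples recorded as a vehicle passes, background 300 counts/s.
samples = [28, 31, 27, 45, 62, 58, 33, 29]
print(gross_count_alarm(samples, background_cps=300.0, sample_seconds=0.1))  # True
```

In practice the background estimate itself is depressed by shadow shielding from the vehicle, which is one of the considerations the paper discusses.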
Garment Counting in a Textile Warehouse by Means of a Laser Imaging System
Martínez-Sala, Alejandro Santos; Sánchez-Aartnoutse, Juan Carlos; Egea-López, Esteban
2013-01-01
Textile logistic warehouses are highly automated mechanized places where control points are needed to count and validate the number of garments in each batch. This paper proposes and describes a low cost and small size automated system designed to count the number of garments by processing an image of the corresponding hanger hooks generated using an array of phototransistors sensors and a linear laser beam. The generated image is processed using computer vision techniques to infer the number of garment units. The system has been tested on two logistic warehouses with a mean error in the estimated number of hangers of 0.13%. PMID:23628760
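Without access to the authors' implementation, the hook-counting step can still be illustrated generically: in a one-dimensional occupancy profile from the phototransistor array, each hanger hook appears as a connected run of samples where the laser beam is blocked. The sketch below counts such runs with an assumed amplitude threshold and minimum run width; the signal values and parameters are hypothetical and not taken from the paper.

```python
# Generic sketch: count hanger hooks as connected runs of "beam blocked"
# samples in a 1-D phototransistor profile. Threshold and minimum width
# are hypothetical parameters, not values from the published system.
def count_hooks(profile, threshold=0.5, min_width=2):
    hooks, run = 0, 0
    for value in profile:
        if value < threshold:          # beam blocked -> low signal
            run += 1
        else:
            if run >= min_width:
                hooks += 1
            run = 0
    if run >= min_width:               # handle a run that ends the profile
        hooks += 1
    return hooks

signal = [1.0, 1.0, 0.1, 0.1, 1.0, 0.9, 0.2, 0.1, 0.1, 1.0]   # two blocked runs
print(count_hooks(signal))             # -> 2
```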
Count-doubling time safety circuit
Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.
1981-01-01
There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.
A Moiré Pattern-Based Thread Counter
NASA Astrophysics Data System (ADS)
Reich, Gary
2017-10-01
Thread count is a term used in the textile industry as a measure of how closely woven a fabric is. It is usually defined as the sum of the number of warp threads per inch (or cm) and the number of weft threads per inch. (It is sometimes confusingly described as the number of threads per square inch.) In recent years it has also become a subject of considerable interest and some controversy among consumers. Many consumers consider thread count to be a key measure of the quality or fineness of a fabric, especially bed sheets, and they seek out fabrics that advertise high counts. Manufacturers in turn have responded to this interest by offering fabrics with ever higher claimed thread counts (sold at ever higher prices), sometimes achieving the higher counts by distorting the definition of the term with some "creative math." In 2005 the Federal Trade Commission noted the growing use of thread count in advertising at the retail level and warned of the potential for consumers to be misled by distortions of the definition.
For Mole Problems, Call Avogadro: 602-1023.
ERIC Educational Resources Information Center
Uthe, R. E.
2002-01-01
Describes techniques to help introductory students become familiar with Avogadro's number and mole calculations. Techniques involve estimating numbers of common objects then calculating the length of time needed to count large numbers of them. For example, the immense amount of time required to count a mole of sand grains at one grain per second…
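The kind of estimate described, how long it would take to count a mole of objects at one per second, is a one-line calculation; the sketch below simply converts Avogadro's number of seconds into years.

```python
# How long to count a mole of sand grains at one grain per second?
AVOGADRO = 6.022e23                    # objects per mole
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 s

years = AVOGADRO / SECONDS_PER_YEAR
print(f"{years:.2e} years")            # ~1.9e16 years, vastly longer than the age of the universe
```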
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time with fewer constraints.
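A language-agnostic outline of the extraction step can be sketched as follows (the study used MATLAB; the sketch is in Python with NumPy). The specific choices here are assumptions, not the paper's exact procedure: the useful field of view (UFOV) is taken as the bounding box of pixels above 10% of the maximum count, and the central field of view (CFOV) as the central 75% of the UFOV dimensions, following the common NEMA-style convention.

```python
# Sketch: extract UFOV and CFOV sub-images from a flood-source image.
# Assumes UFOV = bounding box of pixels above 10% of the maximum count and
# CFOV = central 75% of the UFOV dimensions; the published technique's exact
# masking and cleanup steps may differ.
import numpy as np

def extract_fovs(flood, ufov_threshold=0.10, cfov_fraction=0.75):
    mask = flood > ufov_threshold * flood.max()
    rows, cols = np.where(mask)
    r0, r1 = rows.min(), rows.max() + 1
    c0, c1 = cols.min(), cols.max() + 1
    ufov = flood[r0:r1, c0:c1]

    h, w = ufov.shape
    dr = int(round(h * (1 - cfov_fraction) / 2))
    dc = int(round(w * (1 - cfov_fraction) / 2))
    cfov = ufov[dr:h - dr, dc:w - dc]
    return ufov, cfov

# Usage with a synthetic flood image (placeholder, not real camera data).
flood = np.zeros((128, 128))
flood[10:118, 14:114] = 1000 + np.random.default_rng(0).normal(0, 30, (108, 100))
ufov, cfov = extract_fovs(flood)
print(ufov.shape, cfov.shape)
```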
Heesch, Kristiann C; Langdon, Michael
2016-02-01
Issue addressed: A key strategy to increase active travel is the construction of bicycle infrastructure. Tools to evaluate this strategy are limited. This study assessed the usefulness of a smartphone GPS tracking system for evaluating the impact of this strategy on cycling behaviour. Methods: Cycling usage data were collected from Queenslanders who used a GPS tracking app on their smartphone from 2013-2014. 'Heat' and volume maps of the data were reviewed, and GPS bicycle counts were compared with surveillance data and bicycle counts from automatic traffic-monitoring devices. Results: Heat maps broadly indicated that changes in cycling occurred near infrastructure improvements. Volume maps provided changes in counts of cyclists due to these improvements, although errors were noted in geographic information system (GIS) geo-coding of some GPS data. Large variations were evident in the number of cyclists using the app in different locations. These variations limited the usefulness of GPS data for assessing differences in cycling across locations. Conclusion: Smartphone GPS data are useful in evaluating the impact of improved bicycle infrastructure in one location. Using GPS data to evaluate differential changes in cycling across multiple locations is problematic when there are insufficient traffic-monitoring devices available to triangulate GPS data with bicycle traffic count data. So what? The use of smartphone GPS data with other data sources is recommended for assessing how infrastructure improvements influence cycling behaviour.
High event rate ROICs (HEROICs) for astronomical UV photon counting detectors
NASA Astrophysics Data System (ADS)
Harwit, Alex; France, Kevin; Argabright, Vic; Franka, Steve; Freymiller, Ed; Ebbets, Dennis
2014-07-01
The next generation of astronomical photocathode / microchannel plate based UV photon counting detectors will overcome existing count rate limitations by replacing the anode arrays and external cabled electronics with anode arrays integrated into imaging Read Out Integrated Circuits (ROICs). We have fabricated a High Event Rate ROIC (HEROIC) consisting of a 32 by 32 array of 55 μm square pixels on a 60 μm pitch. The pixel sensitivity (threshold) has been designed to be globally programmable between 1 × 10³ and 1 × 10⁶ electrons. To achieve the sensitivity of 1 × 10³ electrons, parasitic capacitances had to be minimized, which was achieved by fabricating the ROIC in a 65 nm CMOS process. The ROIC has been designed to support pixel counts up to 4096 events per integration period at rates up to 1 MHz per pixel. Integration time periods can be controlled via an external signal with a time resolution of less than 1 microsecond, enabling temporally resolved imaging and spectroscopy of astronomical sources. An electrical injection port is provided to verify the functionality and performance of each ROIC prior to vacuum integration with a photocathode and microchannel plate amplifier. Test results on the first ROICs using the electrical injection port demonstrate that sensitivities between 3 × 10³ and 4 × 10⁵ electrons are achieved. A number of fixes are identified for a re-spin of this ROIC.
Microbial response to triethylphosphate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazen, T.C.; Santo Domingo, J.W.; Berry, C.J.
1997-05-01
The effect of triethylphosphate (TEP) on the activity of a landfill aquifer microbial community was evaluated using standard techniques and in situ hybridizations with phylogenetic probes. Benzene was used as an external carbon source to monitor degradation of an aromatic compound in TEP amended microcosms. Microscopic and viable counts were higher in TEP containing microcosms when compared to unamended controls. A significant increase in metabolic activity was also observed for TEP amended samples as determined by the number of cells hybridizing to an eubacterial probe. In addition, the number of beta and gamma Proteobacteria increased from undetectable levels prior to the study to 15-29% of the total bacteria in microcosms containing TEP and benzene. In these microcosms, nearly 40% of the benzene was degraded during the incubation period compared to less than 5% in unamended microcosms. While TEP has previously been used as an alternate phosphate source in the bioremediation of chlorinated aliphatics, this study shows that it can also stimulate the microbial degradation of aromatics in phosphate limited aquifers.
Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey
NASA Astrophysics Data System (ADS)
Caditz, David M.
2017-12-01
Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse "voxels", thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown and we discuss the application of this approach to the 3-point correlation function.
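The voxel idea can be illustrated with a short sketch: by bucketing galaxies into cells no smaller than the maximum separation of interest, pair counting only needs to compare each galaxy with galaxies in its own and neighbouring cells. This is a simplified stand-in written for this note, not the open-source code the abstract refers to.

```python
import numpy as np
from collections import defaultdict
from itertools import product

def pair_counts_within(points, r_max, voxel_size):
    """Count pairs separated by less than r_max, checking only points in
    neighbouring voxels instead of all O(n^2) pairs (the coarse-voxel idea
    described above, in simplified form)."""
    voxels = defaultdict(list)
    for i, p in enumerate(points):
        voxels[tuple((p // voxel_size).astype(int))].append(i)

    r2 = r_max ** 2
    npairs = 0
    for key, idx in voxels.items():
        # Gather candidates from this voxel and its 26 neighbours.
        cand = []
        for off in product((-1, 0, 1), repeat=3):
            cand.extend(voxels.get(tuple(k + o for k, o in zip(key, off)), []))
        for i in idx:
            for j in cand:
                if j > i and np.sum((points[i] - points[j]) ** 2) < r2:
                    npairs += 1
    return npairs

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 100.0, size=(2000, 3))
print(pair_counts_within(pts, r_max=5.0, voxel_size=5.0))
```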
Kinoshita, T; Nakamura, T; Umemoto, Y; Kojima, D; Moriki, T; Mitsui, T; Goto, M; Ishida, Y; Tajima, F
2013-06-01
Case series. To investigate the effects of a wheelchair basketball game on plasma interleukin-6 (IL-6), tumor necrosis factor-α (TNF-α), C-reactive protein (CRP) and blood cell counts in persons with spinal cord injury (SCI). The 2009 Mei-shin League of Wheelchair Basketball Games held at Wakayama, Japan. Five wheelchair basketball players with SCI voluntarily participated in this study. Blood samples were taken approximately 1 h before the player warm-up for the game and immediately after the game. IL-6, TNF-α, CRP and blood cell counts were measured. Plasma IL-6 level and the number of monocytes were significantly increased after the game, compared with pre-game measurements (P<0.05). No changes were observed in other measurements. There was a significant relationship between increased IL-6 levels and accumulated play duration. The lack of change in TNF-α and CRP levels suggested that the exercise-induced rise in IL-6 was not related to an exercise-induced inflammatory response. Furthermore, the associated increase in the number of monocytes did not correlate with exercise-induced IL-6 changes, negating monocytes as the source of IL-6.
Optimizing the duration of point counts for monitoring trends in bird populations
Jared Verner
1988-01-01
Minute-by-minute analysis of point counts of birds in mixed-conifer forests in the Sierra National Forest, central California, showed that cumulative counts of species and individuals increased in a curvilinear fashion but did not reach asymptotes after 10 minutes of counting. Comparison of the expected number of individuals counted per hour with various combinations...
ERIC Educational Resources Information Center
Lara, Lindsi M.; Spradlin, Terry E.; Wodicka, Christopher Y.
2012-01-01
This Education Policy Brief provides an overview of the student count mechanisms that are currently employed by states. It then reviews Indiana's outgoing count mechanism, the Single Count Date, and compares it with the newly enacted Multiple Count Dates mechanism. To conclude the discussion, the brief examines how other states use the Multiple…
How to Learn the Natural Numbers: Inductive Inference and the Acquisition of Number Concepts
ERIC Educational Resources Information Center
Margolis, Eric; Laurence, Stephen
2008-01-01
Theories of number concepts often suppose that the natural numbers are acquired as children learn to count and as they draw an induction based on their interpretation of the first few count words. In a bold critique of this general approach, Rips, Asmuth, Bloomfield [Rips, L., Asmuth, J. & Bloomfield, A. (2006). Giving the boot to the bootstrap:…
Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...
2018-03-29
Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^{-12} ph cm^{-2} s^{-1}. With this method, we detect a flux break at (3.5 ± 0.4) × 10^{-11} ph cm^{-2} s^{-1} with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^{-11} ph cm^{-2} s^{-1}. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
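The final step quoted above, integrating the derived source-count distribution, can be sketched numerically. The break flux and slopes below are the values quoted in the abstract, but the normalization, integration limits, and resulting fraction are placeholders for illustration only.

```python
import numpy as np
from scipy.integrate import quad

# Broken power-law differential source counts dN/dS, using the break flux
# and slopes quoted in the abstract.  The normalization K is a placeholder,
# so only relative contributions are meaningful here.
S_BREAK = 3.5e-11        # ph cm^-2 s^-1
ALPHA_ABOVE = 2.09       # slope above the break
ALPHA_BELOW = 1.07       # slope below the break
K = 1.0                  # arbitrary normalization (placeholder)

def dn_ds(s):
    if s >= S_BREAK:
        return K * (s / S_BREAK) ** (-ALPHA_ABOVE)
    return K * (s / S_BREAK) ** (-ALPHA_BELOW)

def flux_contribution(s_min, s_max):
    """Flux contributed by sources between s_min and s_max: integral of S*dN/dS."""
    val, _ = quad(lambda s: s * dn_ds(s), s_min, s_max, points=[S_BREAK])
    return val

s_min = 7.5e-12          # sensitivity reached by the efficiency method
s_max = 1.0e-8           # bright-end cutoff (illustrative)
below = flux_contribution(s_min, S_BREAK)
above = flux_contribution(S_BREAK, s_max)
print(f"fraction of the resolved flux from sources below the break: "
      f"{below / (below + above):.2%}")
```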
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ˜50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2_{-0.3}^{+0.7} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83_{-13}^{+7}% (81_{-19}^{+52}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S
2016-04-15
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Resolving the Extragalactic γ -Ray Background above 50 GeV with the Fermi Large Area Telescope
Ackermann, M.; Ajello, M.; Albert, A.; ...
2016-04-14
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10^{-12}, 1.5 × 10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_2 ∈ [1.60, 1.75] and α_1 = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Local Group dSph radio survey with ATCA (I): observations and background sources
NASA Astrophysics Data System (ADS)
Regis, Marco; Richter, Laura; Colafrancesco, Sergio; Massardi, Marcella; de Blok, W. J. G.; Profumo, Stefano; Orford, Nicola
2015-04-01
Dwarf spheroidal (dSph) galaxies are key objects in near-field cosmology, especially in connection with the study of galaxy formation and evolution at small scales. In addition, dSphs are optimal targets to investigate the nature of dark matter. However, while we begin to have deep optical photometric observations of the stellar population in these objects, little is known so far about their diffuse emission at any observing frequency, and hence about the thermal and non-thermal plasma possibly residing within dSphs. In this paper, we present deep radio observations of six local dSphs performed with the Australia Telescope Compact Array (ATCA) at 16 cm wavelength. We mosaicked a region of radius of about 1 deg around three 'classical' dSphs, Carina, Fornax, and Sculptor, and of about half a degree around three 'ultrafaint' dSphs, BootesII, Segue2, and Hercules. The rms noise level is below 0.05 mJy for all the maps. The restoring beams' full width at half-maximum ranged from 4.2 arcsec × 2.5 arcsec to 30.0 arcsec × 2.1 arcsec in the most elongated case. A catalogue including the 1392 sources detected in the six dSph fields is reported. The main properties of the background sources are discussed, with positions and fluxes of the brightest objects compared with the FIRST, NVSS, and SUMSS observations of the same fields. The observed population of radio emitters in these fields is dominated by synchrotron sources. We compute the associated source number counts at 2 GHz down to fluxes of 0.25 mJy, which prove to be in agreement with AGN count models.
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
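A minimal numerical sketch of the kind of calculation described, for the simplest case: Poisson counts in a source aperture (containing a fraction of the source flux plus background) and in a separate background aperture, with flat priors and a grid marginalization over the background rate. This is an illustration written for this note, not the Chandra Source Catalog implementation.

```python
import numpy as np
from scipy.stats import poisson

def source_posterior(n_src, n_bkg, f_src=0.9, area_ratio=10.0,
                     s_max=None, b_max=None, ngrid=400):
    """Posterior for the expected source counts s, given n_src counts in the
    source aperture (fraction f_src of the source flux plus background b) and
    n_bkg counts in a background aperture whose area is area_ratio times the
    source aperture.  Flat priors; the background rate is marginalized on a grid."""
    s_max = s_max or 3.0 * max(n_src, 5)
    b_max = b_max or 3.0 * max(n_bkg, 5) / area_ratio
    s_grid = np.linspace(0.0, s_max, ngrid)
    b_grid = np.linspace(0.0, b_max, ngrid)
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    # Two independent Poisson measurements sharing the background rate b.
    like = poisson.pmf(n_src, f_src * S + B) * poisson.pmf(n_bkg, area_ratio * B)
    post = like.sum(axis=1)                        # marginalize over background
    post /= post.sum() * (s_grid[1] - s_grid[0])   # normalize to unit area
    return s_grid, post

s, p = source_posterior(n_src=12, n_bkg=30)
ds = s[1] - s[0]
print(f"posterior mean source counts ~ {(s * p).sum() * ds:.1f}")
```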
The Fermi Large Area Telescope Third Gamma-ray Source Catalog
NASA Astrophysics Data System (ADS)
Stephens, Thomas E.; Ballet, Jean; Burnett, Toby; Cavazzuti, Elisabetta; Digel, Seth William; Fermi LAT Collaboration
2015-01-01
We present an overview of the third Fermi Large Area Telescope source catalog (3FGL) of sources in the 100 MeV - 300 GeV range. Based on the first four years of science data from the Fermi Gamma-ray Space Telescope mission, it is the deepest yet in this energy range. Relative to the 2FGL catalog (Nolan et al. 2012, ApJS 199, 31), the 3FGL catalog incorporates twice as much data as well as a number of analysis improvements, including improved calibrations at the event reconstruction level, an updated model for Galactic diffuse gamma-ray emission, a refined procedure for source detection, and improved methods for associating LAT sources with potential counterparts at other wavelengths. The 3FGL catalog includes 3033 sources, with source location regions, spectral properties, and monthly light curves for each. For approximately one-third of the sources we have not found counterparts at other wavelengths. More than 1100 of the identified or associated sources are active galaxies of the blazar class; several other classes of non-blazar active galaxies are also represented in the 3FGL. Pulsars represent the largest Galactic source class. From source counts of Galactic sources we estimate the contribution of unresolved sources to the Galactic diffuse emission.
USDA-ARS?s Scientific Manuscript database
Genetic parameters for ewe reproductive traits [number of lambs born (NLB) and number of lambs weaned (NLW)] and ewe peri-parturient rise (PPR) fecal egg counts (FEC) at lambing (PPR0) and at 30-d post lambing (PPR30), and their genetic relationships with lamb BW and FEC in Katahdin sheep were estim...
Code of Federal Regulations, 2013 CFR
2013-01-01
[Flattened regulatory table omitted: grade tolerances by factor, with absolute limits (AL) permitted in an individual 33-count sample, tabulated against the number of 50-count samples (1-40); limits apply at the port of entry into the United States.]
Code of Federal Regulations, 2014 CFR
2014-01-01
[Flattened regulatory table omitted: grade tolerances by factor, with absolute limits (AL) permitted in an individual 33-count sample, tabulated against the number of 50-count samples (1-40); limits apply at the port of entry into the United States.]
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
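The effect the corrected algorithm accounts for can be illustrated with a toy calculation (not Geant4 and not the authors' algorithm): when both ⁶⁰Co cascade gammas deposit their full energy within one resolving time, they register as a single summed pulse, so tallying each primary independently overstates the number of recorded full-energy-peak events. The detection probability used below is an arbitrary placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model of coincidence summing in 60Co spectroscopy: each decay emits two
# cascade gammas (1.17 and 1.33 MeV).  If both are fully absorbed, the detector
# records one summed 2.50 MeV pulse, removing counts from the individual peaks.
N_DECAYS = 100_000
P_DETECT = 0.05                      # full-energy detection probability (assumed)

hit_117 = rng.random(N_DECAYS) < P_DETECT
hit_133 = rng.random(N_DECAYS) < P_DETECT

pulses = []
for h1, h2 in zip(hit_117, hit_133):
    if h1 and h2:
        pulses.append(1.17 + 1.33)   # coincident: one summed pulse
    elif h1:
        pulses.append(1.17)
    elif h2:
        pulses.append(1.33)

pulses = np.array(pulses)
print("recorded pulses:", pulses.size)
print("naive single-primary tally:", hit_117.sum() + hit_133.sum())
print("sum-peak (2.50 MeV) counts:", np.count_nonzero(pulses > 2.0))
```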
ERIC Educational Resources Information Center
Fuadiah, Nyiayu Fahriza; Suryadi, Didi; Turmudi
2017-01-01
This study examined students' understanding of negative numbers and identified their difficulties with the concept of integers and integer counting operations, as part of identifying epistemological obstacles related to negative numbers. Even though teachers have explained the counting operation procedures for integers, there was concept…
On Counting the Rational Numbers
ERIC Educational Resources Information Center
Almada, Carlos
2010-01-01
In this study, we show how to construct a function from the set N of natural numbers that explicitly counts the set Q[superscript +] of all positive rational numbers using a very intuitive approach. The function has the appeal of Cantor's function and it has the advantage that any high school student can understand the main idea at a glance…
Two and Two Make Zero: The Counting Numbers, Their Conceptualization, Symbolization, and Acquisition
ERIC Educational Resources Information Center
Yaseen, H. S.
2011-01-01
"Two and Two Make Zero" considers children's acquisition of numerical concepts from a wide range of perspectives including topics that are often overlooked, most notably: the principal properties of the counting numbers in and of themselves; the role that numerical symbols play in number acquisition; the underlying conceptual structure of number…
Evaluating call-count procedures for measuring local mourning dove populations
Armbruster, M.J.; Baskett, T.S.; Goforth, W.R.; Sadler, K.C.
1978-01-01
Seventy-nine mourning dove call-count runs were made on a 32-km route in Osage County, Missouri, May 1-August 31, 1971 and 1972. Circular study areas, each 61 ha, surrounding stop numbers 4 and 5, were delineated for intensive nest searches and population estimates. Tallies of cooing male doves along the entire call-count route were quite variable in repeated runs, fluctuating as much as 50 percent on consecutive days. There were no consistent relationships between numbers of cooing males tallied at stops 4 and 5 and the numbers of current nests or doves estimated to be present in the surrounding study areas. We doubt the suitability of call-count procedures to estimate precisely the densities of breeding pairs, nests or production of doves on small areas. Our findings do not dispute the usefulness of the national call-count survey as an index to relative densities of mourning doves during the breeding season over large portions of the United States, or as an index to annual population trends.
10C survey of radio sources at 15.7 GHz - II. First results
NASA Astrophysics Data System (ADS)
AMI Consortium; Davies, Matthew L.; Franzen, Thomas M. O.; Waldram, Elizabeth M.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony; Olamaie, Malak; Pooley, Guy G.; Riley, Julia M.; Rodríguez-Gonzálvez, Carmen; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel P.; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Zwart, Jonathan T. L.
2011-08-01
In a previous paper (Paper I), the observational, mapping and source-extraction techniques used for the Tenth Cambridge (10C) Survey of Radio Sources were described. Here, the first results from the survey, carried out using the Arcminute Microkelvin Imager Large Array (LA) at an observing frequency of 15.7 GHz, are presented. The survey fields cover an area of ≈27 deg2 to a flux-density completeness of 1 mJy. Results for some deeper areas, covering ≈12 deg2, wholly contained within the total areas and complete to 0.5 mJy, are also presented. The completeness for both areas is estimated to be at least 93 per cent. The 10C survey is the deepest radio survey of any significant extent (≳0.2 deg2) above 1.4 GHz. The 10C source catalogue contains 1897 entries and is available online. The source catalogue has been combined with that of the Ninth Cambridge Survey to calculate the 15.7-GHz source counts. A broken power law is found to provide a good parametrization of the differential count between 0.5 mJy and 1 Jy. The measured source count has been compared with that predicted by de Zotti et al. - the model is found to display good agreement with the data at the highest flux densities. However, over the entire flux-density range of the measured count (0.5 mJy to 1 Jy), the model is found to underpredict the integrated count by ≈30 per cent. Entries from the source catalogue have been matched with those contained in the catalogues of the NRAO VLA Sky Survey and the Faint Images of the Radio Sky at Twenty-cm survey (both of which have observing frequencies of 1.4 GHz). This matching provides evidence for a shift in the typical 1.4-GHz spectral index to 15.7-GHz spectral index of the 15.7-GHz-selected source population with decreasing flux density towards sub-mJy levels - the spectra tend to become less steep. Automated methods for detecting extended sources, developed in Paper I, have been applied to the data; ≈5 per cent of the sources are found to be extended relative to the LA-synthesized beam of ≈30 arcsec. Investigations using higher resolution data showed that most of the genuinely extended sources at 15.7 GHz are classical doubles, although some nearby galaxies and twin-jet sources were also identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinohara, K., E-mail: shinohara.koji@jaea.go.jp; Ochiai, K.; Sukegawa, A.
In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out on the Fusion Neutronics Source. Comparing four configurations of alignment, it was found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using 25 mm thickness scintillators was about 1.65 times the count rate capability of a single detector system for d-D neutrons and was about 1.8 times the count rate capability for d-T neutrons. The results suggested that the concept of a multi-stage detection system will work in practice.
Point count length and detection of forest neotropical migrant birds
Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
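A toy model of the count-length effect is sketched below, assuming a constant per-minute probability of detecting a species that is present; this simplified geometric model is an assumption of this note, not the estimator used in the study.

```python
# Probability of detecting a present species at least once during a point
# count of a given length, assuming a constant per-minute detection
# probability p (an illustrative simplification).
def detect_prob(p_per_minute, minutes):
    return 1.0 - (1.0 - p_per_minute) ** minutes

for p in (0.05, 0.15, 0.30):
    probs = [detect_prob(p, t) for t in (3, 5, 10)]
    print(f"p/min={p:.2f}: 3-min={probs[0]:.2f}  "
          f"5-min={probs[1]:.2f}  10-min={probs[2]:.2f}")
```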
Ahmad, S; Srivastava, P K
2007-04-01
Investigations were carried out to study the effect of heart incorporation (0%, 15% and 20%) and increasing levels of fat (20% and 25%) on the physicochemical (pH, moisture content and thiobarbituric acid (TBA) number) and microbiological (total plate count and yeast and mold count) quality and shelf life of semi-dry sausages of buffalo meat during refrigerated storage (4°C). Different levels of fat significantly (p<0.05) increased the pH of the sausage samples. However, different levels of heart incorporation did not significantly (p<0.05) affect pH, moisture content or TBA number of the sausage samples. Fresh samples had pH, moisture content and TBA number in the range of 5.15-5.28, 42.4-47.4% and 0.073-0.134, respectively. Refrigerated storage significantly (p<0.05) increased the TBA number of control samples, while storage did not significantly increase the TBA number of sodium ascorbate (SA) treated samples. Total plate counts of the twelve sausage samples were under the TFTC (too few to count) limit at the initial stage. Incorporation of different levels of heart and also increasing levels of fat did not significantly (p<0.05) increase the log TPC/g values. Yeasts and molds were not detected in the twelve samples of semi-dry fermented sausages in their fresh condition. Storage revealed that there was a consistent decrease in pH and moisture content. Refrigerated storage significantly (p<0.05) reduced both pH and moisture contents. The TBA number, total plate counts and yeast and mold counts of controls were found to increase significantly (p<0.05) during refrigerated storage. However, in SA treated sausages, only TPC and yeast and mold counts significantly (p<0.05) increased during refrigerated storage. The shelf life of the sausages was found to be 60 days under refrigerated storage (4°C).
Asha, Stephen Edward; Higham, Matthew; Child, Peter
2015-05-01
If package counts on abdominal CTs of body-packers were known to be accurate, follow-up CTs could be avoided. The objective was to determine the accuracy of CT for the number of concealed packages in body-packers, and the reliability of package counts reported by body-packers who admit to concealing drugs. Suspected body-packers were identified from the emergency department (ED) database. The medical record and radiology reports were reviewed for package counts determined by CT, reported by the patient, and physically retrieved. The last method was used as the reference standard. Sensitivity, specificity, positive predictive values (PPV) and negative predictive values (NPV) were calculated for CT package count accuracy. Reliability of patient-reported package counts was assessed using Pearson's correlation coefficient. There were 50 confirmed body-packers on whom 104 CT scans were performed. Data for the index and reference tests were available for 84 scans. The sensitivity, specificity, PPV and NPV for CT package count were 63% (95% CI 46% to 77%), 82% (95% CI 67% to 92%), 76% (95% CI 58% to 89%) and 71% (95% CI 56% to 83%), respectively. For CTs with a package count < 15, the sensitivity, specificity, PPV and NPV for CT package count were 96% (95% CI 80% to 99%), 95% (95% CI 82% to 99%), 93% (95% CI 76% to 99%) and 97% (95% CI 86% to 100%), respectively. Correlation between patient-reported package counts and the number of packages retrieved was high (r=0.90, p<0.001, R²=81%). The accuracy of CT for determining the number of concealed packages is poor, although when applied to patients with few concealed packages accuracy is high and CT is useful as a rule-out test. Among patients who have admitted to drug concealment, the number of packages reported to be concealed is reliable.
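The four reported accuracy measures follow the standard 2×2-table definitions, sketched below; the counts used are placeholders, not the study data.

```python
# Standard diagnostic accuracy measures from a 2x2 table of true/false
# positives and negatives.  The example counts are illustrative only.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=25, fp=8, fn=15, tn=36)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```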
A statistical treatment of bioassay pour fractions
NASA Astrophysics Data System (ADS)
Barengoltz, Jack; Hughes, David
A bioassay is a method for estimating the number of bacterial spores on a spacecraft surface for the purpose of demonstrating compliance with planetary protection (PP) requirements (Ref. 1). The details of the process may be seen in the appropriate PP document (e.g., for NASA, Ref. 2). In general, the surface is mechanically sampled with a damp sterile swab or wipe. The completion of the process is colony formation in a growth medium in a plate (Petri dish); the colonies are counted. Consider a set of samples from randomly selected, known areas of one spacecraft surface, for simplicity. One may calculate the mean and standard deviation of the bioburden density, which is the ratio of counts to area sampled. The standard deviation represents an estimate of the variation from place to place of the true bioburden density commingled with the precision of the individual sample counts. The accuracy of individual sample results depends on the equipment used, the collection method, and the culturing method. One aspect that greatly influences the result is the pour fraction, which is the quantity of fluid added to the plates divided by the total fluid used in extracting spores from the sampling equipment. In an analysis of a single sample’s counts due to the pour fraction, one seeks to answer the question: What is the probability that if a certain number of spores are counted with a known pour fraction, that there are an additional number of spores in the part of the rinse not poured. This is given for specific values by the binomial distribution density, where detection (of culturable spores) is success and the probability of success is the pour fraction. A special summation over the binomial distribution, equivalent to adding for all possible values of the true total number of spores, is performed. This distribution when normalized will almost yield the desired quantity. It is the probability that the additional number of spores does not exceed a certain value. Of course, for a desired value of uncertainty, one must invert the calculation. However, this probability of finding exactly the number of spores in the poured part is correct only in the case where all values of the true number of spores greater than or equal to the adjusted count are equally probable. This is not realistic, of course, but the result can only overestimate the uncertainty. So it is useful. In probability speak, one has the conditional probability given any true total number of spores. Therefore one must multiply it by the probability of each possible true count, before the summation. If the counts for a sample set (of which this is one sample) are available, one may use the calculated variance and the normal probability distribution. In this approach, one assumes a normal distribution and neglects the contribution from spatial variation. The former is a common assumption. The latter can only add to the conservatism (over estimate the number of spores at some level of confidence). A more straightforward approach is to assume a Poisson probability distribution for the measured total sample set counts, and use the product of the number of samples and the mean number of counts per sample as the mean of the Poisson distribution. It is necessary to set the total count to 1 in the Poisson distribution when actual total count is zero. 
Finally, even when the planetary protection requirements for spore burden refer only to the mean values, they require an adjustment for pour fraction and method efficiency (a PP specification based on independent data). The adjusted mean values are a 50/50 proposition (e.g., the probability of the true total counts in the sample set exceeding the estimate is 0.50). However, this is highly unconservative when the total counts are zero. No adjustment to the mean values occurs for either pour fraction or efficiency. The recommended approach is once again to set the total counts to 1, but now applied to the mean values. Then one may apply the corrections to the revised counts. It can be shown by the methods developed in this work that this change is usually conservative enough to increase the level of confidence in the estimate to 0.5. 1. NASA. (2005) Planetary protection provisions for robotic extraterrestrial missions. NPR 8020.12C, April 2005, National Aeronautics and Space Administration, Washington, DC. 2. NASA. (2010) Handbook for the Microbiological Examination of Space Hardware, NASA-HDBK-6022, National Aeronautics and Space Administration, Washington, DC.
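The pour-fraction adjustment described above can be sketched numerically. With a flat prior over the true total number of spores, the probability of j additional spores in the unpoured portion, given k counted with pour fraction f, reduces to a negative-binomial form C(k+j, k) f^(k+1) (1-f)^j; the example numbers below are illustrative only.

```python
from math import comb

def prob_additional_at_most(k_counted, pour_fraction, m_extra):
    """Probability that at most m_extra additional spores were in the unpoured
    part of the rinse, given k_counted spores in the poured fraction, under the
    'all true totals equally probable' treatment sketched above:
    P(j extra) = C(k+j, k) * f^(k+1) * (1-f)^j."""
    f = pour_fraction
    return sum(comb(k_counted + j, k_counted) * f ** (k_counted + 1) *
               (1 - f) ** j for j in range(m_extra + 1))

# Example: 4 colonies counted with 80% of the rinse poured (made-up numbers).
for m in (0, 2, 5, 10):
    print(m, round(prob_additional_at_most(4, 0.8, m), 3))
```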
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY ...
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY COUNTING MEASURES RADIOACTIVE MATERIAL IN THE BODY. DESIGNED TO MINIMIZE EXTERNAL SOURCES OF RADIATION, BODY COUNTING ROOMS ARE CONSTRUCTED OF PRE-WORLD WAR II (WWII) STEEL. PRE-WWII STEEL, WHICH HAS NOT BEEN AFFECTED BY NUCLEAR FALLOUT, IS LOWER IN RADIOACTIVITY THAN STEEL CREATED AFTER WWII. (10/25/85) - Rocky Flats Plant, Emergency Medical Services Facility, Southwest corner of Central & Third Avenues, Golden, Jefferson County, CO
A Vacuum-Aspirator for Counting Termites
Susan C. Jones; Joe K. Mauldin
1983-01-01
An aspirator-system powered by a vacuum cleaner is described for manually counting termites. It is significantly faster and termite survival is at least as high as when using a mouth-aspirator for counting large numbers of termites.
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time, and so on. When the values of the observations are large it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data using a Non-Negative Integer valued Autoregressive (INAR) process. The modeling of counts data is based on the binomial thinning operator. In this paper we illustrate the modeling of counts data using the monthly number of poliomyelitis cases in the United States from January 1970 until December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate in the sense that it had a better I.A., and it is natural since the data are counts.
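A minimal simulation of the INAR(1) process with binomial thinning is sketched below; the parameter values are illustrative and are not the values fitted to the polio series.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_inar1(alpha, lam, n, x0=0):
    """Simulate an INAR(1) process X_t = alpha ∘ X_{t-1} + e_t, where '∘' is
    the binomial thinning operator (each of the X_{t-1} counts survives
    independently with probability alpha) and e_t is a Poisson(lam) innovation."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)    # binomial thinning
        x[t] = survivors + rng.poisson(lam)          # new arrivals
    return x

series = simulate_inar1(alpha=0.4, lam=1.0, n=168)   # 14 years of monthly counts
print(series[:24])
print("sample mean:", series.mean(),
      "| theoretical marginal mean lam/(1-alpha) =", 1.0 / (1 - 0.4))
```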
Skoruppa, M.K.; Woodin, M.C.; Blacklock, G.
2009-01-01
The segment of the Rio Grande between International Falcon Reservoir and Del Rio, Texas (distance ca. 350 km), remains largely unexplored ornithologically. We surveyed nocturnal birds monthly during February-June 1998 at 19 stations along the Rio Grande (n = 6) and at upland stock ponds (n = 13) in Webb County, Texas. We conducted 10-min point counts (n = 89) after sunset and before moonset. Four species of owls and five species of nightjars were detected. Nightjars, as a group, were nearly five times more abundant (mean number/count = 2.63) than owls (mean number = 0.55). The most common owl, the great horned owl (Bubo virginianus), had a mean number of 0.25/point count. The mean for elf owls (Micrathene whitneyi) was 0.16/point count. The most common nightjars were the common poorwill (Phalaenoptilus nuttallii; 1.21/point count) and lesser nighthawk (Chordeiles acutipennis; 1.16/point count). Survey sites on the river supported more species (mean = 2.2) than did upland stock ponds (mean = 1.4). However, only one species (common pauraque, Nyctidromus albicollis) showed a preference for the river sites. Our results establish this segment of the Rio Grande in southern Texas as an area of high diversity of nightjars in the United States, matched (in numbers of species) only by southeastern Arizona and southwestern New Mexico.
Kaur, S; Nieuwenhuijsen, M J
2009-07-01
Short-term human exposure concentrations to PM2.5, ultrafine particle counts (particle range: 0.02-1 microm), and carbon monoxide (CO) were investigated at and around a street canyon intersection in Central London, UK. During a four week field campaign, groups of four volunteers collected samples at three timings (morning, lunch, and afternoon), along two different routes (a heavily trafficked route and a backstreet route) via five modes of transport (walking, cycling, bus, car, and taxi). This was followed by an investigation into the determinants of exposure using a regression technique which incorporated the site-specific traffic counts, meteorological variables (wind speed and temperature) and the mode of transport used. The analyses explained 9, 62, and 43% of the variability observed in the exposure concentrations to PM2.5, ultrafine particle counts, and CO in this study, respectively. The mode of transport was a statistically significant determinant of personal exposure to PM2.5, ultrafine particle counts, and CO, and for PM2.5 and ultrafine particle counts it was the most important determinant. Traffic count explained little of the variability in the PM2.5 concentrations, but it had a greater influence on ultrafine particle count and CO concentrations. The analyses showed that temperature had a statistically significant impact on ultrafine particle count and CO concentrations. Wind speed also had a statistically significant effect but smaller. The small proportion in variability explained in PM2.5 by the model compared to the largest proportion in ultrafine particle counts and CO may be due to the effect of long-range transboundary sources, whereas for ultrafine particle counts and CO, local traffic is the main source.
Nakagawa, Fumiyo
2017-01-28
Migrants account for a significant number of people living with HIV in Europe, and it is important to fully consider this population in national estimates. Using a novel approach with the UK as an example, we present key public health measures of the HIV epidemic, taking into account both in-country infections and infections likely to have been acquired abroad. Mathematical model calibrated to extensive data sources. An individual-based stochastic simulation model is used to calibrate to routinely collected surveillance data in the UK. Data on the number of new HIV diagnoses, the number of deaths, CD4 cell count at diagnosis, as well as time of arrival into the UK for migrants and the annual number of people receiving care were used. An estimated 106 400 (90% plausibility range: 88 700-124 600) people were living with HIV in the UK in 2013. Twenty-three percent of these people, 24 600 (15 000-36 200), were estimated to be undiagnosed; this number has remained stable over the last decade. An estimated 32% of the total undiagnosed population had a CD4 cell count less than 350 cells/μl in 2013. An estimated 25% of black African heterosexual men and 23% of black African heterosexual women living with HIV were undiagnosed. We have shown a working example of how to characterize the HIV population in a European context, one which incorporates migrants from countries with generalized epidemics. Despite all aspects of HIV care being free and widely available to anyone in need in the UK, there is still a substantial number of people who are not yet diagnosed and thus not in care.
Radio Source Contributions to the Microwave Sky
NASA Astrophysics Data System (ADS)
Boughn, S. P.; Partridge, R. B.
2008-03-01
Cross-correlations of the Wilkinson Microwave Anisotropy Probe (WMAP) full sky K-, Ka-, Q-, V-, and W-band maps with the 1.4 GHz NVSS source count map and the HEAO I A2 2-10 keV full sky X-ray flux map are used to constrain rms fluctuations due to unresolved microwave sources in the WMAP frequency range. In the Q band (40.7 GHz), a lower limit, taking account of only those fluctuations correlated with the 1.4 GHz radio source counts and X-ray flux, corresponds to an rms Rayleigh-Jeans temperature of ˜2 μK for a solid angle of 1 deg2 assuming that the cross-correlations are dominated by clustering, and ˜1 μK if dominated by Poisson fluctuations. The correlated fluctuations at the other bands are consistent with a β = -2.1 ± 0.4 frequency spectrum. If microwave sources are distributed similarly in redshift to the radio and X-ray sources and are similarly clustered, then the implied total rms microwave fluctuations correspond to ˜5 μK. While this value should be considered no more than a plausible estimate, it is similar to that implied by the excess, small angular scale fluctuations observed in the Q band by WMAP and is consistent with estimates made by extrapolating low-frequency source counts.
Rapid enumeration of low numbers of moulds in tea based drinks using an automated system.
Tanaka, Kouichi; Yamaguchi, Nobuyasu; Baba, Takashi; Amano, Norihide; Nasu, Masao
2011-01-31
Aseptically prepared cold drinks based on tea have become popular worldwide. Contamination of these drinks with harmful microbes is a potential health problem because such drinks are kept free from preservatives to maximize aroma and flavour. Heat-tolerant conidia and ascospores of fungi can survive pasteurization, and need to be detected as quickly as possible. We were able to rapidly and accurately detect low numbers of conidia and ascospores in tea-based drinks using fluorescent staining followed by an automated counting system. Conidia or ascospores were inoculated into green tea and oolong tea, and samples were immediately filtered through nitrocellulose membranes (pore size: 0.8 μm) to concentrate fungal propagules. These were transferred onto potato dextrose agar and incubated for 23 h at 28 °C. Fungi germinating on the membranes were fluorescently stained for 30 min. The stained mycelia were counted selectively within 90 s using an automated counting system (MGS-10LD; Chuo Electric Works, Osaka, Japan). Very low numbers (1 CFU/100 ml) of conidia or ascospores could be rapidly counted, in contrast to traditional labour-intensive techniques. All tested mould strains were detected within 24 h, while conventional plate counting required 72 h for colony enumeration. Counts of slow-growing fungi (Cladosporium cladosporioides) obtained by automated counting and by conventional plate counting were close (r² = 0.986). Our combination of methods enables counting of both fast- and slow-growing fungi, and should be useful for microbiological quality control of tea-based and also other drinks. Copyright © 2011 Elsevier B.V. All rights reserved.
An asymptotic theory of supersonic propeller noise
NASA Technical Reports Server (NTRS)
Envia, Edmane
1992-01-01
A theory for predicting the noise field of a propeller with a realistic blade geometry is presented. The theory, which utilizes a large blade count approximation, provides an efficient formula for predicting the radiation of sound from all three sources of propeller noise. Comparisons with full numerical integration indicate that the noise levels predicted by this formula are quite accurate. Calculations based on this method also show that the radiation from the Lighthill quadrupole source is rather substantial when compared with thickness and loading noise for high speed propellers. A preliminary application of the theory to the problem of the sensitivity of the peak noise levels generated by a supersonic propeller to the variations in its tip helical Mach number has produced a trend that is in qualitative agreement with the experimental observations.
NASA Technical Reports Server (NTRS)
Taylor, R. S.; Clark, G. W.
1971-01-01
The all-sky X-ray measurements are made in five broad energy bands from 0.5 to 60 keV with X-ray collimators of one and three degree FWHM response. Working with the onboard star sensor, source locations may be determined to a precision of plus or minus 0.1 deg. The experiment is located in wheel compartment number three of the spacecraft. A time division logic system divides each wheel rotation into 256 data bins, in each of which X-ray counts are accumulated over a 190 second interval. Measurement chain circuits include provision for both geometric and risetime anticoincidence. A detailed description of the instrument is included, as is pertinent operating information.
Organic Scintillation Detectors for Spectroscopic Radiation Portal Monitors
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit
Thousands of radiation portal monitors have been deployed worldwide to detect and deter the smuggling of nuclear and radiological materials that could be used in nefarious acts. Radiation portal monitors are often installed at bottlenecks where large amounts of people or goods must traverse. Examples of use include scanning cargo containers at shipping ports, vehicles at border crossings, and people at high profile functions and events. Traditional radiation portal monitors contain separate detectors for passively measuring neutron and gamma ray count rates. 3He tubes embedded in polyethylene and slabs of plastic scintillators are the most common detector materials used in radiation portal monitors. The radiation portal monitor alarm mechanism relies on measuring radiation count rates above user defined alarm thresholds. These alarm thresholds are set above natural background count rates. Minimizing false alarms caused by natural background and maximizing sensitivity to weakly emitting threat sources must be balanced when setting these alarm thresholds. Current radiation portal monitor designs suffer from frequent nuisance radiation alarms. These radiation nuisance alarms are most frequently caused by shipments of large quantities of naturally occurring radioactive material containing cargo, like kitty litter, as well as by humans who have recently undergone a nuclear medicine procedure, particularly 99mTc treatments. Current radiation portal monitors typically lack spectroscopic capabilities, so nuisance alarms must be screened out in time-intensive secondary inspections with handheld radiation detectors. Radiation portal monitors using organic liquid scintillation detectors were designed, built, and tested. A number of algorithms were developed to perform on-the-fly radionuclide identification of single and combination radiation sources moving past the portal monitor at speeds up to 2.2 m/s. The portal monitor designs were tested extensively with a variety of shielded and unshielded radiation sources, including special nuclear material, at the European Commission Joint Research Centre in Ispra, Italy. Common medical isotopes were measured at the C.S. Mott Children's Hospital and added to the radionuclide identification algorithms.
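The balance between false alarms and sensitivity mentioned above can be illustrated with a simple gross-count threshold rule: set the alarm level so that Poisson fluctuations of the background alone exceed it only at an acceptable rate. The sketch below uses a Gaussian approximation to the Poisson tail and made-up rates; it is not the algorithm used in the portal monitors described.

```python
import math

def alarm_threshold(background_cps, integration_s, false_alarm_per_hour):
    """Gross-count alarm threshold chosen so that Poisson fluctuations of the
    background alone exceed it with the requested false-alarm rate
    (simplified sketch using a Gaussian approximation to the Poisson tail)."""
    mu = background_cps * integration_s            # expected background counts
    trials_per_hour = 3600.0 / integration_s
    p_allowed = false_alarm_per_hour / trials_per_hour
    # Find k such that the upper-tail probability of a standard normal is p_allowed.
    k = 0.0
    while 0.5 * math.erfc(k / math.sqrt(2.0)) > p_allowed:
        k += 0.01
    return mu + k * math.sqrt(mu)

# Example: 300 cps background, 1 s integrations, one false alarm per hour allowed.
print(round(alarm_threshold(background_cps=300.0, integration_s=1.0,
                            false_alarm_per_hour=1.0), 1))
```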
NASA Technical Reports Server (NTRS)
Barnes, Robert A.; Holmes, Alan W.; Barnes, William L.; Esaias, Wayne E.; Mcclain, Charles R.; Svitek, Tomas; Hooker, Stanford B.; Firestone, Elaine R.; Acker, James G.
1994-01-01
Based on the operating characteristics of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), calibration equations have been developed that allow conversion of the counts from the radiometer into Earth-exiting radiances. These radiances are the geophysical properties the instrument has been designed to measure. SeaWiFS uses bilinear gains to allow high sensitivity measurements of ocean-leaving radiances and low sensitivity measurements of radiances from clouds, which are much brighter than the ocean. The calculation of these bilinear gains is central to the calibration equations. Several other factors within these equations are also included. Among these are the spectral responses of the eight SeaWiFS bands. A band's spectral response includes the ability of the band to isolate a portion of the electromagnetic spectrum and the amount of light that lies outside of that region. The latter is termed out-of-band response. In the calibration procedure, some of the counts from the instrument are produced by radiance in the out-of-band region. The number of those counts for each band is a function of the spectral shape of the source. For the SeaWiFS calibration equations, the out-of-band responses are converted from those for the laboratory source into those for a source with the spectral shape of solar flux. The solar flux, unlike the laboratory calibration, approximates the spectral shape of the Earth-exiting radiance from the oceans. This conversion modifies the results from the laboratory radiometric calibration by 1-4 percent, depending on the band. These and other factors in the SeaWiFS calibration equations are presented here, both for users of the SeaWiFS data set and for researchers making ground-based radiance measurements in support of SeaWiFS.
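The bilinear-gain conversion can be illustrated schematically: counts respond to radiance with a steep slope below a knee (high sensitivity for dark ocean scenes) and a shallow slope above it (for bright clouds), and the calibration inverts that response. All coefficients in the sketch below are made-up placeholders, not SeaWiFS calibration values.

```python
def counts_to_radiance(counts, offset, slope_dark, slope_bright, knee_counts):
    """Convert detector counts to radiance for a bilinear-gain band: a steep
    slope (high sensitivity, slope_dark counts per radiance unit) below the
    knee and a shallow slope (slope_bright) above it.  Placeholder values."""
    knee_radiance = (knee_counts - offset) / slope_dark
    if counts <= knee_counts:
        return (counts - offset) / slope_dark
    return knee_radiance + (counts - knee_counts) / slope_bright

# Example: dark-ocean, mid-range, and bright-cloud counts through the same band.
for c in (120, 400, 900):
    L = counts_to_radiance(c, offset=20.0, slope_dark=8.0,
                           slope_bright=0.8, knee_counts=520)
    print(c, round(L, 2))
```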
ERIC Educational Resources Information Center
Van Nuys, Ute Elisabeth
1986-01-01
Presents reviews of the following mathematics software designed to teach young children counting, number recognition, visual discrimination, matching, addition, and subtraction skills: Stickybear Numbers, Learning with Leeper, Getting Ready to Read and Add, Counting Parade, Early Games for Young Children, Charlie Brown's 1,2,3's, Let's Go Fishing,…
Shippee, Nathan D; Shippee, Tetyana P; Hess, Erik P; Beebe, Timothy J
2014-02-08
Emergency department (ED) use is costly, and especially frequent among publicly insured populations in the US, who also disproportionately encounter financial (cost/coverage-related) and non-financial/practical barriers to care. The present study examines the distinct associations financial and non-financial barriers to care have with patterns of ED use among a publicly insured population. This observational study uses linked administrative-survey data for enrollees of Minnesota Health Care Programs to examine patterns in ED use: specifically, enrollee self-report of the ED as usual source of care, and past-year count of 0, 1, or 2+ ED visits from administrative data. Main independent variables included a count of seven enrollee-reported financial concerns about healthcare costs and coverage, and a count of seven enrollee-reported non-financial, practical barriers to access (e.g., limited office hours, problems with childcare). Covariates included health, health care, and demographic measures. In multivariate regression models, only financial concerns were positively associated with reporting the ED as usual source of care, but only non-financial barriers were significantly associated with greater ED visits. Regression-adjusted values indicated notable differences in ED visits by number of non-financial barriers: zero non-financial barriers meant an adjusted 78% chance of having zero ED visits (95% C.I.: 70.5%-85.5%), 15.9% chance of 1 (95% C.I.: 10.4%-21.3%), and 6.2% chance (95% C.I.: 3.5%-8.8%) of 2+ visits, whereas having all seven non-financial barriers meant a 48.2% adjusted chance of zero visits (95% C.I.: 30.9%-65.6%), 31.8% chance of 1 visit (95% C.I.: 24.2%-39.5%), and 20% chance (95% C.I.: 8.4%-31.6%) of 2+ visits. Financial barriers were associated with identifying the ED as one's usual source of care but non-financial barriers were associated with actual ED visits. Outreach/literacy efforts may help reduce reliance on/perception of the ED as usual source of care, whereas improved targeting/availability of covered services may help curb frequent actual visits, among publicly insured individuals.
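The regression-adjusted probabilities of 0, 1, or 2+ visits quoted above come from a multivariate model whose exact specification is not given in the abstract. As an illustration only, the sketch below fits a multinomial logit to simulated data and predicts category probabilities at 0 versus 7 non-financial barriers; all variable names, coefficients, and data are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
barriers = rng.integers(0, 8, n)      # 0-7 non-financial barriers (simulated)
financial = rng.integers(0, 8, n)     # 0-7 financial concerns (simulated)

# Simulated outcome: 0, 1, or 2+ ED visits, with more visits at higher barrier counts
logits = np.column_stack([np.zeros(n),
                          -1.5 + 0.25 * barriers,
                          -2.5 + 0.35 * barriers])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
visits = np.array([rng.choice(3, p=p) for p in probs])

X = sm.add_constant(np.column_stack([barriers, financial]))
fit = sm.MNLogit(visits, X).fit(disp=False)

# Adjusted probabilities at 0 vs 7 non-financial barriers (financial held at 0)
X_new = sm.add_constant(np.array([[0, 0], [7, 0]]), has_constant='add')
print(fit.predict(X_new))   # rows: [P(0 visits), P(1 visit), P(2+ visits)]
```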
Optimal staining methods for delineation of cortical areas and neuron counts in human brains.
Uylings, H B; Zilles, K; Rajkowska, G
1999-04-01
For cytoarchitectonic delineation of cortical areas in human brain, the Gallyas staining for somata, with its sharp contrast between cell bodies and neuropil, is preferable to the classical Nissl staining, the more so when an image analysis system is used. This Gallyas staining, however, does not appear to be appropriate for counting neuron numbers in pertinent brain areas, due to the lack of distinct cytological features between small neurons and glial cells. For cell counting, the Nissl staining is preferable. In an optimal design for cell counting, at least both the Gallyas and the Nissl staining must be applied, the former staining for cytoarchitectural delineation of cortical areas and the latter for counting the number of neurons in the pertinent cortical areas. Copyright 1999 Academic Press.
SHEEP: The Search for the High Energy Extragalactic Population
NASA Technical Reports Server (NTRS)
Nandra, K.; Georgantopoulos, I.; Ptak, A.; Turner, T. J.; White, Nicholas E. (Technical Monitor)
2002-01-01
We present the SHEEP survey for serendipitously-detected hard X-ray sources in ASCA GIS images. In a survey area of approx. 40 sq deg, 69 sources were detected in the 5-10 keV band to a limiting flux of approx. 10(exp -13) erg/sq cm/s. The number counts agree with those obtained by the similar BeppoSAX HELLAS survey, and both are in close agreement with ASCA and BeppoSAX 2-10 keV surveys. Spectral analysis of the SHEEP sample reveals that the 2-10 and 5-10 keV surveys do not sample the same populations, however, as we find considerably harder spectra, with an average Gamma approx. 1.0 assuming no absorption. The implication is that the agreement in the number counts is coincidental, with the 5-10 keV surveys gaining approximately as many hard sources as they lose soft ones, when compared to the 2-10 keV surveys. This is hard to reconcile with standard AGN "population synthesis" models for the X-ray background, which posit the existence of a large population of absorbed sources. We find no evidence of the population hardening at faint fluxes, with the exception that the few very brightest objects are anomalously soft. 53 of the SHEEP sources have been covered by ROSAT in the pointed phase. Of these 32 were detected. An additional 3 were detected in the RASS. As expected the sources detected with ROSAT are systematically softer than those detected with ASCA alone, and of the sample as a whole. Although they represent a biased subsample, the ROSAT positions allow relatively secure catalog identifications to be made. We find associations with a wide variety of AGN and a few clusters and groups. At least two X-ray sources identified with high-z QSOs present very hard X-ray spectra indicative of absorption, despite the presence of broad optical lines. A possible explanation for this is that we are seeing relatively dust-free "warm absorbers" in high luminosity/redshift objects. Our analysis defines a new, hard X-ray selected sample of objects - mostly active galactic nuclei - which is less prone to bias due to obscuration than previous optical or soft X-ray samples. They are therefore more representative of the population of AGN in the universe in general, and the SHEEP survey should produce bright examples of the sources that make up the hard X-ray background, the majority of which has recently been resolved by Chandra. This should help elucidate the nature of the new populations.
NASA Astrophysics Data System (ADS)
Fenske, Roger; Näther, Dirk U.; Dennis, Richard B.; Smith, S. Desmond
2010-02-01
Commercial Fluorescence Lifetime Spectrometers have long suffered from the lack of a simple, compact and relatively inexpensive broad spectral band light source that can be flexibly employed for both quasi-steady state and time resolved measurements (using Time Correlated Single Photon Counting [TCSPC]). This paper reports the integration of an optically pumped photonic crystal fibre supercontinuum source (Fianium model SC400PP) as a light source in Fluorescence Lifetime Spectrometers (Edinburgh Instruments FLS920 and Lifespec II), with single photon counting detectors (micro-channel plate photomultiplier and a near-infrared photomultiplier) covering the UV to NIR range. An innovative method of spectral selection of the supercontinuum source involving wedge interference filters is also discussed.
de Hartog, Jeroen J; Hoek, Gerard; Mirme, Aadu; Tuch, Thomas; Kos, Gerard P A; ten Brink, Harry M; Brunekreef, Bert; Cyrys, Josef; Heinrich, Joachim; Pitz, Mike; Lanki, Timo; Vallius, Marko; Pekkanen, Juha; Kreyling, Wolfgang G
2005-04-01
Evidence on the correlation between particle mass and (ultrafine) particle number concentrations is limited. Winter- and spring-time measurements of urban background air pollution were performed in Amsterdam (The Netherlands), Erfurt (Germany) and Helsinki (Finland), within the framework of the EU-funded ULTRA study. Daily average concentrations of ambient particulate matter with a 50% cut-off of 2.5 microm (PM2.5), total particle number concentrations and particle number concentrations in different size classes were collected at fixed monitoring sites. The aim of this paper is to assess differences in particle concentrations in several size classes across cities, the correlation between different particle fractions, and the differential impact of meteorological factors on their concentrations. The medians of ultrafine particle number concentrations were similar across the three cities (range 15.1 x 10(3)-18.3 x 10(3) counts cm(-3)). Within the ultrafine particle fraction, the subfraction (10-30 nm) made a higher contribution to particle number concentrations in Erfurt than in Helsinki and Amsterdam. Larger differences across the cities were found for PM2.5 (range 11-17 microg m(-3)). PM2.5 and ultrafine particle concentrations were weakly (Amsterdam, Helsinki) to moderately (Erfurt) correlated. The inconsistent correlation between PM2.5 and ultrafine particle concentrations in the three cities was partly explained by the larger impact of more local sources from the city on ultrafine particle concentrations than on PM2.5, suggesting that the upwind or downwind location of the measuring site with regard to potential particle sources has to be considered. Also, relationships with wind direction and meteorological data differed, suggesting that particle number and particle mass are two separate indicators of airborne particulate matter. Both decreased with increasing wind speed, but ultrafine particle number counts consistently decreased with increasing relative humidity, whereas PM2.5 increased with increasing barometric pressure. Within the ultrafine particle mode, nucleation mode (10-30 nm) and Aitken mode (30-100 nm) particles had distinctly different relationships with accumulation mode particles and weather conditions. Since the composition of these particle fractions also differs, it is of interest to test in future epidemiological studies whether they have different health effects.
Nonomura, N; Takayama, H; Nishimura, K; Oka, D; Nakai, Y; Shiba, M; Tsujimura, A; Nakayama, M; Aozasa, K; Okuyama, A
2007-01-01
Mast cell infiltration is often observed around human tumours. Inflammatory cells such as macrophages, neutrophils and mast cells infiltrating around tumours are known to contribute to tumour growth; however, the clinical significance of mast cell invasion in prostate cancer (PCa) has not been investigated. Mast cell infiltration was evaluated in 104 patients (age range, 45–88 years; median, 72 years), who underwent needle biopsy of the prostate and were confirmed to have PCa. Needle biopsy specimens of prostate were sliced into 5-μm-thick sections and immunostained for mast cells with monoclonal antibody against mast cell-specific tryptase. Mast cells were counted systematically under a microscope (× 400 magnification), and the relations between mast cell numbers and clinicopathologic findings were evaluated. The mast cell count was evaluated for prognostic value by multivariate analysis. Mast cells were immunostained around the cancer foci. The median number of mast cells in each case was 16. The mast cell count was higher around cancer foci in patients with higher Gleason scores than in those with low Gleason scores. The mast cell number correlated well with clinical stage (P<0.001). Prostate-specific antigen-free survival of patients with higher mast cell counts was better than that in patients with lower mast cell counts (P<0.001). Multivariate analysis revealed that mast cell count was a significant prognostic factor (P<0.005). The number of mast cells infiltrating around cancer foci in prostate biopsy specimens can be a significant prognostic factor of PCa. PMID:17848955
Seabird nest counts: A test of monitoring metrics using Red-tailed Tropicbirds
Seavy, N.E.; Reynolds, M.H.
2009-01-01
Counts of nesting birds are often used to monitor the abundance of breeding pairs at colonies. Mean incubation counts (MICs) are counts of nests with eggs at intervals that correspond to the mean incubation period of a species. The sum of all counts during the nesting season (MICtotal) and the highest single count during the season (MICmax) are metrics that can be generated from this method. However, the utility of these metrics as measures of the number of breeding pairs has not been well tested. We used two approaches to evaluate the bias and precision of MIC metrics for quantifying annual variation in the number of breeding Red-tailed Tropicbirds (Phaethon rubricauda) nesting on two islands in the Papahānaumokuākea Marine National Monument in the northwest Hawaiian Islands. First, we used data from nest plots with individually marked birds to generate simulated MIC metrics that we compared to the known number of nesting individuals. The MICtotal overestimated the number of pairs by about 5%, whereas the MICmax underestimated the number of pairs by about 60%. However, both metrics exhibited similar precision. Second, we used a 12-yr time series of island-wide MICs to compare estimates of temporal trend and annual variation using the MICmax and MICtotal. The 95% confidence intervals for the trend estimates were overlapping and the residual standard errors for the two metrics were similar. Our results suggest that both metrics offered similar precision for indices of breeding pairs of Red-tailed Tropicbirds, but that MICtotal was more accurate. © 2009 Association of Field Ornithologists.
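The two metrics compared above are simple summaries of a season's series of mean-incubation-interval counts. A minimal sketch of their computation, using invented counts:

```python
def mic_metrics(incubation_counts):
    """Summarize mean-incubation-interval nest counts.

    MICtotal: sum of all counts across the season.
    MICmax:   highest single count of the season.
    """
    return {"MICtotal": sum(incubation_counts),
            "MICmax": max(incubation_counts)}

# Hypothetical counts of active nests at each mean-incubation-period visit
print(mic_metrics([12, 30, 41, 27, 9]))   # {'MICtotal': 119, 'MICmax': 41}
```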
Pre-Exposure Prophylaxis YouTube Videos: Content Evaluation.
Kecojevic, Aleksandar; Basch, Corey; Basch, Charles; Kernan, William
2018-02-16
Antiretroviral (ARV) medicines reduce the risk of transmitting the HIV virus and are recommended as daily pre-exposure prophylaxis (PrEP) in combination with safer sex practices for HIV-negative individuals at a high risk for infection, but are underused in HIV prevention. Previous literature suggests that YouTube is extensively used to share health information. While pre-exposure prophylaxis (PrEP) is a novel and promising approach to HIV prevention, there is limited understanding of YouTube videos as a source of information on PrEP. The objective of this study was to describe the sources, characteristics, and content of the most widely viewed PrEP YouTube videos published up to October 1, 2016. The keywords "pre-exposure prophylaxis" and "Truvada" were used to find 217 videos with a view count >100. Videos were coded for source, view count, length, number of comments, and selected aspects of content. Videos were also assessed for the most likely target audience. The total cumulative number of views was >2.3 million, however, a single Centers for Disease Control and Prevention video accounted for >1.2 million of the total cumulative views. A great majority (181/217, 83.4%) of the videos promoted the use of PrEP, whereas 60.8% (132/217) identified the specific target audience. In contrast, only 35.9% (78/217) of the videos mentioned how to obtain PrEP, whereas less than one third addressed the costs, side effects, and safety aspects relating to PrEP. Medical and academic institutions were the sources of the largest number of videos (66/217, 30.4%), followed by consumers (63/217, 29.0%), community-based organizations (CBO; 48/217, 22.1%), and media (40/217, 18.4%). Videos uploaded by the media sources were more likely to discuss the cost of PrEP (P<.001), whereas the use of PrEP was less likely to be promoted in videos uploaded by individual consumers (P=.002) and more likely to be promoted in videos originated by CBOs (P=.009). The most common target audience for the videos was gay and bisexual men. YouTube videos can be used to share reliable PrEP information with individuals. Further research is needed to identify the best practices for using this medium to promote and increase PrEP uptake. ©Aleksandar Kecojevic, Corey Basch, Charles Basch, William Kernan. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 16.02.2018.
Events and the Ontology of Individuals: Verbs as a Source of Individuating Mass and Count Nouns
ERIC Educational Resources Information Center
Barner, David; Wagner, Laura; Snedeker, Jesse
2008-01-01
What does mass-count syntax contribute to the interpretation of noun phrases (NPs), and how much of NP meaning is contributed by lexical items alone? Many have argued that count syntax specifies reference to countable individuals (e.g., "cats") while mass syntax specifies reference to unindividuated entities (e.g., "water"). We evaluated this…
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2010 CFR
2010-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2011 CFR
2011-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
Lisle, John T.; Hamilton, Martin A.; Willse, Alan R.; McFeters, Gordon A.
2004-01-01
Total direct counts of bacterial abundance are central in assessing the biomass and bacteriological quality of water in ecological and industrial applications. Several factors have been identified that contribute to the variability in bacterial abundance counts when using fluorescent microscopy, the most significant of which is retaining an adequate number of cells per filter to ensure an acceptable level of statistical confidence in the resulting data. Previous studies that have assessed the components of total-direct-count methods that contribute to this variance have attempted to maintain a bacterial cell abundance value per filter of approximately 10^6 cells filter^-1. In this study we have established the lower limit for the number of bacterial cells per filter at which the statistical reliability of the abundance estimate is no longer acceptable. Our results indicate that when the numbers of bacterial cells per filter were progressively reduced below 10^5, the microscopic methods increasingly overestimated the true bacterial abundance (range, 15.0 to 99.3%). The solid-phase cytometer only slightly overestimated the true bacterial abundances and was more consistent over the same range of bacterial abundances per filter (range, 8.9 to 12.5%). The solid-phase cytometer method for conducting total direct counts of bacteria was less biased and performed significantly better than any of the microscope methods. It was also found that microscopic count data from counting 5 fields on three separate filters were statistically equivalent to data from counting 20 fields on a single filter.
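The dependence of statistical confidence on the number of cells retained per filter can be illustrated with the usual field-to-filter extrapolation and its Poisson counting error. The sketch below is not the study's analysis; the field counts and areas are hypothetical.

```python
import math

def direct_count_estimate(cells_per_field, filter_area_mm2, field_area_mm2):
    """Extrapolate a total-direct-count estimate from microscope fields.

    Returns the estimated cells per filter and the Poisson relative
    standard error implied by the total number of cells actually counted.
    """
    total_counted = sum(cells_per_field)
    fields = len(cells_per_field)
    scale = filter_area_mm2 / (fields * field_area_mm2)
    estimate = total_counted * scale
    rel_se = 1.0 / math.sqrt(total_counted) if total_counted else float("inf")
    return estimate, rel_se

# Hypothetical: 20 fields of 0.01 mm^2 on a 200 mm^2 effective filter area
est, rse = direct_count_estimate([18, 22, 25, 17, 20] * 4, 200.0, 0.01)
print(f"{est:.3g} cells per filter, relative SE {rse:.1%}")
```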
Automated food microbiology: potential for the hydrophobic grid-membrane filter.
Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L
1978-01-01
Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054
Study of a nTHGEM-based thermal neutron detector
NASA Astrophysics Data System (ADS)
Li, Ke; Zhou, Jian-Rong; Wang, Xiao-Dong; Xiong, Tao; Zhang, Ying; Xie, Yu-Guang; Zhou, Liang; Xu, Hong; Yang, Gui-An; Wang, Yan-Feng; Wang, Yan; Wu, Jin-Jie; Sun, Zhi-Jia; Hu, Bi-Tao
2016-07-01
With new-generation neutron sources, traditional neutron detectors cannot satisfy application demands, especially under high flux. Furthermore, facing the global crisis in 3He gas supply, research on new types of neutron detectors as alternatives to 3He is a hotspot in the field of particle detection. GEM (Gaseous Electron Multiplier) neutron detectors have high counting rates and good spatial and time resolution, and could be one future direction for the development of neutron detectors. In this paper, the physical process of neutron detection is simulated with the Geant4 code, studying the relations between thermal-neutron conversion efficiency, boron thickness, and the number of boron layers. Because of the special characteristics of neutron detection, we have developed a novel ceramic nTHGEM (neutron THick GEM) for neutron detection. The performance of the nTHGEM working in different Ar/CO2 mixtures is presented, including measurements of the gain and the count-rate plateau using a copper-target X-ray source. A detector with a single nTHGEM has been tested for 2-D imaging using a 252Cf neutron source. The key parameters of the performance of the nTHGEM detector have been obtained, providing necessary experimental data as a reference for further research on this detector. Supported by National Natural Science Foundation of China (11127508, 11175199, 11205253, 11405191), Key Laboratory of Neutron Physics, CAEP (2013DB06, 2013BB04) and CAS (YZ201512)
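The simulated relation between conversion efficiency and boron-layer thickness can be bounded with a back-of-the-envelope capture calculation. The sketch below is not the paper's Geant4 model: it only computes the probability that a thermal neutron is captured in a single 10B-rich layer, ignores whether the reaction products escape into the gas, and assumes a rough, hypothetical 10B atom density.

```python
import math

SIGMA_B10_THERMAL = 3837e-24   # cm^2, 10B(n,alpha) cross section at ~25 meV

def capture_probability(thickness_um, b10_atoms_per_cm3=1.0e23):
    """Probability that a thermal neutron is captured in a single 10B-rich
    converter layer of the given thickness.

    Capture only: escape of the alpha/7Li products into the gas is ignored,
    so this is an upper bound on the layer's conversion efficiency.
    The default atom density is an assumed, order-of-magnitude value.
    """
    t_cm = thickness_um * 1e-4
    return 1.0 - math.exp(-b10_atoms_per_cm3 * SIGMA_B10_THERMAL * t_cm)

for t in (0.5, 1.0, 2.0, 5.0):
    print(f"{t} um layer: capture probability {capture_probability(t):.1%}")
```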
Color quench correction for low level Cherenkov counting.
Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B
2009-05-01
The Cherenkov counting efficiency varies strongly with color quenching, so correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench-indicative parameter based on spectral area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectral indicators is its usefulness also for low level Cherenkov counting.
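In practice, a correction curve of this kind maps the quench-indicative parameter to a counting efficiency, which is then used to convert a net count rate into activity. The sketch below shows that interpolation step only; the calibration points are invented, not the published curve.

```python
import numpy as np

# Hypothetical calibration: quench-indicative parameter (spectral area ratio)
# versus Cherenkov counting efficiency, measured with colored standards.
quench_param = np.array([0.20, 0.35, 0.50, 0.65, 0.80])
efficiency   = np.array([0.25, 0.35, 0.45, 0.52, 0.58])

def corrected_activity(net_cpm, sample_quench_param):
    """Convert a net Cherenkov count rate to activity (dpm) using the
    efficiency interpolated from the color-quench correction curve."""
    eff = np.interp(sample_quench_param, quench_param, efficiency)
    return net_cpm / eff

print(corrected_activity(net_cpm=120.0, sample_quench_param=0.42))
```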
The Chandra Source Catalog: Source Properties and Data Products
NASA Astrophysics Data System (ADS)
Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).
MOIRCS Deep Survey. I: DRG Number Counts
NASA Astrophysics Data System (ADS)
Kajisawa, Masaru; Konishi, Masahiro; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka; Katsuno; Yoshikawa, Tomohiro; Akiyama, Masayuki; Ichikawa, Takashi; Ouchi, Masami; Omata, Koji; Tanaka, Ichi; Nishimura, Tetsuo; Yamada, Toru
2006-12-01
We use very deep near-infrared imaging data taken with Multi-Object InfraRed Camera and Spectrograph (MOIRCS) on the Subaru Telescope to investigate the number counts of Distant Red Galaxies (DRGs). We have observed a 4x7 arcmin^2 field in the Great Observatories Origins Deep Survey North (GOODS-N), and our data reach J=24.6 and K=23.2 (5sigma, Vega magnitude). The surface density of DRGs selected by J-K>2.3 is 2.35+-0.31 arcmin^-2 at K<22 and 3.54+-0.38 arcmin^-2 at K<23, respectively. These values are consistent with those in the GOODS-South and FIRES. Our deep and wide data suggest that the number counts of DRGs turn over at K~22, and the surface density of the faint DRGs with K>22 is smaller than that expected from the number counts at the brighter magnitude. The result indicates that while there are many bright galaxies at 2
Cuatianquiz Lima, Cecilia; Macías Garcia, Constantino
2016-01-01
Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January-October 2009-2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity.
Abreu-Mendoza, Roberto A; Arias-Trejo, Natalia
2017-10-01
The authors investigated whether children with Down's syndrome (DS) who have not started to produce number words understand the one-to-one correspondence principle (Experiment 1), and they looked at the relationship between number word knowledge and receptive vocabulary (Experiment 2). Sixteen children with DS who did not recite the count list participated in Experiment 1, along with 2 comparison groups: 1 of 16 children with DS who recited up to 10, paired by chronological age, and another of 16 typically developing children paired by their ability to recite the list. The understanding of the principle was evaluated by a preferential looking task. Children saw 1 of 2 conditions. In the number condition, they heard number words and in the beep condition they heard computerized beeps. In both conditions, children saw videos depicting counting events that were principle-consistent or principle-inconsistent. Experiment 2 evaluated 25 children with DS using the Give-a-Number task and the Receptive Vocabulary subtest of the Wechsler Preschool and Primary Scale of Intelligence-III. In Experiment 1, children in the number condition preferred principle-consistent videos, independent of their ability to recite the count list. Experiment 2 showed a strong correlation between number word knowledge and receptive vocabulary scores, independent of chronological age. The results suggest that the difficulty of children with DS in acquiring counting ability might not reflect a lack of understanding of the one-to-one correspondence principle, but might instead be related to vocabulary development. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Geography-based structural analysis of the Internet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kasiviswanathan, Shiva; Eidenbenz, Stephan; Yan, Guanhua
2010-01-01
In this paper, we study some geographic aspects of the Internet. We base our analysis on a large set of geolocated IP hop-level session data (including about 300,000 backbone routers, 150 million end hosts, and 1 billion sessions) that we synthesized from a variety of different input sources such as US census data, computer usage statistics, Internet market share data, IP geolocation data sets, CAIDA's Skitter data set for backbone connectivity, and BGP routing tables. We use this model to perform a nationwide and statewide geographic analysis of the Internet. Our main observations are: (1) There is a dominant coast-to-coast pattern in the US Internet traffic. In fact, in many instances even if the end-devices are not near either coast, the traffic between them still takes a long detour through the coasts. (2) More than half of the Internet paths are inflated by 100% or more compared to their corresponding geometric straight-line distance. This circuitousness makes the average ratio between the routing distance and geometric distance large (around 10). (3) The weighted mean hop count is around 5, but the hop counts are very loosely correlated with the distances. The weighted mean AS count (number of ASes traversed) is around 3. (4) The AS size and the AS location number distributions are heavy-tailed and strongly correlated. Most of the ASes are medium sized and there is a wide variability in the geographic dispersion size (measured in terms of the convex hull area) of these ASes.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
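The broken power law used for the source-count distribution is easy to write down and integrate numerically; the integral of S·dN/dS over a flux range gives the total flux contributed by sources in that range. The sketch below illustrates that form with placeholder normalization, break flux, and indices, not the paper's fitted values.

```python
import numpy as np

def dn_ds(S, A, S_break, n1, n2):
    """Broken power law dN/dS: index n1 above the break, n2 below."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_break,
                    A * (S / S_break) ** (-n1),
                    A * (S / S_break) ** (-n2))

# Placeholder parameters (not the fitted values from the paper)
A, S_break, n1, n2 = 1e-10, 1e-8, 2.6, 1.97

# Flux produced by sources between S_min and S_max: integral of S * dN/dS dS
S = np.logspace(-12, -7, 2000)
integrand = S * dn_ds(S, A, S_break, n1, n2)
total_flux = np.trapz(integrand, S)
print(f"integrated flux from this population: {total_flux:.3e}")
```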
Fission product yield measurements using monoenergetic photon beams
NASA Astrophysics Data System (ADS)
Krishichayan; Bhike, M.; Tonchev, A. P.; Tornow, W.
2017-09-01
Measurements of fission product yields (FPYs) are an important source of information on the fission process. During the past couple of years, a TUNL-LANL-LLNL collaboration has provided data on the FPYs from quasi-monoenergetic neutron-induced fission on 235U, 238U, and 239Pu and has revealed an unexpected energy dependence of both asymmetric fission fragments at energies below 4 MeV. This peculiar FPY energy dependence was more pronounced in neutron-induced fission of 239Pu. In an effort to understand and compare the effect of the incoming probe on the FPY distribution, we have carried out monoenergetic photon-induced fission experiments on the same 235U, 238U, and 239Pu targets. Monoenergetic photon beams of Eγ = 13.0 MeV were provided by the HIγS facility, the world's most intense γ-ray source. In order to determine the total number of fission events, a dual-fission chamber was used during the irradiation. The irradiated samples were counted at TUNL's low-background γ-ray counting facility using high-efficiency HPGe detectors over a period of 10 weeks. Here we report on our first-ever photofission product yield measurements obtained with monoenergetic photon beams. These results are compared with neutron-induced FPY data.
Whitman, Richard L.; Nevers, Meredith B.
2004-01-01
Monitoring beaches for recreational water quality is becoming more common, but few sampling designs or policy approaches have evaluated the efficacy of monitoring programs. The authors intensively sampled water for E. coli (N=1770) at 63rd Street Beach, Chicago for 6 months in 2000 in order to (1) characterize spatial-temporal trends, (2) determine between- and within-transect variation, and (3) estimate sample size requirements and determine sampling reliability. E. coli counts were highly variable within and between sampling sites but spatially and diurnally autocorrelated. Variation in counts decreased with water depth and time of day. The required number of samples was high for 70% precision around the critical closure level (i.e., 6 within-transect or 24 between-transect replicates). Since spatial replication may be cost-prohibitive, composite sampling is an alternative once sources of error have been well defined. The results suggest that beach monitoring programs may be requiring too few samples to fulfill desired management objectives. As the recreational water quality national database is developed, it is important that sampling strategies are empirically derived from a thorough understanding of the sources of variation and the reliability of collected data. Greater monitoring efficacy will yield better policy decisions, risk assessments, programmatic goals, and future usefulness of the information.
Assessment of near-source air pollution at a fine spatial scale ...
Mobile monitoring is an emerging strategy to characterize spatially and temporally variable air pollution in areas near sources. EPA’s Geospatial Monitoring of Air Pollution (GMAP) vehicle, an all-electric vehicle measuring real-time concentrations of particulate and gaseous pollutants, was utilized to map air pollution trends near the Port of Charleston in South Carolina. High-resolution monitoring was performed along driving routes near several port terminals and rail yard facilities, recording geospatial coordinates and measurements of pollutants including black carbon, size-resolved particle count ranging from ultrafine to coarse (6 nm to 20 µm), carbon monoxide, carbon dioxide, and nitrogen dioxide. Additionally, a portable meteorological station was used to characterize local meteorology. Port activity data was provided by the Port Authority of Charleston and includes counts of ships and trucks, and port service operations such as cranes and forklifts during the sampling time periods. Measurements are supplemented with modeling performed with AERMOD and RLINE in order to characterize the impact of the various terminals at the Port of Charleston on local air quality. Specifically, the data are used to determine the magnitude of the increase in local, near-port pollutant concentrations as well as the spatial extent to which concentration is elevated above background. These effects are studied in relation to a number of potentially significant factors such
CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES: THE SOURCE CATALOG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Song; Liu, Jifeng; Qiu, Yanli
The Chandra archival data is a valuable resource for various studies on different X-ray astronomy topics. In this paper, we utilize this wealth of information and present a uniformly processed data set, which can be used to address a wide range of scientific questions. The data analysis procedures are applied to 10,029 Advanced CCD Imaging Spectrometer observations, which produces 363,530 source detections belonging to 217,828 distinct X-ray sources. This number is twice the size of the Chandra Source Catalog (Version 1.1). The catalogs in this paper provide abundant estimates of the detected X-ray source properties, including source positions, counts, colors, fluxes, luminosities, variability statistics, etc. Cross-correlation of these objects with galaxies shows that 17,828 sources are located within the D_25 isophotes of 1110 galaxies, and 7504 sources are located between the D_25 and 2 D_25 isophotes of 910 galaxies. Contamination analysis with the log N-log S relation indicates that 51.3% of objects within 2 D_25 isophotes are truly relevant to galaxies, and the "net" source fraction increases to 58.9%, 67.3%, and 69.1% for sources with luminosities above 10^37, 10^38, and 10^39 erg s^-1, respectively. Among the possible scientific uses of this catalog, we discuss the possibility of studying intra-observation variability, inter-observation variability, and supersoft sources (SSSs). About 17,092 detected sources above 10 counts are classified as variable within an individual observation according to the Kolmogorov-Smirnov (K-S) criterion (P_K-S < 0.01). There are 99,647 sources observed more than once and 11,843 sources observed 10 times or more, offering us a wealth of data with which to explore the long-term variability. There are 1638 individual objects (~2350 detections) classified as SSSs. As a quite interesting subclass, detailed studies on X-ray spectra and optical spectroscopic follow-up are needed to categorize these SSSs and pinpoint their properties. In addition, this survey can enable a wide range of statistical studies, such as X-ray activity in different types of stars, X-ray luminosity functions in different types of galaxies, and multi-wavelength identification and classification of different X-ray populations.
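The intra-observation variability flag described above is based on a Kolmogorov-Smirnov comparison of photon arrival times with a constant-rate model. The sketch below illustrates that kind of test under the simplifying assumption of a single uninterrupted exposure (a real analysis would account for good-time intervals and exposure variations); the event lists are simulated.

```python
import numpy as np
from scipy.stats import kstest

def ks_variability(event_times, t_start, t_stop):
    """Compare photon arrival times with the constant-rate expectation
    (uniform over the exposure) and return the K-S p-value."""
    scaled = (np.asarray(event_times) - t_start) / (t_stop - t_start)
    return kstest(scaled, "uniform").pvalue

rng = np.random.default_rng(1)
steady = rng.uniform(0, 10_000, 200)                    # constant source
flare = np.concatenate([rng.uniform(0, 10_000, 150),
                        rng.uniform(6_000, 6_500, 50)]) # flaring source
print(ks_variability(steady, 0, 10_000))   # large p-value: not flagged
print(ks_variability(flare, 0, 10_000))    # p < 0.01: flagged as variable
```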
A Chandra X-Ray Study of NGC 1068 II. The Luminous X-Ray Source Population
NASA Technical Reports Server (NTRS)
Smith, David A.; Wilson, Andrew S.
2003-01-01
We present an analysis of the compact X-ray source population in the Seyfert 2 galaxy NGC 1068, imaged with an approx. 50 ks Chandra observation. We find a total of 84 compact sources on the S3 chip, of which 66 are located within the 25.0 B-mag arcsec^-2 isophote of the galactic disk of NGC 1068. Spectra have been obtained for the 21 sources with at least 50 counts and modeled with both multicolor disk blackbody and power-law models. The power-law model provides the better description of the spectrum for 18 of these sources. For fainter sources, the spectral index has been estimated from the hardness ratio. Five sources have 0.4 - 8 keV intrinsic luminosities greater than 10(exp 39) ergs/s, assuming that their emission is isotropic and that they are associated with NGC 1068. We refer to these sources as intermediate-luminosity X-ray objects (IXOs). If these five sources are X-ray binaries accreting with luminosities that are both sub-Eddington and isotropic, then the implied source masses are approx. greater than 7 solar masses, and so they are inferred to be black holes. Most of the spectrally modeled sources have spectral shapes similar to Galactic black hole candidates. However, the brightest compact source in NGC 1068 has a spectrum that is much harder than that found in Galactic black hole candidates and other IXOs. The brightest source also shows large-amplitude variability on both short-term and long-term timescales, with the count rate possibly decreasing by a factor of 2 in approx. 2 ks during our Chandra observation, and the source flux decreasing by a factor of 5 between our observation and the grating observations taken just over 9 months later. The ratio of the number of sources with luminosities greater than 2.1 x 10(exp 38) ergs/s in the 0.4 - 8 keV band to the rate of massive (greater than 5 solar masses) star formation is the same, to within a factor of 2, for NGC 1068, the Antennae, NGC 5194 (the main galaxy in M51), and the Circinus galaxy. This suggests that the rate of production of X-ray binaries per massive star is approximately the same for galaxies with currently active star formation, including "starbursts."
Silicon Quantum Dots with Counted Antimony Donor Implants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Meenakshi; Pacheco, Jose L.; Perry, Daniel Lee
2015-10-01
Deterministic control over the location and number of donors is crucial to donor spin quantum bits (qubits) in semiconductor-based quantum computing. A focused ion beam is used to implant antimony donors close to quantum dots. Ion detectors are integrated next to the quantum dots to sense the implants. The number of ions implanted can be counted to a precision of a single ion. Regular Coulomb blockade is observed from the quantum dots. Charge offsets, indicative of donor ionization, are observed in devices with counted implants.
Revised Sunspot Numbers and the Effects on Understanding the Sunspot Cycle
NASA Astrophysics Data System (ADS)
Hathaway, D. H.
2014-12-01
While sunspot numbers provide only limited information about the sunspot cycle, they provide that information for at least twice as many sunspot cycles as any other direct solar observation. In particular, sunspot numbers are available before, during, and immediately after the Maunder Minimum (1645-1715). The instruments and methods used to count sunspots have changed over the last 400+ years. This leads to systematic changes in the sunspot number that can mask, or artificially introduce, characteristics of the sunspot cycle. The most widely used sunspot number is the International (Wolf/Zurich) sunspot number which is now calculated at the Solar Influences Data Center in Brussels, Belgium. These numbers extend back to 1749. The Group sunspot number extends back to the first telescopic observations of the Sun in 1610. There are well-known and significant differences between these two numbers where they overlap. Recent work has helped us to understand the sources of these differences and has led to proposed revisions in the sunspot numbers. Independent studies now support many of these revisions. These revised sunspot numbers suggest changes to our understanding of the sunspot cycle itself and to our understanding of its connection to climate change.
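The abstract does not spell out how a daily count is turned into a sunspot number, but the relative (Wolf/International) sunspot number is conventionally defined as R = k(10g + s), where g is the number of sunspot groups, s is the number of individual spots, and k is an observer- and instrument-dependent scaling factor; much of the cross-calibration between historical observers comes down to estimating these k factors. A tiny sketch of that definition, with invented example values:

```python
def wolf_number(groups, spots, k=1.0):
    """Relative (Wolf) sunspot number R = k * (10 * g + s).

    groups: number of sunspot groups, spots: number of individual spots,
    k: observer/instrument scaling factor (1.0 for the reference observer).
    """
    return k * (10 * groups + spots)

print(wolf_number(groups=3, spots=17))          # 47 for the reference observer
print(wolf_number(groups=3, spots=17, k=0.6))   # same day, a less sensitive setup
```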
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time-to-event observations. Their mean count, or piecewise constant event rate, can be evaluated with discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
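As a minimal illustration of Poisson-family count simulation and the overdispersion issue mentioned above (not the tutorial's own examples), the sketch below draws counts per interval from a Poisson model and from a gamma-mixed Poisson (negative binomial) with the same mean, and compares their variance-to-mean ratios.

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 3.0                      # mean events per interval
n = 10_000

poisson_counts = rng.poisson(rate, n)

# Negative binomial as a gamma-mixed Poisson: overdispersed, same mean
shape = 2.0                     # smaller shape -> more overdispersion
nb_counts = rng.poisson(rng.gamma(shape, rate / shape, n))

for name, x in [("Poisson", poisson_counts), ("Neg. binomial", nb_counts)]:
    print(f"{name}: mean={x.mean():.2f}, var/mean={x.var() / x.mean():.2f}")
# var/mean ~ 1 for the Poisson model, > 1 for the overdispersed model
```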
Tailoring point counts for inference about avian density: dealing with nondetection and availability
Johnson, Fred A.; Dorazio, Robert M.; Castellón, Traci D.; Martin, Julien; Garcia, Jay O.; Nichols, James D.
2014-01-01
Point counts are commonly used for bird surveys, but interpretation is ambiguous unless there is an accounting for the imperfect detection of individuals. We show how repeated point counts, supplemented by observation distances, can account for two aspects of the counting process: (1) detection of birds conditional on being available for observation and (2) the availability of birds for detection given presence. We propose a hierarchical model that permits the radius in which birds are available for detection to vary with forest stand age (or other relevant habitat features), so that the number of birds available at each location is described by a Poisson-gamma mixture. Conditional on availability, the number of birds detected at each location is modeled by a beta-binomial distribution. We fit this model to repeated point count data of Florida scrub-jays and found evidence that the area in which birds were available for detection decreased with increasing stand age. Estimated density was 0.083 (95% CI: 0.060-0.113) scrub-jays/ha. Point counts of birds have a number of appealing features. Based on our findings, however, an accounting for both components of the counting process may be necessary to ensure that abundance estimates are comparable across time and space. Our approach could easily be adapted to other species and habitats.
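To make the hierarchical structure concrete, here is a simulation sketch of the two-stage counting process described above: availability follows a Poisson-gamma mixture (equivalently, a negative binomial) and detection given availability follows a beta-binomial. The parameter values are invented, and the published model additionally links the availability radius to stand age, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_point_counts(n_points, mu, gamma_shape, det_alpha, det_beta):
    """Simulate one visit to each survey point.

    Availability: N_i ~ Poisson(lambda_i), lambda_i ~ Gamma (Poisson-gamma mixture).
    Detection:    y_i | N_i ~ Binomial(N_i, p_i), p_i ~ Beta (beta-binomial).
    """
    lam = rng.gamma(gamma_shape, mu / gamma_shape, n_points)   # mean mu
    available = rng.poisson(lam)
    p = rng.beta(det_alpha, det_beta, n_points)
    detected = rng.binomial(available, p)
    return available, detected

available, detected = simulate_point_counts(
    n_points=500, mu=2.0, gamma_shape=1.5, det_alpha=4.0, det_beta=6.0)
print("mean available:", available.mean(), "mean detected:", detected.mean())
```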
High sensitivity pulse-counting mass spectrometer system for noble gas analysis
NASA Technical Reports Server (NTRS)
Hohenberg, C. M.
1980-01-01
A pulse-counting mass spectrometer is described which comprises a new ion source of cylindrical geometry with exceptional optical properties (the Baur source), dual, externally adjustable focal-plane collector slits, and a 17-stage Allen-type electron multiplier, all housed in a metal 21-cm-radius, 90-deg magnetic-sector flight tube. Mass discrimination of the instrument is less than 1 per mil per mass unit; the optical transmission is more than 90%; the source sensitivity (Faraday collection) is 4 mA/torr at 250 micron emission; and the abundance sensitivity is 30,000.
Getting something out of nothing in the measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Tan, Yong-Gang; Cai, Qing-Yu; Yang, Hai-Feng; Hu, Yao-Hua
2015-11-01
Because of the monogamy of entanglement, measurement-device-independent quantum key distribution is immune to side-information leakage from the measurement devices. When correlated measurement outcomes are generated from dark counts, no entanglement is actually obtained. However, secure key bits can still be proven to be generated from these measurement outcomes. In particular, we give numerical studies of the contributions of dark counts to the key generation rate in practical decoy-state MDI-QKD, where a signal source, a weaker decoy source and a vacuum decoy source are used by each legitimate key distributor.
A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.
Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan
2016-07-01
Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests. American Academy of Audiology.
Newark Kids Count 1998: A Profile of Child Well-Being.
ERIC Educational Resources Information Center
Lucas, Gina; Hernandez, Eloisa; Cheslow, Becky
This Kids Count report provides statistical data on several indicators of child well-being in Newark, New Jersey. Indicators are grouped into six categories: (1) Demographics (including population, number of registered voters, income level, people living below poverty level); (2) Family Well-Being (including average number of children receiving…
USDA-ARS?s Scientific Manuscript database
Ultrasonography is a powerful technology that can be used to improve reproductive management in heifers. By counting the number of antral follicles observed on an ultrasound screen the practitioner can gather additional information when reproductive tract scoring, because the number of antral folli...
Fractal analysis: A new tool in transient volcanic ash plume characterization.
NASA Astrophysics Data System (ADS)
Tournigand, Pierre-Yves; Peña Fernandez, Juan Jose; Taddeucci, Jacopo; Perugini, Diego; Sesterhenn, Jörn
2017-04-01
Transient volcanic plumes are time-dependent features generated by unstable eruptive sources. They represent a threat to human health and infrastructure, and a challenge to characterize due to their intrinsic instability. Plumes have been investigated through physical (e.g. visible, thermal, UV, radar imagery), experimental and numerical studies in order to provide new insights into their dynamics and to better anticipate their behavior. It has been shown experimentally that plume dynamics are strongly dependent on source conditions and that plume shape evolution holds the key to retrieving these conditions. In this study, a shape evolution analysis is performed on thermal high-speed videos of volcanic plumes from three different volcanoes: Sakurajima (Japan), Stromboli (Italy) and Fuego (Guatemala), recorded with a FLIR SC655 thermal camera during several field campaigns between 2012 and 2016. To complete this dataset, three numerical gas-jet simulations at different Reynolds numbers (2000, 5000 and 10000) have been used to set reference values for the natural cases. Turbulent flow shapes are well known to feature scale-invariant structures and a high degree of complexity. For this reason we characterized the two-dimensional shape of natural and synthetic plumes using a fractal descriptor. Such methods have been applied in other studies to experimental turbulent jets as well as to atmospheric clouds and have shown promising results. At each time step the plume contour has been manually outlined and measured using the box-counting method. This method consists of covering the image with squares of variable sizes and counting the number of squares containing the plume outline. The negative slope of the number of squares as a function of their size in a log-log plot gives the fractal dimension of the plume at a given time. Preliminary results show an increase of the fractal dimension over time for the natural volcanic plumes as well as for the numerically simulated ones, but at varying rates. An increasing fractal dimension corresponds to an increase in the overall complexity of the plume shape and thus to an increase in flow turbulence over time. Accordingly, the numerical simulations show that the fractal dimension increases faster with increasing Reynolds number. However, other parameters seem to play a role in volcanic plume evolution. The features of the eruption source (e.g. vent number, size and shape, ejection duration, and the number of and time interval between the different ejection pulses that characterize unsteady eruptions) also seem to affect this time evolution; for example, a single-vent source generates a faster increase of the fractal dimension than a plume fed by several vents over time. This first attempt to use fractal analysis on volcanic plumes could be the starting point toward a new kind of tool for volcanic plume characterization, potentially giving access to parameters so far unreachable using more traditional techniques. Fractal dimension analysis applied to volcanic plumes could directly link shape evolution to source conditions and thus help constrain the uncertainties on such parameters.
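The box-counting procedure described above is straightforward to prototype. The sketch below estimates the fractal dimension of a binary outline image by counting occupied boxes at several box sizes and fitting the log-log slope; it uses a synthetic circular outline rather than a manually traced plume contour.

```python
import numpy as np

def box_counting_dimension(outline, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary outline image.

    For each box size, count boxes containing at least one outline pixel;
    the dimension is the negative slope of log(count) versus log(size).
    """
    counts = []
    for s in box_sizes:
        h = outline.shape[0] // s * s
        w = outline.shape[1] // s * s
        blocks = outline[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Toy example: outline of a circle (dimension should come out close to 1)
y, x = np.mgrid[:256, :256]
r = np.hypot(x - 128, y - 128)
outline = np.abs(r - 80) < 1.0
print(box_counting_dimension(outline))
```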
Which button will I press? Preference for correctly ordered counting sequences in 18-month-olds.
Ip, Martin Ho Kwan; Imuta, Kana; Slaughter, Virginia
2018-04-16
Correct counting respects the stable order principle whereby the count terms are recited in a fixed order every time. The 4 experiments reported here tested whether precounting infants recognize and prefer correct stable-ordered counting. The authors introduced a novel preference paradigm in which infants could freely press two buttons to activate videos of counting events. In the "correct" counting video, number words were always recited in the canonical order ("1, 2, 3, 4, 5, 6"). The "incorrect" counting video was identical except that the number words were recited in a random order (e.g., "5, 3, 1, 6, 4, 2"). In Experiment 1, 18-month-olds (n = 21), but not 15-month-olds (n = 24), significantly preferred to press the button that activated correct counting events. Experiment 2 revealed that English-learning 18-month-olds' (n = 21) preference for stable-ordered counting disappeared when the counting was done in Japanese. By contrast, Experiment 3 showed that multilingual 18-month-olds (n = 24) preferred correct stable-ordered counting in an unfamiliar foreign language. In Experiment 4, multilingual 18-month-olds (N = 21) showed no preference for stable-ordered alphabet sequences, ruling out some alternative explanations for the Experiment 3 results. Overall these findings are consistent with the idea that implicit recognition of the stable order principle of counting is acquired by 18 months of age, and that learning more than one language may accelerate infants' understanding of abstract counting principles. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Enumeration of Vibrio cholerae O1 in Bangladesh waters by fluorescent-antibody direct viable count.
Brayton, P R; Tamplin, M L; Huq, A; Colwell, R R
1987-01-01
A field trial to enumerate Vibrio cholerae O1 in aquatic environments in Bangladesh was conducted, comparing fluorescent-antibody direct viable count with culture detection by the most-probable-number index. Specificity of a monoclonal antibody prepared against the O1 antigen was assessed and incorporated into the fluorescence staining method. All pond and water samples yielded higher counts of viable V. cholerae O1 by fluorescent-antibody direct viable count than by the most-probable-number index. Fluorescence microscopy is a more sensitive detection system than culture methods because it allows the enumeration of both culturable and nonculturable cells and therefore provides more precise monitoring of microbiological water quality. PMID:3324967
Detection ratios on winter surveys of Rocky Mountain Trumpeter Swans Cygnus buccinator
Bart, J.; Mitchell, C.D.; Fisher, M.N.; Dubovsky, J.A.
2007-01-01
We estimated the detection ratio for Rocky Mountain Trumpeter Swans Cygnus buccinator that were counted during aerial surveys made in winter. The standard survey involved counting white or grey birds on snow and ice and thus might be expected to have had low detection ratios. On the other hand, observers were permitted to circle areas where the birds were concentrated multiple times to obtain accurate counts. Actual numbers present were estimated by conducting additional intensive aerial counts either immediately before or immediately after the standard count. Surveyors continued the intensive surveys at each area until consecutive counts were identical. The surveys were made at 10 locations in 2006 and at 19 locations in 2007. A total of 2,452 swans were counted on the intensive surveys. Detection ratios did not vary detectably with year, observer, which survey was conducted first, age of the swans, or the number of swans present. The overall detection ratio was 0.93 (90% confidence interval 0.82-1.04), indicating that the counts were quite accurate. Results are used to depict changes in population size for Rocky Mountain Trumpeter Swans from 1974-2007. © Wildfowl & Wetlands Trust.
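As a simple illustration of the detection-ratio idea in this record, the sketch below (Python; the per-location counts are hypothetical, not the survey data) computes a pooled detection ratio as the total standard count divided by the total intensive count, with a rough delta-method confidence interval.

    import numpy as np

    # Hypothetical per-location counts (standard aerial count, intensive count).
    standard = np.array([118, 54, 203, 77, 31], dtype=float)
    intensive = np.array([125, 60, 215, 80, 35], dtype=float)

    # Pooled (ratio-of-totals) detection ratio.
    ratio = standard.sum() / intensive.sum()

    # Rough 90% CI via the delta method for a ratio estimator.
    n = len(standard)
    resid = standard - ratio * intensive
    se = np.sqrt(np.sum(resid**2) / (n - 1) / n) / intensive.mean()
    print(f"detection ratio = {ratio:.2f}, "
          f"90% CI = ({ratio - 1.645*se:.2f}, {ratio + 1.645*se:.2f})")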
NASA Astrophysics Data System (ADS)
Uttley, P.; Gendreau, K.; Markwardt, C.; Strohmayer, T. E.; Bult, P.; Arzoumanian, Z.; Pottschmidt, K.; Ray, P. S.; Remillard, R.; Pasham, D.; Steiner, J.; Neilsen, J.; Homan, J.; Miller, J. M.; Iwakiri, W.; Fabian, A. C.
2018-03-01
NICER observed the new X-ray transient MAXI J1820+070 (ATel #11399, #11400, #11403, #11404, #11406, #11418, #11420, #11421) on multiple occasions from 2018 March 12 to 14. During this time the source brightened rapidly, from a total NICER mean count rate of 880 count/s on March 12 to 2800 count/s by March 14 17:00 UTC, corresponding to a change in 2-10 keV modelled flux (see below) from 1.9E-9 to 5E-9 erg cm-2 s-1. The broadband X-ray spectrum is absorbed by a low column density (fitting the model given below, we obtain 1.5E21 cm-2), in keeping with the low Galactic column in the direction of the source (ATel #11418; Dickey & Lockman, 1990, ARAA, 28, 215; Kalberla et al. 2005, A&A, 440, 775) and consists of a hard power-law component with weak reflection features (broad iron line and narrow 6.4 keV line core) and an additional soft X-ray component.
Van Stappen, Vicky; Van Dyck, Delfien; Latomme, Julie; De Bourdeaudhuij, Ilse; Moreno, Luis; Socha, Piotr; Iotova, Violeta; Koletzko, Berthold; Manios, Yannis; Androutsos, Odysseas; Cardon, Greet; De Craemer, Marieke
2018-02-07
This study is part of the ToyBox-study, which is conducted in six European countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain), aiming to develop a cost-effective kindergarten-based, family-involved intervention to prevent overweight and obesity in four- to six-year-old preschool children. In the current study, we aimed to examine and compare preschoolers' step count patterns, across the six European countries. A sample of 3578 preschoolers (mean age: 4.8 ± 0.4) was included. Multilevel analyses were performed to take clustering of measurements into account. Based on the average hourly steps, step count patterns for the six European countries were created for weekdays and weekend days. The step count patterns during weekdays were related to the daily kindergarten schedules. Step count patterns during weekdays showed several significant peaks and troughs ( p < 0.01) and clearly reflected the kindergartens' daily schedules, except for Germany. For example, low numbers of steps were observed during afternoon naptimes and high numbers of steps during recess. In Germany, step count patterns did not show clear peaks and troughs, which can be explained by a less structured kindergarten schedule. On weekend days, differences in step count patterns were observed in the absolute number of steps in the afternoon trough and the period in which the evening peak occurred. Differences in step count patterns across the countries can be explained by differences in (school) policy, lifestyle habits, and culture. Therefore, it might be important to respond to these step count patterns and more specifically to tackle the inactive periods during interventions to promote physical activity in preschoolers.
Van Stappen, Vicky; Latomme, Julie; Moreno, Luis; Socha, Piotr; Iotova, Violeta; Koletzko, Berthold; Manios, Yannis; Androutsos, Odysseas
2018-01-01
This study is part of the ToyBox-study, which is conducted in six European countries (Belgium, Bulgaria, Germany, Greece, Poland and Spain), aiming to develop a cost-effective kindergarten-based, family-involved intervention to prevent overweight and obesity in four- to six-year-old preschool children. In the current study, we aimed to examine and compare preschoolers’ step count patterns, across the six European countries. A sample of 3578 preschoolers (mean age: 4.8 ± 0.4) was included. Multilevel analyses were performed to take clustering of measurements into account. Based on the average hourly steps, step count patterns for the six European countries were created for weekdays and weekend days. The step count patterns during weekdays were related to the daily kindergarten schedules. Step count patterns during weekdays showed several significant peaks and troughs (p < 0.01) and clearly reflected the kindergartens’ daily schedules, except for Germany. For example, low numbers of steps were observed during afternoon naptimes and high numbers of steps during recess. In Germany, step count patterns did not show clear peaks and troughs, which can be explained by a less structured kindergarten schedule. On weekend days, differences in step count patterns were observed in the absolute number of steps in the afternoon trough and the period in which the evening peak occurred. Differences in step count patterns across the countries can be explained by differences in (school) policy, lifestyle habits, and culture. Therefore, it might be important to respond to these step count patterns and more specifically to tackle the inactive periods during interventions to promote physical activity in preschoolers. PMID:29414916
SCUBA-2 follow-up of Herschel-SPIRE observed Planck overdensities
NASA Astrophysics Data System (ADS)
MacKenzie, Todd P.; Scott, Douglas; Bianconi, Matteo; Clements, David L.; Dole, Herve A.; Flores-Cacho, Inés; Guery, David; Kneissl, Ruediger; Lagache, Guilaine; Marleau, Francine R.; Montier, Ludovic; Nesvadba, Nicole P. H.; Pointecouteau, Etienne; Soucail, Genevieve
2017-07-01
We present SCUBA-2 follow-up of 61 candidate high-redshift Planck sources. Of these, 10 are confirmed strong gravitational lenses and comprise some of the brightest such submm sources on the observed sky, while 51 are candidate proto-cluster fields undergoing massive starburst events. With the accompanying Herschel-Spectral and Photometric Imaging Receiver observations and assuming an empirical dust temperature prior of 34^{+13}_{-9} K, we provide photometric redshift and far-IR luminosity estimates for 172 SCUBA-2-selected sources within these Planck overdensity fields. The redshift distribution of the sources peaks between redshifts of 2 and 4, with one-third of the sources having S500/S350 > 1. For the majority of the sources, we find far-IR luminosities of approximately 10^13 L⊙, corresponding to star formation rates of around 1000 M⊙ yr^-1. For S850 > 8 mJy sources, we show that there is up to an order-of-magnitude increase in star formation rate density and a factor of 6 increase in uncorrected number counts when compared to typical cosmological survey fields. The sources detected with SCUBA-2 account for only approximately 5 per cent of the Planck flux at 353 GHz, and thus many more fainter sources are expected in these fields.
Gross, Hans J
2011-09-01
Human inborn numerical competence means our ability to recognize object numbers precisely under circumstances which do not allow sequential counting. This archaic process has been called "subitizing," from the Latin "subito" = suddenly, immediately, indicating that the objects in question are presented to test persons only for a fraction of a second in order to prevent counting. In contrast, however, sequential counting, an outstanding cultural achievement of mankind, means to count "1, 2, 3, 4, 5, 6, 7, 8…" without a limit. The following essay will explain how the limit of numerical competence, i.e., the recognition of object numbers without counting, has been determined for humans and how this has been achieved for the first time in case of an invertebrate, the honeybee. Finally, a hypothesis explaining the influence of our limited, inborn numerical competence on counting in our times, e.g., in the Russian language, will be presented. Subitizing versus counting by young Down syndrome infants and autistics and the Savant syndrome will be discussed.
To bee or not to bee, this is the question…
2011-01-01
Human inborn numerical competence means our ability to recognize object numbers precisely under circumstances which do not allow sequential counting. This archaic process has been called “subitizing,” from the Latin “subito” = suddenly, immediately, indicating that the objects in question are presented to test persons only for a fraction of a second in order to prevent counting. In contrast, however, sequential counting, an outstanding cultural achievement of mankind, means to count “1, 2, 3, 4, 5, 6, 7, 8…” without a limit. The following essay will explain how the limit of numerical competence, i.e., the recognition of object numbers without counting, has been determined for humans and how this has been achieved for the first time in case of an invertebrate, the honeybee. Finally, a hypothesis explaining the influence of our limited, inborn numerical competence on counting in our times, e.g., in the Russian language, will be presented. Subitizing versus counting by young Down syndrome infants and autistics and the Savant syndrome will be discussed. PMID:22046473
Rocha, João M; Malcata, F Xavier
2012-08-01
A thorough microbiological study of maize and rye flours, and of the sourdoughs obtained therefrom for the eventual manufacture of broa--a dark sour bread typical of Northern Portugal made following artisanal practices--was carried out. Towards this purpose, samples were supplied by 14 artisanal producers, selected from 4 sub-regions, during two periods of the year. Total viable counts, as well as viable mesophilic and thermophilic microorganisms, yeasts and molds, Gram⁻ rods, endospore-forming and nonsporing Gram⁺ rods, and catalase⁺ and catalase⁻ Gram⁺ cocci were assayed for. The comprehensive experimental dataset revealed a unique and rather complex wild microflora in flours and sourdoughs throughout the whole region, which did not discriminate among sub-regions or seasons, or flour source for that matter. However, fermentation played a major role upon the numbers of the various microbial groups: the viable counts of yeasts, lactobacilli, streptococci, lactococci, enterococci and leuconostocs increased, whereas those of molds, Enterobacteriaceae, Pseudomonadaceae, staphylococci and micrococci decreased.
Vehicular crash data used to rank intersections by injury crash frequency and severity.
Liu, Yi; Li, Zongzhi; Liu, Jingxian; Patel, Harshingar
2016-09-01
This article contains data on research conducted in "A double standard model for allocating limited emergency medical service vehicle resources ensuring service reliability" (Liu et al., 2016) [1]. The crash counts were compiled from comprehensive crash records of over one thousand major signalized intersections in the city of Chicago from 2004 to 2010. For each intersection, vehicular crashes were counted by crash severity level, including fatal; injury Types A, B, and C for major, moderate, and minor injury levels; property damage only (PDO); and unknown. The crash data were further used to rank intersections by equivalent injury crash frequency. The top 200 intersections with the highest number of crash occurrences, identified under both crash frequency- and severity-based scenarios, are shared in this brief. The provided data are a valuable source for research in urban traffic safety analysis and could also be utilized to examine the effectiveness of traffic safety improvement planning and programming, intersection design enhancement, incident and emergency management, and law enforcement strategies.
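As a sketch of how intersections might be ranked by equivalent injury crash frequency, the snippet below (Python; the severity weights, intersection names and crash counts are all hypothetical, not the values used in the cited study) converts counts by severity level into a single weighted score per intersection and sorts on it.

    # Hypothetical severity weights (equivalent injury crashes per crash).
    weights = {"fatal": 12.0, "injury_A": 3.0, "injury_B": 1.5, "injury_C": 1.0, "PDO": 0.1}

    # Hypothetical 2004-2010 crash counts for a few intersections.
    crashes = {
        "Main & 1st":  {"fatal": 1, "injury_A": 4, "injury_B": 10, "injury_C": 22, "PDO": 90},
        "Oak & 5th":   {"fatal": 0, "injury_A": 7, "injury_B": 15, "injury_C": 30, "PDO": 60},
        "Elm & Canal": {"fatal": 2, "injury_A": 2, "injury_B": 8,  "injury_C": 12, "PDO": 40},
    }

    def equivalent_injury_frequency(counts):
        """Weighted sum of crashes, expressed in equivalent injury crashes."""
        return sum(weights[sev] * n for sev, n in counts.items())

    ranking = sorted(crashes, key=lambda k: equivalent_injury_frequency(crashes[k]), reverse=True)
    for name in ranking:
        print(name, round(equivalent_injury_frequency(crashes[name]), 1))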
On Approaching the Ultimate Limits of Communication Using a Photon-Counting Detector
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.; Dolinar, Samuel J.; Birnbaum, Kevin M.; Divsalar, Dariush
2012-01-01
Coherent states achieve the Holevo capacity of a pure-loss channel when paired with an optimal measurement, but a physical realization of this measurement scheme is as of yet unknown, and it is also likely to be of high complexity. In this paper, we focus on the photon-counting measurement and study the photon and dimensional efficiencies attainable with modulations over classical- and nonclassical-state alphabets. We analyze two binary modulation architectures that improve upon the dimensional versus photon efficiency tradeoff achievable with the state-of-the-art coherent-state on-off keying modulation. We show that at high photon efficiency these architectures achieve an efficiency tradeoff that differs from the best possible tradeoff--determined by the Holevo capacity--by only a constant factor. The first architecture we analyze is a coherent-state transmitter that relies on feedback from the receiver to control the transmitted energy. The second architecture uses a single-photon number-state source.
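For context on the capacity bound referred to in this abstract, the sketch below (Python; the transmissivity and mean photon numbers are purely illustrative values) evaluates the standard textbook expression for the Holevo capacity of a pure-loss channel, C = g(eta*nbar) with g(x) = (x+1)log2(x+1) - x*log2(x), and the corresponding photon efficiency C/nbar in bits per transmitted photon; this is not code from the paper.

    import numpy as np

    def g(x):
        """Entropy (bits) of a thermal state with mean photon number x."""
        return (x + 1) * np.log2(x + 1) - x * np.log2(x) if x > 0 else 0.0

    eta = 0.1                        # channel transmissivity (assumed value)
    for nbar in [0.01, 0.1, 1.0]:    # mean transmitted photons per mode (assumed values)
        C = g(eta * nbar)            # Holevo capacity, bits per mode
        print(f"nbar={nbar}: {C:.4f} bits/mode, {C / nbar:.3f} bits/photon")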
Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S
2011-07-01
A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which the areal density in both warm-capsule and cryogenic-DT implosions has been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the required level for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.
Reducing sampling error in faecal egg counts from black rhinoceros (Diceros bicornis).
Stringer, Andrew P; Smith, Diane; Kerley, Graham I H; Linklater, Wayne L
2014-04-01
Faecal egg counts (FECs) are commonly used for the non-invasive assessment of parasite load within hosts. Sources of error, however, have been identified in laboratory techniques and sample storage. Here we focus on sampling error. We test whether a delay in sample collection can affect FECs, and estimate the number of samples needed to reliably assess mean parasite abundance within a host population. Two commonly found parasite eggs in black rhinoceros (Diceros bicornis) dung, strongyle-type nematodes and Anoplocephala gigantea, were used. We find that collection of dung from the centre of faecal boluses up to six hours after defecation does not affect FECs. More than nine samples were needed to greatly improve confidence intervals of the estimated mean parasite abundance within a host population. These results should improve the cost-effectiveness and efficiency of sampling regimes, and support the usefulness of FECs when used for the non-invasive assessment of parasite abundance in black rhinoceros populations.
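As a generic illustration of the sampling question raised here, the sketch below (Python; the simulated egg-count distribution and sample sizes are hypothetical, not the rhinoceros data) shows how the confidence-interval half-width on a mean faecal egg count shrinks as more samples per population are taken, using a simple normal approximation rather than the study's own analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical overdispersed egg counts (eggs per gram) for a host population.
    population = rng.negative_binomial(n=2, p=0.05, size=10000)

    for n_samples in [3, 6, 9, 15, 30]:
        draws = rng.choice(population, size=(2000, n_samples))
        half_width = 1.96 * draws.std(axis=1, ddof=1).mean() / np.sqrt(n_samples)
        print(f"n={n_samples:2d}: mean CI half-width ~ {half_width:.1f} eggs per gram")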
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the `source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6_{-1.3}^{+0.7}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
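As an illustration of the maximum-likelihood slope estimate described in this abstract, the sketch below (Python; the fluences are simulated, not the Parkes sample, and the true index and sample size are arbitrary) applies the standard power-law ML estimator to events above a completeness fluence F_min, giving the cumulative source-count index alpha_hat = -N / sum(ln(F_i / F_min)).

    import numpy as np

    rng = np.random.default_rng(1)
    alpha_true = -1.5          # cumulative index used for the simulation (Euclidean value)
    f_min = 2.0                # completeness fluence, Jy ms (value quoted in the abstract)

    # Draw fluences with N(>F) proportional to F^alpha_true via inverse transform sampling.
    u = rng.random(200)
    fluences = f_min * u ** (1.0 / alpha_true)

    # Maximum-likelihood estimate of the cumulative index and its 1-sigma error.
    n = len(fluences)
    alpha_hat = -n / np.sum(np.log(fluences / f_min))
    alpha_err = abs(alpha_hat) / np.sqrt(n)
    print(f"alpha_hat = {alpha_hat:.2f} +/- {alpha_err:.2f}")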
Comparison of bacteria populations in clean and recycled sand used for bedding in dairy facilities.
Kristula, M A; Rogers, W; Hogan, J S; Sabo, M
2005-12-01
Bedding samples were collected twice from commercial dairy free-stall facilities that used recycled sand and clean sand in both the summer and winter. Collection began on the day sand was taken from the pile (d 0) and placed in the free stalls, and continued for 5 to 7 additional days. The number of colonies per gram of bedding of gram-negative bacteria, coliforms, Streptococcus spp., and Klebsiella spp. was estimated for each sand sample, as well as the amounts of dry and organic matter. Clean sand (CS) and recycled sand (RS) had the same bacterial counts when compared at any sampling time. The mean counts of bacterial populations did vary over the course of the study in both CS and RS. There was a significant increase in bacterial counts from d 0 to d 1 for gram-negative bacteria, coliforms, and Streptococcus spp. in both winter and summer. Counts of gram-negative bacteria, coliforms, Klebsiella spp., and Streptococcus spp. did not differ from d 1 to 7 in the winter. Total counts of gram-negative bacteria did not differ from d 1 to 7 in the summer. On d 1 in the summer, coliform counts were lower than at d 5 to 7, and Klebsiella spp. counts were lower than on d 3 to 7. Streptococcus spp. counts were high on d 1 and were constant through d 7 in both winter and summer trials. The number of coliform and Klebsiella spp. in both CS and RS was below the threshold thought to cause mastitis during the sampling times. The number of Streptococcus spp. was high in both CS and RS during the sampling periods. Other management factors need to be identified to decrease the number of Streptococcus spp. in bedding. Recycled sand had higher organic matter and lower dry matter compared with CS in winter and summer. The results for this study were obtained from multiple herd comparisons, and herd was a significant effect, suggesting that different management systems influence the number and types of bacteria in both CS and RS.
Comparison and assessment of aerial and ground estimates of waterbird colonies
Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.
2008-01-01
Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by one observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.
Counting pollen grains using readily available, free image processing and analysis software.
Costa, Clayton M; Yang, Suann
2009-10-01
Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
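The workflow described here (denoise, threshold, count particles) can also be reproduced outside ImageJ; the sketch below is a rough Python/scikit-image analogue (the file name, Gaussian sigma and minimum-grain area are hypothetical choices), not the macro used in the study.

    import numpy as np
    from skimage import io, filters, morphology, measure

    image = io.imread("pollen_slide.png", as_gray=True)   # hypothetical image file

    # Smooth to suppress noise, then threshold (Otsu) to separate grains from background.
    # This assumes grains appear brighter than the background; invert the comparison if not.
    smoothed = filters.gaussian(image, sigma=1)
    binary = smoothed > filters.threshold_otsu(smoothed)

    # Remove specks smaller than a plausible pollen grain (area in pixels is an assumption).
    cleaned = morphology.remove_small_objects(binary, min_size=30)

    # Label connected components and count them.
    labels = measure.label(cleaned)
    print("pollen grains counted:", labels.max())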
Nutsedge Counts Predict Meloidogyne incognita Juvenile Counts in an Integrated Management System.
Ou, Zhining; Murray, Leigh; Thomas, Stephen H; Schroeder, Jill; Libbin, James
2008-06-01
The southern root-knot nematode (Meloidogyne incognita), yellow nutsedge (Cyperus esculentus) and purple nutsedge (Cyperus rotundus) are important pests in crops grown in the southern US. Management of the individual pests rather than the pest complex is often unsuccessful due to mutually beneficial pest interactions. In an integrated pest management scheme using alfalfa to suppress nutsedges and M. incognita, we evaluated quadratic polynomial regression models for prediction of the number of M. incognita J2 in soil samples as a function of yellow and purple nutsedge plant counts, squares of nutsedge counts and the cross-product between nutsedge counts. In May 2005, purple nutsedge plant count was a significant predictor of M. incognita count. In July and September 2005, counts of both nutsedges and the cross-product were significant predictors. In 2006, the second year of the alfalfa rotation, counts of all three species were reduced. As a likely consequence, the predictive relationship between nutsedges and M. incognita was not significant for May and July. In September 2006, purple nutsedge was a significant predictor of M. incognita. These results lead us to conclude that nutsedge plant counts in a field infested with the M. incognita-nutsedge pest complex can be used as a visual predictor of M. incognita J2 populations, unless the numbers of nutsedge plants and M. incognita are all very low.
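A minimal sketch of the kind of quadratic regression with a cross-product term described above is given below (Python/NumPy; the plant and juvenile counts are invented for illustration), fitting M. incognita J2 counts as a function of yellow and purple nutsedge counts, their squares, and their cross-product.

    import numpy as np

    # Hypothetical per-quadrat counts: yellow nutsedge, purple nutsedge, M. incognita J2.
    yellow = np.array([2, 5, 8, 3, 12, 7, 0, 9], dtype=float)
    purple = np.array([1, 4, 6, 2, 10, 5, 1, 8], dtype=float)
    j2     = np.array([15, 40, 90, 20, 210, 75, 5, 130], dtype=float)

    # Design matrix: intercept, linear terms, squares, and cross-product.
    X = np.column_stack([np.ones_like(yellow), yellow, purple,
                         yellow**2, purple**2, yellow * purple])

    coef, *_ = np.linalg.lstsq(X, j2, rcond=None)
    predicted = X @ coef
    print("coefficients:", np.round(coef, 2))
    print("predicted J2 counts:", np.round(predicted, 1))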
Nutsedge Counts Predict Meloidogyne incognita Juvenile Counts in an Integrated Management System
Ou, Zhining; Murray, Leigh; Thomas, Stephen H.; Schroeder, Jill; Libbin, James
2008-01-01
The southern root-knot nematode (Meloidogyne incognita), yellow nutsedge (Cyperus esculentus) and purple nutsedge (Cyperus rotundus) are important pests in crops grown in the southern US. Management of the individual pests rather than the pest complex is often unsuccessful due to mutually beneficial pest interactions. In an integrated pest management scheme using alfalfa to suppress nutsedges and M. incognita, we evaluated quadratic polynomial regression models for prediction of the number of M. incognita J2 in soil samples as a function of yellow and purple nutsedge plant counts, squares of nutsedge counts and the cross-product between nutsedge counts. In May 2005, purple nutsedge plant count was a significant predictor of M. incognita count. In July and September 2005, counts of both nutsedges and the cross-product were significant predictors. In 2006, the second year of the alfalfa rotation, counts of all three species were reduced. As a likely consequence, the predictive relationship between nutsedges and M. incognita was not significant for May and July. In September 2006, purple nutsedge was a significant predictor of M. incognita. These results lead us to conclude that nutsedge plant counts in a field infested with the M. incognita-nutsedge pest complex can be used as a visual predictor of M. incognita J2 populations, unless the numbers of nutsedge plants and M. incognita are all very low. PMID:19259526
Adjusting MtDNA Quantification in Whole Blood for Peripheral Blood Platelet and Leukocyte Counts.
Hurtado-Roca, Yamilee; Ledesma, Marta; Gonzalez-Lazaro, Monica; Moreno-Loshuertos, Raquel; Fernandez-Silva, Patricio; Enriquez, Jose Antonio; Laclaustra, Martin
2016-01-01
Alterations of mitochondrial DNA copy number (mtDNAcn) in the blood (mitochondrial to nuclear DNA ratio) appear associated with several systemic diseases, including primary mitochondrial disorders, carcinogenesis, and hematologic diseases. Measuring mtDNAcn in DNA extracted from whole blood (WB) instead of from peripheral blood mononuclear cells or buffy coat may yield different results due to the mitochondrial DNA present in platelets. The aim of this work is to quantify the contribution of platelets to mtDNAcn in whole blood [mtDNAcn(WB)] and to propose a correction formula to estimate leukocytes' mtDNAcn [mtDNAcn(L)] from mtDNAcn(WB). Blood samples from 10 healthy adults were combined with platelet-enriched plasma and saline solution to produce artificial blood preparations. Aliquots of each sample were combined with five different platelet concentrations. In 46 of these blood preparations, mtDNAcn was measured by qPCR. MtDNAcn(WB) increased 1.07 (95%CI 0.86, 1.29; p<0.001) per 1000 platelets present in the preparation. We proved that leukocyte count should also be taken into account, as mtDNAcn(WB) was inversely associated with leukocyte count; it increased 1.10 (95%CI 0.95, 1.25, p<0.001) per unit increase of the ratio between platelet and leukocyte counts. If hematological measurements are available, subtracting 1.10 times the platelet/leukocyte ratio from mtDNAcn(WB) may serve as an estimation for mtDNAcn(L). Both platelet and leukocyte counts in the sample are important sources of variation when comparing mtDNAcn among groups of patients if mtDNAcn is measured in DNA extracted from whole blood. Not taking the platelet/leukocyte ratio into account in whole blood measurements may lead to overestimation and misclassification if interpreted as leukocytes' mtDNAcn.
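A minimal worked example of the proposed correction is shown below (Python; the blood counts are hypothetical): the leukocyte mtDNA copy number is estimated by subtracting 1.10 times the platelet-to-leukocyte ratio from the whole-blood value, following the formula quoted in the abstract.

    def mtdnacn_leukocytes(mtdnacn_wb, platelets, leukocytes, slope=1.10):
        """Estimate leukocyte mtDNA copy number from a whole-blood measurement.

        mtdnacn_wb : mtDNA copy number measured in whole-blood DNA
        platelets, leukocytes : counts in the same units (e.g. 10^3 cells/uL)
        slope : increase in mtDNAcn(WB) per unit platelet/leukocyte ratio (from the abstract)
        """
        return mtdnacn_wb - slope * (platelets / leukocytes)

    # Hypothetical sample: mtDNAcn(WB)=180, platelets=250 x10^3/uL, leukocytes=6 x10^3/uL.
    print(round(mtdnacn_leukocytes(180.0, 250.0, 6.0), 1))   # about 134.2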
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza
The source-count distribution as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 x 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1^{+1.0}_{-1.3} x 10^-8 cm^-2 s^-1. The power-law index n_1 = 3.1^{+0.7}_{-0.5} for bright sources above the break hardens to n_2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 x 10^-11 cm^-2 s^-1 at 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
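To make the quoted broken power law concrete, the sketch below (Python/SciPy; it uses the best-fit numbers from the abstract but an arbitrary normalization) evaluates dN/dS above and below the break and integrates the counts above a given flux; it illustrates the functional form only, not the paper's pixel-count analysis.

    import numpy as np
    from scipy.integrate import quad

    S_break = 2.1e-8      # break flux, cm^-2 s^-1 (from the abstract)
    n1, n2 = 3.1, 1.97    # power-law indices above and below the break
    A = 1.0               # normalization at the break (arbitrary for illustration)

    def dN_dS(S):
        """Broken power-law differential source counts."""
        return A * (S / S_break) ** (-n1) if S >= S_break else A * (S / S_break) ** (-n2)

    # Relative number of sources above the flux where the abstract says dN/dS is
    # measured, ~2e-11 cm^-2 s^-1 (upper limit chosen arbitrarily high).
    N_above, _ = quad(dN_dS, 2e-11, 1e-6, points=[S_break])
    print(f"relative N(>2e-11) = {N_above:.3e}")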
Multi-channel photon counting DOT system based on digital lock-in detection technique
NASA Astrophysics Data System (ADS)
Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng
2011-02-01
Relying on the deeper penetration of light into tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis, providing information on anatomical and physiological features. DOT has been widely used in imaging of the breast, neonatal cerebral oxygen status and blood oxygen kinetics, owing to its non-invasiveness, safety and other advantages. Continuous-wave DOT image reconstruction algorithms require measurement of the surface distribution of the output photon flux produced by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which utilizes an optical switch to direct light into optical fibers at different locations. However, in the case of a large number of source locations, or when multiple wavelengths are used, the measurement time with TDM and the measurement interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is recovered from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for a photon counting measurement system is implemented on an FPGA development platform. A dual-channel photon counting DOT experimental system is preliminarily established, including two continuous lasers, photon counting detectors, the digital lock-in detection control circuit, and code to control the hardware and display the results. A series of experimental measurements validates the feasibility of the system. The method developed in this paper greatly accelerates DOT measurement and can also obtain multiple measurements at different source-detector locations.
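As a rough illustration of frequency-division source coding with digital lock-in detection, the sketch below (Python; the modulation frequencies, bin rate, amplitudes and modulation depth are hypothetical) recovers the contribution of each sinusoidally modulated source from a single binned photon-count stream by demodulating with in-phase and quadrature references and low-pass filtering (here, a simple mean).

    import numpy as np

    fs = 1000.0                        # count-binning rate, Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)    # 2 s of data
    freqs = [17.0, 29.0]               # modulation frequencies of the two sources, Hz (assumed)
    true_amps = [40.0, 15.0]           # mean detected counts/bin from each source (assumed)

    # Simulated photon-count signal: sum of modulated sources plus Poisson noise.
    rate = sum(a * (1 + 0.5 * np.sin(2 * np.pi * f * t)) for a, f in zip(true_amps, freqs))
    counts = np.random.default_rng(0).poisson(rate)

    # Digital lock-in: multiply by reference sin/cos at each frequency and average.
    for f, a in zip(freqs, true_amps):
        i_comp = np.mean(counts * np.sin(2 * np.pi * f * t))
        q_comp = np.mean(counts * np.cos(2 * np.pi * f * t))
        amplitude = 2.0 * np.hypot(i_comp, q_comp)   # recovered modulation amplitude
        print(f"{f:.0f} Hz: recovered ~ {amplitude:.1f}, expected {0.5 * a:.1f} counts/bin")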
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-26
The source-count distribution as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 x 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1^{+1.0}_{-1.3} x 10^-8 cm^-2 s^-1. The power-law index n_1 = 3.1^{+0.7}_{-0.5} for bright sources above the break hardens to n_2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 x 10^-11 cm^-2 s^-1 at 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
The impact of finger counting habits on arithmetic in adults and children.
Newman, Sharlene D; Soylu, Firat
2014-07-01
Here, we explored the impact of finger counting habits on arithmetic in both adults and children. Two groups of participants were examined: those that begin counting with their left hand (left-starters) and those that begin counting with their right hand (right-starters). For the adults, performance on an addition task in which participants added 2 two-digit numbers was compared. The results revealed that left-starters were slower than right-starters when adding and they had lower forward and backward digit-span scores. The children (aged 5-12) showed similar results on a single-digit timed addition task: right-starters outperformed left-starters. However, the children did not reveal differences in working memory or verbal and non-verbal intelligence as a function of finger counting habit. We argue that the motor act of finger counting influences how number is represented and suggest that left-starters may have a more bilateral representation that accounts for the slower processing.
NASA Technical Reports Server (NTRS)
Steidel, Charles C.; Hamilton, Donald
1993-01-01
We present an analysis of the number counts and colors of faint galaxies to about 26.5 mag in the fields of two high Galactic latitude, very-high-redshift QSOs. We concentrate on the general properties of the field galaxies at faint magnitudes. In particular, we readdress the faint galaxy number counts and colors as a function of apparent magnitude and we reexamine the possible contribution of very-high-redshift galaxies to the faint samples. We find that the number counts to R = 26 are well fitted by the relation log N(m) = 0.31R + C. The G-band counts for the same galaxies are consistent with the same slope fainter than G about 23.5, but exhibit a much steeper slope at brighter magnitudes. At R = 25.5, the differential number counts have reached about 1.2 x 10^5/sq deg; the same surface density of galaxies is reached at G = 26.5. We confirm the existence of a gradual 'blueing' trend of the field galaxies toward fainter apparent magnitude; however, the blueing trend appears to extend only as faint as G about 24, fainter than which both the (G-R) and (U_n-G) colors appear to level off. The mean colors of faint galaxies are considerably redder than flat spectrum. There are essentially no objects to R = 26 which have spectral energy distributions which are bluer than flat spectrum. The potential contribution of very-high-redshift galaxies may have been underestimated in previous analyses; the current data are consistent with the same population of relatively luminous galaxies at z about 3 as exist at z about 0.7.
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations x subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. PMID:25479254
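For readers unfamiliar with the reconstruction family being evaluated, a minimal MLEM iteration is sketched below (Python/NumPy; the tiny dense system matrix, true image and count vector are invented, and no list-mode processing, motion compensation or resolution modelling is included as in MOLAR).

    import numpy as np

    # Toy system matrix P (detector bins x image voxels) and measured counts y.
    rng = np.random.default_rng(0)
    P = rng.random((12, 6))                    # hypothetical geometric probabilities
    x_true = np.array([0., 5., 20., 20., 5., 0.])
    y = rng.poisson(P @ x_true)                # simulated low-count measurement

    sensitivity = P.sum(axis=0)                # per-voxel sum over detector bins
    x = np.ones(6)                             # uniform initial image
    for _ in range(50):                        # "effective iterations"
        expected = P @ x
        ratio = np.where(expected > 0, y / expected, 0.0)
        x *= (P.T @ ratio) / sensitivity       # MLEM multiplicative update

    print(np.round(x, 1))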
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
Fever in trauma patients: evaluation of risk factors, including traumatic brain injury.
Bengualid, Victoria; Talari, Goutham; Rubin, David; Albaeni, Aiham; Ciubotaru, Ronald L; Berger, Judith
2015-03-01
The role of fever in trauma patients remains unclear. Fever occurs as a response to the release of cytokines and prostaglandins by white blood cells, and many factors, including trauma, can trigger release of these factors. The objectives were to determine whether (1) fever in the first 48 hours is related to a favorable outcome in trauma patients and (2) fever is more common in patients with head trauma. This was a retrospective study of trauma patients admitted to the intensive care unit for at least 2 days; data were analyzed by using multivariate analysis. Of 162 patients studied, 40% had fever during the first 48 hours. Febrile patients had higher mortality rates than did afebrile patients. When adjusted for severity of injuries, fever did not correlate with mortality. Neither the incidence of fever in the first 48 hours after admission to the intensive care unit nor the number of days febrile in the unit differed between patients with and patients without head trauma (traumatic brain injury). About 70% of febrile patients did not have a source found for their fever. Febrile patients without an identified source of infection had lower peak white blood cell counts, lower maximum body temperature, and higher minimum platelet counts than did febrile patients who had an infectious source identified. The most common infection was pneumonia. No relationship was found between the presence of fever during the first 48 hours and mortality. Patients with traumatic brain injury did not have a higher incidence of fever than did patients without traumatic brain injury. About 30% of febrile patients had an identifiable source of infection. Further studies are needed to understand the origin and role of fever in trauma patients.
Different binarization processes validated against manual counts of fluorescent bacterial cells.
Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W
2016-09-01
State-of-the-art software methods (such as fixed-value or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce a biological-significance criterion for obtaining a binary image from a greyscale microscopic image. Using our biological-significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process the threshold and background-subtraction values were incremented until the number of particles smaller than a typical bacterial cell was less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure as well as the exposure time. The biological-significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass).
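A rough sketch of the iterative thresholding idea described above is given below (Python/scikit-image; the file name, starting threshold, step size and the 30-pixel "typical cell" area are hypothetical choices, not the authors' parameters): the threshold is raised until fewer sub-cell-sized particles remain than cell-sized ones.

    import numpy as np
    from skimage import io, measure, util

    image = util.img_as_float(io.imread("fluorescent_cells.png", as_gray=True))  # hypothetical image
    min_cell_area = 30                        # pixels; assumed typical cell size

    threshold = image.mean()                  # crude starting point
    step = 0.01
    while True:
        labels = measure.label(image > threshold)
        areas = np.array([r.area for r in measure.regionprops(labels)])
        small = np.sum(areas < min_cell_area)     # likely noise specks
        cells = np.sum(areas >= min_cell_area)    # likely bacterial cells
        if small < cells or threshold >= image.max():
            break
        threshold += step

    print(f"threshold = {threshold:.3f}, cell count = {cells}")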
A Statistical Treatment of Bioassay Pour Fractions
NASA Technical Reports Server (NTRS)
Barengoltz, Jack; Hughes, David W.
2014-01-01
The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence ( P(
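A minimal numerical sketch of this kind of estimate is given below (Python/SciPy; the observed count, pour fraction and confidence level are example values, and the simple search used here is a generic conservative bound rather than the paper's exact renormalized derivation): treating the observed count k as binomial with success probability equal to the pour fraction f, it reports the naive point estimate k/f and the largest total spore number N whose probability of yielding at most k observed spores is still above 1 - confidence.

    from scipy.stats import binom

    k = 7        # spores counted in the evaluated pour fraction (example value)
    f = 0.5      # pour fraction: probability each spore lands in the evaluated part
    conf = 0.95  # desired confidence level

    # Point estimate: expected total given the observed fraction.
    n_hat = k / f

    # Conservative upper bound: largest N with P(X <= k | N, f) >= 1 - conf.
    n = k
    while binom.cdf(k, n + 1, f) >= 1 - conf:
        n += 1
    print(f"point estimate ~ {n_hat:.0f} spores, {conf:.0%} upper bound ~ {n}")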
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential application of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
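A rough sketch of the detrending and normalization step described here is shown below (Python/SciPy; the neutron-count series is simulated and the spline smoothing factor is an arbitrary choice): a smoothing-spline fit is subtracted from the raw counts, the residual is scaled to zero mean and unit variance, and applying the standard normal CDF (a probability-integral transform) then yields approximately uniform variates.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.stats import norm

    # Simulated one-minute neutron-monitor counts: slow drift plus Poisson noise.
    rng = np.random.default_rng(2)
    t = np.arange(1440.0)                               # one day of minutes
    drift = 6000 + 50 * np.sin(2 * np.pi * t / 1440)    # hypothetical diurnal variation
    counts = rng.poisson(drift).astype(float)

    # Fit a smoothing spline to the slow trend and subtract it.
    trend = UnivariateSpline(t, counts, s=len(t) * counts.var())(t)   # smoothing factor is arbitrary
    residual = counts - trend

    # Scale to zero mean, unit variance -> approximately standard normal variates.
    z = (residual - residual.mean()) / residual.std()

    # Probability-integral transform -> approximately uniform variates on (0, 1).
    u = norm.cdf(z)
    print(z.mean().round(3), z.std().round(3), u.min().round(3), u.max().round(3))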
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemcov, M.; Cooray, A.; Bock, J.
We have observed four massive galaxy clusters with the SPIRE instrument on the Herschel Space Observatory and measure a deficit of surface brightness within their central region after removing detected sources. We simulate the effects of instrumental sensitivity and resolution, the source population, and the lensing effect of the clusters to estimate the shape and amplitude of the deficit. The amplitude of the central deficit is a strong function of the surface density and flux distribution of the background sources. We find that for the current best fitting faint end number counts, and excellent lensing models, the most likely amplitude of the central deficit is the full intensity of the cosmic infrared background (CIB). Our measurement leads to a lower limit to the integrated total intensity of the CIB of I_250μm > 0.69_{-0.03}^{+0.03}(stat.)_{-0.06}^{+0.11}(sys.) MJy sr^-1, with more CIB possible from both low-redshift sources and from sources within the target clusters. It should be possible to observe this effect in existing high angular resolution data at other wavelengths where the CIB is bright, which would allow tests of models of the faint source component of the CIB.
Scaup migration patterns in North Dakota relative to temperatures and water conditions
Austin, J.E.; Granfors, D.A.; Johnson, M.A.; Kohn, S.C.
2002-01-01
Greater (Aythya marila) and lesser scaup (A. affinis) have protracted spring migrations. Migrants may still be present on southern breeding areas when the annual Waterfowl Breeding Population and Habitat Surveys (WBPHS) are being conducted. Understanding factors affecting the chronology and rate of spring migration is important for the interpretation of data from annual population surveys. We describe the general temporal pattern of scaup numbers in south-central North Dakota in spring, examine the relationships between scaup numbers and measures of local water conditions and spring temperatures, and assess timing of the WBPHS relative to numbers of scaup occurring in the study area in late May. Scaup were counted weekly on a 95-km, 400-m-wide transect from late March through May, 1957-1999. Average numbers of scaup per count were positively associated with numbers of seasonal, semipermanent, and total ponds. Average minimum daily ambient temperatures showed a trend of increasing temperatures over the 43 years, and dates of peak scaup counts became progressively earlier. Weeks of early migration usually had higher temperatures than weeks of delayed migration. The relationship between temperature and timing of migration was strongest during the second and third weeks of April, which is approximately 1 week before numbers peak (median date = 19 Apr). Trends in sex and pair ratios were not consistent among years. Counts in late May-early June indicated considerable annual variability in the magnitude of late migrants. Scaup numbers during this period seemed to stabilize in only 5 of the 19 years when 2 or more surveys were conducted after the WBPHS. These findings corroborate concerns regarding the accuracy of the WBPHS for representing breeding populations of scaup and the possibility of double-counting scaup in some years.
NASA Astrophysics Data System (ADS)
Gomez, Jose Alfonso; Landa del Castillo, Blanca; Guzman, Gema; Petticrew, Ellen L.; Owens, Phillip N.
2016-04-01
Recently, several studies have shown the effect of soil management on the soil microbial community in olive orchards, how this might differ due to a combination of management and soil type, and how these differences can be identified using DNA markers (Landa et al., 2014). Using DNA markers of soil bacteria seems to have the potential to detect differences in soil properties between different areas (Joe-Strack and Petticrew, 2012), particularly in those that, by their location and characteristics, might not present differences in other chemical or geochemical soil properties. This presentation describes the preliminary results of an exploratory survey to evaluate the potential of soil bacterial community composition in determining the origin of the sediment in two small endorheic lagoons in southern Spain. Two lagoons (Zoñar and Dulce) in southern Spain with small contributing areas (877 and 263 ha, respectively) were selected for this study. These lagoons were chosen because of their environmental relevance and increasing siltation problems. The dominant land use in most of their contributing catchments is rain-fed olive tree cultivation. In May 2015, two small subcatchments within each lagoon's contributing area were sampled. At each sampling point, a composite sample was collected from three subsamples taken within a 5 m radius. We differentiated between 0-20 and 20-40 cm soil depth. Additionally, in both lagoons samples were taken from the sediment deposited by the stream draining the subcatchment into the lagoon shore, at 0-20 cm depth. Prior to each sampling, each of the two subcatchments was explored for indications of different properties or management that could help divide it into different "homogeneous" units, including: soil management, visual indications of erosion symptoms (e.g. rills, soil mounds around olive trees), colour, and landscape position. As a result, the subcatchment in each lagoon was divided into three areas (referred to as 1, 2 and 3). The bulk community DNA was extracted from 250 mg of soil (three replicates per sample) using the procedure described in Landa et al. (2014). The bacterial 16S rRNA gene V1-V2 hypervariable regions were amplified by polymerase chain reaction (PCR). The sequencing procedure was performed according to the manufacturer's recommendations using the MiSeq Reagent Kit v2 for 300 cycles on a MiSeq desktop sequencer. The raw dataset for each sample consisted of the number of counts for each of the 6640 operational taxonomic units (OTUs) analyzed. All the screening and analysis was performed independently for each lagoon. Given the large number of OTUs, a first screening was made discarding any OTU that did not present at least five samples with counts >20 for that OTU. This lowered the number of OTUs to 205 in Dulce and 217 in Zoñar. Because of the limited number of samples, we did not perform independent analyses for each soil depth. All the analyses were performed twice: once with the original number of counts and once with the normalized number of counts. We screened the OTUs following a 4-step method to determine those with the best ability to discriminate among the three potential source areas. These steps were: 1) eliminate OTUs with no readings or very few, which could be experimental noise; 2) keep only OTUs that differ among source areas; 3) eliminate OTUs that range outside of feasible solutions to explain the average values found in sediment; and 4) eliminate OTUs with the largest variability.
Afterwards, several over-determined mixing models were solved considering different combinations of OTUs, using limSolve (Soetaert et al., 2014) in R. Preliminary results show that 0.2 to 0.6 % of the searched OTUs (i.e. 14 to 42) had the potential for use in the mixing models after the four-step screening process. The results indicate a large variability in the number of counts among the samples from different areas within the subcatchments, ranging, on average, from 49 to 127 % in Dulce and from 80 to 117 % in Zoñar. These ranges are within values reported for other soil chemical and physical properties, although the higher values are above the most commonly reported CVs, which tend to be in the range from 30 to 80 %. Some groups, which are relatively stable to the normalization process, can provide enough information for solving a mixing model, although the specific groups vary between the two catchments, as expected from previous studies. Overall, all the models for Zoñar tended to provide similar results, with low contributions from source areas 1 and 2 and a much larger contribution from source area 3. For this solution, the mixing model was able to replicate the values of all the OTUs included in the model. The predicted values for Dulce were not as stable. The models with 10 OTUs were similar, with a very low contribution from source area 2, a moderate contribution from source area 3 and a maximum contribution from source area 1. However, these values differed from those with only three OTUs, and they also differed between themselves when the normalized and non-normalized values were used. This solution also seemed to replicate the averaged measured values of most of the OTUs included in the model. These preliminary results demonstrate the potential of soil bacterial 16S rDNA in sediment fingerprinting studies, although some questions need to be addressed in more detail, including: the temporal evolution of the distribution of the bacterial markers with soil depth; the implications of selective transport by runoff; and the relatively large variability of counts among samples from the same area. We are currently repeating the sampling in one of the subcatchments to provide some insight into these issues. Key words: sediment, fingerprinting, soil, microbial, DNA, lagoon. References: Joe-Strack, J.A., Petticrew, E.L. 2012. Use of LH-PCR as a DNA fingerprint technique to trace sediment-associated microbial communities from various land uses. European Geosciences Union Annual Meeting, Vienna, Austria (abstract). Landa, B.B., Montes-Borrego, M., Aranda, S., Soriano, M.A., Gómez, J.A., Navas-Cortés, J.A. 2014. Soil factors involved in the diversity and structure of soil bacterial communities in commercial organic olive orchards in Southern Spain. Environmental Microbiology Reports 6: 196-207. Soetaert, K., Van den Meersche, K., van Oevelen, D. 2015. Package limSolve: solving linear inverse models in R. https://cran.r-project.org/web/packages/limSolve/index.html. Last accessed 15 August 2015.
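As a rough illustration of the over-determined mixing model mentioned above, the sketch below uses Python (SciPy's bounded least squares as a stand-in for the R limSolve approach; all OTU count values are invented) to estimate non-negative source-area proportions that sum to one and best reproduce the sediment's OTU profile.

    import numpy as np
    from scipy.optimize import lsq_linear

    # Hypothetical mean normalized counts for 5 OTUs in the three source areas (columns).
    sources = np.array([
        [120.,  40.,  10.],
        [ 15.,  80.,  25.],
        [ 60.,  55., 200.],
        [  5.,  30.,  90.],
        [ 45.,  10.,  70.],
    ])
    sediment = np.array([35., 32., 160., 70., 58.])   # hypothetical sediment sample

    # Augment the system with a heavily weighted row enforcing sum(proportions) = 1,
    # and solve with 0 <= proportion <= 1 (a simple stand-in for limSolve's solvers).
    weight = 1e3
    A = np.vstack([sources, weight * np.ones((1, 3))])
    b = np.concatenate([sediment, [weight]])
    result = lsq_linear(A, b, bounds=(0.0, 1.0))

    print("source-area proportions:", np.round(result.x, 2))
    print("reconstructed OTU counts:", np.round(sources @ result.x, 1))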
Cuatianquiz Lima, Cecilia
2016-01-01
Secondary cavity nesting (SCN) birds breed in holes that they do not excavate themselves. This is possible where there are large trees whose size and age permit the digging of holes by primary excavators and only rarely happens in forest plantations, where we expected a deficit of both breeding holes and SCN species. We assessed whether the availability of tree cavities influenced the number of SCNs in two temperate forest types, and evaluated the change in number of SCNs after adding nest boxes. First, we counted all cavities within each of our 25-m radius sampling points in mature and young forest plots during 2009. We then added nest boxes at standardised locations during 2010 and 2011 and conducted fortnightly bird counts (January–October 2009–2011). In 2011 we added two extra plots of each forest type, where we also conducted bird counts. Prior to adding nest boxes, counts revealed more SCNs in mature than in young forest. Following the addition of nest boxes, the number of SCNs increased significantly in the points with nest boxes in both types of forest. Counts in 2011 confirmed the increase in number of birds due to the addition of nest boxes. Given the likely benefits associated with a richer bird community we propose that, as is routinely done in some countries, forest management programs preserve old tree stumps and add nest boxes to forest plantations in order to increase bird numbers and bird community diversity. PMID:26998410
Galaxy Number Counts from the Sloan Digital Sky Survey Commissioning Data
NASA Astrophysics Data System (ADS)
Yasuda, Naoki; Fukugita, Masataka; Narayanan, Vijay K.; Lupton, Robert H.; Strateva, Iskra; Strauss, Michael A.; Ivezić, Željko; Kim, Rita S. J.; Hogg, David W.; Weinberg, David H.; Shimasaku, Kazuhiro; Loveday, Jon; Annis, James; Bahcall, Neta A.; Blanton, Michael; Brinkmann, Jon; Brunner, Robert J.; Connolly, Andrew J.; Csabai, István; Doi, Mamoru; Hamabe, Masaru; Ichikawa, Shin-Ichi; Ichikawa, Takashi; Johnston, David E.; Knapp, G. R.; Kunszt, Peter Z.; Lamb, D. Q.; McKay, Timothy A.; Munn, Jeffrey A.; Nichol, Robert C.; Okamura, Sadanori; Schneider, Donald P.; Szokoly, Gyula P.; Vogeley, Michael S.; Watanabe, Masaru; York, Donald G.
2001-09-01
We present bright galaxy number counts in five broad bands (u', g', r', i', z') from imaging data taken during the commissioning phase of the Sloan Digital Sky Survey (SDSS). The counts are derived from two independent stripes of imaging scans along the celestial equator, one each toward the northern and the southern Galactic cap, covering about 230 and 210 deg^2, respectively. A careful study is made to verify the reliability of the photometric catalog. For galaxies brighter than r*=16, the catalog produced by automated software is examined against eye inspection of all objects. Statistically meaningful results on the galaxy counts are obtained in the magnitude range 12<=r*<=21, using a sample of 900,000 galaxies. The counts from the two stripes differ by about 30% at magnitudes brighter than r*=15.5, consistent with a local 2σ fluctuation due to large-scale structure in the galaxy distribution. The shape of the number counts-magnitude relation brighter than r*=16 is well characterized by N ~ 10^(0.6m), the relation expected for a homogeneous galaxy distribution in a "Euclidean" universe. In the magnitude range 16
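The Euclidean relation quoted above, N ~ 10^(0.6m), can be verified with a quick simulation. The sketch below is an illustration only; the absolute magnitude, survey depth and sample size are arbitrary choices, not SDSS values. It places sources of a single luminosity uniformly in space and recovers a slope of d(log N)/dm close to 0.6.

```python
# Quick numerical check of the Euclidean slope d(log N)/dm = 0.6: sources of one
# fixed luminosity spread uniformly through space. All inputs are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
M = -20.0                                   # assumed absolute magnitude
r_max = 500.0                               # Mpc, arbitrary survey depth
r = r_max * rng.random(200_000) ** (1 / 3)  # radii uniform in volume
m = M + 5 * np.log10(r * 1e6 / 10)          # distance modulus, r converted to pc

bins = np.arange(14.0, 18.5, 0.5)           # stay away from the faint edge
logN = np.log10([np.sum(m < b) for b in bins])
slope = np.polyfit(bins, logN, 1)[0]
print(f"fitted slope ~ {slope:.2f} (Euclidean expectation 0.6)")
```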
Embedding Number-Combinations Practice Within Word-Problem Tutoring
Powell, Sarah R.; Fuchs, Lynn S.; Fuchs, Douglas
2012-01-01
Two aspects of mathematics with which students with mathematics learning difficulty (MLD) often struggle are word problems and number-combination skills. This article describes a math program in which students receive instruction on using algebraic equations to represent the underlying problem structure for three word-problem types. Students also learn counting strategies for answering number combinations that they cannot retrieve from memory. Results from randomized-control trials indicated that embedding the counting strategies for number combinations produces superior word-problem and number-combination outcomes for students with MLD beyond tutoring programs that focus exclusively on number combinations or word problems. PMID:22661880
Embedding Number-Combinations Practice Within Word-Problem Tutoring.
Powell, Sarah R; Fuchs, Lynn S; Fuchs, Douglas
2010-09-01
Two aspects of mathematics with which students with mathematics learning difficulty (MLD) often struggle are word problems and number-combination skills. This article describes a math program in which students receive instruction on using algebraic equations to represent the underlying problem structure for three word-problem types. Students also learn counting strategies for answering number combinations that they cannot retrieve from memory. Results from randomized-control trials indicated that embedding the counting strategies for number combinations produces superior word-problem and number-combination outcomes for students with MLD beyond tutoring programs that focus exclusively on number combinations or word problems.
The dormant blood microbiome in chronic, inflammatory diseases.
Potgieter, Marnie; Bester, Janette; Kell, Douglas B; Pretorius, Etheresia
2015-07-01
Blood in healthy organisms is seen as a 'sterile' environment: it lacks proliferating microbes. Dormant or not-immediately-culturable forms are not absent, however, as intracellular dormancy is well established. We highlight here that a great many pathogens can survive in blood and inside erythrocytes. 'Non-culturability', reflected by discrepancies between plate counts and total counts, is commonplace in environmental microbiology. It is overcome by improved culturing methods, and we asked how common this would be in blood. A number of recent, sequence-based and ultramicroscopic studies have uncovered an authentic blood microbiome in a number of non-communicable diseases. The chief origin of these microbes is the gut microbiome (especially when it shifts composition to a pathogenic state, known as 'dysbiosis'). Another source is microbes translocated from the oral cavity. 'Dysbiosis' is also used to describe translocation of cells into blood or other tissues. To avoid ambiguity, we here use the term 'atopobiosis' for microbes that appear in places other than their normal location. Atopobiosis may contribute to the dynamics of a variety of inflammatory diseases. Overall, it seems that many more chronic, non-communicable, inflammatory diseases may have a microbial component than are presently considered, and may be treatable using bactericidal antibiotics or vaccines. © FEMS 2015.
Martin, J.; Kitchens, W.M.; Hines, J.E.
2007-01-01
Monitoring natural populations is often a necessary step to establish the conservation status of species and to help improve management decisions. Nevertheless, many monitoring programs do not effectively address primary sources of variability in monitoring data, which ultimately may limit the utility of monitoring in identifying declines and improving management. To illustrate the importance of taking into account detectability and spatial variation, we used a recently proposed estimator of abundance (superpopulation estimator) to estimate population size of and number of young produced by the Snail Kite (Rostrhamus sociabilis plumbeus) in Florida. During the last decade, primary recovery targets set by the U.S. Fish and Wildlife Service for the Snail Kite that were based on deficient monitoring programs (i.e., uncorrected counts) were close to being met (by simply increasing search effort during count surveys). During that same period, the Snail Kite population declined dramatically (by 55% from 1997 to 2005) and the number of young decreased by 70% between 1992?1998 and 1999?2005. Our results provide a strong practical case in favor of the argument that investing a sufficient amount of time and resources into designing and implementing monitoring programs that carefully address detectability and spatial variation is critical for the conservation of endangered species.
The dormant blood microbiome in chronic, inflammatory diseases
Potgieter, Marnie; Bester, Janette; Kell, Douglas B.; Pretorius, Etheresia
2015-01-01
Blood in healthy organisms is seen as a ‘sterile’ environment: it lacks proliferating microbes. Dormant or not-immediately-culturable forms are not absent, however, as intracellular dormancy is well established. We highlight here that a great many pathogens can survive in blood and inside erythrocytes. ‘Non-culturability’, reflected by discrepancies between plate counts and total counts, is commonplace in environmental microbiology. It is overcome by improved culturing methods, and we asked how common this would be in blood. A number of recent, sequence-based and ultramicroscopic studies have uncovered an authentic blood microbiome in a number of non-communicable diseases. The chief origin of these microbes is the gut microbiome (especially when it shifts composition to a pathogenic state, known as ‘dysbiosis’). Another source is microbes translocated from the oral cavity. ‘Dysbiosis’ is also used to describe translocation of cells into blood or other tissues. To avoid ambiguity, we here use the term ‘atopobiosis’ for microbes that appear in places other than their normal location. Atopobiosis may contribute to the dynamics of a variety of inflammatory diseases. Overall, it seems that many more chronic, non-communicable, inflammatory diseases may have a microbial component than are presently considered, and may be treatable using bactericidal antibiotics or vaccines. PMID:25940667
Rcount: simple and flexible RNA-Seq read counting.
Schmid, Marc W; Grossniklaus, Ueli
2015-02-01
Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and the Qt libraries (qt-project.org). Source code and 64 bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount. marcschmid@gmx.ch Test data, genome annotation files, useful Python and R scripts and a step-by-step user guide (including run-time and memory usage tests) are available on github.com/MWSchmid/Rcount. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
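The priority idea described above is easy to illustrate. The toy assignment routine below is not Rcount's C++ implementation; the feature table, priorities and reads are invented, and it only shows how a higher-priority annotation can win over a lower-priority one when a read overlaps both.

```python
# Illustration of priority-based assignment of ambiguous reads, in the spirit of
# what the abstract describes (not Rcount's actual implementation).
from collections import Counter

# (name, start, end, priority) -- higher priority wins on overlap
features = [
    ("gene_A",  100,  500, 2),   # protein-coding: high priority
    ("rRNA_1",  450,  900, 1),   # rRNA: lower priority
    ("gene_B", 1000, 1400, 2),
]

reads = [(120, 170), (460, 510), (480, 530), (1100, 1150), (950, 990)]

def assign(read):
    start, end = read
    hits = [f for f in features if start < f[2] and end > f[1]]
    if not hits:
        return None                       # intergenic read, left uncounted
    best = max(h[3] for h in hits)
    best_hits = [h for h in hits if h[3] == best]
    if len(best_hits) > 1:
        return "ambiguous"                # still ambiguous at the top priority
    return best_hits[0][0]

counts = Counter(assign(r) for r in reads)
print(dict(counts))   # e.g. {'gene_A': 3, 'gene_B': 1, None: 1}
```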
Code of Federal Regulations, 2013 CFR
2013-01-01
... indicated on the package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth...
Code of Federal Regulations, 2014 CFR
2014-01-01
... indicated on the package. The number of pears in the box shall not vary more than 3 from the number indicated on the box. (b) When the numerical count is marked on western standard pear boxes the pears shall not vary more than three-eighths inch in their transverse diameter for counts 120 or less; one-fourth...
Newark Kids Count 2000: A Profile of Child Well-Being.
ERIC Educational Resources Information Center
Cheslow, Becky
This Kids Count report provides statistical data on several indicators of child well-being in Newark, New Jersey. Indicators were grouped into seven categories: (1) Demographics (including population, number of registered voters, income levels, and persons living below poverty level); (2) Family Well-Being (including average number of children in…
More Combinatorial Proofs via Flagpole Arrangements
ERIC Educational Resources Information Center
DeTemple, Duane; Reynolds, H. David, II
2006-01-01
Combinatorial identities are proved by counting the number of arrangements of a flagpole and guy wires on a row of blocks that satisfy a set of conditions. An identity is proved by first deriving and then equating two expressions that each count the number of permissible arrangements. Identities for binomial coefficients and recursion relations…
21 CFR 1304.11 - Inventory requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... solutions), identified by the batch number or other appropriate identifying number, and if possible the... listed in Schedule I or II, make an exact count or measure of the contents, or (ii) If the substance is... holds more than 1,000 tablets or capsules in which case he/she must make an exact count of the contents...
21 CFR 1304.11 - Inventory requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... solutions), identified by the batch number or other appropriate identifying number, and if possible the... listed in Schedule I or II, make an exact count or measure of the contents, or (ii) If the substance is... holds more than 1,000 tablets or capsules in which case he/she must make an exact count of the contents...
21 CFR 1304.11 - Inventory requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... solutions), identified by the batch number or other appropriate identifying number, and if possible the... listed in Schedule I or II, make an exact count or measure of the contents, or (ii) If the substance is... holds more than 1,000 tablets or capsules in which case he/she must make an exact count of the contents...
21 CFR 1304.11 - Inventory requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... solutions), identified by the batch number or other appropriate identifying number, and if possible the... listed in Schedule I or II, make an exact count or measure of the contents, or (ii) If the substance is... holds more than 1,000 tablets or capsules in which case he/she must make an exact count of the contents...
21 CFR 1304.11 - Inventory requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... solutions), identified by the batch number or other appropriate identifying number, and if possible the... listed in Schedule I or II, make an exact count or measure of the contents, or (ii) If the substance is... holds more than 1,000 tablets or capsules in which case he/she must make an exact count of the contents...
The State of Our Children: Kids Count in Vermont. 1994 Data Book.
ERIC Educational Resources Information Center
Finn, Carlen; And Others
This KIDS COUNT factbook presents statistical data and examines trends for several indicators of children's well-being in Vermont. Four groups of indicators are examined: (1) economic security, including child population, child poverty, number of children receiving Aid to Needy Families with Children (ANFC) and food stamps, number of children…
Kids Count in Nebraska: 1995 Report.
ERIC Educational Resources Information Center
Nebraska Univ. Medical Center, Omaha.
While a vast majority of children in Nebraska are experiencing a safe, healthy, and nurturing childhood, a significant number are not, and some of these numbers are growing. This Kids Count report is the third annual comprehensive review of available data in nine areas of child health and well-being in the state. Presented with these statistics…
Further Development of Measures of Early Math Performance for Preschoolers
ERIC Educational Resources Information Center
VanDerHeyden, Amanda M.; Broussard, Carmen; Cooley, Amanda
2006-01-01
The purpose of this study was to examine the progress monitoring and screening accuracy for a set of curriculum-based measures (CBM) of early mathematics skills. Measures included counting objects, selecting numbers, naming numbers, counting, and visual discrimination. Measures were designed to be administered with preschoolers in a short period…
The Association between Students' Number Knowledge and Social Disadvantage at School Entry
ERIC Educational Resources Information Center
Gould, Peter
2014-01-01
At the start of the Kindergarten year in New South Wales (NSW) government schools, teachers gather information on several aspects of children's number knowledge to guide their teaching programs. This includes knowledge of the sequence of words used for counting, numeral identification, and using counting to solve problems. This study investigated…
Kathryn L. Purcell; Sylvia R. Mori; Mary K. Chase
2005-01-01
We used data from two oak-woodland sites in California to develop guidelines for the design of bird monitoring programs using point counts. We used power analysis to determine sample size adequacy when varying the number of visits, count stations, and years for examining trends in abundance. We assumed an overdispersed Poisson distribution for count data, with...
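A simulation-based version of such a power calculation is sketched below. It is only an illustration of the general approach: the trend size, baseline abundance, dispersion and the simple log-linear trend test are assumed values, not those used by the authors.

```python
# Simulation-style power calculation for detecting a trend in overdispersed
# point-count data. All parameters are assumed, not from the cited analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power(n_stations, n_years, annual_change=-0.03, mean0=5.0,
          dispersion=2.0, n_sims=2000, alpha=0.05):
    years = np.arange(n_years)
    detected = 0
    for _ in range(n_sims):
        mu = mean0 * np.exp(annual_change * years)          # declining mean
        # negative binomial counts: variance = mu + mu^2 / dispersion
        p = dispersion / (dispersion + mu)
        counts = rng.negative_binomial(dispersion, p, size=(n_stations, n_years))
        y = np.log(counts.mean(axis=0) + 0.5)               # yearly index
        res = stats.linregress(years, y)
        if res.pvalue < alpha and res.slope < 0:
            detected += 1
    return detected / n_sims

for stations in (20, 50, 100):
    print(stations, "stations, 10 years ->", power(stations, 10))
```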
Estimating the mass variance in neutron multiplicity counting-A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
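As a concrete illustration of two ingredients compared in this study, the sketch below computes the first three reduced factorial moments of a synthetic triggered count distribution and bootstraps their spread. It is a schematic Python analogue under assumed data; the point-model equations that turn moments into an effective mass are not reproduced.

```python
# Reduced factorial moments E[C(C-1)...(C-k+1)]/k! of a triggered count
# distribution, plus a bootstrap estimate of their uncertainty.
# The counts are synthetic stand-ins, not real NMC cycle data.
import numpy as np

rng = np.random.default_rng(7)
counts = rng.poisson(3.2, size=5000)          # stand-in for event-triggered counts

def reduced_factorial_moments(c):
    c = np.asarray(c, dtype=float)
    m1 = np.mean(c)
    m2 = np.mean(c * (c - 1)) / 2.0
    m3 = np.mean(c * (c - 1) * (c - 2)) / 6.0
    return np.array([m1, m2, m3])

moments = reduced_factorial_moments(counts)

# Bootstrap: resample the cycles with replacement and look at the spread.
boot = np.array([
    reduced_factorial_moments(rng.choice(counts, size=counts.size, replace=True))
    for _ in range(1000)
])
for name, m, s in zip(("m1", "m2", "m3"), moments, boot.std(axis=0)):
    print(f"{name} = {m:8.4f} +/- {s:.4f}")
```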
Estimating the mass variance in neutron multiplicity counting - A comparison of approaches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubi, C.; Croft, S.; Favalli, A.
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Estimating the mass variance in neutron multiplicity counting - A comparison of approaches
Dubi, C.; Croft, S.; Favalli, A.; ...
2017-09-14
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
Corsi, Steven R.; Walker, John F.; Graczyk, D.J.; Greb, S.R.; Owens, D.W.; Rappold, K.F.
1995-01-01
A special study was done to determine the effect of holding time on fecal coliform colony counts. A linear regression indicated that the mean decrease in colony counts over 72 hours was 8.2 percent per day. Results after 24 hours showed that colony counts increased in some samples and decreased in others.
Predicting Attack-Prone Components with Source Code Static Analyzers
2009-05-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count...code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable...is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications
Count distribution for mixture of two exponentials as renewal process duration with applications
NASA Astrophysics Data System (ADS)
Low, Yeh Ching; Ong, Seng Huat
2016-06-01
A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
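The overdispersion produced by this construction is easy to see by simulation. In the sketch below, the mixture weights, rates and window length are arbitrary choices rather than values from the paper: renewals are counted in a fixed window when durations follow a two-component exponential mixture, and the variance-to-mean ratio comes out above 1.

```python
# Monte Carlo sketch of the count distribution described in the abstract: the
# number of renewals in (0, T] when inter-event times are a two-component
# mixture of exponentials. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
w, lam1, lam2, T = 0.3, 0.5, 4.0, 10.0      # assumed mixture parameters

def count_in_window():
    t, n = 0.0, 0
    while True:
        lam = lam1 if rng.random() < w else lam2
        t += rng.exponential(1.0 / lam)
        if t > T:
            return n
        n += 1

counts = np.array([count_in_window() for _ in range(20_000)])
mean, var = counts.mean(), counts.var()
print(f"mean = {mean:.2f}, variance = {var:.2f}, dispersion index = {var/mean:.2f}")
# A dispersion index above 1 is the overdispersion (relative to Poisson) that
# the mixture-of-exponentials duration is meant to capture.
```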
Lee, Yangsoon; Kim, Sinyoung; Lee, Seung-Tae; Kim, Han-Soo; Baek, Eun-Jung; Kim, Hyung Jin; Lee, MeeKyung; Kim, Hyun Ok
2009-08-01
We investigated the characteristics of the mononuclear cells remaining in the leukoreduction system (LRS) chambers of Trima Accel in comparison with those of standard buffy coat cells, and evaluated their potential for differentiation into dendritic cells. Twenty-six LRS chambers of Trima Accel were collected after platelet pheresis from healthy adults. Flow cytometric analysis for T, B, NK, and CD14+ cells was performed and the number of CD34+ cells was counted. Differentiation and maturation into dendritic cells were induced using CD14+ cells separated via Magnetic Cell Sorting (MACS) Separation (Miltenyi Biotec Inc., USA). Total white blood cell (WBC) count in the LRS chambers was 10.8 x 10(8) (range 7.7-18.0 x 10(8)). The median values (range) of the proportions of each cell type were CD4+ T cell 29.6% (18.7-37.6), CD8+ T cell 27.7% (19.2-40.0), B cell 5.5% (2.2-12.1), NK cell 15.7% (13.7-19.9), and CD14+ cells 12.4% (8.6-32.3), respectively. Although total WBC count was significantly higher in the buffy coat (whole blood of 400 mL) than the LRS chambers, the numbers of lymphocytes and monocytes were not statistically different. The numbers of B cells and CD4+ cells were significantly higher in the buffy coat than the LRS chambers (P<0.05). The median value (range) of CD34+ cells obtained from the LRS chambers was 0.9 x 10(6) (0.2-2.6 x 10(6)). After 7 days of cytokine-supplemented culture, the CD14+ cells were successfully differentiated into dendritic cells. The mononuclear cells in LRS chambers of Trima Accel are an excellent alternative source of viable and functional human blood cells, which can be used for research purposes.
Gabús, R; Magariños, A; Zamora, M; De Lisa, E; Landoni, A I; Martínez, G; Canessa, C; Giordano, H; Bodega, E
1999-08-01
Our main goal was to evaluate the CD34+ dose in patients undergoing haematopoietic stem cell transplantation and its results in terms of recovery of neutrophil and platelet counts, transfusion requirements, days of fever, antibiotic requirements and length of hospital stay. We studied 38 consecutive patients with haematological malignancies transplanted at our Department, from Feb. 96 through Sept. 98. The CD34+ cell quantification technique was standardized, using a modification of the ISAGHE 96 protocol. Patients were sorted into three groups according to the CD34+ count administered: a) between 3 and 5 x 10(6) cells/kg; b) between 5 and 10 x 10(6) cells/kg; c) > 10 x 10(6) CD34+ cells/kg. As a secondary end point, results were assessed according to the number of aphereses required to arrive at the target count of CD34+, separating those patients that required only 1 or 2 aphereses versus those requiring 3 or more. Finally, an analysis was made of the results of transplantation comparing the different sources of stem cells (PBSC versus PBSC + B.M.). The best results were obtained in the group with cells between 3 and 5 x 10(6) CD34+. No statistically significant advantages were found in the group with cells over 5. The supra-optimal dose of more than 10 x 10(6) would yield no additional beneficial results, while it can imply a greater infusion of residual tumor cells. The number of aphereses had no impact on engraftment. Results obtained with PBSC transplants were better than those with BM+PBSC in terms of neutrophil and platelet recovery. The number of CD34+ cells remains the main element in stem cell transplantation to evaluate the haematopoietic recovery after engraftment. Minimum and optimum yields remain unclear. Centers should establish their own optimal dose based on local methodologies and outcomes, balancing costs and benefits.
O'Sullivan, T; Friendship, R; Pearl, D L; McEwen, B; Ker, A; Dewey, C
2012-10-01
An intuitive assumption is to believe that the number of submissions made to a veterinary diagnostic laboratory is dictated by the financial state of the industries using the laboratory. However, no research is available to document how the economics of a food animal industry affects laboratory submissions and therefore disease monitoring and surveillance efforts. The objective of this study was to determine if economic indices associated with the Ontario swine industry can account for the variability seen in these submissions. Retrospective swine submissions made to the Animal Health Laboratory at the University of Guelph, Guelph, Ontario from January 1998 to July 2009 were compiled. The following economic, demographic, and health variables impacting Ontario swine production were selected for analysis: auction price, lean-hog futures, currency exchange rate, price of corn, an outbreak of porcine circovirus type-2 associated diseases (PCVAD), government incentive program, number of farms in province, and average farm size. All independent variables identified by unconditional associations to have a significance of P≤0.2 with the outcome of monthly submission count were included in a multivariable negative binomial model. A final model was identified by a backwards elimination procedure. A total of 30,432 swine submissions were recorded. The mean frequency of monthly submissions over 139 months was 212.9 (SD=56.0). After controlling for farm size, the number of pigs in Ontario, higher submission counts were associated with a weaker CAD$ versus US$, higher auction prices, and a PCVAD outbreak (P<0.001). The results suggest that both economic volatility and disease outbreaks in the Ontario swine industry drive submissions to the laboratory. In conclusion, lab submissions are a useful source of animal health data for disease surveillance; however, surveillance activities should also monitor the economics of the industry. Copyright © 2012 Elsevier B.V. All rights reserved.
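For readers who want the shape of such a model, the sketch below fits a negative binomial regression to simulated monthly submission counts with exchange-rate, price and outbreak covariates. The data and coefficients are invented for illustration (they are not the Ontario series), and the statsmodels package is assumed to be available.

```python
# Sketch of a negative binomial regression for monthly submission counts with
# economic covariates, in the spirit of the model described above. The data and
# coefficients are simulated placeholders, not the Ontario values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 139                                           # months
exchange = rng.normal(1.2, 0.1, n)                # CAD per USD (higher = weaker CAD)
price = rng.normal(150, 20, n)                    # auction price
pcvad = (np.arange(n) > 80).astype(float)         # outbreak indicator

mu = np.exp(3.0 + 1.2 * (exchange - 1.2) + 0.004 * (price - 150) + 0.3 * pcvad)
dispersion = 5.0
y = rng.negative_binomial(dispersion, dispersion / (dispersion + mu))

X = sm.add_constant(np.column_stack([exchange, price, pcvad]))
result = sm.NegativeBinomial(y, X).fit(disp=False)
print(result.params)   # intercept, exchange, price, outbreak effect, and alpha
```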
Fairman, Kathleen A; Davis, Lindsay E; Kruse, Courtney R; Sclar, David A
2017-04-01
Faced with rising healthcare costs, state Medicaid programs need short-term, easily calculated budgetary estimates for new drugs, accounting for medical cost offsets due to clinical advantages. To estimate the budgetary impact of direct-acting oral anticoagulants (DOACs) compared with warfarin, an older, lower-cost vitamin K antagonist, on 12-month Medicaid expenditures for nonvalvular atrial fibrillation (NVAF) using number needed to treat (NNT). Medicaid utilization files, 2009 through second quarter 2015, were used to estimate OAC cost accounting for generic/brand statutory minimum (13/23%) and assumed maximum (13/50%) manufacturer rebates. NNTs were calculated from clinical trial reports to estimate avoided medical events for a hypothetical population of 500,000 enrollees (approximate NVAF prevalence × Medicaid enrollment) under two DOAC market share scenarios: 2015 actual and 50% increase. Medical service costs were based on published sources. Costs were inflation-adjusted (2015 US$). From 2009-2015, OAC reimbursement per claim increased by 173 and 279% under maximum and minimum rebate scenarios, respectively, while DOAC market share increased from 0 to 21%. Compared with a warfarin-only counterfactual, counts of ischemic strokes, intracranial hemorrhages, and systemic embolisms declined by 36, 280, and 111, respectively; counts of gastrointestinal hemorrhages increased by 794. Avoided events and reduced monitoring, respectively, offset 3-5% and 15-24% of increased drug cost. Net of offsets, DOAC-related cost increases were US$258-US$464 per patient per year (PPPY) in 2015 and US$309-US$579 PPPY after market share increase. Avoided medical events offset a small portion of DOAC-related drug cost increase. NNT-based calculations provide a transparent source of budgetary-impact information for new medications.
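The NNT-based offset arithmetic described above is simple enough to sketch directly: events avoided equal treated patients divided by the NNT, and each avoided (or caused) event is valued at a unit cost. Every number in the snippet below is a placeholder, not one of the study's inputs or results.

```python
# Arithmetic sketch of NNT-based medical cost offsets. All figures are
# illustrative placeholders, not values from the study.
treated = 500_000 * 0.21                  # enrollees with NVAF on a DOAC (assumed 21% share)

events = {
    # event: (NNT vs warfarin over 1 year, cost per event in US$); a negative
    # NNT here means the DOAC causes extra events (a number needed to harm).
    "ischemic stroke":         ( 300, 35_000),
    "intracranial hemorrhage": ( 250, 45_000),
    "systemic embolism":       ( 900, 20_000),
    "GI hemorrhage":           (-130, 15_000),
}

net_offset = 0.0
for name, (nnt, cost) in events.items():
    delta = treated / nnt                 # >0 avoided, <0 caused
    net_offset += delta * cost
    print(f"{name:24s} {delta:+8.0f} events  {delta * cost:+14,.0f} US$")

print(f"net medical-cost offset: {net_offset:+,.0f} US$ "
      f"({net_offset / treated:+.0f} US$ per treated patient per year)")
```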
Luminosity of serendipitous x-ray QSOs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margon, B.; Chanan, G.A.; Downes, R.A.
1982-02-01
We have identified the optical counterparts of 47 serendipitously discovered Einstein Observatory X-ray sources with previously unreported quasi-stellar objects. The mean ratio of X-ray to optical luminosity of this sample agrees reasonably well with that derived from X-ray observations of previously known QSOs. However, despite the fact that our limiting magnitude V = 18.5 should permit detection of typical QSOs (i.e., M_c = -26) to z = 0.9, the mean redshift of our sample is only z = 0.42. Thus the mean luminosity of these objects, M_c = -24, differs significantly from that of previous QSO surveys with similar optical thresholds. The existence of large numbers of these lower luminosity QSOs, which are difficult to discover by previous selection techniques, provides observational confirmation of the steep luminosity function inferred indirectly from optical counts. However, possible explanations for the lack of higher luminosity QSOs in our sample prove even more interesting. If one accepts the global value of the X-ray to optical luminosity ratio proposed by Zamorani et al., and Ku, Helfand, and Lucy, then reconciliation of this ratio with our observations severely constrains the QSO space density and luminosity functions. Alternatively, the "typical" QSO - a radio-quiet, high-redshift (z>1), optically luminous but not superluminous (M_c >= -27) object - may not be a strong X-ray source. This inference is not in conflict with existing results from Einstein X-ray surveys of preselected QSOs, which also fail to detect such objects. The contribution of QSOs to the diffuse X-ray background radiation is therefore highly uncertain, but may be quite small. Current X-ray data probably do not place significant constraints on the optical number counts of faint QSOs.
QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.
Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei
2014-01-01
Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared the results of our software with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at a high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance
NASA Technical Reports Server (NTRS)
Cohen, Martin
1994-01-01
I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds, and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomy Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-utraviolet source counts from the Apollo 16 'S201' experiment (1400 A) and the TD1 satellite (for the 1565 A band).
An evidential example of airborne bacteria in a crowded, underground public concourse in Tokyo
NASA Astrophysics Data System (ADS)
Seino, Kaoruko; Takano, Takehito; Nakamura, Keiko; Watanabe, Masafumi
2005-01-01
We examined airborne bacteria in an underground concourse in Tokyo and investigated conditions that influenced bacterial counts. Airborne bacteria were collected by using an impactor sampler. Colonies on plate count agar (PCA) and Columbia colistin-nalidixic acid agar with 5% sheep blood (CNA agar) were enumerated. The range, geometric mean, and 95% CI of the bacterial counts (CFU m-3) on PCA and CNA agar were 150-1380, 456, 382-550 and 50-990, 237, 182-309, respectively. Bacterial counts on PCA significantly correlated with number of the pedestrians (r=0.89), relative humidity (r=0.70) and airborne dust (PM5.0) (r=0.73). Results of a multiple regression indicated independent positive association between the number of pedestrians and bacterial counts on PCA (p<0.01) after excluding the influence of relative humidity and airborne dust. Similar results were obtained with the statistical analysis for the counts of bacteria on CNA agar. Gram-positive cocci were dominant on PCA and CNA agar. Staphylococcus epidermidis and Micrococcus spp. were dominant among the 11 genera and 19 species identified in the present study. Considering the pattern of identified species and the significant independent association between number of pedestrians and bacterial counts, airborne bacteria in a crowded underground concourse were mostly originated from the pedestrians who were walking in the underground concourse. This study gave an evidential example of bacterial conditions in the air of an underground crowded public space in Tokyo.
Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging
Iwanczyk, Jan S.; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C.; Hartsough, Neal E.; Malakhov, Nail; Wessel, Jan C.
2009-01-01
The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm^2/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a 57Co source. An output rate of 6×10^6 counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown. PMID:19920884
Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging.
Iwanczyk, Jan S; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C; Hartsough, Neal E; Malakhov, Nail; Wessel, Jan C
2009-01-01
The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm(2)/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a (57)Co source. An output rate of 6×10(6) counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown.
Radionuclide counting technique for measuring wind velocity and direction
NASA Technical Reports Server (NTRS)
Singh, J. J. (Inventor)
1984-01-01
An anemometer utilizing a radionuclide counting technique for measuring both the velocity and the direction of wind is described. A pendulum consisting of a wire and a ball, with a source of radiation on the lower surface of the ball, is positioned by the wind. The detectors are located in a plane perpendicular to the undisturbed (no wind) pendulum; they lie on the circumference of a circle and are equidistant from each other as well as from the undisturbed source ball position.
Monitoring the scale-up of antiretroviral therapy programmes: methods to estimate coverage.
Boerma, J. Ties; Stanecki, Karen A.; Newell, Marie-Louise; Luo, Chewe; Beusenberg, Michel; Garnett, Geoff P.; Little, Kirsty; Calleja, Jesus Garcia; Crowley, Siobhan; Kim, Jim Yong; Zaniewski, Elizabeth; Walker, Neff; Stover, John; Ghys, Peter D.
2006-01-01
This paper reviews the data sources and methods used to estimate the number of people on, and coverage of, antiretroviral therapy (ART) programmes in low- and middle-income countries and to monitor the progress towards the "3 by 5" target set by WHO and UNAIDS. We include a review of the data sources used to estimate the coverage of ART programmes as well as the efforts made to avoid double counting and over-reporting. The methods used to estimate the number of people in need of ART are described and expanded with estimates of treatment needs for children, both for ART and for cotrimoxazole prophylaxis. An estimated 6.5 million people were in need of treatment in low- and middle-income countries by the end of 2004, including 660,000 children under age 15 years. The mid-2005 estimate of 970,000 people receiving ART in low- and middle-income countries (with an uncertainty range 840,000-1,100,000) corresponds to a coverage of 15% of people in need of treatment. PMID:16501733
OpenCFU, a new free and open-source software to count cell colonies and other circular objects.
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
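To make the task concrete, the sketch below counts bright, roughly circular blobs in a synthetic image using a global threshold and connected-component labelling with SciPy. It is not OpenCFU's algorithm (OpenCFU is a C++ application with its own filters); it only illustrates the kind of enumeration being automated.

```python
# Minimal illustration of automated colony counting on a synthetic image using
# thresholding and connected-component labelling (not OpenCFU's method).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)

# Build a fake 200x200 plate image with ~30 bright, roughly circular "colonies".
img = rng.normal(10, 2, (200, 200))
yy, xx = np.mgrid[0:200, 0:200]
true_n = 30
for cy, cx in rng.integers(15, 185, size=(true_n, 2)):
    img += 60 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))

mask = img > 40                              # simple global threshold
labels, n_found = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, range(1, n_found + 1))
n_kept = int(np.sum(sizes >= 5))             # drop tiny specks

print(f"true colonies: {true_n}, detected objects: {n_found}, kept: {n_kept}")
```

Touching colonies merge into a single labelled object in this naive version, which is exactly the kind of case a dedicated tool handles with extra filtering.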
NASA Astrophysics Data System (ADS)
Dong, Kyung-Rae; Shim, Dong-Oh; Kim, Ho-Sung; Park, Yong-Soon; Chung, Woon-Kwan; Cho, Jae-Hwan
2013-02-01
In a nuclear medicine examination, methods to acquire a static image include the preset count method and the preset time method. The preset count method is used mainly in a static renal scan that utilizes 99mTc-DMSA (dimercaptosuccinic acid), whereas the preset time method is used occasionally. When the preset count method is used, the same number of acquisition counts is acquired each time, but the scan time varies. When the preset time method is used, the scan time is constant, but the number of counts acquired is not the same. Therefore, this study examined how the information obtained on the function and shape of both kidneys depends on the counts acquired during a renal scan that utilizes 99mTc-DMSA. The study involved patients who had 40-60% relative function of one kidney among patients who underwent a 99mTc-DMSA renal scan in the Nuclear Medicine Department during the period from January 11 to March 31, 2012. A gamma camera was used to obtain the acquisition count continuously using 100,000 counts and 300,000 counts, and an acquisition time of 7 minutes (exceeding 300,000 counts). The function and the shape of the kidney were evaluated by measuring the relative function of both kidneys, the geometric mean, and the size of the kidney before comparative analysis. According to the study results, neither the relative function nor the geometric mean of both kidneys varied significantly with the acquisition count. On the other hand, the size of the kidney tended to be larger with increasing acquisition count.
Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas
2014-09-30
A condition to predict unbiased estimated breeding values by best linear unbiased prediction is to use simultaneously all available data. However, this condition is not often fully met. For example, in dairy cattle, internal (i.e. local) populations lead to evaluations based only on internal records while widely used foreign sires have been selected using internally unavailable external records. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e. external estimated breeding values and associated reliabilities, with internal records to improve accuracy of internal genetic evaluations. Two issues of these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend simultaneously several sources of information into an internal genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that integrate and blend simultaneously several sources of information and avoid double-counting of contributions due to relationships and due to records. The performance of the developed equations was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation. The developed equations also avoided double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated information and, therefore, more reliable estimated breeding values were obtained. The proposed unified method integrated and blended several sources of information well into a genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. The unified method can also be extended to other types of situations such as single-step genomic or multi-trait evaluations, combining information across different traits.
Is It Counting, or Is It Adding?
ERIC Educational Resources Information Center
Eisenhardt, Sara; Fisher, Molly H.; Thomas, Jonathan; Schack, Edna O.; Tassell, Janet; Yoder, Margaret
2014-01-01
The Common Core State Standards for Mathematics (CCSSI 2010) expect second grade students to "fluently add and subtract within 20 using mental strategies" (2.OA.B.2). Most children begin with number word sequences and counting approximations and then develop greater skill with counting. But do all teachers really understand how this…
21 CFR 1210.16 - Method of bacterial count.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... FEDERAL IMPORT MILK ACT Inspection and Testing § 1210.16 Method of bacterial count. The bacterial count of milk and cream refers to the number of viable bacteria as determined by the standard plate method of...
21 CFR 1210.16 - Method of bacterial count.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... FEDERAL IMPORT MILK ACT Inspection and Testing § 1210.16 Method of bacterial count. The bacterial count of milk and cream refers to the number of viable bacteria as determined by the standard plate method of...
The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures
NASA Astrophysics Data System (ADS)
Stephenson, W. Kirk
2009-08-01
A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes.
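The boxing rules translate directly into a short routine. The function below is an illustration only; it follows the common convention that bare trailing zeros without a decimal point are not counted, and it returns the same significant-figure counts a student would obtain by boxing the digits.

```python
# Significant-figure counter for a numeric string, reproducing the usual rules
# the box-and-dot method is designed to teach (the boxing itself is a
# pencil-and-paper device; this is just the equivalent count).
def sig_figs(s: str) -> int:
    s = s.strip().lstrip("+-").lower().split("e")[0]   # drop sign and exponent part
    digits = s.replace(".", "").lstrip("0")            # leading zeros never count
    if "." not in s:
        digits = digits.rstrip("0")                    # bare trailing zeros treated as
                                                       # not significant (ambiguous case)
    return len(digits) if digits else 1                # "0" counts as one figure

for example in ["0.00450", "100", "100.", "1.030e4", "0.007"]:
    print(f"{example:>10s} -> {sig_figs(example)} significant figures")
```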
ERIC Educational Resources Information Center
Stake, Bernadine Evans
This document focuses on one child's skip counting methods. The pupil, a second grade student at Steuben School, in Kankakee, Illinois, was interviewed as she made several attempts at counting twenty-five poker chips on a circular piece of paper. The interview was part of a larger study of "Children's Conceptions of Number and Numeral,"…
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
Counting on fine motor skills: links between preschool finger dexterity and numerical skills.
Fischer, Ursula; Suggate, Sebastian P; Schmirl, Judith; Stoeger, Heidrun
2017-10-26
Finger counting is widely considered an important step in children's early mathematical development. Presumably, children's ability to move their fingers during early counting experiences to aid number representation depends in part on their early fine motor skills (FMS). Specifically, FMS should link to children's procedural counting skills through consistent repetition of finger-counting procedures. Accordingly, we hypothesized that (a) FMS are linked to early counting skills, and (b) greater FMS relate to conceptual counting knowledge (e.g., cardinality, abstraction, order irrelevance) via procedural counting skills (i.e., one-one correspondence and correctness of verbal counting). Preschool children (N = 177) were administered measures of procedural counting skills, conceptual counting knowledge, FMS, and general cognitive skills along with parent questionnaires on home mathematics and fine motor environment. FMS correlated with procedural counting skills and conceptual counting knowledge after controlling for cognitive skills, chronological age, home mathematics and FMS environments. Moreover, the relationship between FMS and conceptual counting knowledge was mediated by procedural counting skills. Findings suggest that FMS play a role in early counting and therewith conceptual counting knowledge. © 2017 John Wiley & Sons Ltd.
Enhanced Methodologies to Enumerate Persons Experiencing Homelessness in a Large Urban Area.
Troisi, Catherine L; D'Andrea, Ritalinda; Grier, Gary; Williams, Stephen
2015-10-01
Homelessness is a public health problem, and persons experiencing homelessness are a vulnerable population. Estimates of the number of persons experiencing homelessness inform funding allocations and services planning and directly determine the ability of a community to intervene effectively in homelessness. The point-in-time (PIT) count presents a logistical problem in large urban areas, particularly those covering a vast geographical area. Working together, academia, local government, and community organizations improved the methodology for the count. Specific enhancements include use of incident command system (ICS), increased number of staging areas/teams, specialized outreach and Special Weapons and Tactics teams, and day-after surveying to collect demographic information. This collaboration and enhanced methodology resulted in a more accurate estimate of the number of persons experiencing homelessness and allowed comparison of findings for 4 years. While initial results showed an increase due to improved counting, the number of persons experiencing homelessness counted for the subsequent years showed significant decrease during the same time period as a "housing first" campaign was implemented. The collaboration also built capacity in each sector: The health department used ICS as a training opportunity; the academics enhanced their community health efforts; the service sector was taught and implemented more rigorous quantitative methods; and the community was exposed to public health as a pragmatic and effective discipline. Improvements made to increase the reliability of the PIT count can be adapted for use in other jurisdictions, leading to improved counts and better evaluation of progress in ending homelessness. © The Author(s) 2015.
SU-E-I-79: Source Geometry Dependence of Gamma Well-Counter Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, M; Belanger, A; Kijewski, M
Purpose: To determine the effect of liquid sample volume and geometry on counting efficiency in a gamma well-counter, and to assess the relative contributions of sample geometry and self-attenuation. Gamma well-counters are standard equipment in clinical and preclinical studies, for measuring patient blood radioactivity and quantifying animal tissue uptake for tracer development and other purposes. Accurate measurements are crucial. Methods: Count rates were measured for aqueous solutions of 99mTc at four liquid volume values in a 1-cm-diam tube and at six volume values in a 2.2-cm-diam vial. Total activity was constant for all volumes, and data were corrected for decay. Count rates from a point source in air, supported by a filter paper, were measured at seven heights between 1.3 and 5.7 cm from the bottom of a tube. Results: Sample volume effects were larger for the tube than for the vial. For the tube, count efficiency relative to a 1-cc volume ranged from 1.05 at 0.05 cc to 0.84 at 3 cc. For the vial, relative count efficiency ranged from 1.02 at 0.05 cc to 0.87 at 15 cc. For the point source, count efficiency relative to 1.3 cm from the tube bottom ranged from 0.98 at 1.8 cm to 0.34 at 5.7 cm. The relative efficiency of a 3-cc liquid sample in a tube compared to a 1-cc sample is 0.84; the average relative efficiency for the solid sample in air between heights in the tube corresponding to the surfaces of those volumes (1.3 and 4.8 cm) is 0.81, implying that the major contribution to efficiency loss is geometry, rather than attenuation. Conclusion: Volume-dependent correction factors should be used for accurate quantitation of radioactive liquid samples. Solid samples should be positioned at the bottom of the tube for maximum count efficiency.
Lithium and boron based semiconductors for thermal neutron counting
NASA Astrophysics Data System (ADS)
Kargar, Alireza; Tower, Joshua; Hong, Huicong; Cirignano, Leonard; Higgins, William; Shah, Kanai
2011-09-01
Thermal neutron detectors in planar configuration were fabricated from LiInSe2 and B2Se3 crystals grown at RMD Inc. All fabricated semiconductor devices were characterized for the current-voltage (I-V) characteristic and neutron counting measurement. Pulse height spectra were collected from 241AmBe (neutron source on all samples), as well as 137Cs and 60Co gamma ray sources. In this study, the resistivity of all crystals is reported and the collected pulse height spectra are presented for fabricated devices. Note that, the 241AmBe neutron source was custom designed with polyethylene around the source as the neutron moderator, mainly to thermalize the fast neutrons before reaching the detectors. Both LiInSe2 and B2Se3 devices showed response to thermal neutrons of the 241AmBe source.
Modeling Zero-Inflated and Overdispersed Count Data: An Empirical Study of School Suspensions
ERIC Educational Resources Information Center
Desjardins, Christopher David
2016-01-01
The purpose of this article is to develop a statistical model that best explains variability in the number of school days suspended. Number of school days suspended is a count variable that may be zero-inflated and overdispersed relative to a Poisson model. Four models were examined: Poisson, negative binomial, Poisson hurdle, and negative…
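A minimal version of the model comparison described above can be sketched as follows. The data are simulated with excess zeros and overdispersion (the generating values are invented), and statsmodels is assumed to be available; hurdle and zero-inflated fits would extend the same comparison.

```python
# Sketch comparing Poisson and negative binomial fits on simulated
# "days suspended" counts with excess zeros. All generating values are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 1000
risk = rng.normal(0, 1, n)                       # one student-level covariate

mu = np.exp(0.2 + 0.6 * risk)                    # count part
never = rng.random(n) < 0.6                      # 60% structural zeros
size = 1.5                                       # NB dispersion
y = np.where(never, 0, rng.negative_binomial(size, size / (size + mu)))

X = sm.add_constant(risk)
pois = sm.Poisson(y, X).fit(disp=False)
nb = sm.NegativeBinomial(y, X).fit(disp=False)

print("share of zeros:", np.mean(y == 0))
print(f"Poisson AIC: {pois.aic:.1f}   Negative binomial AIC: {nb.aic:.1f}")
# The NB (and, further, hurdle or zero-inflated models) should fit the
# overdispersed, zero-heavy counts far better than the plain Poisson.
```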
Learning from Number Board Games: You Learn What You Encode
ERIC Educational Resources Information Center
Laski, Elida V.; Siegler, Robert S.
2014-01-01
We tested the hypothesis that encoding the numerical-spatial relations in a number board game is a key process in promoting learning from playing such games. Experiment 1 used a microgenetic design to examine the effects on learning of the type of counting procedure that children use. As predicted, having kindergartners count-on from their current…
Diplomas Count 2010. Graduation by the Numbers: Putting Data to Work for Student Success
ERIC Educational Resources Information Center
Education Week, 2010
2010-01-01
Every year, "Diplomas Count" takes a careful look at nationwide trends related to high graduation. This year, they have titled their report "Graduation by the Numbers--Putting Data to Work for Student Success." This year's research shows that today's graduation climate is a tough one, particularly for minority students and…
Modification of Point Counts for Surveying Cropland Birds
Kathryn Freemark; Catherine Rogers
1995-01-01
As part of a comparative study of agricultural impacts on wildlife, modifications to the point count method were evaluated for surveying birds in, and adjacent to, cropland during the breeding season (May to early July) in Ontario. Location in the field, observation direction and distance, number of visits, and number of study sites per farm were examined using point...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-24
Information Collection Request Sent to the Office of Management and Budget (OMB) for Approval; Mourning Dove...: Mourning Dove Call Count Survey. Service Form Number(s): 3-159. Type of Request: Extension of currently... migratory bird populations. The Mourning Dove Call Count Survey is an essential part of the migratory bird...
Does Learning to Count Involve a Semantic Induction?
ERIC Educational Resources Information Center
Davidson, Kathryn; Eng, Kortney; Barner, David
2012-01-01
We tested the hypothesis that, when children learn to correctly count sets, they make a semantic induction about the meanings of their number words. We tested the logical understanding of number words in 84 children who were classified as "cardinal-principle knowers" by the criteria set forth by Wynn (1992). Results show that these children often…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollister, R
2009-08-26
Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.
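For orientation only, a common Currie-style minimum-detectable-activity calculation is sketched below; the report's cited 'NUREG 4.16 method' may differ in detail, and the efficiency, yield, and background values here are hypothetical.

```python
import math

def mda_currie(background_counts, count_time_s, efficiency, gamma_yield):
    """Currie-style minimum detectable activity (Bq), for illustration only."""
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit in counts
    return ld_counts / (efficiency * gamma_yield * count_time_s)

# Hypothetical inputs: 200 background counts under the peak region during a
# 20-minute (1200 s) count, 0.5% absolute efficiency, 36% gamma emission probability.
print(f"MDA = {mda_currie(200, 1200, 0.005, 0.36):.1f} Bq")
```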
Development of microcontroller based water flow measurement
NASA Astrophysics Data System (ADS)
Munir, Muhammad Miftahul; Surachman, Arif; Fathonah, Indra Wahyudin; Billah, Muhammad Aziz; Khairurrijal; Mahfudz, Hernawan; Rimawan, Ririn; Lestari, Slamet
2015-04-01
A digital instrument for measuring water flow was developed using an AT89S52 microcontroller, a DS1302 real-time clock (RTC), and an EEPROM for external memory. The sensor used for probing the current was a propeller that rotates when immersed in a water flow. For each rotation, the sensor sends one pulse, and the pulses are counted over a set counting interval. The measurement data, i.e., the number of pulses per unit time, are converted into water flow velocity (m/s) through a mathematical formula. The microcontroller counts the pulses sent by the sensor, and the counted pulses are stored in the EEPROM memory. The counting interval is provided by the RTC and can be set by the operator. The instrument was tested with time intervals ranging from 10 to 40 seconds and with several standard propellers owned by the Experimental Station for Hydraulic Structure and Geotechnics (BHGK), Research Institute for Water Resources (Pusair). Using the same propellers and water flows, the water flow velocities obtained from the developed digital instrument closely matched those from the existing analog instrument.
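A minimal sketch of the pulses-to-velocity conversion is shown below; the article only states that a mathematical formula is used, so the linear calibration constants a and b are hypothetical placeholders that would normally come from the propeller's calibration certificate.

```python
def flow_velocity(pulse_count, interval_s, a=0.01, b=0.25):
    """Convert a propeller pulse count over an interval into velocity (m/s).

    Assumes one pulse per rotation and a hypothetical linear calibration
    v = a + b * (rotations per second).
    """
    rotations_per_s = pulse_count / interval_s
    return a + b * rotations_per_s

# Example: 48 pulses counted over a 20-second interval -> 0.61 m/s
# with these placeholder constants.
print(flow_velocity(48, 20))
```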
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
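As a rough, assumption-laden illustration of the emission-count posterior idea, the single-voxel sketch below places a Poisson prior on the number of emissions and a binomial detection likelihood with known voxel sensitivity, then computes the minimum-mean-square-error estimate. The actual method operates on full 2D tomographic data and marginalizes voxel activities, so the prior mean, sensitivity, and counts used here are hypothetical.

```python
import numpy as np
from scipy.stats import poisson, binom

sensitivity = 0.3      # hypothetical probability that an emission is detected
prior_mean  = 50.0     # hypothetical prior mean number of emissions
observed    = 12       # hypothetical detected counts for this voxel

n_grid   = np.arange(observed, 400)                         # candidate emission counts
log_post = (poisson.logpmf(n_grid, prior_mean)              # prior on emission count
            + binom.logpmf(observed, n_grid, sensitivity))  # detection likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()

mmse_emissions = float((n_grid * post).sum())               # MMSE point estimate
print(f"MMSE emission-count estimate: {mmse_emissions:.1f}")
# Per the abstract, dividing emission-count estimates by voxel sensitivity and
# acquisition time yields the corresponding voxel-activity estimates.
```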
Germonpré, Peter; Papadopoulou, Virginie; Hemelryck, Walter; Obeid, Georges; Lafère, Pierre; Eckersley, Robert J; Tang, Meng-Xing; Balestra, Costantino
2014-03-01
'Decompression stress' is commonly evaluated by scoring circulating bubble numbers post dive using Doppler or cardiac echography. This information may be used to develop safer decompression algorithms, assuming that the lower the numbers of venous gas emboli (VGE) observed post dive, the lower the statistical risk of decompression sickness (DCS). Current echocardiographic evaluation of VGE, using the Eftedal and Brubakk method, has some disadvantages as it is less well suited for large-scale evaluation of recreational diving profiles. We propose and validate a new 'frame-based' VGE-counting method which offers a continuous scale of measurement. Nine 'raters' of varying familiarity with echocardiography were asked to grade 20 echocardiographic recordings using both the Eftedal and Brubakk grading and the new 'frame-based' counting method. They were also asked to count the number of bubbles in 50 still-frame images, some of which were randomly repeated. Wilcoxon and Spearman ρ statistics were used to assess test-retest reliability of each rater for the repeated still frames. For the video images, weighted kappa statistics, with linear and quadratic weightings, were calculated to measure agreement between raters for the Eftedal and Brubakk method. Bland-Altman plots and intra-class correlation coefficients were used to measure agreement between raters for the frame-based counting method. Frame-based counting showed better inter-rater agreement than the Eftedal and Brubakk grading, even with relatively inexperienced assessors, and has good intra- and inter-rater reliability. Frame-based bubble counting could be used to evaluate post-dive decompression stress, and offers possibilities for computer-automated algorithms to allow near-real-time counting.
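A hedged sketch of the agreement statistics named above is given below: linearly and quadratically weighted kappa for the ordinal Eftedal and Brubakk grades, and a Bland-Altman summary for the continuous frame-based counts. The ratings and counts are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal grades (0-5) assigned by two raters to 20 recordings.
rater_a = np.array([0, 1, 2, 2, 3, 4, 1, 0, 2, 3, 5, 4, 2, 1, 0, 3, 2, 4, 1, 2])
rater_b = np.array([0, 1, 2, 3, 3, 4, 1, 1, 2, 3, 4, 4, 2, 2, 0, 3, 3, 4, 1, 2])
print("linear kappa   :", cohen_kappa_score(rater_a, rater_b, weights="linear"))
print("quadratic kappa:", cohen_kappa_score(rater_a, rater_b, weights="quadratic"))

# Bland-Altman summary (bias and 95% limits of agreement) for hypothetical
# frame-based bubble counts from the same two raters.
counts_a = np.array([3.0, 10.5, 22.0, 35.5, 8.0, 15.0, 50.0, 2.5, 18.0, 27.0])
counts_b = np.array([4.0, 11.0, 20.0, 38.0, 7.5, 16.5, 47.0, 3.0, 19.5, 25.0])
diff = counts_a - counts_b
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, limits of agreement = ({bias - loa:.2f}, {bias + loa:.2f})")
```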
Rehn, B; Bruch, J; Zou, T; Hobusch, G
1992-01-01
When rat (female Wistar) lungs were lavaged (bronchoalveolar lavage, BAL) six times with physiological saline, approximately the same number of alveolar macrophages (AM) was found in the first and second BAL, whereas in the third, fourth, fifth, and sixth BAL, the number of AM decreased exponentially. Morphometric counting of the number of AM in histological sections of lung tissue showed that only 14% of the AM population had been recovered by BAL. Although additives to the BAL fluid such as lidocaine and/or fetal calf serum increased the AM count in the first washing considerably, the total number of AM washed out remained unaltered. Addition of the phagocytosis stimulant zymosan increased the AM count in BAL by a factor of more than 2. On stimulation of the lungs with an inert dust (silicon carbide), the AM count in the BAL and the lung was only slightly increased 8 weeks after intratracheal instillation. In contrast, after exposure to fibrogenic and cytotoxic quartz, the AM count in BAL and lung was significantly increased, and the recovery of AM had also increased by a factor of approximately 2. The experiments show that it is the micromilieu of the alveoli and the condition of the AM (certain physiological activation states, such as phagocytic activity) that essentially determine the degree of recovery. PMID:1396444
Rosa, Gabriela; Procop, Gary W; Schold, Jesse D; Piliang, Melissa P
2016-10-01
Although syphilis is uncommon, infection rates are much higher in HIV-infected individuals than in the general population. A proposed explanation is impaired cellular immunity with HIV infection. A search at one institution yielded 10 patients with a diagnosis of secondary syphilis on skin biopsy, positive syphilis serology, and available CD4 counts. We evaluated 11 biopsies from the 10 patients. We correlated the patients' CD4 counts with the histologic findings and with the number of treponemes on skin biopsies, highlighted by immunohistochemistry (IHC). We also compared the detection of spirochetes in silver-stained sections (e.g. Warthin-Starry) with T. pallidum IHC. All biopsies were assessed for various histologic features. The sensitivity of IHC to detect treponemes was 64% and that of silver stain was 9% (p = 0.04). The number of treponemes on the biopsies was determined by IHC. High numbers of spirochetes (i.e. >100 per 10 hpf) were only seen in patients with CD4 counts less than 250 cells/ml. The most consistent histologic finding was a moderate to severe lymphoplasmacytic infiltrate. Although the study is small, it appears that a higher number of spirochetes is associated with CD4 counts less than 250 cells/ml. The T. pallidum IHC stain was vastly superior to the Warthin-Starry stain. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Aerial estimation of the size of gull breeding colonies
Kadlec, J.A.; Drury, W.H.
1968-01-01
Counts on photographs and visual estimates of the numbers of territorial gulls are usually reliable indicators of the number of gull nests, but single visual estimates are not adequate to measure the number of nests in individual colonies. To properly interpret gull counts requires that several islands with known numbers of nests be photographed to establish the ratio of gulls to nests applicable for a given local census. Visual estimates are adequate to determine total breeding gull numbers by regions. Neither visual estimates nor photography will reliably detect annual changes of less than about 2.5 percent.
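The calibration step described above amounts to a simple ratio estimator, sketched below with hypothetical island counts.

```python
# Hypothetical calibration islands: aerial photo counts of territorial gulls
# paired with ground counts of nests on the same islands.
gulls_photographed = [120, 340, 85, 210]
nests_ground_count = [70, 195, 52, 118]

ratio = sum(gulls_photographed) / sum(nests_ground_count)   # gulls per nest

# Apply the locally calibrated ratio to an island with only a photo count.
photo_count_new_island = 265
estimated_nests = photo_count_new_island / ratio
print(f"{ratio:.2f} gulls per nest -> about {estimated_nests:.0f} nests")
```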
Ultraviolet Communication for Medical Applications
2013-06-01
The sky was clear and no moonlight was visible during testing. There was light fog and a high pollen count (9 grains per m3), and relative humidity was... An improved LED light source was evaluated outdoors using the test bench system at a range of 50 m, and received photon counts were consistent with medium data rate communication. Future Phase II efforts will develop...
Thompson, W.L.
2003-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate the effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimate-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates are at least 85% of the true number of fish within sampled units and correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect fish detection rates in the removal model or another mark-recapture model. A potential alternative approach is to use snorkeling combined with line-transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly known) population of fish to serve as a benchmark for evaluating the bias and precision of estimators.
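A simplified Monte Carlo sketch of this kind of sensitivity analysis is given below; the clustered abundance distribution, sample sizes, and two-stage calibration design are illustrative assumptions and not Thompson's actual simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

n_units = 200                                          # stream units in the population
true_fish = rng.negative_binomial(2, 0.05, n_units)    # spatially clustered abundances

def estimate_total(removal_frac, n_snorkel=60, n_removal=15):
    """Two-stage estimate: snorkel counts expanded by a removal-based ratio."""
    snorkel = rng.poisson(0.8 * true_fish)              # imperfect snorkel counts
    sample  = rng.choice(n_units, n_snorkel, replace=False)
    calib   = rng.choice(sample, n_removal, replace=False)
    removal_est = removal_frac * true_fish[calib]        # removal recovers only a fraction
    ratio = removal_est.sum() / snorkel[calib].sum()     # calibration ratio
    return (ratio * snorkel[sample]).mean() * n_units    # expand to the whole stream

print(f"true total abundance: {true_fish.sum():,}")
for frac in (0.75, 0.85, 0.95, 1.00):
    est = np.mean([estimate_total(frac) for _ in range(500)])
    print(f"removal recovers {frac:.0%} of fish -> mean estimate {est:,.0f}")
```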