Li, Gang; Xu, Jiayun; Zhang, Jie
2015-01-01
Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose of neutrons is closely related to the neutron energy, and this relationship is a complex function of energy. For a low-level neutron radiation field (e.g. an Am-Be source), commonly used commercial neutron dosimeters cannot always register the low-level dose rate, being restricted by their own sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with those measured inside it. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and the dose rates decrease exponentially with increasing distance, and that the dose rates measured by a commercial dosimeter agree with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose-equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from its shield to give different radiation intensities. Based on this relationship, together with the count rates measured at larger distances from the source, the dose rates can be estimated approximately by extrapolation. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
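As a minimal sketch of the extrapolation idea described in this abstract (not the authors' actual procedure), the exponential dose-rate/count-rate relation can be fitted to a few calibration pairs and then applied at a count rate below the dosimeter's range; all numerical values here are assumed for illustration.

```python
import numpy as np

# Hypothetical calibration pairs measured in the source room:
# 3He count rates (cps) and dosimeter dose-equivalent rates (uSv/h).
count_rate = np.array([120.0, 260.0, 540.0, 900.0])   # assumed values
dose_rate  = np.array([0.8,   1.9,   5.2,   12.0])    # assumed values

# Fit ln(H) = ln(a) + b*C, i.e. H = a*exp(b*C), by linear least squares.
b, ln_a = np.polyfit(count_rate, np.log(dose_rate), 1)
a = np.exp(ln_a)

def dose_from_counts(c):
    """Extrapolate a low-level dose rate from a measured count rate."""
    return a * np.exp(b * c)

# Estimate the dose rate where only the count rate is measurable,
# i.e. below the dosimeter's sensitivity range.
print(dose_from_counts(60.0))
```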
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility passing beneath an airborne detector. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
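A minimal sketch of why time-interval data is informative: under a Poisson model the times between counts are exponential, so a likelihood ratio can be accumulated interval by interval. This is a generic illustration, not the paper's Bayesian formulation, and the background and source rates are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

b, s = 5.0, 1.5                                       # assumed background and source rates (cps)
intervals = rng.exponential(1.0 / (b + s), size=200)  # simulated times between counts

def llr(dt, rate0, rate1):
    """Log-likelihood ratio of 'background + source' vs 'background only'
    for exponential inter-arrival times dt."""
    return np.sum((np.log(rate1) - rate1 * dt) - (np.log(rate0) - rate0 * dt))

# Large positive values favour the presence of a source.
print(llr(intervals, b, b + s))
```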
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed-frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty times (to convert to beats per minute) the frequency of the fixed-frequency source.
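A worked version of the described computation, assuming a hypothetical clock frequency and beat-to-beat interval count:

```python
f_clock = 1000          # assumed fixed-frequency source, Hz
interval_count = 833    # clock cycles counted between two successive beats (0.833 s)

# Counting how many times the interval count fits into 60 * f_clock
# converts the beat-to-beat interval directly into beats per minute.
bpm = 60 * f_clock // interval_count
print(bpm)              # ~72 beats per minute
```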
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinohara, K., E-mail: shinohara.koji@jaea.go.jp; Ochiai, K.; Sukegawa, A.
In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out on the Fusion Neutronics Source. Comparing four configurations of alignment, it was found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using 25 mm thick scintillators had about 1.65 times the count rate capability of a single-detector system for d-D neutrons and about 1.8 times the count rate capability for d-T neutrons. The results suggested that the concept of a multi-stage detection system will work in practice.
How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology
NASA Astrophysics Data System (ADS)
Ekers, Ron
2012-09-01
In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq 60Co source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
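A minimal sketch of the matrix-inversion idea, with an assumed inverse-square response model and noise-free counts; the grid size, detector height, and source strength are illustrative, not the paper's values.

```python
import numpy as np

# Grid positions (m) of candidate source cells and of detector measurement points.
src_xy = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
det_xy = src_xy.copy()           # detector placed above each grid cell
det_height = 0.5                 # assumed detector height (m)

# Response matrix: counts per unit source activity, simple 1/r^2 model.
d2 = ((det_xy[:, None, :] - src_xy[None, :, :]) ** 2).sum(-1) + det_height ** 2
R = 1.0 / d2

true_sources = np.zeros(len(src_xy))
true_sources[12] = 1.0e5                     # one source, arbitrary units
counts = R @ true_sources                    # ideal (noise-free) count map

# Recover the source distribution by (pseudo-)inverting the response matrix.
recovered, *_ = np.linalg.lstsq(R, counts, rcond=None)
print(np.argmax(recovered))                  # index of the recovered source cell (12)
```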
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α), is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6^{+0.7}_{-1.3}. Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
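The maximum-likelihood slope estimate above a completeness limit can be sketched with the standard Pareto-form estimator; this is a generic estimator, not necessarily the paper's exact analysis, and the fluence values below are hypothetical.

```python
import numpy as np

def alpha_mle(fluence, f_min):
    """MLE of alpha in N(>F) ~ F**alpha for events with F >= f_min."""
    f = np.asarray(fluence, dtype=float)
    f = f[f >= f_min]
    alpha = -len(f) / np.sum(np.log(f / f_min))
    err = abs(alpha) / np.sqrt(len(f))        # leading-order statistical uncertainty
    return alpha, err

# Hypothetical Parkes-like fluences (Jy ms) above a 2 Jy ms completeness limit.
print(alpha_mle([2.5, 3.1, 4.0, 7.2, 9.5, 18.0], f_min=2.0))
```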
Point count length and detection of forest neotropical migrant birds
Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
Neutronic analysis of the 1D and 1E banks reflux detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
1999-12-21
Two H Canyon neutron monitoring systems for early detection of postulated abnormal reflux conditions in the Second Uranium Cycle 1E and 1D Mixer-Settler Banks have been designed and built. Monte Carlo neutron transport simulations using the general-purpose, general-geometry Monte Carlo N-Particle (MCNP) code have been performed to model the expected response of the monitoring systems to varying conditions. The confirmatory studies documented herein conclude that the 1E and 1D neutron monitoring systems are able to achieve adequate neutron count rates for various neutron source and detector configurations, thereby eliminating excessive integration count times. Neutron count rate sensitivity studies are also performed. Conversely, the transport studies concluded that the neutron count rates are statistically insensitive to nitric acid content in the aqueous region and to the transition region length. These studies conclude that the 1E and 1D neutron monitoring systems are able to predict the postulated reflux conditions for all examined perturbations in the neutron source and detector configurations. In the cases examined, the relative change in the neutron count rates due to postulated transitions from normal 235U concentration levels to reflux levels remains satisfactorily detectable.
X-ray detection of Nova Del 2013 with Swift
NASA Astrophysics Data System (ADS)
Castro-Tirado, Alberto J.; Martin-Carrillo, Antonio; Hanlon, Lorraine
2013-08-01
Continuous X-ray monitoring by Swift of Nova Del 2013 (see CBET #3628) shows an increase of X-ray emission at the source location compared to previous observations (ATEL #5283, ATEL #5305) during a 3.9 ksec observation at UT 2013-08-22 12:05. With the XRT instrument operating in Windowed Timing mode, 744 counts were extracted from a 50-pixel-long source region and 324 counts from a similar box for a background region, resulting in a 13-sigma detection with a net count rate of 0.11±0.008 counts/sec.
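A quick check of the quoted numbers, using the simple Gaussian significance approximation (an assumption; the actual Swift analysis may differ):

```python
src, bkg, exposure = 744, 324, 3900          # counts and seconds quoted in the abstract

net_rate = (src - bkg) / exposure            # ~0.108 counts/s, quoted as 0.11
sigma = (src - bkg) / (src + bkg) ** 0.5     # ~12.9, quoted as a 13-sigma detection
print(net_rate, sigma)
```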
NASA Technical Reports Server (NTRS)
Elvis, Martin; Plummer, David; Schachter, Jonathan; Fabbiano, G.
1992-01-01
A catalog of 819 sources detected in the Einstein IPC Slew Survey of the X-ray sky is presented; 313 of the sources were not previously known as X-ray sources. Typical count rates are 0.1 IPC count/s, roughly equivalent to a flux of 3 × 10^-12 erg cm^-2 s^-1. The sources have positional uncertainties of 1.2 arcmin (90 percent confidence) radius, based on a subset of 452 sources identified with previously known pointlike X-ray sources (i.e., extent less than 3 arcmin). Identifications based on a number of existing catalogs of X-ray and optical objects are proposed for 637 of the sources, 78 percent of the survey (within a 3-arcmin error radius), including 133 identifications of new X-ray sources. A public identification data base for the Slew Survey sources will be maintained at CfA, and contributions to this data base are invited.
Automatic measurements and computations for radiochemical analyses
Rosholt, J.N.; Dooley, J.R.
1960-01-01
In natural radioactive sources the most important radioactive daughter products useful for geochemical studies are protactinium-231, the alpha-emitting thorium isotopes, and the radium isotopes. To resolve the abundances of these thorium and radium isotopes by their characteristic decay and growth patterns, a large number of repeated alpha activity measurements on the two chemically separated elements were made over extended periods of time. Alpha scintillation counting with automatic measurements and sample changing is used to obtain the basic count data. Generation of the required theoretical decay and growth functions, varying with time, and the least squares solution of the overdetermined simultaneous count rate equations are done with a digital computer. Examples of the complex count rate equations which may be solved and results of a natural sample containing four alpha-emitting isotopes of thorium are illustrated. These methods facilitate the determination of the radioactive sources on the large scale required for many geochemical investigations.
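The least-squares step can be sketched as fitting the measured total count rates to a linear combination of theoretical decay curves. The two-component example below (with assumed initial activities and noise level) is far simpler than the actual four-isotope thorium case.

```python
import numpy as np

t = np.linspace(0, 100, 40)                         # measurement times (days)
lam1 = np.log(2) / 18.7                             # Th-227 half-life, days
lam2 = np.log(2) / 3.66                             # Ra-224 half-life, days

# Design matrix: each column is a theoretical decay curve of unit initial activity.
A = np.column_stack([np.exp(-lam1 * t), np.exp(-lam2 * t)])

true = np.array([50.0, 120.0])                      # assumed initial count-rate contributions
rng = np.random.default_rng(1)
measured = A @ true + rng.normal(0, 2.0, t.size)    # noisy total alpha count rates

# Least-squares solution of the overdetermined system A x = measured.
x, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(x)                                            # recovered component activities
```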
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
One of the most valuable unique characteristics of the PCA is the high count rates (100,000 c/s) it can record, and the resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle-1 work on Sco X-1 has shown that performing high count rate observations is very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-7 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-8 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-9 proposal. The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 cps/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-5 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2&3 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle 1-3 work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2,3&4 proposal. - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle 1-3 work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
RESUBMISSION ACCEPTED CYCLE 2 PROPOSAL - The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our Cycle 1&2 work on Sco X-1 and 1744-28 has shown that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-10 proposal. The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates (core Program)
NASA Astrophysics Data System (ADS)
Resubmission accepted Cycle 2-11 proposal. The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-11 proposal. The PCA is unique in the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only a few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favalli, Andrea; Iliev, Metodi; Ianakiev, Kiril
High-energy delayed γ-ray spectroscopy is a potential technique for directly assaying spent fuel assemblies and achieving the safeguards goal of quantifying nuclear material inventories for spent fuel handling, interim storage, reprocessing facilities, repository sites, and final disposal. Requirements for the γ-ray detection system, up to ~6 MeV, can be summarized as follows: high efficiency at high γ-ray energies, high energy resolution, good linearity between γ-ray energy and output signal amplitude, ability to operate at very high count rates, and ease of use in industrial environments such as nuclear facilities. High Purity Germanium Detectors (HPGe) are the state of the art and provide excellent energy resolution but are limited in their count rate capability. Lanthanum Bromide (LaBr3) scintillation detectors offer significantly higher count rate capabilities at lower energy resolution. Thus, LaBr3 detectors may be an effective alternative for nuclear spent-fuel applications, where count-rate capability is a requirement. This paper documents the measured performance of a 2 in. (length) × 2 in. (diameter) LaBr3 scintillation detector system, coupled to a negatively biased PMT and a tapered active high-voltage divider, with count rates up to ~3 Mcps. An experimental methodology was developed that uses the average current from the PMT's anode and a dual-source method to characterize the detector system at specific very high count rate values. Delayed γ-ray spectra were acquired with the LaBr3 detector system at the Idaho Accelerator Center, Idaho State University, where samples of ~3 g of 235U were irradiated with moderated neutrons from a photo-neutron source. Results of the spectroscopy characterization and analysis of the delayed γ-ray spectra acquired indicate the possible use of LaBr3 scintillation detectors when high count rate capability may outweigh the lower energy resolution.
NASA Astrophysics Data System (ADS)
Favalli, Andrea; Iliev, Metodi; Ianakiev, Kiril; Hunt, Alan W.; Ludewigt, Bernhard
2018-01-01
High-energy delayed γ-ray spectroscopy is a potential technique for directly assaying spent fuel assemblies and achieving the safeguards goal of quantifying nuclear material inventories for spent fuel handling, interim storage, reprocessing facilities, repository sites, and final disposal. Requirements for the γ-ray detection system, up to ∼6 MeV, can be summarized as follows: high efficiency at high γ-ray energies, high energy resolution, good linearity between γ-ray energy and output signal amplitude, ability to operate at very high count rates, and ease of use in industrial environments such as nuclear facilities. High Purity Germanium Detectors (HPGe) are the state of the art and provide excellent energy resolution but are limited in their count rate capability. Lanthanum Bromide (LaBr3) scintillation detectors offer significantly higher count rate capabilities at lower energy resolution. Thus, LaBr3 detectors may be an effective alternative for nuclear spent-fuel applications, where count-rate capability is a requirement. This paper documents the measured performance of a 2" (length) × 2" (diameter) LaBr3 scintillation detector system, coupled to a negatively biased PMT and a tapered active high-voltage divider, with count rates up to ∼3 Mcps. An experimental methodology was developed that uses the average current from the PMT's anode and a dual-source method to characterize the detector system at specific very high count rate values. Delayed γ-ray spectra were acquired with the LaBr3 detector system at the Idaho Accelerator Center, Idaho State University, where samples of ∼3 g of 235U were irradiated with moderated neutrons from a photo-neutron source. Results of the spectroscopy characterization and analysis of the delayed γ-ray spectra acquired indicate the possible use of LaBr3 scintillation detectors when high count rate capability may outweigh the lower energy resolution.
Special land use trip generation in Virginia.
DOT National Transportation Integrated Search
1981-01-01
Vehicle trip rates at shopping centers, apartment complexes, and subdivisions throughout Virginia were determined from seven-day volume counts. These rates were then compared with rates reported in four recognized sources of trip rate statistics and ...
Search for optical bursts from the gamma ray burst source GBS 0526-66
NASA Astrophysics Data System (ADS)
Seetha, S.; Sreenivasaiah, K. V.; Marar, T. M. K.; Kasturirangan, K.; Rao, U. R.; Bhattacharyya, J. C.
1985-08-01
Attempts were made to detect optical bursts from the gamma-ray burst source GBS 0526-66 during Dec. 31, 1984 to Jan. 2, 1985 and Feb. 23 to Feb. 24, 1985, using the one meter reflector of the Kavalur Observatory. Jan. 1, 1985 coincided with the zero phase of the predicted 164 day period of burst activity from the source (Rothschild and Lingenfelter, 1984). A new optical burst photon counting system with adjustable trigger threshold was used in parallel with a high speed photometer for the observations. The best time resolution was 1 ms and maximum count rate capability was 255,000 counts s^-1. Details of the instrumentation and observational results are presented.
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event-by-event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
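A minimal sketch of applying the two ideal dead-time models to a simulated Poisson pulse train; the rate and dead-time values are assumed, and the MSR analysis itself is not reproduced here.

```python
import numpy as np

def apply_dead_time(times, tau, paralysable):
    """Return the event times that survive an ideal dead time tau (seconds)."""
    kept, blocked_until = [], -np.inf
    for t in times:
        if t >= blocked_until:
            kept.append(t)
            blocked_until = t + tau
        elif paralysable:
            blocked_until = t + tau      # lost events still extend the dead period
    return np.array(kept)

rng = np.random.default_rng(2)
rate, tau = 2.0e5, 1.0e-6                            # 200 kcps source, 1 us dead time
times = np.cumsum(rng.exponential(1.0 / rate, 200000))

for model in (False, True):
    survived = apply_dead_time(times, tau, paralysable=model)
    label = "paralysable" if model else "non-paralysable"
    print(label, len(survived) / times[-1])          # observed count rate (cps)
```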
Data-Fusion for a Vision-Aided Radiological Detection System: Sensor dependence and Source Tracking
NASA Astrophysics Data System (ADS)
Stadnikia, Kelsey; Martin, Allan; Henderson, Kristofer; Koppal, Sanjeev; Enqvist, Andreas
2018-01-01
The University of Florida is taking a multidisciplinary approach to fuse the data from 3D vision sensors and radiological sensors in the hope of creating a system capable of not only detecting the presence of a radiological threat, but also tracking it. The key to developing such a vision-aided radiological detection system lies in the count rate's inverse dependence on the square of the source-detector distance. Presented in this paper are the results of the calibration algorithm used to predict the location of the radiological detectors based on the 3D distance from the source to the detector (vision data) and the detector's count rate (radiological data). Also presented are the results of two correlation methods used to explore source tracking.
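A minimal sketch of the inverse-square relation that ties the vision data (distance) to the radiological data (count rate); the calibration pairs and the fitted form C = k/d^2 + b (with b absorbing room background) are assumptions for illustration.

```python
import numpy as np

# Assumed pairing of 3D distances (m) from the vision system with measured count rates (cps).
distance  = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
countrate = np.array([412.0, 104.0, 45.0, 27.0, 11.0])

# Fit C = k / d**2 + b by linear least squares in the variable 1/d**2.
A = np.column_stack([1.0 / distance**2, np.ones_like(distance)])
(k, b), *_ = np.linalg.lstsq(A, countrate, rcond=None)

def predicted_rate(d):
    """Expected count rate if the object tracked at distance d is the source."""
    return k / d**2 + b

print(predicted_rate(2.5))
```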
Extreme Ultraviolet Explorer observations of the magnetic cataclysmic variable RE 1938-461
NASA Technical Reports Server (NTRS)
Warren, John K.; Vallerga, John V.; Mauche, Christopher W.; Mukai, Koji; Siegmund, Oswald H. W.
1993-01-01
The magnetic cataclysmic variable RE 1938-461 was observed by the Extreme Ultraviolet Explorer (EUVE) Deep Survey instrument on 1992 July 8-9 during in-orbit calibration. It was detected in the Lexan/boron (65-190 A) band, with a quiescent count rate of 0.0062 +/- 0.0017 counts/s, and was not detected in the aluminum/carbon (160-360 A) band. The Lexan/boron count rate is lower than the corresponding ROSAT Wide Field Camera Lexan/boron count rate. This is consistent with the fact that the source was in a low state during an optical observation performed just after the EUVE observation, whereas it was in an optical high state during the ROSAT observation. The quiescent count rates are consistent with a virtual cessation of accretion. Two transient events lasting about 1 hr occurred during the Lexan/boron pointing, the second at a count rate of 0.050 +/- 0.006 counts/s. This appears to be the first detection of an EUV transient during the low state of a magnetic cataclysmic variable. We propose two possible explanations for the transient events.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
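A much-simplified sketch of the learning-mode/detection-mode idea, using an empirical percentile threshold rather than the full Bayesian machinery described above; the background rate and threshold percentile are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Learning mode: accumulate background counts per counting interval (e.g. 1 s bins).
background = rng.poisson(lam=8.0, size=10000)     # assumed background history

# Detection mode: flag a new measurement if it exceeds the empirical 99.9th percentile
# of the system's own background distribution.
threshold = np.percentile(background, 99.9)

def alarm(new_count):
    return new_count > threshold

print(threshold, alarm(14), alarm(25))
```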
The MIT/OSO 7 catalog of X-ray sources - Intensities, spectra, and long-term variability
NASA Technical Reports Server (NTRS)
Markert, T. H.; Laird, F. N.; Clark, G. W.; Hearn, D. R.; Sprott, G. F.; Li, F. K.; Bradt, H. V.; Lewin, W. H. G.; Schnopper, H. W.; Winkler, P. F.
1979-01-01
This paper is a summary of the observations of the cosmic X-ray sky performed by the MIT 1-40-keV X-ray detectors on OSO 7 between October 1971 and May 1973. Specifically, mean intensities or upper limits of all third Uhuru or OSO 7 cataloged sources (185 sources) in the 3-10-keV range are computed. For those sources for which a statistically significant (greater than 20) intensity was found in the 3-10-keV band (138 sources), further intensity determinations were made in the 1-15-keV, 1-6-keV, and 15-40-keV energy bands. Graphs and other simple techniques are provided to aid the user in converting the observed counting rates to convenient units and in determining spectral parameters. Long-term light curves (counting rates in one or more energy bands as a function of time) are plotted for 86 of the brighter sources.
A radionuclide counting technique for measuring wind velocity. [drag force anemometers
NASA Technical Reports Server (NTRS)
Singh, J. J.; Khandelwal, G. S.; Mall, G. H.
1981-01-01
A technique for measuring wind velocities of meteorological interest is described. It is based on inverse-square-law variation of the counting rates as the radioactive source-to-counter distance is changed by wind drag on the source ball. Results of a feasibility study using a weak bismuth 207 radiation source and three Geiger-Muller radiation counters are reported. The use of the technique is not restricted to Martian or Mars-like environments. A description of the apparatus, typical results, and frequency response characteristics are included. A discussion of a double-pendulum arrangement is presented. Measurements reported herein indicate that the proposed technique may be suitable for measuring wind speeds up to 100 m/sec, which are either steady or whose rates of fluctuation are less than 1 kHz.
The Lambert-Beer law in time domain form and its application.
Mosorov, Volodymyr
2017-10-01
The majority of current radioisotope gauges utilize measurements of intensity over a chosen sampling time interval using a detector. Such an approach has several disadvantages: the temporal resolution of the gauge is fixed, and the accuracy of the measurements is not the same for different count rates. A possible solution is to use a stronger radioactive source, but this would conflict with the ALARA (As Low As Reasonably Achievable) principle. Therefore, the article presents an alternative approach based on a modified Lambert-Beer law. The basis of the approach is the registration of time intervals instead of the registration of counts. This makes it possible to increase the temporal resolution of a gauge without using a stronger radioactive source, and the accuracy of the measurements does not depend on the count rate. Copyright © 2017 Elsevier Ltd. All rights reserved.
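A minimal sketch of the time-domain idea: for a Poisson beam the mean registered interval is the reciprocal of the count rate, so the attenuation exponent follows from the ratio of mean intervals; all beam and absorber parameters below are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

I0 = 5000.0                 # assumed unattenuated count rate (cps)
mu, x = 0.2, 3.0            # assumed linear attenuation coefficient (1/cm) and thickness (cm)

# Register individual time intervals behind the absorber instead of counts per window.
intervals = rng.exponential(1.0 / (I0 * np.exp(-mu * x)), size=2000)

tau0 = 1.0 / I0                              # mean interval of the open beam
mu_x = np.log(np.mean(intervals) / tau0)     # Lambert-Beer law in time-domain form
print(mu_x)                                  # ~0.6 = mu * x
```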
Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
Estimating the effective system dead time parameter for correlated neutron counting
NASA Astrophysics Data System (ADS)
Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.
2017-11-01
Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach which involves using a set of 252Cf sources of wide emission rate, it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather, varies systematically with gate width.
Getting something out of nothing in the measurement-device-independent quantum key distribution
NASA Astrophysics Data System (ADS)
Tan, Yong-Gang; Cai, Qing-Yu; Yang, Hai-Feng; Hu, Yao-Hua
2015-11-01
Because of the monogamy of entanglement, the measurement-device-independent quantum key distribution is immune to the side-information leaking of the measurement devices. When the correlated measurement outcomes are generated from the dark counts, no entanglement is actually obtained. However, secure key bits can still be proven to be generated from these measurement outcomes. Especially, we will give numerical studies on the contributions of dark counts to the key generation rate in practical decoy state MDI-QKD where a signal source, a weaker decoy source and a vacuum decoy source are used by either legitimate key distributer.
Ultraviolet Communication for Medical Applications
2013-06-01
sky was clear and no moonlight was visible during testing. There was light fog and high pollen count (9 grains per m3), and relative humidity was... improved LED light source was evaluated outdoors using the test bench system at a range of 50 m, and received photon counts were consistent with medium data rate communication. Future Phase II efforts will develop...
Highly efficient entanglement swapping and teleportation at telecom wavelength.
Jin, Rui-Bo; Takeoka, Masahiro; Takagi, Utako; Shimizu, Ryosuke; Sasaki, Masahide
2015-03-20
Entanglement swapping at telecom wavelengths is at the heart of quantum networking in optical fiber infrastructures. Although entanglement swapping has been demonstrated experimentally so far using various types of entangled photon sources both in near-infrared and telecom wavelength regions, the rate of swapping operation has been too low to be applied to practical quantum protocols, due to limited efficiency of entangled photon sources and photon detectors. Here we demonstrate drastic improvement of the efficiency at telecom wavelength by using two ultra-bright entangled photon sources and four highly efficient superconducting nanowire single photon detectors. We have attained a four-fold coincidence count rate of 108 counts per second, which is three orders higher than the previous experiments at telecom wavelengths. A raw (net) visibility in a Hong-Ou-Mandel interference between the two independent entangled sources was 73.3 ± 1.0% (85.1 ± 0.8%). We performed the teleportation and entanglement swapping, and obtained a fidelity of 76.3% in the swapping test. Our results on the coincidence count rates are comparable with the ones ever recorded in teleportation/swapping and multi-photon entanglement generation experiments at around 800 nm wavelengths. Our setup opens the way to practical implementation of device-independent quantum key distribution and its distance extension by the entanglement swapping as well as multi-photon entangled state generation in telecom band infrastructures with both space and fiber links.
A strong X-ray Flare in 1ES 1959+650
NASA Astrophysics Data System (ADS)
Kapanadze, Bidzina
2016-06-01
The nearby TeV-detected HBL object 1ES 1959+650 (z=0.047) was observed by Swift today, and the observation revealed a strong X-ray flare in the source. Namely, the observation-binned 0.3-10 keV count rate is 16.49+/-0.15 cts/s, which is a factor of 2.45 larger than the weighted mean count rate from all Swift-XRT pointings to this source, and 90% larger than the rate recorded during the previous observation (performed on June 4). Note that higher brightness states were observed only three times in the past (in 2015 September - December; see Kapanadze B. et al. 2016, "A recent strong X-ray flaring activity of 1ES 1959+650 with possibly less efficient stochastic acceleration", MNRASL, in press).
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.; Kohno, K.; Aretxaga, I.; Austermann, J. E.; Ezawa, H.; Hughes, D. H.; Ikarashi, S.; Iono, D.; Kawabe, R.; Matsuo, H.; Matsuura, S.; Nakanishi, K.; Oshima, T.; Perera, T.; Scott, K. S.; Shirahata, M.; Takeuchi, T. T.; Tamura, Y.; Tanaka, K.; Tosaki, T.; Wilson, G. W.; Yun, M. S.
2010-10-01
We present number counts and clustering properties of millimeter-bright galaxies uncovered by the AzTEC camera mounted on the Atacama Submillimeter Telescope Experiment (ASTE). We surveyed the AKARI Deep Field South (ADF-S), the Subaru/XMM-Newton Deep Field (SXDF), and the SSA22 field, each over an area of ~0.25 deg^2, with an rms noise level of ~0.4-1.0 mJy. We constructed differential and cumulative number counts, which currently provide the tightest constraints on the faint end. Integration of the best-fit number counts in the ADF-S finds that the contribution of 1.1 mm sources with fluxes >=1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with >=1 mJy using the best-fit number counts in the ADF-S and find that it is lower by about a factor of 5-10 compared to that derived from UV/optically selected galaxies at z ~ 2-3. The average mass of dark halos hosting bright 1.1 mm sources was calculated to be 10^13-10^14 M_solar. Comparison of the correlation lengths of 1.1 mm sources with other populations and with a bias evolution model suggests that dark halos hosting bright 1.1 mm sources evolve into present-day cluster systems, and that the 1.1 mm sources residing in these dark halos evolve into massive elliptical galaxies located in the centers of clusters.
Long-distance practical quantum key distribution by entanglement swapping.
Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang
2011-02-14
We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishizawa, T., E-mail: nishizawa@wisc.edu; Nornberg, M. D.; Den Hartog, D. J.
2016-11-15
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier’s cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
NASA Astrophysics Data System (ADS)
Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Craig, D.
2016-11-01
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier's cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
Multiplicity counting from fission detector signals with time delay effects
NASA Astrophysics Data System (ADS)
Nagy, L.; Pázsit, I.; Pál, L.
2018-03-01
In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems and would be free from the dead time problems encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as is typically the case in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested both with analytical models of the time delay and with numerical simulations. Methods are also suggested for adapting the approach to large time delay effects (for thermalised neutrons).
Investigating the Extraordinary X-Ray Variability of the Infrared Quasar IRAS 13349+2439
NASA Technical Reports Server (NTRS)
Leighly, Karen M.
1999-01-01
We observed the luminous quasar IRAS 13349+2439 using RXTE (X Ray Timing Explorer) in order to search for rapid variability. Unfortunately, the source was in a low state during the observation (PCA count rate approximately 1 - 2 counts/s). It was therefore somewhat weak for RXTE and detailed analysis proved to be difficult.
THE USE OF QUENCHING IN A LIQUID SCINTILLATION COUNTER FOR QUANTITATIVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, G.V.
1963-01-01
Quenching was used to quantitatively determine the amount of quenching agent present. A sealed promethium-147 source was prepared to be used for the count rate determinations. Two methods to determine the amount of quenching agent present in a sample were developed. One method related the count rate of a sample containing a quenching agent to the amount of quenching agent present. Calibration curves were plotted using both color and chemical quenchers. The quenching agents used were: F.D.C. Orange No. 2, F.D.C. Yellow No. 3, F.D.C. Yellow No. 4, Scarlet Red, acetone, benzaldehyde, and carbon tetrachloride. The color quenchers gave a linear relationship, while the chemical quenchers gave a non-linear relationship. Quantities of the color quenchers between about 0.008 mg and 0.100 mg can be determined with an error less than 5%. The calibration curves were found to be usable over a long period of time. The other method related the change in the ratio of the count rates in two voltage windows to the amount of quenching agent present. The quenchers mentioned above were used. Calibration curves were plotted for both the color and chemical quenchers. The relationships of ratio versus amount of quencher were non-linear in each case. It was shown that the reproducibility of the count rate and the ratio was independent of the amount of quencher present but was dependent on the count rate. At count rates above 10,000 counts per minute the reproducibility was better than 1%. (TCO)
Fission meter and neutron detection using poisson distribution comparison
Rowland, Mark S; Snyderman, Neal J
2014-11-18
A neutron detector system and method for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
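A minimal sketch of the discrimination idea described above, comparing the observed counts-per-gate distribution with a Poisson distribution of the same mean; it uses the standard index-of-dispersion (Feynman-Y style) test with fixed, equal-width time gates as an illustration, not the patented system's actual algorithm.

```python
import numpy as np
from scipy import stats

def poisson_dispersion_test(counts_per_gate):
    """counts_per_gate: neutron counts registered in fixed, equal time gates."""
    counts = np.asarray(counts_per_gate)
    n = counts.size
    mean = counts.mean()
    # Excess variance: zero for a Poisson (random) source, positive when
    # correlated fission-chain neutrons arrive in groups.
    excess = counts.var(ddof=1) / mean - 1.0
    # Index-of-dispersion test: under the Poisson null hypothesis,
    # (n - 1) * var / mean is approximately chi-square with n - 1 dof.
    statistic = (n - 1) * counts.var(ddof=1) / mean
    p_value = stats.chi2.sf(statistic, df=n - 1)
    return excess, p_value

# Example: 2000 one-millisecond gates from a purely random source.
rng = np.random.default_rng(0)
print(poisson_dispersion_test(rng.poisson(3.0, size=2000)))
```

A fissile sample drives the excess above zero and the p-value toward zero, while a random (alpha,n)-like source remains consistent with Poisson statistics.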
Senftle, F.E.; Macy, R.J.; Mikesell, J.L.
1979-01-01
The fast- and thermal-neutron fluence rates from a 3.7 μg 252Cf neutron source in a simulated borehole have been measured as a function of the source-to-detector distance using air, water, coal, iron ore-concrete mix, and dry sand as borehole media. Gamma-ray intensity measurements were made for specific spectral lines at low and high energies for the same range of source-to-detector distances in the iron ore-concrete mix and in coal. Integral gamma-ray counts across the entire spectrum were also made at each source-to-detector distance. From these data, the specific neutron-damage rate, and the critical count-rate criteria, we show that in an iron ore-concrete mix (low hydrogen concentration), 252Cf neutron sources of 2-40 μg are suitable. The source size required for optimum gamma-ray sensitivity depends on the energy of the gamma ray being measured. In a hydrogenous medium such as coal, similar measurements were made. The results show that sources from 2 to 20 μg are suitable to obtain the highest gamma-ray sensitivity, again depending on the energy of the gamma ray being measured. In a hydrogenous medium, significant improvement in sensitivity can be achieved by using faster electronics; in iron ore, it cannot. © 1979 North-Holland Publishing Co.
The long-term intensity behavior of Centaurus X-3
NASA Technical Reports Server (NTRS)
Schreier, E. J.; Swartz, K.; Giacconi, R.; Fabbiano, G.; Morin, J.
1976-01-01
In three years of observation, the X-ray source Cen X-3 appears to alternate between 'high states', with an intensity of 150 counts/s (2-6 keV) or greater, and 'low states', where the source is barely detectable. The time scale of this behavior is of the order of months, and no apparent periodicity has been observed. Analysis of two transitions between these states is reported. During two weeks in July 1972, the source increased from about 20 counts/s to 150 counts/s. The detailed nature of this turn-on is interpreted in terms of a model in which the supergiant's stellar wind decreases in density. A second transition, a turnoff in February 1973, is similarly analyzed and found to be consistent with a simple decrease in accretion rate. The presence of absorption dips during transitions at orbital phases 0.4-0.5 as well as at phase 0.75 is discussed. The data are consistent with a stellar-wind accretion model and with different kinds of extended lows caused by increased wind density masking the X-ray emission or by decreased wind density lowering the accretion rate.
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8. (authors)
SU-E-I-79: Source Geometry Dependence of Gamma Well-Counter Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, M; Belanger, A; Kijewski, M
Purpose: To determine the effect of liquid sample volume and geometry on counting efficiency in a gamma well-counter, and to assess the relative contributions of sample geometry and self-attenuation. Gamma well-counters are standard equipment in clinical and preclinical studies, for measuring patient blood radioactivity and quantifying animal tissue uptake for tracer development and other purposes. Accurate measurements are crucial. Methods: Count rates were measured for aqueous solutions of 99m-Tc at four liquid volume values in a 1-cm-diam tube and at six volume values in a 2.2-cm-diam vial. Total activity was constant for all volumes, and data were corrected for decay. Count rates from a point source in air, supported by a filter paper, were measured at seven heights between 1.3 and 5.7 cm from the bottom of a tube. Results: Sample volume effects were larger for the tube than for the vial. For the tube, count efficiency relative to a 1-cc volume ranged from 1.05 at 0.05 cc to 0.84 at 3 cc. For the vial, relative count efficiency ranged from 1.02 at 0.05 cc to 0.87 at 15 cc. For the point source, count efficiency relative to 1.3 cm from the tube bottom ranged from 0.98 at 1.8 cm to 0.34 at 5.7 cm. The relative efficiency of a 3-cc liquid sample in a tube compared to a 1-cc sample is 0.84; the average relative efficiency for the solid sample in air between heights in the tube corresponding to the surfaces of those volumes (1.3 and 4.8 cm) is 0.81, implying that the major contribution to efficiency loss is geometry, rather than attenuation. Conclusion: Volume-dependent correction factors should be used for accurate quantitation of radioactive liquid samples. Solid samples should be positioned at the bottom of the tube for maximum count efficiency.
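A minimal sketch of how such volume-dependent correction factors could be applied, using the quoted tube values (relative to a 1-cc sample) as interpolation nodes; the linear interpolation and the calibration figure in the example are illustrative assumptions rather than values from the abstract.

```python
import numpy as np

# Relative efficiency of the 1-cm-diam tube versus sample volume (quoted values).
TUBE_VOLUME_CC = np.array([0.05, 1.0, 3.0])
TUBE_REL_EFFICIENCY = np.array([1.05, 1.00, 0.84])

def corrected_activity(count_rate_cps, volume_cc, efficiency_cps_per_bq):
    """Correct a well-counter reading for sample volume, then convert to Bq."""
    rel_eff = np.interp(volume_cc, TUBE_VOLUME_CC, TUBE_REL_EFFICIENCY)
    return count_rate_cps / (efficiency_cps_per_bq * rel_eff)

# Example: a 3-cc sample reading 8400 cps with an assumed 1-cc calibration of 0.7 cps/Bq.
print(corrected_activity(8400.0, 3.0, 0.7))
```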
Population Census of a Large Common Tern Colony with a Small Unmanned Aircraft
Chabot, Dominique; Craik, Shawn R.; Bird, David M.
2015-01-01
Small unmanned aircraft systems (UAS) may be useful for conducting high-precision, low-disturbance waterbird surveys, but limited data exist on their effectiveness. We evaluated the capacity of a small UAS to census a large (>6,000 nests) coastal Common tern (Sterna hirundo) colony for which ground surveys are particularly disruptive and time-consuming. We compared aerial photographic tern counts to ground nest counts in 45 plots (5-m radius) throughout the colony at three intervals over a nine-day period in order to identify sources of variation and establish a coefficient to estimate nest numbers from UAS surveys. We also compared a full colony ground count to full counts from two UAS surveys conducted the following day. Finally, we compared colony disturbance levels over the course of UAS flights to matched control periods. Linear regressions between aerial and ground counts in plots had very strong correlations in all three comparison periods (R^2 = 0.972–0.989, P < 0.001) and regression coefficients ranged from 0.928–0.977 terns/nest. Full colony aerial counts were 93.6% and 94.0%, respectively, of the ground count. Varying visibility of terns with ground cover, weather conditions and image quality, and changing nest attendance rates throughout incubation were likely sources of variation in aerial detection rates. Optimally timed UAS surveys of Common tern colonies following our method should yield population estimates in the 93–96% range of ground counts. Although the terns were initially disturbed by the UAS flying overhead, they rapidly habituated to it. Overall, we found no evidence of sustained disturbance to the colony by the UAS. We encourage colonial waterbird researchers and managers to consider taking advantage of this burgeoning technology. PMID:25874997
Donkor, Eric S; Lanyo, R; Kayang, Boniface B; Quaye, Jonathan; Edoh, Dominic A
2010-09-01
The occurrence of pathogens in the internal parts of vegetables is usually associated with irrigation water or contaminated soil and could pose risk to consumers as the internalised pathogens are unaffected by external washing. This study was carried out to assess the rate of internalisation of microbes in common Ghanaian vegetables. Standard microbiological methods were employed in microbial enumeration of vegetables collected at the market and farm levels, as well as irrigation water and soil samples. The overall mean counts of vegetables were 4.0 x 10^3 cfu g^-1; 8.1 x 10^2 cfu g^-1; 2.0 x 10^2 cfu g^-1; 3.5 x 10^2 cfu g^-1 for total bacteria, coliform counts, faecal coliform counts and yeast counts, respectively. The rate of internalisation of coliforms in vegetables irrigated with stream/well water was 2.7 times higher than those irrigated with pipe water. The mean coliform counts (4.7 x 10^7 cfu g^-1) and faecal coliform counts (1.8 x 10^6 cfu g^-1) of soil samples were similar to those of stream water, suggesting both sources exerted similar contamination rates on the vegetables. Generally, there were no significant variations between the rates of internalisation of microbes at the market and farm levels at p < 0.05, indicating that internalisation of microbes in the vegetables mainly occurred at the farm level. The study has shown that microbial contamination of vegetables in Ghana is not limited to the external surface, but internal vegetable parts could harbour high microbial loads and pose risk to consumers. Safety practices associated with the commodity should therefore not be limited to external washing only. There is the additional need of heating vegetables to eliminate microbes both externally and internally before consumption.
Comparison of digital signal processing modules in gamma-ray spectrometry.
Lépy, Marie-Christine; Cissé, Ousmane Ibrahima; Pierre, Sylvie
2014-05-01
Commercial digital signal-processing modules have been tested for their applicability to gamma-ray spectrometry. The tests were based on the same n-type high purity germanium detector. The spectrum quality was studied in terms of energy resolution and peak area versus shaping parameters, using a Eu-152 point source. The stability of a reference peak count rate versus the total count rate was also examined. The reliability of the quantitative results is discussed for their use in measurement at the metrological level. © 2013 Published by Elsevier Ltd.
Data-based Considerations in Portal Radiation Monitoring of Cargo Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weier, Dennis R.; O'Brien, Robert F.; Ely, James H.
2004-07-01
Radiation portal monitoring of cargo vehicles often includes a configuration of four-panel monitors that record gamma and neutron counts from vehicles transporting cargo. As vehicles pass the portal monitors, they generate a count profile over time that can be compared to the average panel background counts obtained just prior to the time the vehicle entered the area of the monitors. Pacific Northwest National Laboratory has accumulated considerable data regarding such background radiation and vehicle profiles from portal installations, as well as in experimental settings using known sources and cargos. Several considerations have a bearing on how alarm thresholds are set in order to maintain sensitivity to radioactive sources while also controlling to a manageable level the rate of false or nuisance alarms. False alarms are statistical anomalies, while nuisance alarms occur due to the presence of naturally occurring radioactive material (NORM) in cargo, for example, kitty litter. Considerations to be discussed include: • Background radiation suppression due to the shadow shielding from the vehicle. • The impact of the relative placement of the four panels on alarm decision criteria. • Use of plastic scintillators to separate gamma counts into energy windows. • The utility of using ratio criteria for the energy window counts rather than simply using total window counts. • Detection likelihood for these various decision criteria based on computer-simulated injections of sources into vehicle profiles.
Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation
NASA Astrophysics Data System (ADS)
Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.
1997-02-01
The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.
NASA Astrophysics Data System (ADS)
Mayer, D. P.; Kite, E. S.
2016-12-01
Sandblasting, aeolian infilling, and wind deflation all obliterate impact craters on Mars, complicating the use of crater counts for chronology, particularly on sedimentary rock surfaces. However, crater counts on sedimentary rocks can be exploited to constrain wind erosion rates. Relatively small, shallow craters are preferentially obliterated as a landscape undergoes erosion, so the size-frequency distribution of impact craters in a landscape undergoing steady exhumation will develop a shallower power-law slope than a simple production function. Estimating erosion rates is important for several reasons: (1) Wind erosion is a source of mass for the global dust cycle, so the global dust reservoir will disproportionately sample fast-eroding regions; (2) The pace and pattern of recent wind erosion is a sorely-needed constraint on models of the sculpting of Mars' sedimentary-rock mounds; (3) Near-surface complex organic matter on Mars is destroyed by radiation in <10^8 years, so high rates of surface exhumation are required for preservation of near-surface organic matter. We use crater counts from 18 HiRISE images over sedimentary rock deposits as the basis for estimating erosion rates. Each image was counted by ≥3 analysts and only features agreed on by ≥2 analysts were included in the erosion rate estimation. Erosion rates range from 0.1-0.2 μm/yr across all images. These rates represent an upper limit on surface erosion by landscape lowering. At the conference we will discuss the within and between-image variability of erosion rates and their implications for recent geological processes on Mars.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
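A minimal sketch of the Poisson-likelihood grid search that underlies this kind of Bayesian localization, with a bare inverse-square-law source model standing in for the pre-calculated detector and terrain response of the actual work; the grid handling and background treatment are assumptions for illustration.

```python
import numpy as np

def localize(detector_xy, dwell_s, measured_counts, grid_x, grid_y,
             background_cps, source_cps_at_1m):
    """Return a log-likelihood map over candidate source positions.

    detector_xy[t]     : (x, y) of the aircraft at time step t (meters)
    measured_counts[t] : counts registered during dwell t
    """
    logL = np.zeros((grid_x.size, grid_y.size))
    for i, sx in enumerate(grid_x):
        for j, sy in enumerate(grid_y):
            r2 = (detector_xy[:, 0] - sx) ** 2 + (detector_xy[:, 1] - sy) ** 2
            expected = dwell_s * (background_cps +
                                  source_cps_at_1m / np.maximum(r2, 1.0))
            # Poisson log-likelihood of the whole time series for this cell
            # (the constant log-factorial term is dropped).
            logL[i, j] = np.sum(measured_counts * np.log(expected) - expected)
    return logL  # argmax gives the most likely source cell
```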
Extreme Ultraviolet Explorer Bright Source List
NASA Technical Reports Server (NTRS)
Malina, Roger F.; Marshall, Herman L.; Antia, Behram; Christian, Carol A.; Dobson, Carl A.; Finley, David S.; Fruscione, Antonella; Girouard, Forrest R.; Hawkins, Isabel; Jelinsky, Patrick
1994-01-01
Initial results from the analysis of the Extreme Ultraviolet Explorer (EUVE) all-sky survey (58-740 A) and deep survey (67-364 A) are presented through the EUVE Bright Source List (BSL). The BSL contains 356 confirmed extreme ultraviolet (EUV) point sources with supporting information, including positions, observed EUV count rates, and the identification of possible optical counterparts. One hundred twenty-six sources have been detected longward of 200 A.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoerner, M; Hintenlang, D
Purpose: A methodology is presented to correct for measurement inaccuracies at high detector count rates using plastic and GOS scintillation fibers coupled to a photomultiplier tube with digital readout. This system allows temporal acquisition and manipulation of measured data. Methods: The detection system used was a plastic scintillator and a separate gadolinium scintillator, both (0.5 diameter) coupled to an optical fiber with a Hamamatsu photon counter with a built-in microcontroller and digital interface. Count rate performance of the system was evaluated using the nonparalyzable detector model. Detector response was investigated across multiple radiation sources including: an orthovoltage x-ray system, cobalt-60 gamma rays, a proton therapy beam, and a diagnostic radiography x-ray tube. The dead time parameter was calculated by measuring the count rate of the system at different exposure rates using a reference detector. Results: The system dead time was evaluated for the following sources of radiation used clinically: diagnostic energy x-rays, cobalt-60 gamma rays, orthovoltage x-rays, a proton particle accelerator, and megavoltage x-rays. It was found that dead time increased significantly when exposing the detector to sources capable of generating Cerenkov radiation, i.e., all of the sources except the diagnostic x-rays, with increasing prominence at higher photon energies. Percent depth dose curves generated by a dedicated ionization chamber and compared to the detection system demonstrated that correcting for dead time improves accuracy. For most sources, the nonparalyzable model fit provided an improved system response. Conclusion: Overall, the system dead time varied across the investigated radiation particles and energies. It was demonstrated that the system response accuracy was greatly improved by correcting for dead time effects. Cerenkov radiation plays a significant role in the increase in the system dead time through transient absorption effects attributed to electron-hole pair creation within the optical waveguide.
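A minimal sketch of the nonparalyzable dead-time treatment referred to above: the measured rate m and true rate n are related by m = n/(1 + n*tau), so a reference detector supplying n lets tau be fitted and subsequent readings corrected. The least-squares fit and the variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def measured_rate(true_rate, tau):
    # Nonparalyzable model: m = n / (1 + n*tau)
    return true_rate / (1.0 + true_rate * tau)

def fit_dead_time(true_rates, measured_rates):
    """Fit the dead-time parameter tau (seconds) from paired rate data."""
    (tau,), _ = curve_fit(measured_rate, true_rates, measured_rates, p0=[1e-6])
    return tau

def correct_rate(measured, tau):
    # Invert the nonparalyzable model: n = m / (1 - m*tau)
    return measured / (1.0 - measured * tau)

# Example with synthetic data and an assumed 2-microsecond dead time.
true = np.linspace(1e3, 4e5, 20)
tau_fit = fit_dead_time(true, measured_rate(true, 2e-6))
print(tau_fit, correct_rate(2.0e5, tau_fit))
```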
NASA Astrophysics Data System (ADS)
Uttley, P.; Gendreau, K.; Markwardt, C.; Strohmayer, T. E.; Bult, P.; Arzoumanian, Z.; Pottschmidt, K.; Ray, P. S.; Remillard, R.; Pasham, D.; Steiner, J.; Neilsen, J.; Homan, J.; Miller, J. M.; Iwakiri, W.; Fabian, A. C.
2018-03-01
NICER observed the new X-ray transient MAXI J1820+070 (ATel #11399, #11400, #11403, #11404, #11406, #11418, #11420, #11421) on multiple occasions from 2018 March 12 to 14. During this time the source brightened rapidly, from a total NICER mean count rate of 880 count/s on March 12 to 2800 count/s by March 14 17:00 UTC, corresponding to a change in 2-10 keV modelled flux (see below) from 1.9E-9 to 5E-9 erg cm^-2 s^-1. The broadband X-ray spectrum is absorbed by a low column density (fitting the model given below, we obtain 1.5E21 cm^-2), in keeping with the low Galactic column in the direction of the source (ATel #11418; Dickey & Lockman, 1990, ARAA, 28, 215; Kalberla et al. 2005, A&A, 440, 775) and consists of a hard power-law component with weak reflection features (broad iron line and narrow 6.4 keV line core) and an additional soft X-ray component.
ASM-Triggered TOO Observations of Z Sources at Low Accretion Rate
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
We propose to perform a pointed observation if the ASM shows that a Z source has entered a state of low accretion rate. This would provide a unique opportunity to detect millisecond pulsations. In Sco X-1 we would expect to discover beat-frequency QPO, and could perform a unique high count rate study of them. At sufficiently low accretion rate it would be possible to study the accretion flow when the magnetospheric radius approaches the corotation radius. The frequency of the horizontal branch QPO should go to zero here, and centrifugal inhibition of the accretion should set in, providing direct tests of the magnetospheric model of Z sources.
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
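A minimal sketch of a Poisson-based test for a time-correlated excess across a linear network of monitors, assuming a constant carrier speed; the alignment by expected transit delay, the known background rate, and the use of a single best-aligned window are simplifying assumptions, not the published test itself.

```python
import numpy as np
from scipy import stats

def moving_source_pvalue(channel_counts, channel_positions_m, speed_mps,
                         dwell_s, background_cps):
    """channel_counts[k, t] = counts in dwell t of monitor k along the line."""
    counts = np.asarray(channel_counts, dtype=int)
    # Expected transit delay (in dwell bins) from the first monitor to monitor k.
    delays = np.round(channel_positions_m / (speed_mps * dwell_s)).astype(int)
    # Align each channel on the expected passage time and sum the counts a
    # source moving at the assumed speed would deposit (wrap-around ignored).
    aligned = sum(np.roll(counts[k], -d) for k, d in enumerate(delays))
    total = aligned.max()                       # best-aligned window
    expected = counts.shape[0] * background_cps * dwell_s
    # One-sided Poisson p-value of the summed counts under background only.
    return stats.poisson.sf(total - 1, expected)
```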
Goddard, Braden; Croft, Stephen; Lousteau, Angela; ...
2016-05-25
Safeguarding nuclear material is an important and challenging task for the international community. One particular safeguards technique commonly used for uranium assay is active neutron correlation counting. This technique involves irradiating unused uranium with (α,n) neutrons from an Am-Li source and recording the resultant neutron pulse signal, which includes induced fission neutrons. Although this non-destructive technique is widely employed in safeguards applications, the neutron energy spectrum of Am-Li sources is not well known. Several measurements over the past few decades have been made to characterize this spectrum; however, little work has been done comparing the measured spectra of various Am-Li sources to each other. This paper examines fourteen different Am-Li spectra, focusing on how these spectra affect simulated neutron multiplicity results using the code Monte Carlo N-Particle eXtended (MCNPX). Two measurement and simulation campaigns were completed using Active Well Coincidence Counter (AWCC) detectors and uranium standards of varying enrichment. The results of this work indicate that for standard AWCC measurements, the fourteen Am-Li spectra produce similar doubles and triples count rates. The singles count rates varied by as much as 20% between the different spectra, although they are usually not used in quantitative analysis.
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to detect efficiently sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performances, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.
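A minimal sketch of multiscale wavelet detection on a counts image, using a Mexican-hat-like kernel built from differences of Gaussian smoothings; the scales and significance threshold here are illustrative, whereas the published method derives thresholds from the WT distribution of a locally uniform background and folds in the exposure map.

```python
import numpy as np
from scipy import ndimage

def wavelet_detect(image, scales=(2, 4, 8), nsigma=5.0):
    """Return candidate sources as (x, y, scale, wavelet_amplitude) tuples."""
    candidates = []
    for a in scales:
        # Mexican-hat-like transform: difference of two Gaussian smoothings.
        wt = ndimage.gaussian_filter(image, a) - ndimage.gaussian_filter(image, 2 * a)
        threshold = nsigma * wt.std()
        # Local maxima of the WT above threshold are candidate detections.
        maxima = (wt == ndimage.maximum_filter(wt, size=2 * a + 1)) & (wt > threshold)
        for y, x in zip(*np.nonzero(maxima)):
            candidates.append((x, y, a, wt[y, x]))
    return candidates
```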
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-05-18
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
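A minimal sketch of the depth-inversion idea, assuming a simple exponential attenuation law C(d) = C0*exp(-mu*d); the surface rate and effective attenuation coefficient in the example are placeholders, not the calibrated parameters of the paper's model, which also accounts for geometry.

```python
import numpy as np

def depth_from_count_rate(count_rate_cps, surface_rate_cps, mu_per_cm):
    """Invert C = C0*exp(-mu*d) for the burial depth d in cm."""
    return np.log(surface_rate_cps / count_rate_cps) / mu_per_cm

# Example with placeholder numbers: 100 cps at the surface, 14 cps when buried,
# and an assumed effective attenuation coefficient of 0.11 cm^-1 for sand.
print(depth_from_count_rate(14.0, 100.0, 0.11))   # ~17.9 cm
```

With these placeholder numbers the inversion returns roughly 18 cm, the same order as the maximum detectable depth quoted above, but the coefficient would have to be calibrated for the actual source, detector, and medium.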
Estimation of Enterococci Input from Bathers and Animals on A Recreational Beach Using Camera Images
D, Wang John; M, Solo-Gabriele Helena; M, Abdelzaher Amir; E, Fleming Lora
2010-01-01
Enterococci are used nationwide as a water quality indicator for marine recreational beaches. Prior research has demonstrated that enterococci inputs to the study beach site (located in Miami, FL) are dominated by non-point sources (including humans and animals). We have estimated their respective source functions by developing a counting methodology for individuals to better understand their non-point source load impacts. The method utilizes camera images of the beach taken at regular time intervals to determine the number of people and animal visitors. The developed method translates raw image counts for weekdays and weekend days into daily and monthly visitation rates. Enterococci source functions were computed from the observed number of unique individuals for average days of each month of the year, and from average load contributions for humans and for animals. Results indicate that dogs represent the larger source of enterococci relative to humans and birds. PMID:20381094
RXTE/PCA and Swift/XRT observations of GRO J1655-40 during decay
NASA Astrophysics Data System (ADS)
Homan, Jeroen; Kong, Albert; Tomsick, John; Miller, Jon; Campana, Sergio; Wijnands, Rudy; Belloni, Tomaso; Lewin, Walter
2005-10-01
Following its transition to the hard state (ATels #607, #612), we have continued our daily RXTE/PCA observations of the black hole X-ray transient GRO J1655-40 (see http://tahti.mit.edu/opensource/1655). Between September 23, when the source reached the hard state, and October 10, the RXTE/PCA count rate decreased exponentially, with an e-folding time of ~7 days. After October 10 the decrease started to slow down, and data from the last few days suggest that the count rate may have reached a constant level.
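A minimal sketch of how such an e-folding decay time can be extracted from a series of daily count rates by a straight-line fit to the logarithm of the rate; the data below are synthetic placeholders.

```python
import numpy as np

def efolding_time_days(day, count_rate):
    """Fit rate = A*exp(-t/tau); returns tau in the units of `day`."""
    slope, _ = np.polyfit(day, np.log(count_rate), 1)
    return -1.0 / slope

days = np.arange(0, 18)
rates = 300.0 * np.exp(-days / 7.0)        # synthetic ~7-day decay
print(efolding_time_days(days, rates))     # ~7.0
```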
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-04-10
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source.
Anigstein, Robert; Erdman, Michael C.; Ansari, Armin
2017-01-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of 60Co, 137Cs, and 241Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
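A minimal sketch of the weighted root-mean-square deviation quoted above, combining the relative measured-versus-simulated differences in quadrature with inverse-variance weights; the exact weighting convention used in the paper may differ slightly.

```python
import numpy as np

def weighted_rms_deviation(measured, simulated, sigma_diff):
    """Weighted RMS of relative differences between measured and simulated rates.

    sigma_diff: one-sigma uncertainty of each (simulated - measured) difference,
    expressed as a fraction of the measured rate.
    """
    measured = np.asarray(measured, dtype=float)
    rel_diff = (np.asarray(simulated, dtype=float) - measured) / measured
    weights = 1.0 / np.asarray(sigma_diff, dtype=float) ** 2
    return np.sqrt(np.sum(weights * rel_diff ** 2) / np.sum(weights))
```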
A physics investigation of deadtime losses in neutron counting at low rates with Cf252
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Louise G; Croft, Stephen
2009-01-01
252Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using 252Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g. AmLi to a close approximation), DT losses do not vanish in the low rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts', and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high efficiency 3He neutron counter with short die-away time, representing an ideal 3He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
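A minimal sketch of the kind of pulse-train Monte Carlo described above: spontaneous-fission bursts of correlated detections with an exponential die-away, filtered through an updating (paralyzable) dead time. The multiplicity distribution, detection efficiency, die-away time, and dead time below are illustrative assumptions rather than the parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pulse_train(fission_rate, duration_s, efficiency, die_away_s):
    """Detection times from Poisson-distributed fissions, each a correlated burst."""
    n_fissions = rng.poisson(fission_rate * duration_s)
    fission_times = np.sort(rng.uniform(0.0, duration_s, n_fissions))
    detections = []
    for t0 in fission_times:
        # Illustrative (not 252Cf) neutron multiplicity distribution.
        nu = rng.choice([0, 1, 2, 3, 4, 5], p=[.03, .15, .30, .30, .17, .05])
        detected = rng.binomial(nu, efficiency)
        detections.extend(t0 + rng.exponential(die_away_s, detected))
    return np.sort(np.array(detections))

def apply_updating_dead_time(times, dead_time_s):
    """Updating (paralyzable) dead time: any event, recorded or not, restarts the clock."""
    kept, last = [], -np.inf
    for t in times:
        if t - last >= dead_time_s:
            kept.append(t)
        last = t
    return np.array(kept)

train = simulate_pulse_train(fission_rate=1e3, duration_s=10.0,
                             efficiency=0.5, die_away_s=50e-6)
alive = apply_updating_dead_time(train, dead_time_s=0.5e-6)
print(len(train), len(alive))   # within-burst losses remain even at a low fission rate
```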
Anigstein, Robert; Erdman, Michael C; Ansari, Armin
2016-06-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of Co, Cs, and Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides.
Half-lives of 221Fr, 217At, 213Bi, 213Po and 209Pb from the 225Ac decay series.
Suliman, G; Pommé, S; Marouli, M; Van Ammel, R; Stroh, H; Jobbágy, V; Paepen, J; Dirican, A; Bruchertseifer, F; Apostolidis, C; Morgenstern, A
2013-07-01
The half-lives of (221)Fr, (217)At, (213)Bi, (213)Po, and (209)Pb were measured by means of an ion-implanted planar Si detector for alpha and beta particles emitted from weak (225)Ac sources or from recoil sources, which were placed in a quasi-2π counting geometry. Recoil sources were prepared by collecting atoms from an open (225)Ac source onto a glass substrate. The (221)Fr and (213)Bi half-lives were determined by following the alpha particle emission rate of recoil sources as a function of time. Similarly, the (209)Pb half-life was determined from the beta particle count rate. The shorter half-lives of (217)At and (213)Po were deduced from delayed coincidence measurements on weak (225)Ac sources using digital data acquisition in list mode. The resulting values: T1/2((221)Fr)=4.806 (6) min, T1/2((217)At)=32.8 (3) ms, T1/2((213)Bi)=45.62 (6) min, T1/2((213)Po)=3.708 (8) μs, and T1/2((209)Pb)=3.232 (5) h were in agreement only with the best literature data. Copyright © 2013 Elsevier Ltd. All rights reserved.
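A minimal sketch of extracting a half-life from a decaying count rate, the approach used above for (221)Fr, (213)Bi, and (209)Pb; the data below are synthetic placeholders, and a real analysis would also propagate counting statistics and subtract background.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, rate0, half_life):
    return rate0 * np.exp(-np.log(2.0) * t / half_life)

t_min = np.linspace(0.0, 180.0, 90)                   # minutes
rate = decay(t_min, 500.0, 45.62)                     # synthetic 213Bi-like data
(rate0, half_life), _ = curve_fit(decay, t_min, rate, p0=[400.0, 30.0])
print(half_life)                                      # ~45.6 min
```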
The Hawaii SCUBA-2 Lensing Cluster Survey: Number Counts and Submillimeter Flux Ratios
NASA Astrophysics Data System (ADS)
Hsu, Li-Yen; Cowie, Lennox L.; Chen, Chian-Chou; Barger, Amy J.; Wang, Wei-Hao
2016-09-01
We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10^12 L_⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10^12 L_⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
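A minimal sketch of turning a catalogue of source fluxes into the differential and cumulative number counts discussed above; the flux bins and effective survey area are placeholders, and real counts additionally correct for lensing magnification, completeness, and flux boosting.

```python
import numpy as np

def number_counts(flux_mjy, area_deg2, bins_mjy):
    """Differential (per mJy per deg^2) and cumulative (per deg^2) counts."""
    hist, edges = np.histogram(flux_mjy, bins=bins_mjy)
    widths = np.diff(edges)
    differential = hist / widths / area_deg2
    # Approximate N(>S) at each lower bin edge (sources above the top edge excluded).
    cumulative = hist[::-1].cumsum()[::-1] / area_deg2
    return differential, cumulative

# Example with synthetic fluxes over an assumed 0.2 deg^2 effective area.
fluxes = np.random.default_rng(2).pareto(1.5, size=500) + 0.2   # mJy
print(number_counts(fluxes, 0.2, np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])))
```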
Time encoded radiation imaging
Marleau, Peter; Brubaker, Erik; Kiff, Scott
2014-10-21
The various technologies presented herein relate to detecting nuclear material at a large stand-off distance. An imaging system is presented which can detect nuclear material by utilizing time encoded imaging relating to maximum and minimum radiation particle count rates. The imaging system is integrated with a data acquisition system that can utilize variations in photon pulse shape to discriminate between neutron and gamma-ray interactions. Modulation in the detected neutron count rates as a function of the angular orientation of the detector due to attenuation of neighboring detectors is utilized to reconstruct the neutron source distribution over 360 degrees around the imaging system. Neutrons (e.g., fast neutrons) and/or gamma-rays are incident upon scintillation material in the imager; the photons generated by the scintillation material are converted to electrical energy, from which the respective neutrons/gamma rays can be determined and, accordingly, a direction to, and the location of, a radiation source identified.
NASA Astrophysics Data System (ADS)
Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.
2002-07-01
High precision concerning the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was executed with special dose calibration equipment (dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable, will be discussed. The dead time depends on settings of the front-end electronics and is a function of signal height, which might lead to systematic defects of systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which likewise have only recently been formulated. The current paper presents an experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.
AzTEC/ASTE 1.1 mm Deep Surveys: Number Counts and Clustering of Millimeter-bright Galaxies
NASA Astrophysics Data System (ADS)
Hatsukade, B.
2011-11-01
We present results of a 1.1 mm deep survey of the AKARI Deep Field South (ADF-S) with AzTEC mounted on the Atacama Submillimetre Telescope Experiment (ASTE). We obtained a map of 0.25 deg2 area with an rms noise level of 0.32-0.71 mJy. This is one of the deepest and widest maps thus far at millimetre and submillimetre wavelengths. We uncovered 198 sources with a significance of 3.5-15.6σ, providing the largest catalog of 1.1 mm sources in a contiguous region. Most of the sources are not detected in the far-infrared bands of the AKARI satellite, suggesting that they are mostly at z ≥ 1.5 given the detection limits. We construct differential and cumulative number counts of the ADF-S, the Subaru/XMM-Newton Deep Field (SXDF), and the SSA22 field surveyed by AzTEC/ASTE, which provide currently the tightest constraints on the faint end. The integration of the differential number counts of the ADF-S finds that the contribution of 1.1 mm sources with ≥1 mJy to the cosmic infrared background (CIB) at 1.1 mm is 12-16%, suggesting that a large fraction of the CIB originates from faint sources whose number counts are not yet constrained. We estimate the cosmic star-formation rate density contributed by 1.1 mm sources with ≥1 mJy using the differential number counts and find that it is lower by about a factor of 5-10 compared to those derived from UV/optically-selected galaxies at z ~ 2-3. Clustering analyses of AzTEC sources in the ADF-S and the SXDF find that bright (>3 mJy) AzTEC sources are more strongly clustered than faint (<3 mJy) AzTEC sources, and the average mass of dark halos hosting bright AzTEC sources was calculated to be 10^13-10^14 M⊙. Comparison of the correlation length of AzTEC sources with other populations and with a bias evolution model suggests that dark halos hosting bright AzTEC sources evolve into systems of clusters in the present universe and the AzTEC sources residing in these dark halos evolve into massive elliptical galaxies located in the centers of clusters.
NASA Astrophysics Data System (ADS)
Edwards, Nathaniel S.; Conley, Jerrod C.; Reichenberger, Michael A.; Nelson, Kyle A.; Tiner, Christopher N.; Hinson, Niklas J.; Ugorowski, Philip B.; Fronk, Ryan G.; McGregor, Douglas S.
2018-06-01
The propagation of electrons through several linear pore densities of reticulated vitreous carbon (RVC) foam was studied using a Frisch-grid parallel-plate ionization chamber pressurized to 1 psig of P-10 proportional gas. The operating voltages of the electrodes contained within the Frisch-grid parallel-plate ionization chamber were defined by measuring counting curves using a collimated 241Am alpha-particle source with and without a Frisch grid. RVC foam samples with linear pore densities of 5, 10, 20, 30, 45, 80, and 100 pores per linear inch were separately positioned between the cathode and anode. Pulse-height spectra and count rates from a collimated 241Am alpha-particle source positioned between the cathode and each RVC foam sample were measured and compared to a measurement without an RVC foam sample. The Frisch grid was positioned in between the RVC foam sample and the anode. The measured pulse-height spectra were indiscernible from background and resulted in negligible net count rates for all RVC foam samples. The Frisch grid parallel-plate ionization chamber measurement results indicate that electrons do not traverse the bulk of RVC foam and consequently do not produce a pulse.
The silicon drift detector for the IXO high-time resolution spectrometer
NASA Astrophysics Data System (ADS)
Lechner, Peter; Amoros, Carine; Barret, Didier; Bodin, Pierre; Boutelier, Martin; Eckhardt, Rouven; Fiorini, Carlo; Kendziorra, Eckhard; Lacombe, Karine; Niculae, Adrian; Pouilloux, Benjamin; Pons, Roger; Rambaud, Damien; Ravera, Laurent; Schmid, Christian; Soltau, Heike; Strüder, Lothar; Tenzer, Christoph; Wilms, Jörn
2010-07-01
The High Time Resolution Spectrometer (HTRS) is one of six scientific payload instruments of the International X-ray Observatory (IXO). HTRS is dedicated to the physics of matter at extreme density and gravity and will observe the X-rays generated in the inner accretion flows around the most compact massive objects, i.e. black holes and neutron stars. The study of their timing signature and in addition the simultaneous spectroscopy of the gravitationally shifted and broadened iron line allows for probing general relativity in the strong field regime and understanding the inner structure of neutron stars. As the sources to be observed by HTRS are the brightest in the X-ray sky and the studies require good photon statistics the instrument design is driven by the capability to operate at extremely high count rates. The HTRS instrument is based on a monolithic array of Silicon Drift Detectors (SDDs) with 31 cells in a circular envelope and a sensitive volume of 4.5 cm2 × 450 μm. The SDD principle uses fast signal charge collection on an integrated amplifier by a focusing internal electrical field. It combines a large sensitive area and a small capacitance, thus facilitating good energy resolution and high count rate capability. The HTRS is specified to provide energy spectra with a resolution of 150 eV (FWHM at 6 keV) at high time resolution of 10 μsec and with high count rate capability up to a goal of 2×10^6 counts per second, corresponding to a 12 Crab equivalent source. As the HTRS is a non-imaging instrument and will target only point sources it is placed on axis but out of focus so that the spot is spread over the array of 31 SDD cells. The SDD array is logically organized in four independent 'quadrants', a dedicated 8-channel quadrant readout chip is in development.
A whole-system approach to x-ray spectroscopy in cargo inspection systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langeveld, Willem G. J.; Gozani, Tsahi; Ryge, Peter
The bremsstrahlung x-ray spectrum used in high-energy, high-intensity x-ray cargo inspection systems is attenuated and modified by the materials in the cargo in a Z-dependent way. Therefore, spectroscopy of the detected x rays yields information about the Z of the x-rayed cargo material. It has previously been shown that such Z-Spectroscopy (Z-SPEC) is possible under certain circumstances. A statistical approach, Z-SCAN (Z-determination by Statistical Count-rate ANalysis), has also been shown to be effective, and it can be used either by itself or in conjunction with Z-SPEC when the x-ray count rate is too high for individual x-ray spectroscopy. Both techniques require fast x-ray detectors and fast digitization electronics. It is desirable (and possible) to combine all techniques, including x-ray imaging of the cargo, in a single detector array, to reduce costs, weight, and overall complexity. In this paper, we take a whole-system approach to x-ray spectroscopy in x-ray cargo inspection systems, and show how the various parts interact with one another. Faster detectors and read-out electronics are beneficial for both techniques. A higher duty-factor x-ray source allows lower instantaneous count rates at the same overall x-ray intensity, improving the range of applicability of Z-SPEC in particular. Using an intensity-modulated advanced x-ray source (IMAXS) allows reducing the x-ray count rate for cargoes with higher transmission, and a stacked-detector approach may help material discrimination for the lowest attenuations. Image processing and segmentation allow derivation of results for entire objects, and subtraction of backgrounds. We discuss R&D performed under a number of different programs, showing progress made in each of the interacting subsystems. We discuss results of studies into faster scintillation detectors, including ZnO, BaF2 and PbWO4, as well as suitable photo-detectors, read-out and digitization electronics. We discuss high-duty-factor linear-accelerator x-ray sources and their associated requirements, and how such sources improve spectroscopic techniques. We further discuss how image processing techniques help in correcting for backgrounds and overlapping materials. In sum, we present an integrated picture of how to optimize a cargo inspection system for x-ray spectroscopy.
A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector
2018-01-01
This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
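The paper's attenuation model itself is not reproduced in the abstract, so the following is only a minimal sketch, assuming a simple exponential attenuation of the detected count rate with burial depth; the surface count rate and effective attenuation coefficient below are illustrative placeholders, not values from the study.

```python
import numpy as np

# Minimal sketch: invert an assumed exponential attenuation model
# C(d) = C0 * exp(-mu_eff * d) for the burial depth d.

def estimate_depth(count_rate, c0, mu_eff):
    """Estimated burial depth (cm) from the measured net count rate (cps)."""
    return np.log(c0 / count_rate) / mu_eff

# Illustrative numbers only (assumed, not taken from the paper):
c0 = 100.0      # cps expected with the source at the sand surface
mu_eff = 0.15   # effective attenuation coefficient of sand, 1/cm

for cps in [100.0, 50.0, 14.0]:
    print(f"{cps:6.1f} cps -> depth ~ {estimate_depth(cps, c0, mu_eff):.1f} cm")
```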
A Neutron Burst Associated with an Extensive Air Shower?
NASA Astrophysics Data System (ADS)
Alves, Mauro; Martin, Inacio; Shkevov, Rumen; Gusev, Anatoly; De Abreu, Alessandro
2016-07-01
A portable and compact system based on a He-3 tube (LND, USA; model 25311) with an area of approximately 250 cm² is used to record neutron count rates at ground level in the energy range of 0.025 eV to 10 MeV, in São José dos Campos, SP, Brazil (23° 12' 45" S, 45° 52' 00" W; altitude 660 m). The detector, power supply, digitizer and other hardware are housed in an air-conditioned room. The detector power supply and digitizer are not connected to the mains electricity network; a high-capacity 12-V battery is used to power the detector and digitizer. Neutron counts are accumulated continuously at 1-minute intervals. The data are stored in a PC for further analysis. On February 8, 2015, at 12 h 22 min (local time), during a period of fair weather with minimal cloud cover (< 1 okta), the neutron detector recorded a sharp (count rate = 27 neutrons/min) and brief (< 1 min) increase in the count rate. In the days before and after this event, the neutron count rate oscillated between 0 and 3 neutrons/min. Since the occurrence of this event is not related to spurious signals, malfunctioning equipment, oscillations in the mains voltage, etc., we are led to believe that the sharp increase was caused by a physical source such as an extensive air shower that occurred over the detector.
Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts
NASA Technical Reports Server (NTRS)
Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.
2010-01-01
We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ~ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV=23 and FUV=23.5, and has a limiting depth at NUV=24.5 and FUV=25.0, which corresponds to a star formation rate of 10^-3 solar masses per year at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here; conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV ~ 23 and FUV ~ 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV=22-23.5 and NUV=21.5-23 relative to previous GALEX measurements, which is not attributed to cluster member galaxies. Our galaxy counts are a better match to deeper UV counts measured with HST.
Swift J1822.3-1606: pre-outburst ROSAT limits (plus erratum)
NASA Astrophysics Data System (ADS)
Esposito, P.; Rea, N.; Israel, G. L.; Tieng, A.
2011-07-01
We report on a pre-outburst ROSAT PSPC observation of the new SGR discovered by Swift-BAT on 2011 July 14 (Cummings et al., ATel #3488). The PSPC observation was performed on 1993 September 12 for ~6.7 ks. We find a source at RA (2000) = 18 22 18.1 and Dec (2000) = -16 04 26.4, with a 5-sigma detection significance. The count rate (corrected for the PSPC PSF, sampling dead time, and vignetting) is about 0.012 counts/s.
VizieR Online Data Catalog: ChaMP. I. First X-ray source catalog (Kim+, 2004)
NASA Astrophysics Data System (ADS)
Kim, D.-W.; Cameron, R. A.; Drake, J. J.; Evans, N. R.; Freeman, P.; Gaetz, T. J.; Ghosh, H.; Green, P. J.; Harnden, F. R. Jr; Karovska, M.; Kashyap, V.; Maksym, P. W.; Ratzlaff, P. W.; Schlegel, E. M.; Silverman, J. D.; Tananbaum, H. D.; Vikhlinin, A. A.; Wilkes, B. J.; Grimes, J. P.
2004-01-01
The Chandra Multiwavelength Project (ChaMP) is a wide-area (~14 deg²) survey of serendipitous Chandra X-ray sources, aiming to establish fair statistical samples covering a wide range of characteristics (such as absorbed active galactic nuclei and high-z clusters of galaxies) at flux levels (f_X ~ 10^-15 to 10^-14 erg/s/cm²) intermediate between the Chandra deep surveys and previous missions. We present the first ChaMP catalog, which consists of 991 near on-axis, bright X-ray sources obtained from the initial sample of 62 observations. The data have been uniformly reduced and analyzed with techniques specifically developed for the ChaMP and then validated by visual examination. To assess source reliability and positional uncertainty, we perform a series of simulations and also use Chandra data to complement the simulation study. The false source detection rate is found to be as good as or better than expected for a given limiting threshold. On the other hand, the chance of missing a real source is rather complex, depending on the source counts, off-axis distance (or PSF), and background rate. The positional error (95% confidence level) is usually less than 1" for a bright source, regardless of its off-axis distance, while it can be as large as 4" for a weak source (~20 counts) at a large off-axis distance (D_off-axis > 8'). We have also developed new methods to find spatially extended or temporally variable sources, and those sources are listed in the catalog. (5 data files).
THE HAWAII SCUBA-2 LENSING CLUSTER SURVEY: NUMBER COUNTS AND SUBMILLIMETER FLUX RATIOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Li-Yen; Cowie, Lennox L.; Barger, Amy J.
2016-09-20
We present deep number counts at 450 and 850 μm using the SCUBA-2 camera on the James Clerk Maxwell Telescope. We combine data for six lensing cluster fields and three blank fields to measure the counts over a wide flux range at each wavelength. Thanks to the lensing magnification, our measurements extend to fluxes fainter than 1 mJy and 0.2 mJy at 450 μm and 850 μm, respectively. Our combined data highly constrain the faint end of the number counts. Integrating our counts shows that the majority of the extragalactic background light (EBL) at each wavelength is contributed by faint sources with L_IR < 10^12 L_⊙, corresponding to luminous infrared galaxies (LIRGs) or normal galaxies. By comparing our result with the 500 μm stacking of K-selected sources from the literature, we conclude that the K-selected LIRGs and normal galaxies still cannot fully account for the EBL that originates from sources with L_IR < 10^12 L_⊙. This suggests that many faint submillimeter galaxies may not be included in the UV star formation history. We also explore the submillimeter flux ratio between the two bands for our 450 μm and 850 μm selected sources. At 850 μm, we find a clear relation between the flux ratio and the observed flux. This relation can be explained by a redshift evolution, where galaxies at higher redshifts have higher luminosities and star formation rates. In contrast, at 450 μm, we do not see a clear relation between the flux ratio and the observed flux.
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
Lockhart, M.; Henzlova, D.; Croft, S.; ...
2017-09-20
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead-time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead-time-corrected results can then be used to assay SNM by inverting a set of extended point model equations which have also only recently been formulated. Here, we discuss and present an experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at Los Alamos National Laboratory (LANL) and the count rates were extracted up to fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates encountered in practical applications.
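The DCF correction for higher-order multiplicity rates is considerably more involved than a singles-rate correction and is not reproduced in the abstract. As context only, the sketch below shows the two textbook singles-rate dead-time models (non-paralyzable and paralyzable); the dead-time value is an assumed placeholder and this is not the DCF algorithm itself.

```python
# Generic singles-rate dead-time corrections, for illustration only.
import numpy as np
from scipy.optimize import brentq

TAU = 100e-9  # assumed effective dead time per event, seconds (placeholder)

def true_rate_nonparalyzable(measured, tau=TAU):
    # m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)
    return measured / (1.0 - measured * tau)

def true_rate_paralyzable(measured, tau=TAU):
    # m = n * exp(-n*tau); invert numerically on the low-rate branch
    return brentq(lambda n: n * np.exp(-n * tau) - measured, measured, 1.0 / tau)

m = 2.0e5  # measured counts per second (illustrative)
print(true_rate_nonparalyzable(m), true_rate_paralyzable(m))
```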
X-ray imaging detectors for synchrotron and XFEL sources
Hatsui, Takaki; Graafsma, Heinz
2015-01-01
Current trends for X-ray imaging detectors based on hybrid and monolithic detector technologies are reviewed. Hybrid detectors with photon-counting pixels have proven to be very powerful tools at synchrotrons. Recent developments continue to improve their performance, especially for higher spatial resolution at higher count rates with higher frame rates. Recent developments for X-ray free-electron laser (XFEL) experiments provide high-frame-rate integrating detectors with both high sensitivity and high peak signal. Similar performance improvements are sought in monolithic detectors. The monolithic approach also offers a lower noise floor, which is required for the detection of soft X-ray photons. The link between technology development and detector performance is described briefly in the context of potential future capabilities for X-ray imaging detectors. PMID:25995846
The first Extreme Ultraviolet Explorer source catalog
NASA Technical Reports Server (NTRS)
Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.
1994-01-01
The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was simultaneously conducted. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.
Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.
Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S
2001-04-01
Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design coupled with conventional detector electronics and the traditional Anger-positioning algorithm hinders higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, or dead-time loss. At these counting rates, multiple pileup events (>= 3 events piling together) were the predominant occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits could be implemented to further improve the imaging count rates several times beyond those shown here. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging, as well as the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.
Processing of higher count rates in Troitsk nu-mass experiment
NASA Astrophysics Data System (ADS)
Nozik, Alexander; Chernov, Vaslily
2018-04-01
In this article we give a short outline of the current status of the search for sterile neutrinos with masses up to 4 keV in the Troitsk nu-mass experiment. We also discuss the major sources of systematic uncertainty and methods to reduce them.
NASA Astrophysics Data System (ADS)
Li, Zheng; Guan, Jun; Yang, Xudong; Lin, Chao-Hsin
2014-06-01
Airborne particles are an important type of air pollutant in aircraft cabins. Finding the sources of particles is conducive to taking appropriate measures to remove them. In this study, measurements of the concentration and size distribution of particles larger than 0.3 μm (PM>0.3) were made on nine short-haul flights from September 2012 to March 2013. Particle counts in supply air and breathing-zone air were both obtained. Results indicate that the number concentrations of particles ranged from 3.6 × 10² counts L⁻¹ to 1.2 × 10⁵ counts L⁻¹ in supply air and breathing-zone air, and that they generally first decreased and then increased over the flight duration. Peaks of particle concentration were found at the climbing, descending, and cruising phases in several flights. Percentages of particle concentration in the breathing zone contributed by the bleed air (originating from outside) and cabin interior sources were calculated. The bleed air ratios, outside airflow rates and total airflow rates were calculated by using carbon dioxide as a ventilation tracer in five of the nine flights. The calculated results indicate that PM>0.3 in the breathing zone mainly came from unfiltered bleed air, especially for particle sizes from 0.3 to 2.0 μm. For particles larger than 2.0 μm, contributions from the bleed air and cabin interior were both important. The results would be useful for developing better cabin air quality control strategies.
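The study's exact tracer calculation is not given in the abstract; the following is a minimal sketch of a two-stream mixing balance that uses CO2 as a ventilation tracer to estimate the bleed (outside) air fraction of the supply air. The function name and the example concentrations are assumptions for illustration, not values from the flights.

```python
# Two-stream mixing balance with CO2 as tracer (sketch under stated assumptions):
# supply = f * outside + (1 - f) * recirculated cabin air, CO2 conserved in mixing.

def bleed_air_fraction(c_cabin, c_supply, c_outside=400.0):
    """Fraction of supply air that is bleed (outside) air, from CO2 in ppm."""
    return (c_cabin - c_supply) / (c_cabin - c_outside)

c_cabin, c_supply = 1500.0, 1000.0   # ppm, illustrative only
f = bleed_air_fraction(c_cabin, c_supply)
print(f"bleed air ratio ~ {f:.2f}, recirculated fraction ~ {1 - f:.2f}")
```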
THE CHANDRA SURVEY OF THE COSMOS FIELD. II. SOURCE DETECTION AND PHOTOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puccetti, S.; Vignali, C.; Cappelluti, N.
2009-12-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that covers the central contiguous ~0.92 deg² of the COSMOS field. C-COSMOS is the result of a complex tiling, with every position being observed in up to six overlapping pointings (four overlapping pointings in most of the central ~0.45 deg² area with the best exposure, and two overlapping pointings in most of the surrounding area, covering an additional ~0.47 deg²). Therefore, the full exploitation of the C-COSMOS data requires a dedicated and accurate analysis focused on three main issues: (1) maximizing the sensitivity when the point-spread function (PSF) changes strongly among different observations of the same source (from ~1 arcsec up to ~10 arcsec half-power radius); (2) resolving close pairs; and (3) obtaining the best source localization and count rate. We present here our treatment of four key analysis items: source detection, localization, photometry, and survey sensitivity. Our final procedure consists of two steps: (1) a wavelet detection algorithm to find source candidates and (2) a maximum likelihood PSF fitting algorithm to evaluate the source count rates and the probability that each source candidate is a fluctuation of the background. We discuss the main characteristics of this procedure, which was the result of detailed comparisons between different detection algorithms and photometry tools, calibrated with extensive and dedicated simulations.
High event rate ROICs (HEROICs) for astronomical UV photon counting detectors
NASA Astrophysics Data System (ADS)
Harwit, Alex; France, Kevin; Argabright, Vic; Franka, Steve; Freymiller, Ed; Ebbets, Dennis
2014-07-01
The next generation of astronomical photocathode / microchannel plate based UV photon counting detectors will overcome existing count rate limitations by replacing the anode arrays and external cabled electronics with anode arrays integrated into imaging Read Out Integrated Circuits (ROICs). We have fabricated a High Event Rate ROIC (HEROIC) consisting of a 32 by 32 array of 55 μm square pixels on a 60 μm pitch. The pixel sensitivity (threshold) has been designed to be globally programmable between 1 × 10³ and 1 × 10⁶ electrons. To achieve the sensitivity of 1 × 10³ electrons, parasitic capacitances had to be minimized, and this was achieved by fabricating the ROIC in a 65 nm CMOS process. The ROIC has been designed to support pixel counts up to 4096 events per integration period at rates up to 1 MHz per pixel. Integration time periods can be controlled via an external signal with a time resolution of less than 1 microsecond, enabling temporally resolved imaging and spectroscopy of astronomical sources. An electrical injection port is provided to verify the functionality and performance of each ROIC prior to vacuum integration with a photocathode and microchannel plate amplifier. Test results on the first ROICs using the electrical injection port demonstrate that sensitivities between 3 × 10³ and 4 × 10⁵ electrons are achieved. A number of fixes are identified for a re-spin of this ROIC.
NASA Astrophysics Data System (ADS)
Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.
2017-07-01
The uncertainties and stability of a neutron-sensitive MCP/Timepix detector operating in event timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant contribution to the uncertainty arises from counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible, based on error propagation, even when the pixel occupation probability exceeds 50%. Additionally, we have taken the multiple counting effect into account in the treatment of the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to ageing effects of the current microchannel plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.
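As a hedged illustration of the error-propagation argument, the sketch below uses one common form of an overlap (pile-up) correction for frame-based event counting, in which the measured pixel occupation probability p is converted to a true mean events-per-frame via -ln(1-p); the detector's actual correction procedure may differ.

```python
import numpy as np

def corrected_counts(hits, frames):
    """Estimate true mean events/frame from a measured pixel occupation."""
    p = hits / frames                           # measured occupation probability
    lam = -np.log(1.0 - p)                      # assumed Poisson overlap correction
    sigma_p = np.sqrt(p * (1.0 - p) / frames)   # binomial counting error on p
    sigma_lam = sigma_p / (1.0 - p)             # first-order error propagation
    return lam, sigma_lam

lam, err = corrected_counts(hits=6_000, frames=10_000)   # p = 0.6, i.e. > 50%
print(f"lambda = {lam:.3f} +/- {err:.3f} events/frame")
```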
Measurement of nitrogen in the body using a commercial PGNAA system--phantom experiments.
Chichester, D L; Empey, E
2004-01-01
An industrial prompt-gamma neutron activation analysis (PGNAA) system, originally designed for real-time elemental analysis of bulk coal on a conveyor belt, has been studied to examine the feasibility of using such a system for body composition analysis. Experiments were conducted to measure nitrogen in a simple, tissue-equivalent phantom comprising 2.7 wt% nitrogen. The neutron source for these experiments was 365 MBq (18.38 microg) of 252Cf located within an engineered low-Z moderator, and it yielded a dose rate in the measurement position of 3.91 mSv/h; data were collected using a 2780 cm³ NaI(Tl) cylindrical detector with a digital signal processor and a 512-channel MCA. Source, moderator and detector geometries were unaltered from the system's standard configuration, where they have been optimized for considerations such as neutron thermalization, measurement sensitivity and uniformity, background radiation and external dose minimization. Based on the net counts in the 10.8 MeV PGNAA nitrogen photopeak and its escape peaks, the dose-dependent nitrogen count rate was 11,600 counts/mSv, with an uncertainty of 3.0% after 0.32 mSv (4.9 min), 2.0% after 0.74 mSv (11.4 min) and 1.0% after 3.02 mSv (46.4 min).
NASA Astrophysics Data System (ADS)
Steadman, Roger; Herrmann, Christoph; Livne, Amir
2017-08-01
Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct-converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count rate), the ChromAIX ASIC has been previously reported, showing 13.5 Mcps/pixel (150 Mcps/mm² incident) [3]. The ChromAIX has been improved to offer the possibility of a large-area-coverage detector and increased overall performance. The new ASIC is called ChromAIX2 and delivers count rates exceeding 15 Mcps/pixel with an rms noise performance of approximately 260 e⁻. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tileable on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and shaper) and a number of test features allowing the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds is also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low-frequency drifts), which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.
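The distinction between incident and observed ("Poissonian observed") count rates can be illustrated with an idealized paralyzable counting model; the ChromAIX2 pulse processing differs in detail, so the resolving time and numbers below are purely illustrative assumptions.

```python
# Idealized paralyzable counter: observed rate m = n * exp(-n * tau).
import numpy as np

def observed_rate(incident_cps, dead_time_s):
    return incident_cps * np.exp(-incident_cps * dead_time_s)

tau = 20e-9   # assumed effective per-pulse resolving time, seconds
for n in [1e6, 5e6, 1e7, 2e7, 5e7]:
    print(f"incident {n:,.0f} cps -> observed {observed_rate(n, tau):,.0f} cps")
```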
NASA Astrophysics Data System (ADS)
Boutet, J.; Debourdeau, M.; Laidevant, A.; Hervé, L.; Dinten, J.-M.
2010-02-01
Finding a way to combine ultrasound and fluorescence optical imaging on an endorectal probe may improve early detection of prostate cancer. A trans-rectal probe adapted to fluorescence diffuse optical tomography measurements was developed by our team. This probe is based on a pulsed NIR laser source, an optical fiber network and a time-resolved detection system. A reconstruction algorithm was used to help locate and quantify fluorescent prostate tumors. In this study, two different kinds of time-resolved detectors are compared: a High Rate Imaging system (HRI) and a photon counting system. The HRI is based on an intensified multichannel plate and a CCD camera. The temporal resolution is obtained through gating of the HRI. Despite a low temporal resolution (300 ps), this system allows simultaneous acquisition of the signal from a large number of detection fibers. In the photon counting setup, 4 photomultipliers are connected to a Time Correlated Single Photon Counting (TCSPC) board, providing a better temporal resolution (0.1 ps) at the expense of a limited number of detection fibers (4). Finally, we show that the limited number of detection fibers of the photon counting setup is sufficient for good localization and dramatically improves the overall acquisition time. The photon counting approach is then validated through the localization of fluorescent inclusions in a prostate-mimicking phantom.
NASA Astrophysics Data System (ADS)
Degenaar, N.; Wijnands, R.; Reynolds, M. T.; Miller, J. M.; Kennea, J. A.
2017-10-01
Daily Swift/XRT monitoring observations of the Galactic center (Degenaar et al. 2015) have picked up renewed activity of the transient neutron star low-mass X-ray binary and thermonuclear X-ray burster GRS 1741-2853, which is located 10 arcmin NW of Sgr A*. During a 1 ks PC-mode observation performed on 2017 October 11 the source is detected at a net count rate of 0.015 counts/s and it has been steadily brightening since, indicating the onset of a new accretion outburst.
Data indexing techniques for the EUVE all-sky survey
NASA Technical Reports Server (NTRS)
Lewis, J.; Saba, V.; Dobson, C.
1992-01-01
This poster describes techniques developed for manipulating large full-sky data sets for the Extreme Ultraviolet Explorer project. The authors have adapted the quadrilateralized spherical cube indexing algorithm to efficiently store and process several types of large data sets, such as full-sky maps of photon counts, exposure time, and count rates. A variation of this scheme is used to index sparser data such as individual photon events and viewing times for selected areas of the sky, which are eventually used to create EUVE source catalogs.
Low cost digital electronics for isotope analysis with microcalorimeters - final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Hennig
2006-09-11
The overall goal of the Phase I research was to demonstrate that the digital readout electronics and filter algorithms developed by XIA for use with HPGe detectors can be adapted to high-precision, cryogenic gamma detectors (microcalorimeters) and not only match the current state of the art in terms of energy resolution, but do so at a significantly reduced cost. This would make it economically feasible to instrument large arrays of microcalorimeters and would also allow automation of the setup, calibration and operation of large numbers of channels through software. We expected, and have demonstrated, that this approach would furthermore allow much higher count rates than the optimum filter algorithms currently used. In particular, in measurements with a microcalorimeter at LLNL, the adapted Pixie-16 spectrometer achieved an energy resolution of 0.062%, significantly better than the targeted resolution of 0.1% in the Phase I proposal and easily matching resolutions obtained with LLNL readout electronics and optimum filtering (0.066%). The theoretical maximum output count rate for the filter settings used to achieve this resolution is about 120 cps. If the filter is adjusted for maximum throughput with an energy resolution of 0.1% or better, rates of 260 cps are possible. This is 20-50 times higher than the maximum count rate of about 5 cps with optimum filters for this detector. While microcalorimeter measurements were limited to count rates of ~1.3 cps due to the strength of the available sources, pulser measurements demonstrated that the measured energy resolutions were independent of counting rate up to output counting rates well in excess of 200 cps. We also developed a preliminary hardware design of a spectrometer module, consisting of a digital processing core and several input options that can be implemented on daughter boards. Depending upon the daughter board, the total parts cost per channel ranged between $12 and $27, resulting in projected product prices of $80 to $160 per channel. This demonstrates that a price of $100 per channel is economically very feasible for large microcalorimeter arrays.
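To make the resolution-versus-throughput trade-off concrete, the sketch below shapes a noisy step-like pulse with a generic trapezoidal (moving-average difference) filter; longer rise times average more noise but increase the processing time per pulse and thus limit throughput. This is a textbook filter written for illustration under stated assumptions, not the proprietary Pixie-16 algorithm, and it omits the decay/pole-zero correction needed for real preamplifier signals.

```python
import numpy as np

def trapezoidal_filter(v, rise, flat):
    """Difference of two delayed moving averages: trapezoidal response
    to step-like pulses (no decay correction)."""
    ma = np.convolve(v, np.ones(rise) / rise, mode="full")[: len(v)]
    delayed = np.concatenate([np.zeros(rise + flat), ma])[: len(v)]
    return ma - delayed

rng = np.random.default_rng(0)
t = np.arange(4000)
pulse = np.where(t >= 2000, 1.0, 0.0) + rng.normal(0, 0.05, t.size)  # unit step + noise
shaped = trapezoidal_filter(pulse, rise=200, flat=50)
print("pulse height estimate:", shaped.max())   # ~1.0 for a unit step
```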
Advanced analysis techniques for uranium assay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.
2001-01-01
Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. Active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.
NASA Astrophysics Data System (ADS)
Everett, Samantha
2010-10-01
A transmission curve experiment was carried out to measure the range of beta particles in aluminum in the health physics laboratory located on the campus of Texas Southern University. The transmission count rate through aluminum for varying absorber thicknesses was measured using beta particles emitted from a low-activity (~1 μCi) Sr-90 source. The count rate intensity was recorded using a Geiger-Mueller tube (SGC N210/BNC) with an active volume of 61 cm³, to within a systematic detection accuracy of a few percent. We compared these data with a realistic simulation of the experimental setup using the Geant4 Monte Carlo toolkit (version 9.3). The purpose of this study was to benchmark our Monte Carlo for future experiments as part of a more comprehensive research program. Transmission curves were simulated based on the standard and low-energy electromagnetic physics models, using the radioactive decay module for the primary electron energy distribution. To ensure the validity of our measurements, linear extrapolation techniques were employed to determine the in-medium beta particle range from the measured data; the range was found to be 1.87 g/cm² (~0.693 cm), in agreement with literature values. We found that the general shape of the measured data and simulated curves was comparable; however, a discrepancy in the relative count rates was observed. The origin of this disagreement is still under investigation.
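A minimal sketch of one common linear-extrapolation approach follows: fit the roughly linear portion of log(count rate) versus absorber areal density and extrapolate to the background level. The data values and background level are illustrative assumptions and do not reproduce the 1.87 g/cm² result above.

```python
import numpy as np

thickness = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])   # g/cm^2 of Al (assumed)
rate = np.array([900.0, 420.0, 190.0, 85.0, 38.0, 17.0])  # net cps (assumed)
background = 0.5                                         # cps (assumed)

# Fit log(rate) vs thickness, then extrapolate to the background level.
slope, intercept = np.polyfit(thickness, np.log(rate), 1)
range_estimate = (np.log(background) - intercept) / slope
print(f"extrapolated beta range ~ {range_estimate:.2f} g/cm^2")
```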
van Sighem, Ard; Sabin, Caroline A.; Phillips, Andrew N.
2015-01-01
Background: It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). Methods: The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms, which lead to presentation for care and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on the number of new HIV diagnoses with HIV-related symptoms and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. Results: For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150–199 cells/mm3, then, since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived in people with undiagnosed HIV with CD4 count 150–199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29–100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. Conclusions: The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place, and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations, as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART. PMID:25768925
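The estimator in the worked example (13/0.216) is simple enough to sketch directly; the confidence interval below is produced by a plain Poisson resampling of the observed diagnoses, which is one possible simulation scheme and not necessarily the exact one used in the paper.

```python
import numpy as np

def undiagnosed_estimate(n_diag, rate, n_sims=100_000, seed=1):
    """Point estimate and simulated 95% CI for undiagnosed person-years in a
    CD4 stratum: observed symptomatic diagnoses / CD4-specific symptom rate."""
    rng = np.random.default_rng(seed)
    point = n_diag / rate
    sims = rng.poisson(n_diag, n_sims) / rate   # Poisson sampling variability
    lo, hi = np.percentile(sims, [2.5, 97.5])
    return point, lo, hi

point, lo, hi = undiagnosed_estimate(n_diag=13, rate=0.216)
print(f"estimate ~ {point:.0f} (95% CI roughly {lo:.0f}-{hi:.0f})")
```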
Abbene, L; Gerardi, G; Principato, F; Del Sordo, S; Ienzi, R; Raso, G
2010-12-01
Direct measurement of mammographic x-ray spectra under clinical conditions is a difficult task due to the high fluence rate of the x-ray beams as well as the limits in the development of high-resolution detection systems for high-counting-rate environments. In this work we present a detection system, based on a CdTe detector and an innovative digital pulse processing (DPP) system, for high-rate x-ray spectroscopy in mammography. The DPP system performs a digital pile-up inspection and a digital pulse height analysis of the detector signals, digitized through a 14-bit, 100 MHz digitizer, for x-ray spectroscopy even at high photon counting rates. We investigated the response of the digital detection system both at low (150 cps) and at high photon counting rates (up to 500 kcps) by using monoenergetic x-ray sources and a nonclinical molybdenum anode x-ray tube. Clinical molybdenum x-ray spectrum measurements were also performed by using a pinhole collimator and a custom alignment device. The detection system shows excellent performance up to 512 kcps, with an energy resolution of 4.08% FWHM at 22.1 keV. Despite the high photon counting rate (up to 453 kcps), the molybdenum x-ray spectra measured under clinical conditions are characterized by a low number of pile-up events. The agreement between the attenuation curves and the half value layer values obtained from the measured spectra, simulated spectra, and the exposure values directly measured with an ionization chamber also shows the accuracy of the measurements. These results make the proposed detection system a very attractive tool for both laboratory research and advanced quality controls in mammography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jafari Salim, A., E-mail: ajafaris@uwaterloo.ca; Eftekharian, A.; University of Waterloo, Waterloo, Ontario N2L 3G1
In this paper, we theoretically show that a multi-layer superconducting nanowire single-photon detector (SNSPD) is capable of approaching the characteristics of an ideal SNSPD in terms of quantum efficiency, dark count rate, and bandwidth. A multi-layer structure improves the performance in two ways. First, the potential barrier for thermally activated vortex crossing, which is the major source of dark counts and of the reduction of the critical current in SNSPDs, is elevated. In a multi-layer SNSPD, a vortex is made of 2D pancake vortices that form a stack. It will be shown that the stack of pancake vortices effectively experiences a larger potential barrier compared to a vortex in a single-layer SNSPD. This leads to an increase in the experimental critical current as well as a significant decrease in the dark count rate. In consequence, an increase in the quantum efficiency for photons of the same energy or an increase in the sensitivity to photons of lower energy is achieved. Second, a multi-layer structure improves the efficiency of single-photon absorption by increasing the effective optical thickness without compromising the single-photon sensitivity.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade were consistent with the known intermediate-state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual-head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium-energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since the sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window in the range of 200-500 ns maximizes the NECR at clinically used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically relevant activities. Work is ongoing to assess useful clinical applications.
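For orientation, a minimal sketch of the NECR figure of merit follows. With in-air sources (no scatter), NECR = T^2 / (T + R), where T is the true and R the random coincidence rate; the 2*tau*S1*S2 estimate of randoms is a standard approximation and the singles and true rates below are assumed, not taken from the simulation.

```python
def random_rate(singles_1, singles_2, window_s):
    """Standard randoms estimate for a coincidence window of width window_s."""
    return 2.0 * window_s * singles_1 * singles_2

def necr(trues, randoms):
    """Noise-equivalent count rate with no scatter component."""
    return trues**2 / (trues + randoms)

s1 = s2 = 5.0e4   # assumed singles rates in the two energy windows, cps
t = 1.0e3         # assumed true coincidence rate, cps
for window_ns in (200, 350, 500):
    r = random_rate(s1, s2, window_ns * 1e-9)
    print(f"{window_ns} ns window: randoms ~ {r:.0f} cps, NECR ~ {necr(t, r):.0f} cps")
```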
Bondi Accretion and the Problem of the Missing Isolated Neutron Stars
NASA Technical Reports Server (NTRS)
Perna, Rosalba; Narayan, Ramesh; Rybicki, George; Stella, Luigi; Treves, Aldo
2003-01-01
A large number of neutron stars (NSs), approximately 10^9, populate the Galaxy, but only a tiny fraction of them is observable during the short radio pulsar lifetime. The majority of these isolated NSs, too cold to be detectable by their own thermal emission, should be visible in X-rays as a result of accretion from the interstellar medium. The ROSAT All-Sky Survey has, however, shown that such accreting isolated NSs are very elusive: only a few tentative candidates have been identified, contrary to theoretical predictions that up to several thousand should be seen. We suggest that the fundamental reason for this discrepancy lies in the use of the standard Bondi formula to estimate the accretion rates. We compute the expected source counts using updated estimates of the pulsar velocity distribution, realistic hydrogen atmosphere spectra, and a modified expression for the Bondi accretion rate, as suggested by recent MHD simulations and supported by direct observations in the case of accretion around supermassive black holes in nearby galaxies and in our own. We find that, whereas the inclusion of atmospheric spectra partly compensates for the reduction in the counts due to the higher mean velocities of the new distribution, the modified Bondi formula dramatically suppresses the source counts. The new predictions are consistent with a null detection at the ROSAT sensitivity.
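The sketch below evaluates a common form of the standard Bondi(-Hoyle-Lyttleton) rate, Mdot = 4*pi*G^2*M^2*rho / (v^2 + c_s^2)^(3/2), with a generic multiplicative suppression factor standing in for the kind of modification discussed above; the paper's specific modified expression and parameter values are not reproduced here, and the numbers are illustrative.

```python
import numpy as np

G = 6.674e-8       # gravitational constant, cgs
M_SUN = 1.989e33   # solar mass, g
M_P = 1.673e-24    # proton mass, g

def bondi_rate(mass_g, n_H_cm3, v_kms, cs_kms, suppression=1.0):
    """Bondi-Hoyle-Lyttleton accretion rate in g/s, times a suppression factor."""
    rho = n_H_cm3 * M_P                       # ambient mass density, g/cm^3
    v2 = (v_kms**2 + cs_kms**2) * 1e10        # (cm/s)^2
    mdot = 4.0 * np.pi * G**2 * mass_g**2 * rho / v2**1.5
    return suppression * mdot

# Illustrative isolated-NS parameters (assumed):
mdot = bondi_rate(1.4 * M_SUN, n_H_cm3=1.0, v_kms=40.0, cs_kms=10.0, suppression=0.01)
print(f"accretion rate ~ {mdot:.2e} g/s")
```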
The ROSAT All-Sky Survey view of the Large Magellanic Cloud (LMC)
NASA Technical Reports Server (NTRS)
Pietsch, W.; Denner, K.; Kahabka, P.; Pakull, M.; Schaeidt, S.
1996-01-01
During the ROSAT all-sky survey, 516 X-ray sources were detected in a field centered on the Large Magellanic Cloud (LMC). The field was covered from July 1990 to January 1991. The X-ray parameters of the sources, including position, count rates, hardness ratios, extent, and time variability during the observations, are discussed. Identifications with objects at optical, radio and infrared wavelengths allow the LMC candidates to be separated from foreground stars and background objects.
Cho, Hyo-Min; Barber, William C.; Ding, Huanjun; Iwanczyk, Jan S.; Molloi, Sabee
2014-01-01
Purpose: The possible clinical applications which can be performed using a newly developed detector depend on the detector's characteristic performance in a number of metrics including the dynamic range, resolution, uniformity, and stability. The authors have evaluated a prototype energy resolved fast photon counting x-ray detector based on a silicon (Si) strip sensor used in an edge-on geometry with an application specific integrated circuit to record the number of x-rays and their energies at high flux and fast frame rates. The investigated detector was integrated with a dedicated breast spectral computed tomography (CT) system to make use of the detector's high spatial and energy resolution and low noise performance under conditions suitable for clinical breast imaging. The aim of this article is to investigate the intrinsic characteristics of the detector, in terms of maximum output count rate, spatial and energy resolution, and noise performance of the imaging system. Methods: The maximum output count rate was obtained with a 50 W x-ray tube with a maximum continuous output of 50 kVp at 1.0 mA. A 109Cd source, with a characteristic x-ray peak at 22 keV from Ag, was used to measure the energy resolution of the detector. The axial plane modulation transfer function (MTF) was measured using a 67 μm diameter tungsten wire. The two-dimensional (2D) noise power spectrum (NPS) was measured using flat-field images and noise equivalent quanta (NEQ) were calculated using the MTF and NPS results. The image quality parameters were studied as a function of various radiation doses and reconstruction filters. The one-dimensional (1D) NPS was used to investigate the effect of electronic noise elimination by varying the minimum energy threshold. Results: A maximum output count rate of 100 million counts per second per square millimeter (cps/mm²) has been obtained (1 million cps per 100 × 100 μm pixel). The electronic noise floor was less than 4 keV. The energy resolution measured with the 22 keV photons from a 109Cd source was less than 9%. A reduction of image noise was shown at all spatial frequencies in the 1D NPS as a result of the elimination of the electronic noise. The spatial resolution was measured at just above 5 line pairs per mm (lp/mm), where 10% MTF corresponded to 5.4 mm⁻¹. The 2D NPS and NEQ show a low noise floor and a linear dependence on dose. The reconstruction filter choice affected both the MTF and NPS results, but had a weak effect on the NEQ. Conclusions: The prototype energy resolved photon counting Si strip detector can offer superior imaging performance for dedicated breast CT as compared to a conventional energy-integrating detector due to its high output count rate, high spatial and energy resolution, and low noise characteristics, which are essential characteristics for spectral breast CT imaging. PMID:25186390
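The NEQ combination of the two measured quantities follows the standard relation NEQ(f) = MTF(f)^2 / NNPS(f), where NNPS is the noise power spectrum normalized by the squared mean signal. The sketch below only illustrates that relation; the curves are placeholder functions standing in for measured radial-frequency data, not the detector's actual MTF or NPS.

```python
import numpy as np

freq = np.linspace(0.05, 5.0, 100)       # spatial frequency, 1/mm
mtf = np.exp(-((freq / 3.0) ** 2))       # placeholder MTF curve (assumed shape)
nnps = 1e-5 * (1.0 + 0.2 / freq)         # placeholder normalized NPS, mm^2 (assumed)

neq = mtf**2 / nnps                      # noise equivalent quanta per mm^2
print("NEQ at 1 lp/mm ~", np.interp(1.0, freq, neq))
```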
Deep 3 GHz number counts from a P(D) fluctuation analysis
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam⁻¹, and a radio background temperature ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
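A toy Monte Carlo conveys the basic idea behind a P(D) analysis: draw sources from an assumed power-law differential count, scatter them over a map, convolve with the beam, and examine the resulting pixel (deflection) distribution. The count normalization, flux limits, and beam parameters below are assumptions for illustration; the paper's likelihood fit, error analysis, and treatment of wide-band interferometric data are far more sophisticated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
npix, pix_arcsec, fwhm_arcsec = 1024, 2.0, 8.0
s_min, s_max, slope, density_per_deg2 = 1e-7, 1e-3, -1.7, 2.0e5   # assumed

area_deg2 = (npix * pix_arcsec / 3600.0) ** 2
n_src = rng.poisson(density_per_deg2 * area_deg2)

# Inverse-transform sampling of fluxes from dN/dS ∝ S^slope
a = slope + 1.0
u = rng.random(n_src)
flux = (s_min**a + u * (s_max**a - s_min**a)) ** (1.0 / a)

# Drop sources onto a map and convolve with a Gaussian beam (Jy/beam units)
sky = np.zeros((npix, npix))
x, y = rng.integers(0, npix, n_src), rng.integers(0, npix, n_src)
np.add.at(sky, (y, x), flux)
sigma_pix = fwhm_arcsec / pix_arcsec / 2.355
sky = gaussian_filter(sky, sigma_pix) * 2.0 * np.pi * sigma_pix**2

print("rms confusion ~ %.2e Jy/beam (toy model)" % sky.std())
```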
NASA Astrophysics Data System (ADS)
Englander, J. G.; Brodrick, P. G.; Brandt, A. R.
2015-12-01
Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, or inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities using high-resolution spatial imagery. This method can help estimate venting and fugitive emissions sources from regions where reporting of oilfield equipment is incomplete or non-existent. This work will utilize high resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km2 with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to be provide activity counts of oilfield surface equipment including tanks, pumpjacks, and holding ponds.
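As a purely hypothetical sketch of the segment-classification step described above, the example below trains a random-forest classifier on a placeholder table of per-segment image features to flag well pads; the feature names, synthetic labels, and classifier choice are assumptions for illustration, whereas the study compares several algorithms and tunes training-set size, input variables, and segmentation settings in detail.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Placeholder per-segment features: mean brightness, texture, area, rectangularity
X = rng.random((2000, 4))
y = (X[:, 2] > 0.7).astype(int)   # synthetic "well pad" labels for the demo only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```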
Nova Lup 2016 during the X-ray decay phase
NASA Astrophysics Data System (ADS)
Orio, Marina; Beardmore, Andrew; Page, Kim; Osborne, Julian
2017-09-01
Nova Lup 2016 (ASASSN-16kt; see ATel #9538, #9539, #8550, #9554, #9587, #9594 and #9644) has been regularly monitored with Swift since the observations published in ATel #10632 that revealed a luminous supersoft X-ray source with a peak XRT count rate of 61.1 ± 0.1 cts/s on 2017/2/22.
NASA Technical Reports Server (NTRS)
Tzanavaris, P.; Gallagher, S. C.; Hornschemeier, A. E.; Fedotov, K.; Eracleous, M.; Brandt, W. N.; Desjardins, T. D.; Charlton, J. C.; Gronwall, C.
2014-01-01
We present Chandra X-ray point source catalogs for 9 Hickson Compact Groups (HCGs, 37 galaxies) at distances of 34-89 Mpc. We perform detailed X-ray point source detection and photometry and interpret the point source population by means of simulated hardness ratios. We thus estimate X-ray luminosities (L_X) for all sources, most of which are too weak for reliable spectral fitting. For all sources, we provide catalogs with counts, count rates, power-law indices (Γ), hardness ratios, and L_X, in the full (0.5-8.0 keV), soft (0.5-2.0 keV), and hard (2.0-8.0 keV) bands. We use optical emission-line ratios from the literature to re-classify 24 galaxies as star-forming, accreting onto a supermassive black hole (AGNs), transition objects, or low-ionization nuclear emission regions. Two-thirds of our galaxies have nuclear X-ray sources with Swift/UVOT counterparts. Two nuclei have L_X,0.5-8.0 keV > 10^42 erg s^-1, are strong multi-wavelength active galactic nuclei (AGNs), and follow the known αOX-νLν (nearUV) correlation for strong AGNs. Otherwise, most nuclei are X-ray faint, consistent with either a low-luminosity AGN or a nuclear X-ray binary population, and fall in the 'non-AGN locus' in αOX-νLν (nearUV) space, which also hosts other normal galaxies. Our results suggest that HCG X-ray nuclei in high specific star formation rate spiral galaxies are likely dominated by star formation, while those with low specific star formation rates in earlier types likely harbor a weak AGN. The AGN fraction in HCG galaxies with M_R ≤ -20 and L_X,0.5-8.0 keV ≥ 10^41 erg s^-1 is 0.08^{+0.35}_{-0.01}, somewhat higher than the ~5% fraction in galaxy clusters.
NASA Astrophysics Data System (ADS)
Tzanavaris, P.; Gallagher, S. C.; Hornschemeier, A. E.; Fedotov, K.; Eracleous, M.; Brandt, W. N.; Desjardins, T. D.; Charlton, J. C.; Gronwall, C.
2014-05-01
We present Chandra X-ray point source catalogs for 9 Hickson Compact Groups (HCGs, 37 galaxies) at distances of 34-89 Mpc. We perform detailed X-ray point source detection and photometry and interpret the point source population by means of simulated hardness ratios. We thus estimate X-ray luminosities (L_X) for all sources, most of which are too weak for reliable spectral fitting. For all sources, we provide catalogs with counts, count rates, power-law indices (Γ), hardness ratios, and L_X, in the full (0.5-8.0 keV), soft (0.5-2.0 keV), and hard (2.0-8.0 keV) bands. We use optical emission-line ratios from the literature to re-classify 24 galaxies as star-forming, accreting onto a supermassive black hole (AGNs), transition objects, or low-ionization nuclear emission regions. Two-thirds of our galaxies have nuclear X-ray sources with Swift/UVOT counterparts. Two nuclei have L_X,0.5-8.0 keV > 10^42 erg s^-1, are strong multi-wavelength active galactic nuclei (AGNs), and follow the known αOX-νLν (nearUV) correlation for strong AGNs. Otherwise, most nuclei are X-ray faint, consistent with either a low-luminosity AGN or a nuclear X-ray binary population, and fall in the "non-AGN locus" in αOX-νLν (nearUV) space, which also hosts other normal galaxies. Our results suggest that HCG X-ray nuclei in high specific star formation rate spiral galaxies are likely dominated by star formation, while those with low specific star formation rates in earlier types likely harbor a weak AGN. The AGN fraction in HCG galaxies with M_R ≤ -20 and L_X,0.5-8.0 keV ≥ 10^41 erg s^-1 is 0.08^{+0.35}_{-0.01}, somewhat higher than the ~5% fraction in galaxy clusters.
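The hardness-ratio and luminosity bookkeeping used in catalogs such as this one can be sketched in a few lines; the counts-to-flux conversion factor below is a placeholder assumption, since the true value depends on the instrument response and the adopted spectral model.

```python
import numpy as np

def hardness_ratio(soft_counts, hard_counts):
    """Classical hardness ratio HR = (H - S) / (H + S); harder spectra give HR > 0."""
    s, h = float(soft_counts), float(hard_counts)
    return (h - s) / (h + s)

def luminosity(count_rate, d_mpc, conv=5e-12):
    """L_X in erg/s from a count rate (cts/s), a distance in Mpc, and an
    assumed counts-to-flux conversion factor in erg/cm^2 per count."""
    d_cm = d_mpc * 3.086e24
    flux = count_rate * conv              # erg cm^-2 s^-1
    return 4.0 * np.pi * d_cm**2 * flux

print(hardness_ratio(30, 70))             # 0.4: a relatively hard source
print(f"{luminosity(1e-3, 60):.2e} erg/s")
```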
NASA Astrophysics Data System (ADS)
Hu, Jianwei; Tobin, Stephen J.; LaFleur, Adrienne M.; Menlove, Howard O.; Swinhoe, Martyn T.
2013-11-01
Self-Interrogation Neutron Resonance Densitometry (SINRD) is one of several nondestructive assay (NDA) techniques being integrated into systems to measure spent fuel as part of the Next Generation Safeguards Initiative (NGSI) Spent Fuel Project. The NGSI Spent Fuel Project is sponsored by the US Department of Energy's National Nuclear Security Administration to measure plutonium in, and detect diversion of fuel pins from, spent nuclear fuel assemblies. SINRD shows promising capability in determining the 239Pu and 235U content in spent fuel. SINRD is a relatively low-cost and lightweight instrument, and it is easy to implement in the field. The technique makes use of the passive neutron source existing in a spent fuel assembly, and it uses ratios between the count rates collected in fission chambers that are covered with different absorbing materials. These ratios are correlated to key attributes of the spent fuel assembly, such as the total mass of 239Pu and 235U. Using count rate ratios instead of absolute count rates makes SINRD less vulnerable to systematic uncertainties. Building upon the previous research, this work focuses on the underlying physics of the SINRD technique: quantifying the individual impacts on the count rate ratios of a few important nuclides using the perturbation method; examining new correlations between count rate ratio and mass quantities based on the results of the perturbation study; quantifying the impacts on the energy windows of the filtering materials that cover the fission chambers by tallying the neutron spectra before and after the neutrons go through the filters; and identifying the most important nuclides that cause cooling-time variations in the count rate ratios. The results of these studies show that 235U content has a major impact on the SINRD signal in addition to the 239Pu content. Plutonium-241 and 241Am are the two main nuclides responsible for the variation in the count rate ratio with cooling time. In short, this work provides insights into some of the main factors that affect the performance of SINRD, and it should help improve the hardware design and the algorithm used to interpret the signal for the SINRD technique. In addition, the modeling and simulation techniques used in this work can be easily adopted for analysis of other NDA systems, especially when complex systems like spent nuclear fuel are involved. These studies were conducted at Los Alamos National Laboratory.
Jizan, Iman; Helt, L. G.; Xiong, Chunle; Collins, Matthew J.; Choi, Duk-Yong; Joon Chae, Chang; Liscidini, Marco; Steel, M. J.; Eggleton, Benjamin J.; Clark, Alex S.
2015-01-01
The growing requirement for photon pairs with specific spectral correlations in quantum optics experiments has created a demand for fast, high resolution and accurate source characterisation. A promising tool for such characterisation uses classical stimulated processes, in which an additional seed laser stimulates photon generation yielding much higher count rates, as recently demonstrated for a χ(2) integrated source in A. Eckstein et al. Laser Photon. Rev. 8, L76 (2014). In this work we extend these results to χ(3) integrated sources, directly measuring for the first time the relation between spectral correlation measurements via stimulated and spontaneous four wave mixing in an integrated optical waveguide, a silicon nanowire. We directly confirm the speed-up due to higher count rates and demonstrate that this allows additional resolution to be gained when compared to traditional coincidence measurements without any increase in measurement time. As the pump pulse duration can influence the degree of spectral correlation, all of our measurements are taken for two different pump pulse widths. This allows us to confirm that the classical stimulated process correctly captures the degree of spectral correlation regardless of pump pulse duration, and cements its place as an essential characterisation method for the development of future quantum integrated devices. PMID:26218609
Organic Scintillation Detectors for Spectroscopic Radiation Portal Monitors
NASA Astrophysics Data System (ADS)
Paff, Marc Gerrit
Thousands of radiation portal monitors have been deployed worldwide to detect and deter the smuggling of nuclear and radiological materials that could be used in nefarious acts. Radiation portal monitors are often installed at bottlenecks where large amounts of people or goods must traverse. Examples of use include scanning cargo containers at shipping ports, vehicles at border crossings, and people at high profile functions and events. Traditional radiation portal monitors contain separate detectors for passively measuring neutron and gamma ray count rates. 3He tubes embedded in polyethylene and slabs of plastic scintillators are the most common detector materials used in radiation portal monitors. The radiation portal monitor alarm mechanism relies on measuring radiation count rates above user defined alarm thresholds. These alarm thresholds are set above natural background count rates. Minimizing false alarms caused by natural background and maximizing sensitivity to weakly emitting threat sources must be balanced when setting these alarm thresholds. Current radiation portal monitor designs suffer from frequent nuisance radiation alarms. These radiation nuisance alarms are most frequently caused by shipments of large quantities of naturally occurring radioactive material containing cargo, like kitty litter, as well as by humans who have recently undergone a nuclear medicine procedure, particularly 99mTc treatments. Current radiation portal monitors typically lack spectroscopic capabilities, so nuisance alarms must be screened out in time-intensive secondary inspections with handheld radiation detectors. Radiation portal monitors using organic liquid scintillation detectors were designed, built, and tested. A number of algorithms were developed to perform on-the-fly radionuclide identification of single and combination radiation sources moving past the portal monitor at speeds up to 2.2 m/s. The portal monitor designs were tested extensively with a variety of shielded and unshielded radiation sources, including special nuclear material, at the European Commission Joint Research Centre in Ispra, Italy. Common medical isotopes were measured at the C.S. Mott Children's Hospital and added to the radionuclide identification algorithms.
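The trade-off between nuisance/false alarms and sensitivity described above can be made concrete with a small Poisson calculation; the background rate, source contribution, dwell time, and threshold below are illustrative assumptions, not values from the dissertation.

```python
from scipy.stats import poisson

def alarm_performance(bkg_rate, src_rate, dwell_s, k_sigma):
    """False-alarm and detection probabilities for a gross-count alarm set
    k_sigma standard deviations above the expected background counts.
    Rates in counts/s, dwell time in seconds; Poisson counting assumed."""
    mu_b = bkg_rate * dwell_s
    threshold = mu_b + k_sigma * mu_b ** 0.5
    p_false = poisson.sf(threshold, mu_b)                       # P(counts > threshold | background only)
    p_detect = poisson.sf(threshold, mu_b + src_rate * dwell_s) # P(counts > threshold | source present)
    return threshold, p_false, p_detect

# e.g. 300 cps background, a weak source adding 60 cps, 1 s dwell, 4-sigma threshold
print(alarm_performance(300.0, 60.0, 1.0, 4.0))
```

Raising k_sigma suppresses nuisance alarms but also lowers the detection probability for weak sources, which is exactly the balance the abstract describes.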
Swift XRT observation of HETE J1900.1-2455
NASA Astrophysics Data System (ADS)
Campana, S.
2005-06-01
Sergio Campana (INAF-OAB), Antonino Cucchiara (PSU) and Dave Burrows (PSU) on behalf of the XRT team report that Swift observed the seventh millisecond X-ray pulsar HETE J1900.1-2455 on 2005-06-24 UT21:03:37 for 1329 s in a single orbit. The source is bright and is observed in Window Timing (WT) mode, which therefore does not provide an image. The source count rate in the 0.5-10 keV energy band is 14.9+/-0.1 c/s.
Harmon, S Michele; West, Ryan T; Yates, James R
2014-12-01
Sources of fecal coliform pollution in a small South Carolina (USA) watershed were identified using inexpensive methods and commonly available equipment. Samples from the upper reaches of the watershed were analyzed with 3M(™) Petrifilm(™) count plates. We were able to narrow down the study's focus to one particular tributary, Sand River, that was the major contributor of the coliform pollution (both fecal and total) to a downstream reservoir that is heavily used for recreation purposes. Concentrations of total coliforms ranged from 2,400 to 120,333 cfu/100 mL, with sharp increases in coliform counts observed in samples taken after rain events. Positive correlations between turbidity and fecal coliform counts suggested a relationship between fecal pollution and stormwater runoff. Antibiotic resistance analysis (ARA) compared antibiotic resistance profiles of fecal coliform isolates from the stream to those of a watershed-specific fecal source library (equine, waterfowl, canines, and untreated sewage). Known fecal source isolates and unknown isolates from the stream were exposed to six antibiotics at three concentrations each. Discriminant analysis grouped known isolates with an overall average rate of correct classification (ARCC) of 84.3 %. A total of 401 isolates from the first stream location were classified as equine (45.9 %), sewage (39.4 %), waterfowl (6.2 %), and feline (8.5 %). A similar pattern was observed at the second sampling location, with 42.6 % equine, 45.2 % sewage, 2.8 % waterfowl, 0.6 % canine, and 8.8 % feline. While there were slight weather-dependent differences, the vast majority of the coliform pollution in this stream appeared to be from two sources, equine and sewage. This information will contribute to better land use decisions and further justify implementation of low-impact development practices within this urban watershed.
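A minimal sketch of the discriminant-analysis classification step is shown below, using synthetic antibiotic-resistance profiles in place of the watershed-specific source library; the class structure, feature dimensions, and cross-validation scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Rows: known-source isolates; columns: growth scores for each antibiotic and
# concentration (6 antibiotics x 3 concentrations = 18 features).  Synthetic
# data stands in for the real library of equine, sewage, waterfowl, etc. isolates.
rng = np.random.default_rng(1)
classes = ["equine", "sewage", "waterfowl", "canine", "feline"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(60, 18)) for i in range(len(classes))])
y = np.repeat(classes, 60)

lda = LinearDiscriminantAnalysis()
arcc = cross_val_score(lda, X, y, cv=5).mean()   # average rate of correct classification
print(f"cross-validated ARCC: {arcc:.1%}")
# Unknown stream isolates would then be assigned with lda.fit(X, y).predict(unknown_X)
```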
Shanahan, Meghan E; Fliss, Mike D; Proescholdbell, Scott K
2018-01-01
BACKGROUND As child maltreatment often occurs in private, child welfare numbers underestimate its true prevalence. Child maltreatment surveillance systems have been used to ascertain more accurate counts of children who experience maltreatment. This manuscript describes the results from a pilot child maltreatment surveillance system in Wake County, North Carolina. METHODS We linked 2010 and 2011 data from 3 sources (Child Protective Services, Raleigh Police Department, and Office of the Chief Medical Examiner) to obtain rates of definite and possible child maltreatment. We separately analyzed emergency department visits from 2010 and 2011 to obtain counts of definite and possible child maltreatment. We then compared the results from the surveillance systems to those obtained from Child Protective Services (CPS) data alone. RESULTS In 2010 and 2011, rates of definite child maltreatment were 11.7 and 11.3 per 1,000 children, respectively, when using the linked data, compared to 10.0 and 9.5 per 1,000 children using CPS data alone. The rates of possible maltreatment were 25.3 and 23.8 per 1,000, respectively. In the 2010 and 2011 emergency department data, there were 68 visits and 84 visits, respectively, that met the case definition for maltreatment. LIMITATIONS While 4 data sources were analyzed, only 3 were linked in the current surveillance system. It is likely that we would have identified more cases of maltreatment had more sources been included. CONCLUSION While the surveillance system identified more children who met the case definition of maltreatment than CPS data alone, the rates of definite child maltreatment were not considerably higher than official reports. Rates of possible child maltreatment were much higher than both the definite case definition and child welfare records. Tracking both definite and possible case definitions and using a variety of data sources provides a more complete picture of child maltreatment in North Carolina. ©2018 by the North Carolina Institute of Medicine and The Duke Endowment. All rights reserved.
A Chandra X-Ray Study of NGC 1068. II. The Luminous X-Ray Source Population
NASA Technical Reports Server (NTRS)
Smith, David A.; Wilson, Andrew S.
2003-01-01
We present an analysis of the compact X-ray source population in the Seyfert 2 galaxy NGC 1068, imaged with a ≈50 ks Chandra observation. We find a total of 84 compact sources on the S3 chip, of which 66 are located within the 25.0 B-mag arcsec^-2 isophote of the galactic disk of NGC 1068. Spectra have been obtained for the 21 sources with at least 50 counts and modeled with both multicolor disk blackbody and power-law models. The power-law model provides the better description of the spectrum for 18 of these sources. For fainter sources, the spectral index has been estimated from the hardness ratio. Five sources have 0.4-8 keV intrinsic luminosities greater than 10^39 erg s^-1, assuming that their emission is isotropic and that they are associated with NGC 1068. We refer to these sources as intermediate-luminosity X-ray objects (IXOs). If these five sources are X-ray binaries accreting with luminosities that are both sub-Eddington and isotropic, then the implied source masses are ≳7 solar masses, and so they are inferred to be black holes. Most of the spectrally modeled sources have spectral shapes similar to Galactic black hole candidates. However, the brightest compact source in NGC 1068 has a spectrum that is much harder than that found in Galactic black hole candidates and other IXOs. The brightest source also shows large-amplitude variability on both short-term and long-term timescales, with the count rate possibly decreasing by a factor of 2 in ≈2 ks during our Chandra observation, and the source flux decreasing by a factor of 5 between our observation and the grating observations taken just over 9 months later. The ratio of the number of sources with luminosities greater than 2.1 × 10^38 erg s^-1 in the 0.4-8 keV band to the rate of massive (greater than 5 solar masses) star formation is the same, to within a factor of 2, for NGC 1068, the Antennae, NGC 5194 (the main galaxy in M51), and the Circinus galaxy. This suggests that the rate of production of X-ray binaries per massive star is approximately the same for galaxies with currently active star formation, including "starbursts."
2008-01-01
Objective To determine if citation counts at two years could be predicted for clinical articles that pass basic criteria for critical appraisal using data within three weeks of publication from external sources and an online article rating service. Design Retrospective cohort study. Setting Online rating service, Canada. Participants 1274 articles from 105 journals published from January to June 2005, randomly divided into a 60:40 split to provide derivation and validation datasets. Main outcome measures 20 article and journal features, including ratings of clinical relevance and newsworthiness, routinely collected by the McMaster online rating of evidence system, compared with citation counts at two years. Results The derivation analysis showed that the regression equation accounted for 60% of the variation (R2=0.60, 95% confidence interval 0.538 to 0.629). This model applied to the validation dataset gave a similar prediction (R2=0.56, 0.476 to 0.596, shrinkage 0.04; shrinkage measures how well the derived equation matches data from the validation dataset). Cited articles in the top half and top third were predicted with 83% and 61% sensitivity and 72% and 82% specificity. Higher citations were predicted by indexing in numerous databases; number of authors; abstraction in synoptic journals; clinical relevance scores; number of cited references; and original, multicentred, and therapy articles from journals with a greater proportion of articles abstracted. Conclusion Citation counts can be reliably predicted at two years using data within three weeks of publication. PMID:18292132
Femtosecond Laser--Pumped Source of Entangled Photons for Quantum Cryptography Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, D.; Donaldson, W.; Sobolewski, R.
2007-07-31
We present an experimental setup for generation of entangled-photon pairs via spontaneous parametric down-conversion, based on a femtosecond pulsed laser. Our entangled-photon source utilizes a 76-MHz-repetition-rate, 100-fs-pulse-width, mode-locked femtosecond laser, which can produce, on average, more photon pairs than a cw laser of equal pump power. The resulting entangled pairs are counted by a pair of high-quantum-efficiency, single-photon, silicon avalanche photodiodes. Our apparatus is intended as an efficient source/receiver system for quantum communications and quantum cryptography applications.
Extending pure luminosity evolution models into the mid-infrared, far-infrared and submillimetre
NASA Astrophysics Data System (ADS)
Hill, Michael D.; Shanks, Tom
2011-07-01
Simple pure luminosity evolution (PLE) models, in which galaxies brighten at high redshift due to increased star formation rates (SFRs), are known to provide a good fit to the colours and number counts of galaxies throughout the optical and near-infrared. We show that optically defined PLE models, where dust reradiates absorbed optical light into infrared spectra composed of local galaxy templates, fit galaxy counts and colours out to 8 μm and to at least z≈ 2.5. At 24-70 μm, the model is able to reproduce the observed source counts with reasonable success if 16 per cent of spiral galaxies show an excess in mid-IR flux due to a warmer dust component and a higher SFR, in line with observations of local starburst galaxies. There remains an underprediction of the number of faint-flux, high-z sources at 24 μm, so we explore how the evolution may be altered to correct this. At 160 μm and longer wavelengths, the model fails, with our model of normal galaxies accounting for only a few percent of sources in these bands. However, we show that a PLE model of obscured AGN, which we have previously shown to give a good fit to observations at 850 μm, also provides a reasonable fit to the Herschel/BLAST number counts and redshift distributions at 250-500 μm. In the context of a ΛCDM cosmology, an AGN contribution at 250-870 μm would remove the need to invoke a top-heavy IMF for high-redshift starburst galaxies.
Kids Count Data Book 1996: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
Annie E. Casey Foundation, Baltimore, MD.
This book provides a national and state-by-state (including the District of Columbia) compilation of benchmarks of the educational, social, economic, and physical well-being of children in the United States. Ten indicators of children's well-being are taken from government sources: (1) percent low birth-weight babies; (2) infant mortality rate;…
9C spectral-index distributions and source-count estimates from 15 to 93 GHz - a re-assessment
NASA Astrophysics Data System (ADS)
Waldram, E. M.; Bolton, R. C.; Riley, J. M.; Pooley, G. G.
2018-01-01
In an earlier paper (2007), we used follow-up observations of a sample of sources from the 9C survey at 15.2 GHz to derive a set of spectral-index distributions up to a frequency of 90 GHz. These were based on simultaneous measurements made at 15.2 GHz with the Ryle telescope and at 22 and 43 GHz with the Karl G. Jansky Very Large Array (VLA). We used these distributions to make empirical estimates of source counts at 22, 30, 43, 70 and 90 GHz. In a later paper (2013), we took data at 15.7 GHz from the Arcminute Microkelvin Imager (AMI) and data at 93.2 GHz from the Combined Array for Research in Millimetre-wave Astronomy (CARMA) and estimated the source count at 93.2 GHz. In this paper, we re-examine the data used in both papers and now believe that the VLA flux densities we measured at 43 GHz were significantly in error, being on average only about 70 per cent of their correct values. Here, we present strong evidence for this conclusion and discuss the effect on the source-count estimates made in the 2007 paper. The source-count prediction in the 2013 paper is also revised. We make comparisons with spectral-index distributions and source counts from other telescopes, in particular with a recent deep 95 GHz source count measured by the South Pole Telescope. We investigate reasons for the problem of the low VLA 43-GHz values and find a number of possible contributory factors, but none is sufficient on its own to account for such a large deficit.
Detector noise statistics in the non-linear regime
NASA Technical Reports Server (NTRS)
Shopbell, P. L.; Bland-Hawthorn, J.
1992-01-01
The statistical behavior of an idealized linear detector in the presence of threshold and saturation levels is examined. It is assumed that the noise is governed by the statistical fluctuations in the number of photons emitted by the source during an exposure. Since physical detectors cannot have infinite dynamic range, our model illustrates that all devices have non-linear regimes, particularly at high count rates. The primary effect is a decrease in the statistical variance about the mean signal due to a portion of the expected noise distribution being removed via clipping. Higher order statistical moments are also examined, in particular, skewness and kurtosis. In principle, the expected distortion in the detector noise characteristics can be calibrated using flatfield observations with count rates matched to the observations. For this purpose, some basic statistical methods that utilize Fourier analysis techniques are described.
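The clipping effect described above is straightforward to reproduce numerically. The sketch below is a toy illustration under assumed count levels and saturation thresholds, not the authors' model: it shows how the variance and higher moments of Poisson counts shrink as a saturation level moves toward the mean signal.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def clipped_poisson_stats(mean_counts, saturation, n=200_000, seed=0):
    """Moments of Poisson counts clipped at a saturation level, illustrating
    how clipping part of the noise distribution suppresses the variance and
    distorts skewness/kurtosis near a detector's dynamic-range limit."""
    rng = np.random.default_rng(seed)
    x = rng.poisson(mean_counts, size=n)
    xc = np.minimum(x, saturation)            # saturation removes the upper tail
    return xc.mean(), xc.var(), skew(xc), kurtosis(xc)

for sat in (10_000, 5_100, 5_020):            # saturation approaching the mean of 5000
    print(sat, clipped_poisson_stats(5_000.0, sat))
```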
Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Aongus; Collins, Robert J.; Krichel, Nils J.
2009-11-10
We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 μW.
NASA Astrophysics Data System (ADS)
Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.
2017-12-01
We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using an optimized data analysis toolkit designed specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8 and 24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma-ray photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or the variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates. © 1984.
Pulse pileup statistics for energy discriminating photon counting x-ray detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir
Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N_0, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% N_0, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and how to best take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
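The classical count-rate-loss models referred to above (nonparalyzable, paralyzable, and a weighted 'hybrid') can be written down in a few lines; the characteristic rate N_0 and the hybrid weight below are assumed values for illustration, not parameters taken from the paper.

```python
import numpy as np

def nonparalyzable(n, tau):
    """Recorded rate for a nonparalyzable detector: m = n / (1 + n*tau)."""
    return n / (1.0 + n * tau)

def paralyzable(n, tau):
    """Recorded rate for a paralyzable detector: m = n * exp(-n*tau)."""
    return n * np.exp(-n * tau)

def hybrid(n, tau, alpha=0.5):
    """Weighted average of the two classical models, in the spirit of the
    'hybrid' detector description; alpha is an assumed weight."""
    return alpha * nonparalyzable(n, tau) + (1 - alpha) * paralyzable(n, tau)

tau = 1.0 / 1e8                  # dead time for an assumed characteristic rate N_0 = 1e8 cps
for frac in (0.05, 0.2, 0.5):    # incident rate as a fraction of N_0
    n = frac * 1e8
    print(f"n = {frac:.2f} N_0: nonparalyzable {nonparalyzable(n, tau)/1e6:.1f} Mcps, "
          f"paralyzable {paralyzable(n, tau)/1e6:.1f} Mcps, hybrid {hybrid(n, tau)/1e6:.1f} Mcps")
```

The paralyzable curve rolls over at n = 1/tau, which is why pileup losses become ambiguous (two input rates give the same output) at high flux, whereas the nonparalyzable model only saturates.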
Detecting fission from special nuclear material sources
Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA
2012-06-05
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate the unknown source in order to accentuate differences in neutron emission from the unknown source from Poisson distributions and/or environmental sources.
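One standard way to quantify 'excess grouped neutrons' is the Feynman variance-to-mean statistic, which is near zero for a Poisson (random) source and positive for correlated fission-chain emission. The sketch below is a generic illustration on toy timestamp streams, not the patented assay algorithm; the rates, gate width, and burst model are assumptions.

```python
import numpy as np

def feynman_y(timestamps, gate_width):
    """Feynman Y = var/mean - 1 for neutron counts binned into fixed gates.
    Y ~ 0 for uncorrelated (Poisson) arrivals; Y > 0 when counts arrive in
    correlated bursts such as fission chains."""
    t = np.sort(np.asarray(timestamps))
    edges = np.arange(t[0], t[-1], gate_width)
    counts, _ = np.histogram(t, bins=edges)
    return counts.var() / counts.mean() - 1.0

# Toy comparison: a pure Poisson source versus one emitting correlated pairs
# of counts (a crude stand-in for fission-chain multiplicity).
rng = np.random.default_rng(0)
poisson_t = np.cumsum(rng.exponential(1e-3, 100_000))
burst_centres = np.cumsum(rng.exponential(2e-3, 50_000))
burst_t = np.concatenate([burst_centres, burst_centres + rng.exponential(5e-5, 50_000)])
print("Poisson Y:", round(feynman_y(poisson_t, 1e-3), 3))
print("correlated Y:", round(feynman_y(burst_t, 1e-3), 3))
```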
NEMA NU 2-2007 performance measurements of the Siemens Inveon™ preclinical small animal PET system
Kemp, Brad J; Hruska, Carrie B; McFarland, Aaron R; Lenox, Mark W; Lowe, Val J
2010-01-01
National Electrical Manufacturers Association (NEMA) NU 2-2007 performance measurements were conducted on the Inveon™ preclinical small animal PET system developed by Siemens Medical Solutions. The scanner uses 1.51 × 1.51 × 10 mm LSO crystals grouped in 20 × 20 blocks; a tapered light guide couples the LSO crystals of a block to a position-sensitive photomultiplier tube. There are 80 rings with 320 crystals per ring and the ring diameter is 161 mm. The transaxial and axial fields of view (FOVs) are 100 and 127 mm, respectively. The scanner can be docked to a CT scanner; the performance characteristics of the CT component are not included herein. Performance measurements of spatial resolution, sensitivity, scatter fraction and count rate performance were obtained for different energy windows and coincidence timing window widths. For brevity, the results described here are for an energy window of 350–650 keV and a coincidence timing window of 3.43 ns. The spatial resolution at the center of the transaxial and axial FOVs was 1.56, 1.62 and 2.12 mm in the tangential, radial and axial directions, respectively, and the system sensitivity was 36.2 cps kBq−1 for a line source (7.2% for a point source). For mouse- and rat-sized phantoms, the scatter fraction was 5.7% and 14.6%, respectively. The peak noise equivalent count rate with a noisy randoms estimate was 1475 kcps at 130 MBq for the mouse-sized phantom and 583 kcps at 74 MBq for the rat-sized phantom. The performance measurements indicate that the Inveon™ PET scanner is a high-resolution tomograph with excellent sensitivity that is capable of imaging at a high count rate. PMID:19321924
EUVE Right Angle Program Observations of Late-Type Stars
NASA Astrophysics Data System (ADS)
Christian, D. J.; Mathioudakis, M.; Drake, J. J.
1995-12-01
The EUVE Right Angle Program (RAP) obtains photometric data in four bands centered at ~100 Å (Lexan/B), ~200 Å (Al/Ti/C), ~400 Å (Ti/Sb/Al), and ~550 Å (Sn/SiO). RAP observations are up to 20 times more sensitive than the all-sky survey. We present RAP observations of the late-type stars: BD+03 301, BD+05 300, HR 1262, BD+23 635, BD+22 669, Melotte 25 VA 334, Melotte 25 1366, Melotte 25 59, Melotte 25 65, θ¹ Tau, V834 Tau, GJ 2037, BD-21 1074, GJ 205, RE J0532-030, GJ 9287A, HT Vir, BD+46 1944, Proxima Cen, alpha Cen A/B, HR 6094, CPD-48 10901, and HR 8883. We derive fluxes and emission measures from Lexan/B and Al/Ti/C count rates. The time variability of the sources has been examined. Most of the sources show no significant variability at the 99% confidence level. Flares were detected from the K3V star V834 Tau (HD 29697) and the K0 star BD+22 669. The BD+22 669 count rate at the peak of the flare is a factor of 10 higher than the quiescent count rate, with a peak Lexan/B luminosity of 7.9 × 10^29 erg s^-1. The V834 Tau flare was detected in both the Lexan/B and Al/Ti/C bands. The peak luminosity of the flare is 1.6 × 10^29 erg s^-1 and 8 × 10^28 erg s^-1 for Lexan/B and Al/Ti/C, respectively. This is a factor of 4.3 higher than the quiescent luminosity in Lexan/B, and a factor of 4.6 in Al/Ti/C. This work is supported by NASA contract NAS5-29298.
NEMA NU 2-2007 performance measurements of the Siemens Inveon™ preclinical small animal PET system
NASA Astrophysics Data System (ADS)
Kemp, Brad J.; Hruska, Carrie B.; McFarland, Aaron R.; Lenox, Mark W.; Lowe, Val J.
2009-04-01
National Electrical Manufacturers Association (NEMA) NU 2-2007 performance measurements were conducted on the Inveon™ preclinical small animal PET system developed by Siemens Medical Solutions. The scanner uses 1.51 × 1.51 × 10 mm LSO crystals grouped in 20 × 20 blocks; a tapered light guide couples the LSO crystals of a block to a position-sensitive photomultiplier tube. There are 80 rings with 320 crystals per ring and the ring diameter is 161 mm. The transaxial and axial fields of view (FOVs) are 100 and 127 mm, respectively. The scanner can be docked to a CT scanner; the performance characteristics of the CT component are not included herein. Performance measurements of spatial resolution, sensitivity, scatter fraction and count rate performance were obtained for different energy windows and coincidence timing window widths. For brevity, the results described here are for an energy window of 350-650 keV and a coincidence timing window of 3.43 ns. The spatial resolution at the center of the transaxial and axial FOVs was 1.56, 1.62 and 2.12 mm in the tangential, radial and axial directions, respectively, and the system sensitivity was 36.2 cps kBq-1 for a line source (7.2% for a point source). For mouse- and rat-sized phantoms, the scatter fraction was 5.7% and 14.6%, respectively. The peak noise equivalent count rate with a noisy randoms estimate was 1475 kcps at 130 MBq for the mouse-sized phantom and 583 kcps at 74 MBq for the rat-sized phantom. The performance measurements indicate that the Inveon™ PET scanner is a high-resolution tomograph with excellent sensitivity that is capable of imaging at a high count rate.
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in subtle feature extraction of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors as signal subtle features, which were used for radiation source signal recognition by the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm can better extract the signal subtle distribution characteristics under different reconstruction phase space, and has a better recognition effect with good real-time performance.
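A basic (non-improved) box-counting dimension estimate for a one-dimensional waveform can be sketched as below. In the paper's scheme one eigenvector would be computed from the instantaneous amplitude (e.g. via np.abs(scipy.signal.hilbert(signal))) and the other from the raw signal; the 'dual improved, generalized' refinements are not reproduced here, so this is only an illustration of the underlying idea.

```python
import numpy as np

def box_counting_dimension(signal, scales=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 1-D waveform by covering the
    normalised (time, amplitude) curve with square boxes of shrinking size
    and fitting log N(eps) against log(1/eps)."""
    x = np.linspace(0.0, 1.0, len(signal))
    y = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    counts, inv_eps = [], []
    for k in scales:
        eps = 1.0 / k
        boxes = {(int(xi / eps), int(yi / eps)) for xi, yi in zip(x, y)}
        counts.append(len(boxes))
        inv_eps.append(k)
    slope, _ = np.polyfit(np.log(inv_eps), np.log(counts), 1)
    return slope

t = np.linspace(0, 1, 4096)
smooth = np.sin(2 * np.pi * 5 * t)
noisy = smooth + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(box_counting_dimension(smooth), box_counting_dimension(noisy))   # noisy signal -> larger dimension
```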
Radiation detection and wireless networked early warning
NASA Astrophysics Data System (ADS)
Burns, David A.; Litz, Marc S.; Carroll, James J.; Katsis, Dimosthenis
2012-06-01
We have designed a compact, wireless, GPS-enabled array of inexpensive radiation sensors based on scintillation counting. Each sensor has a scintillator, photomultiplier tube, and pulse-counting circuit that includes a comparator, digital potentiometer and microcontroller. This design provides a high level of sensitivity and reliability. A 0.2 m2 PV panel powers each sensor providing a maintenance-free 24/7 energy source. The sensor can be mounted within a roadway light-post and monitor radiological activity along transport routes. Each sensor wirelessly transmits real-time data (as counts per second) up to 2 miles with a XBee radio module, and the data is received by a XBee receive-module on a computer. Data collection software logs the information from all sensors and provides real-time identification of radiation events. Measurements performed to-date demonstrate the ability of a sensor to detect a 20 μCi source at 3.5 meters when packaged with a PVT (plastic) scintillator, and 7 meters for a sensor with a CsI crystal (more expensive but ~5 times more sensitive). It is calculated that the sensor-architecture can detect sources moving as fast as 130 km/h based on the current data rate and statistical bounds of 3-sigma threshold detection. The sensor array is suitable for identifying and tracking a radiation threat from a dirty bomb along roadways.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain both the dead-time value and the real detector counts from the measured detector counts.
Application of the backward extrapolation method to pulsed neutron sources
Talamo, Alberto; Gohar, Yousry
2017-09-23
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain both the dead-time value and the real detector counts from the measured detector counts.
NASA Technical Reports Server (NTRS)
Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.;
2012-01-01
We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S) measurements for the recently completed ≈4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of ≈1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the ≈4 Ms CDF-S reaches a maximum source density of ≈27,800 deg^-2. By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of ≈14,900 deg^-2, the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of ≈12,700 deg^-2 and make up 46% ± 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the ≈4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the ≈4 Ms sensitivity limit. We show that a future ≈10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z ≳ 0.6) normal galaxies.
Swift/BAT sees MAXI J1535-571 declining in 15-50 keV
NASA Astrophysics Data System (ADS)
Palmer, D. M.; Krimm, H. A.; Swift/BAT Team
2017-09-01
BAT monitoring of MAXI J1535-571 shows that, in the 15-50 keV band, the source reached a peak around 2017 Sept 9 and has since begun to decline. At peak, the BAT count rate was 0.43 ± 0.015 ct s^-1 cm^-2, or approximately twice the mean flux of the Crab.
Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions
Talamo, A.; Gohar, Y.; Gabrielli, F.; ...
2017-02-01
This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against the experimental results from the KUCA facility in Japan by Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. To conclude, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained to nanosecond precision using MCNP6, which allows the Rossi and Feynman distributions to be obtained.
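The Sjöstrand (area) method mentioned above amounts to comparing the prompt and delayed areas of the detector response to a pulse. The sketch below assumes a toy exponential pulse response and an assumed beta_eff, so the numbers are illustrative only and do not reproduce the paper's calculations.

```python
import numpy as np

def area_method_keff(counts, dt, pulse_end_idx, beta_eff=0.0075):
    """Sjostrand (area) method for a pulsed-source subcritical assembly.
    'counts' is the binned detector response over one pulse period; bins
    after 'pulse_end_idx' are taken as the delayed-neutron (equilibrium)
    level.  Reactivity in dollars is -A_prompt / A_delayed, converted to
    k_eff with an assumed effective delayed-neutron fraction beta_eff."""
    delayed_level = counts[pulse_end_idx:].mean()
    prompt_area = (counts[:pulse_end_idx] - delayed_level).sum() * dt
    delayed_area = delayed_level * len(counts) * dt
    rho_dollars = -prompt_area / delayed_area
    rho = rho_dollars * beta_eff
    return 1.0 / (1.0 - rho)

# Toy pulse response: exponential prompt decay on top of a flat delayed level.
dt = 1e-5
t = np.arange(0.0, 0.02, dt)
counts = 5e4 * np.exp(-t / 2e-3) + 200.0
print(f"k_eff ~ {area_method_keff(counts, dt, pulse_end_idx=1500):.4f}")
```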
ERIC Educational Resources Information Center
Le Corre, Mathieu; Carey, Susan
2007-01-01
Since the publication of [Gelman, R., & Gallistel, C. R. (1978). "The child's understanding of number." Cambridge, MA: Harvard University Press.] seminal work on the development of verbal counting as a representation of number, the nature of the ontogenetic sources of the verbal counting principles has been intensely debated. The present…
NASA Astrophysics Data System (ADS)
Piscitelli, F.; Mauri, G.; Messi, F.; Anastasopoulos, M.; Arnold, T.; Glavic, A.; Höglund, C.; Ilves, T.; Lopez Higuera, I.; Pazmandi, P.; Raspino, D.; Robinson, L.; Schmidt, S.; Svensson, P.; Varga, D.; Hall-Wilton, R.
2018-05-01
The Multi-Blade is a Boron-10-based gaseous thermal neutron detector developed to face the challenge arising in neutron reflectometry at neutron sources. Neutron reflectometers are challenging instruments in terms of instantaneous counting rate and spatial resolution. This detector has been designed according to the requirements given by the reflectometers at the European Spallation Source (ESS) in Sweden. The Multi-Blade has been installed and tested on the CRISP reflectometer at the ISIS neutron and muon source in U.K.. The results on the detailed detector characterization are discussed in this manuscript.
NEMA NU-04-based performance characteristics of the LabPET-8™ small animal PET scanner.
Prasad, Rameshwar; Ratib, Osman; Zaidi, Habib
2011-10-21
The objective of this study is to characterize the performance of the preclinical avalanche photodiode (APD)-based LabPET-8™ subsystem of the fully integrated trimodality PET/SPECT/CT Triumph™ scanner using the National Electrical Manufacturers Association (NEMA) NU 04-2008 protocol. The characterized performance parameters include the spatial resolution, sensitivity, scatter fraction, count rate performance and image-quality characteristics. The PET system is fully digital using APD-based detector modules with highly integrated electronics. The detector assembly consists of phoswich pairs of Lu1.9Y0.1SiO5 (LYSO) and Lu0.4Gd1.6SiO5 (LGSO) crystals with dimensions of 2 × 2 × 14 mm^3 having 7.5 cm axial and 10 cm transverse field of view (FOV). The spatial resolution and sensitivity were measured using a small 22Na point source at different positions in the scanner's FOV. The scatter fraction and count rate characteristics were measured using mouse- and rat-sized phantoms fitted with an 18F line source. The overall imaging capabilities of the scanner were assessed using the NEMA image-quality phantom and laboratory animal studies. The NEMA-based radial and tangential spatial resolution ranged from 1.7 mm at the center of the FOV to 2.59 mm at a radial offset of 2.5 cm and from 1.85 mm at the center of the FOV to 1.76 mm at a radial offset of 2.5 cm, respectively. Iterative reconstruction improved the spatial resolution to 0.84 mm at the center of the FOV. The total absolute system sensitivity is 12.74% for an energy window of 250-650 keV. For the mouse-sized phantom, the peak noise equivalent count rate (NECR) is 183 kcps at 2.07 MBq cc^-1, whereas the peak true count rate is 320 kcps at 2.5 MBq cc^-1 with a scatter fraction of 19%. The rat-sized phantom had a scatter fraction of 31%, with a peak NECR of 67 kcps at 0.23 MBq cc^-1 and a peak true count rate of 186 kcps at 0.27 MBq cc^-1. The average activity concentration and percentage standard deviation were 126.97 kBq ml^-1 and 7%, respectively. The performance of the LabPET-8™ scanner was characterized based on the NEMA NU 04-2008 standards. The overall performance demonstrates that the LabPET-8™ system is able to produce high-quality and highly contrasted images in a reasonable time, and as such it is well suited for preclinical molecular imaging-based research.
NEMA NU-04-based performance characteristics of the LabPET-8™ small animal PET scanner
NASA Astrophysics Data System (ADS)
Prasad, Rameshwar; Ratib, Osman; Zaidi, Habib
2011-10-01
The objective of this study is to characterize the performance of the preclinical avalanche photodiode (APD)-based LabPET-8™ subsystem of the fully integrated trimodality PET/SPECT/CT Triumph™ scanner using the National Electrical Manufacturers Association (NEMA) NU 04-2008 protocol. The characterized performance parameters include the spatial resolution, sensitivity, scatter fraction, count rate performance and image-quality characteristics. The PET system is fully digital using APD-based detector modules with highly integrated electronics. The detector assembly consists of phoswich pairs of Lu1.9Y0.1SiO5 (LYSO) and Lu0.4Gd1.6SiO5 (LGSO) crystals with dimensions of 2 × 2 × 14 mm^3 having 7.5 cm axial and 10 cm transverse field of view (FOV). The spatial resolution and sensitivity were measured using a small 22Na point source at different positions in the scanner's FOV. The scatter fraction and count rate characteristics were measured using mouse- and rat-sized phantoms fitted with an 18F line source. The overall imaging capabilities of the scanner were assessed using the NEMA image-quality phantom and laboratory animal studies. The NEMA-based radial and tangential spatial resolution ranged from 1.7 mm at the center of the FOV to 2.59 mm at a radial offset of 2.5 cm and from 1.85 mm at the center of the FOV to 1.76 mm at a radial offset of 2.5 cm, respectively. Iterative reconstruction improved the spatial resolution to 0.84 mm at the center of the FOV. The total absolute system sensitivity is 12.74% for an energy window of 250-650 keV. For the mouse-sized phantom, the peak noise equivalent count rate (NECR) is 183 kcps at 2.07 MBq cc^-1, whereas the peak true count rate is 320 kcps at 2.5 MBq cc^-1 with a scatter fraction of 19%. The rat-sized phantom had a scatter fraction of 31%, with a peak NECR of 67 kcps at 0.23 MBq cc^-1 and a peak true count rate of 186 kcps at 0.27 MBq cc^-1. The average activity concentration and percentage standard deviation were 126.97 kBq ml^-1 and 7%, respectively. The performance of the LabPET-8™ scanner was characterized based on the NEMA NU 04-2008 standards. The overall performance demonstrates that the LabPET-8™ system is able to produce high-quality and highly contrasted images in a reasonable time, and as such it is well suited for preclinical molecular imaging-based research.
NASA Astrophysics Data System (ADS)
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-01
Photon counting x-ray detectors (PCXDs) offer several advantages compared to standard energy-integrating x-ray detectors, but also face significant challenges. One key challenge is the high count rates required in CT. At high count rates, PCXDs exhibit count rate loss and show reduced detective quantum efficiency in signal-rich (or high flux) measurements. In order to reduce count rate requirements, a dynamic beam-shaping filter can be used to redistribute flux incident on the patient. We study the piecewise-linear attenuator in conjunction with PCXDs without energy discrimination capabilities. We examined three detector models: the classic nonparalyzable and paralyzable detector models, and a ‘hybrid’ detector model which is a weighted average of the two which approximates an existing, real detector (Taguchi et al 2011 Med. Phys. 38 1089-102 ). We derive analytic expressions for the variance of the CT measurements for these detectors. These expressions are used with raw data estimated from DICOM image files of an abdomen and a thorax to estimate variance in reconstructed images for both the dynamic attenuator and a static beam-shaping (‘bowtie’) filter. By redistributing flux, the dynamic attenuator reduces dose by 40% without increasing peak variance for the ideal detector. For non-ideal PCXDs, the impact of count rate loss is also reduced. The nonparalyzable detector shows little impact from count rate loss, but with the paralyzable model, count rate loss leads to noise streaks that can be controlled with the dynamic attenuator. With the hybrid model, the characteristic count rates required before noise streaks dominate the reconstruction are reduced by a factor of 2 to 3. We conclude that the piecewise-linear attenuator can reduce the count rate requirements of the PCXD in addition to improving dose efficiency. The magnitude of this reduction depends on the detector, with paralyzable detectors showing much greater benefit than nonparalyzable detectors.
NASA Astrophysics Data System (ADS)
Eriksson, L.; Wienhard, K.; Eriksson, M.; Casey, M. E.; Knoess, C.; Bruckbauer, T.; Hamill, J.; Mulnix, T.; Vollmar, S.; Bendriem, B.; Heiss, W. D.; Nutt, R.
2002-06-01
The first and second generations of the Exact and Exact HR family of scanners have been evaluated in terms of noise equivalent count rate (NEC) and count-rate capabilities. The new National Electrical Manufacturers Association standard was used for the evaluation. In spite of improved electronics and improved count-rate capabilities, the peak NEC was found to be fairly constant between the generations. The results are discussed in terms of the different electronic solutions for the two generations and their implications for system dead time and NEC count-rate capability.
AMI-LA observations of the SuperCLASS supercluster
NASA Astrophysics Data System (ADS)
Riseley, C. J.; Grainge, K. J. B.; Perrott, Y. C.; Scaife, A. M. M.; Battye, R. A.; Beswick, R. J.; Birkinshaw, M.; Brown, M. L.; Casey, C. M.; Demetroullas, C.; Hales, C. A.; Harrison, I.; Hung, C.-L.; Jackson, N. J.; Muxlow, T.; Watson, B.; Cantwell, T. M.; Carey, S. H.; Elwood, P. J.; Hickish, J.; Jin, T. Z.; Razavi-Ghods, N.; Scott, P. F.; Titterington, D. J.
2018-03-01
We present a deep survey of the Super-Cluster Assisted Shear Survey (SuperCLASS) supercluster - a region of sky known to contain five Abell clusters at redshift z ˜ 0.2 - performed using the Arcminute Microkelvin Imager (AMI) Large Array (LA) at 15.5 GHz. Our survey covers an area of approximately 0.9 deg2. We achieve a nominal sensitivity of 32.0 μJy beam-1 towards the field centre, finding 80 sources above a 5σ threshold. We derive the radio colour-colour distribution for sources common to three surveys that cover the field and identify three sources with strongly curved spectra - a high-frequency-peaked source and two GHz-peaked-spectrum sources. The differential source count (i) agrees well with previous deep radio source counts, (ii) exhibits no evidence of an emerging population of star-forming galaxies, down to a limit of 0.24 mJy, and (iii) disagrees with some models of the 15 GHz source population. However, our source count is in agreement with recent work that provides an analytical correction to the source count from the Square Kilometre Array Design Study (SKADS) Simulated Sky, supporting the suggestion that this discrepancy is caused by an abundance of flat-spectrum galaxy cores as yet not included in source population models.
Compensated count-rate circuit for radiation survey meter
Todd, Richard A.
1981-01-01
A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse rate current applied to a current meter calibrated to indicate dose rate, the compensating circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector, so that the current flowing through the meter varies with both the frequency and amplitude of the applied reference voltage pulses; the count rate is thereby compensated to indicate the true count rate within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks the true count rate indicative of the radiation dose rate.
Prevalence of plagiarism among medical students.
Bilić-Zulle, Lidija; Frković, Vedran; Turk, Tamara; Azman, Josip; Petrovecki, Mladen
2005-02-01
To determine the prevalence of plagiarism among medical students in writing essays. During two academic years, 198 second-year medical students attending a Medical Informatics course wrote an essay on one of four offered articles. Two of the source articles were available in electronic form and two in printed form. Two (one electronic and one paper article) were considered less complex and the other two more complex. The essays were examined using the plagiarism detection software "WCopyfind," which counted the number of matching phrases with six or more words. Plagiarism rate, expressed as the percentage of plagiarized text, was calculated as the ratio of the number of matching words to the total number of words in the essay. Only 17 students (9%) did not plagiarize at all and 68 (34%) plagiarized less than 10% of the text. The average plagiarism rate (% of plagiarized text) was 19% (5th-95th percentile = 0-88). Students who were strictly warned not to plagiarize had a higher total word count in their essays than students who were not warned (P=0.002), but there was no difference between them in the rate of plagiarism. Students with higher grades in the Medical Informatics exam plagiarized less than those with lower grades (P=0.015). Gender, subject source, and complexity had no influence on the plagiarism rate. Plagiarism in writing essays is common among medical students. An explicit warning is not enough to deter students from plagiarism. Detection software can be used to trace and evaluate the rate of plagiarism in written student essays.
Revised SNAP III Training Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, Calvin Elroy; Gonzales, Samuel M.; Myers, William L.
The Shielded Neutron Assay Probe (SNAP) technique was developed to determine the leakage neutron source strength of a radioactive object. The original system consisted of an Eberline™ Mini-scaler and a discrete neutron detector. The system was operated by obtaining the count rate with the Eberline™ instrument, determining the absolute efficiency from a graph, and calculating the neutron source strength by hand. In 2003 the SNAP III, shown in Figure 1, was designed and built. It required the operator to position the SNAP, and then measure the source-to-detector and detector-to-reflector distances. Next the operator entered the distance measurements and started the data acquisition. The SNAP acquired the required count rate and then calculated and displayed the leakage neutron source strength (NSS). The original design of the SNAP III is described in the SNAP III Training Manual (ER-TRN-PLN-0258, Rev. 0, January 2004, prepared by William Baird). This report describes some changes that have been made to the SNAP III. One important change is the addition of a LEMO connector to provide neutron detection output pulses for input to the MC-15. This feature is useful in active interrogation with a neutron generator because the MC-15 can record data only when it is not gated off by a pulse from the neutron generator. This avoids recording large volumes of unusable data during the generator pulses. Another change was the replacement of the infrared RS-232 serial communication output by a similar output via a 4-pin LEMO connector. The current document includes a more complete explanation of how to estimate the amount of moderation around a neutron-emitting source.
NASA Astrophysics Data System (ADS)
Popota, Fotini D.; Aguiar, Pablo; Herance, J. Raúl; Pareto, Deborah; Rojas, Santiago; Ros, Domènec; Pavia, Javier; Gispert, Juan Domingo
2012-10-01
The purpose of this work was to evaluate the performance of the microPET R4 system for rodents according to the NU 4-2008 standards of the National Electrical Manufacturers Association (NEMA) for small-animal positron emission tomography (PET) systems and to compare it against its previous evaluation according to the adapted clinical NEMA NU 2-2001. The performance parameters evaluated here were spatial resolution, sensitivity, scatter fraction, counting rates for rat- and mouse-sized phantoms, and image quality. Spatial resolution and sensitivity were measured with a 22Na point source, while scatter fraction and count rate performance were determined using mouse- and rat-sized phantoms with an 18F line source. The image quality of the system was assessed using the NEMA image quality phantom. Assessment of attenuation correction was performed using γ-ray transmission and computed tomography (CT)-based attenuation correction methods. At the center of the field of view, a spatial resolution of 2.12 mm at full width at half maximum (FWHM) (radial), 2.66 mm FWHM (tangential), and 2.23 mm FWHM (axial) was measured. The absolute sensitivity was found to be 1.9% at the center of the scanner. Scatter fraction for mouse-sized phantoms was 8.5%, and the peak count rate was 311 kcps at 153.5 MBq. The rat scatter fraction was 22%, and the peak count rate was 117 kcps at 123.24 MBq. Image uniformity showed better results with 2-D filtered back projection (FBP), while an overestimation of the recovery coefficients was observed when using the 2-D and 3-D OSEM MAP reconstruction algorithms. All measurements were made for an energy window of 350-650 keV and a coincidence window of 6 ns. Histogramming and reconstruction parameters were used according to the manufacturer's recommendations. The microPET R4 scanner was fully characterized according to the NEMA NU 4-2008 standards. Our results diverge considerably from those previously reported with an adapted version of the NEMA NU 2-2001 clinical standards. These discrepancies can be attributed to the modifications in NEMA methodology, thereby highlighting the relevance of specific small-animal standards for the performance evaluation of PET systems.
Accelerating fissile material detection with a neutron source
Rowland, Mark S.; Snyderman, Neal J.
2018-01-30
A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at a high rate and processes large volumes of data in real time, directly counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a Poisson neutron generator for in-beam interrogation of a possible fissile neutron source and a DC power supply that exhibits electrical ripple on the order of less than one part per million. Certain voltage multiplier circuits, such as Cockcroft-Walton voltage multipliers, are used to enhance the effectiveness of series resistor-inductor circuit components to reduce the ripple associated with traditional AC-rectified, high voltage DC power supplies.
Relationship between salivary flow rates and Candida albicans counts.
Navazesh, M; Wood, G J; Brightman, V J
1995-09-01
Seventy-one persons (48 women, 23 men; mean age, 51.76 years) were evaluated for salivary flow rates and Candida albicans counts. Each person was seen on three different occasions. Samples of unstimulated whole, chewing-stimulated whole, acid-stimulated parotid, and candy-stimulated parotid saliva were collected under standardized conditions. An oral rinse was also obtained and evaluated for Candida albicans counts. Unstimulated and chewing-stimulated whole flow rates were negatively and significantly (p < 0.001) related to the Candida counts. Unstimulated whole saliva significantly (p < 0.05) differed in persons with Candida counts of 0 versus <500 versus ≥500. Chewing-stimulated saliva was significantly (p < 0.05) different in persons with 0 counts compared with those with a count ≥500. Differences in stimulated parotid flow rates were not significant among different levels of Candida counts. The results of this study reveal that whole saliva is a better predictor than parotid saliva for identifying persons with high Candida albicans counts.
Crewe, Tara L; Taylor, Philip D; Lepage, Denis
2015-01-01
The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling) to reduce the probability that individuals will be detected on more than one occasion.
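One way to see the compounding effect described above is a toy simulation in which true abundance is held constant while the daily persistence probability (an index of stopover duration) increases over time; raw counts then show a spurious positive trend. The parameters below are illustrative and are not the study's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: constant true population, rising stopover duration inflates counts.
years = np.arange(20)
true_n = np.full_like(years, 1000)        # constant true population size
phi = 0.5 + 0.015 * years                 # daily persistence probability rises over time
detect_p = 0.3                            # daily detection probability (constant)

# Expected days an individual is available for counting: 1 / (1 - phi).
expected_counts = true_n * detect_p / (1.0 - phi)
annual_counts = rng.poisson(expected_counts)

# A log-linear trend fit to these counts suggests growth that is purely an artifact.
slope = np.polyfit(years, np.log(annual_counts), 1)[0]
print(f"apparent annual growth rate: {np.exp(slope) - 1:.1%}")   # > 0 despite flat N
```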
Park, Hye Jung; Lee, Jae-Hyun; Park, Kyung Hee; Kim, Kyu Rang; Han, Mae Ja; Choe, Hosoeng
2016-01-01
Purpose The occurrence of pollen allergy is subject to exposure to pollen, which shows regional and temporal variations. We evaluated the changes in pollen counts and skin positivity rates for 6 years, and explored the correlation between their annual rates of change. Materials and Methods We assessed the number of pollen grains collected in Seoul, and retrospectively reviewed the results of 4442 skin-prick tests conducted at the Severance Hospital Allergy-Asthma Clinic from January 1, 2008 to December 31, 2013. Results For 6 years, the mean monthly total pollen count showed two peaks, one in May and the other in September. Pollen count for grasses also showed the same trend. The pollen counts for trees, grasses, and weeds changed annually, but the changes were not significant. The annual skin positivity rates in response to pollen from grasses and weeds increased significantly over the 6 years. Among trees, the skin positivity rates in response to pollen from walnut, poplar, elm, and alder significantly increased over the 6 years. Further, there was a significant correlation between the annual rate of change in pollen count and the rate of change in skin positivity rate for oak and hop Japanese. Conclusion The pollen counts and skin positivity rates should be monitored, as they have changed annually. Oak and hop Japanese, which showed a significant correlation between the annual rate of change in pollen count and the rate of change in skin positivity rate over the 6 years, may be considered the major allergens in Korea. PMID:26996572
Park, Hye Jung; Lee, Jae-Hyun; Park, Kyung Hee; Kim, Kyu Rang; Han, Mae Ja; Choe, Hosoeng; Oh, Jae-Won; Hong, Chein-Soo
2016-05-01
The occurrence of pollen allergy is subject to exposure to pollen, which shows regional and temporal variations. We evaluated the changes in pollen counts and skin positivity rates for 6 years, and explored the correlation between their annual rates of change. We assessed the number of pollen grains collected in Seoul, and retrospectively reviewed the results of 4442 skin-prick tests conducted at the Severance Hospital Allergy-Asthma Clinic from January 1, 2008 to December 31, 2013. For 6 years, the mean monthly total pollen count showed two peaks, one in May and the other in September. Pollen count for grasses also showed the same trend. The pollen counts for trees, grasses, and weeds changed annually, but the changes were not significant. The annual skin positivity rates in response to pollen from grasses and weeds increased significantly over the 6 years. Among trees, the skin positivity rates in response to pollen from walnut, poplar, elm, and alder significantly increased over the 6 years. Further, there was a significant correlation between the annual rate of change in pollen count and the rate of change in skin positivity rate for oak and hop Japanese. The pollen counts and skin positivity rates should be monitored, as they have changed annually. Oak and hop Japanese, which showed a significant correlation between the annual rate of change in pollen count and the rate of change in skin positivity rate over the 6 years, may be considered the major allergens in Korea.
Westfall, J M; McGloin, J
2001-05-01
Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. To describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. Analysis of state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease. Validation of our matching algorithm. Patients reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5). Number of patients counted twice for one episode of acute MI. It is estimated that double count rates range from 10% to 15% for all states and increased over the 3 years. Moderate-sized rural counties had the highest estimated double count rates at 15% to 20%, with a few counties having estimated double count rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P < 0.05). Double counting patients has resulted in a significant overestimation in the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double counting and transfer bias should be considered when conducting and interpreting research on ischemic heart disease, particularly in rural regions.
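A transfer-matching rule of the kind described above can be sketched as follows: two discharge records are treated as one episode when pseudo-identifiers and diagnosis group match and the second admission begins within a short window of the first discharge. The field names and the one-day window are assumptions for illustration, not the study's published algorithm.

```python
from datetime import date, timedelta
from typing import NamedTuple

class Discharge(NamedTuple):
    pseudo_id: str     # e.g. a hash of DOB + sex + ZIP (assumed identifier)
    admit: date
    discharge: date
    dx_group: str      # e.g. "AMI" for ICD-9 410-414, 786.5

def is_probable_transfer(first: Discharge, second: Discharge, window_days: int = 1) -> bool:
    """Flag the second admission as a probable transfer (double count)."""
    return (
        first.pseudo_id == second.pseudo_id
        and first.dx_group == second.dx_group
        and timedelta(0) <= second.admit - first.discharge <= timedelta(days=window_days)
    )

a = Discharge("x91", date(1996, 3, 2), date(1996, 3, 3), "AMI")   # rural hospital stay
b = Discharge("x91", date(1996, 3, 3), date(1996, 3, 10), "AMI")  # urban hospital stay
print(is_probable_transfer(a, b))   # True -> count as one episode, not two
```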
Quantitative NDA of isotopic neutron sources.
Lakosi, L; Nguyen, C T; Bagi, J
2005-01-01
A non-destructive method for assaying transuranic neutron sources was developed, using a combination of gamma-spectrometry and a neutron correlation technique. Source strength or actinide content of a number of PuBe, AmBe, AmLi, (244)Cm, and (252)Cf sources was assessed, both as a safety issue and with respect to combating illicit trafficking. A passive neutron coincidence collar was designed with (3)He counters embedded in a polyethylene moderator (lined with Cd) surrounding the sources to be measured. The electronics consist of independent channels of pulse amplifiers and discriminators as well as a shift register for coincidence counting. The neutron output of the sources was determined by gross neutron counting, and the actinide content was determined by adopting specific spontaneous fission and (alpha,n) reaction yields of individual isotopes from the literature. Identification of an unknown source type and constituents can be made by gamma-spectrometry. The coincidences are due to spontaneous fission in the case of Cm and Cf sources, while they are mostly due to neutron-induced fission of the Pu isotopes (i.e. self-multiplication) and the (9)Be(n,2n)(8)Be reaction in Be-containing sources. Recording the coincidence rate offers a potential route to calibration, exploiting a correlation between the Pu amount and the coincidence-to-total ratio. The method and the equipment were tested in an in-field demonstration exercise, with participation of national public authorities and foreign observers. Seizure of the illicit transport of a PuBe source was simulated in the exercise, and the Pu content of the source was determined. It is expected that the method could be used for identification and assay of illicit, found, or undocumented neutron sources.
Shrestha, Karuna; Shrestha, Pramod; Walsh, Kerry B; Harrower, Keith M; Midmore, David J
2011-09-01
Microbially enhanced compost extracts ('compost tea') are being used in commercial agriculture as a source of nutrients and for their perceived benefit to soil microbiology, including plant disease suppression. Rumen content material is a waste of cattle abattoirs, which can be value-added by conversion to compost and 'compost tea'. A system for compost extraction and microbial enhancement was characterised. Molasses amendment increased bacterial count 10-fold, while amendment based on molasses and 'fish and kelp hydrolysate' increased fungal count 10-fold. Compost extract incubated at 1:10 (w/v) dilution showed the highest microbial load, activity and humic/fulvic acid content compared to other dilutions. Aeration increased the extraction efficiency of soluble metabolites, and microbial growth rate, as did extraction of compost without the use of a constraining bag. A protocol of 1:10 dilution and aerated incubation with kelp and molasses amendments is recommended to optimise microbial load and fungal-to-bacterial ratio for this inoculum source. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hill, Andrew; Kelly, Eliza; Horswill, Mark S; Watson, Marcus O
2018-02-01
To investigate whether awareness of manual respiratory rate monitoring affects respiratory rate in adults, and whether count duration influences respiratory rate estimates. Nursing textbooks typically suggest that the patient should ideally be unaware of respiratory rate observations; however, there is little published evidence of the effect of awareness on respiratory rate, and none specific to manual measurement. In addition, recommendations about the length of the respiratory rate count vary from text to text, and the relevant empirical evidence is scant, inconsistent and subject to substantial methodological limitations. Experimental study with awareness of respiration monitoring (aware, unaware; randomised between-subjects) and count duration (60 s, 30 s, 15 s; within-subjects) as the independent variables. Respiratory rate (breaths/minute) was the dependent variable. Eighty-two adult volunteers were randomly assigned to aware and unaware conditions. In the baseline block, no live monitoring occurred. In the subsequent experimental block, the researcher informed aware participants that their respiratory rate would be counted, and did so. Respirations were captured throughout via video recording, and counted by blind raters viewing 60-, 30- and 15-s extracts. The data were collected in 2015. There was no baseline difference between the groups. During the experimental block, the respiratory rates of participants in the aware condition were an average of 2.13 breaths/minute lower compared to unaware participants. Reducing the count duration from 1 min to 15 s caused respiratory rate to be underestimated by an average of 2.19 breaths/minute (and 0.95 breaths/minute for 30-s counts). The awareness effect did not depend on count duration. Awareness of monitoring appears to reduce respiratory rate, and shorter monitoring durations yield systematically lower respiratory rate estimates. When interpreting and acting upon respiratory rate data, clinicians should consider the potential influence of these factors, including cumulative effects. © 2017 The Authors. Journal of Clinical Nursing Published by John Wiley & Sons Ltd.
Microbial Source Module (MSM): Documenting the Science ...
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed and produced by the MSM which is based on the HSPF (Bicknell et al., 1997) Bacterial Indicator Tool (EPA, 2013b, 2013c). Non-point sources include numbers, locations, and shedding rates of domestic agricultural animals (dairy and beef cows, swine, poultry, etc.) and wildlife (deer, duck, raccoon, etc.). Monthly maximum microbial storage and accumulation rates on the land surface, adjusted for die-off, are computed over an entire season for four land-use types (cropland, pasture, forest, and urbanized/mixed-use) for each subwatershed. Monthly point source microbial loadings to instream locations (i.e., stream segments that drain individual sub-watersheds) are combined and determined for septic systems, direct instream shedding by cattle, and POTWs/WWTPs (Publicly Owned Treatment Works/Wastewater Treatment Plants). The MSM functions within a larger modeling system that characterizes human-health risk resulting from ingestion of water contaminated with pathogens. The loading estimates produced by the MSM are input to the HSPF model that simulates flow and microbial fate/transport within a watershed. Microbial counts within recreational waters are then input to the MRA-IT model (Soller et
The 2-24 μm source counts from the AKARI North Ecliptic Pole survey
NASA Astrophysics Data System (ADS)
Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.
2014-11-01
We present herein galaxy number counts in the nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from the NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg2, respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and the difference between observed and intrinsic magnitudes were corrected by Monte Carlo simulation. Stellar counts were subtracted by using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit: 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with the Euclidean universe, while on the faint side, the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the waveband.
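The "flat on the bright side" statement refers to the Euclidean expectation dN/dS ∝ S^(−2.5), under which S^2.5·dN/dS is constant. The short sketch below draws synthetic fluxes from that law and checks the normalized counts; the numbers are invented and are not the AKARI catalogues.

```python
import numpy as np

rng = np.random.default_rng(3)

# Draw fluxes from dN/dS ~ S^-2.5 above an assumed completeness limit, then
# check that the Euclidean-normalized differential counts come out flat.
s_min = 0.05                                                    # mJy (assumed limit)
fluxes = s_min * (1.0 - rng.random(20_000)) ** (-1.0 / 1.5)     # inverse-CDF sampling

bins = np.logspace(np.log10(s_min), 0.0, 10)                    # 0.05 to 1 mJy
counts, edges = np.histogram(fluxes, bins=bins)
s_mid = np.sqrt(edges[:-1] * edges[1:])
dn_ds = counts / np.diff(edges)
normalized = s_mid ** 2.5 * dn_ds
print(np.round(normalized / normalized[0], 2))                  # ~1 in every bin
```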
Statistical modeling of dental unit water bacterial test kit performance.
Cohen, Mark E; Harte, Jennifer A; Stone, Mark E; O'Connor, Karen H; Coen, Michael L; Cullum, Malford E
2007-01-01
While it is important to monitor dental water quality, it is unclear whether in-office test kits provide bacterial counts comparable to the gold standard method (R2A). Studies were conducted on specimens with known bacterial concentrations, and from dental units, to evaluate test kit accuracy across a range of bacterial types and loads. Colony forming units (CFU) were counted for samples from each source, using R2A and two types of test kits, and conformity to Poisson distribution expectations was evaluated. Poisson regression was used to test for effects of source and device, and to estimate rate ratios for kits relative to R2A. For all devices, distributions were Poisson for low CFU/mL when only beige-pigmented bacteria were considered. For higher counts, R2A remained Poisson, but kits exhibited over-dispersion. Both kits undercounted relative to R2A, but the degree of undercounting was reasonably stable. Kits did not grow pink-pigmented bacteria from dental-unit water identified as Methylobacterium rhodesianum. Only one of the test kits provided results with adequate reliability at higher bacterial concentrations. Undercount bias could be estimated for this device and used to adjust test kit results. Insensitivity to Methylobacterium spp. is problematic.
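A Poisson regression of the general kind described above can be set up with a log link and a device term whose exponentiated coefficient is the kit-versus-R2A rate ratio. The sketch below simulates its own data, including an assumed 0.6 undercount factor, purely for illustration; it is not the study's dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 120
source_mean = rng.choice([20.0, 80.0, 200.0], size=n)       # specimen-level true CFU/mL
device = rng.choice(["R2A", "kit"], size=n)
mu = source_mean * np.where(device == "kit", 0.6, 1.0)      # assumed kit undercounting
df = pd.DataFrame({"cfu": rng.poisson(mu), "device": device, "source": source_mean})

# Poisson GLM: exponentiated device coefficient = kit-vs-R2A rate ratio (~0.6 here).
fit = smf.glm("cfu ~ C(device, Treatment('R2A')) + np.log(source)",
              data=df, family=sm.families.Poisson()).fit()
print(np.exp(fit.params))
```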
Bayesian approach for counting experiment statistics applied to a neutrino point source analysis
NASA Astrophysics Data System (ADS)
Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.
2013-12-01
In this paper we present a model-independent analysis method following Bayesian statistics to analyse data from a generic counting experiment and apply it to the search for neutrinos from point sources. We discuss a test statistic defined following a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation directly compares with a frequentist approach and is robust for low-count observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via the concept of prior information, without the need for the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
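For a single counting channel, the core of such a Bayesian limit is elementary: with observed count n, known background expectation b and a flat prior on the signal rate s ≥ 0, the posterior is proportional to the Poisson likelihood, and the credible upper limit is read off its cumulative distribution. The sketch below uses invented numbers and a flat prior; it is not the IceCube analysis chain.

```python
import numpy as np
from scipy.stats import poisson

n_obs = 4          # observed counts (assumed)
b = 2.5            # expected background counts (assumed)
s_grid = np.linspace(0.0, 30.0, 3001)

posterior = poisson.pmf(n_obs, s_grid + b)     # Poisson likelihood x flat prior on s >= 0
cdf = np.cumsum(posterior)
cdf /= cdf[-1]                                  # normalize to a proper CDF on the grid

s_90 = s_grid[np.searchsorted(cdf, 0.90)]       # 90% credible upper limit on the signal
print(f"90% upper limit on signal counts: {s_90:.2f}")
```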
NASA Astrophysics Data System (ADS)
Hirayama, Hideo; Kondo, Kenjiro; Suzuki, Seishiro; Hamamoto, Shimpei; Iwanaga, Kohei
2017-09-01
Pulse height distributions were measured using a LaBr3 detector set in a 1 cm lead collimator to investigate the main radiation sources on the operating floor of Fukushima Daiichi Nuclear Power Station Unit 4. It was confirmed that the main radiation source above the reactor well was Co-60 from the activated steam dryer in the DS pool (Dryer-Separator pool), and that the main source at the standby area was Cs-134 and Cs-137 from contaminated buildings and debris at the lower floor. The full-energy-peak count rate of Co-60 was reduced to about 1/3 by a 12 mm lead sheet placed on the floor of the fuel handling machine.
VizieR Online Data Catalog: ROSAT detected quasars. I. (Brinkmann+ 1997)
NASA Astrophysics Data System (ADS)
Brinkmann, W.; Yuan, W.
1996-09-01
We have compiled a sample of all quasars with measured radio emission from the Veron-Cetty - Veron catalogue (1993, VV93
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass ( 240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste however may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory—which will be indicated as Time Interval Analysis (TIA)—is complementary to Time Correlation Analysis (TCA) theories which were developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
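The one-dimensional Rossi-alpha distribution referred to above is simply the histogram of time differences from each trigger pulse to the pulses that follow it within a fixed window; accidental coincidences give a flat baseline, while real coincidences produce an exponential excess at short lags with the detector die-away time constant. The pulse-train simulation below uses assumed rates and die-away time purely for illustration, not the parameters of the presented TIA system.

```python
import numpy as np

rng = np.random.default_rng(0)

rate_random = 2_000.0   # uncorrelated background pulse rate [1/s] (assumed)
rate_pairs = 300.0      # rate of correlated pulse pairs [1/s] (assumed)
die_away = 50e-6        # assumed detector die-away time [s]
t_total = 10.0          # simulated measurement time [s]
window = 512e-6         # Rossi-alpha window [s]

singles = rng.uniform(0.0, t_total, rng.poisson(rate_random * t_total))
parents = rng.uniform(0.0, t_total, rng.poisson(rate_pairs * t_total))
partners = parents + rng.exponential(die_away, parents.size)   # correlated partner pulses
pulses = np.sort(np.concatenate([singles, parents, partners]))

# Histogram time differences from every pulse to all later pulses inside the window.
bins = np.linspace(0.0, window, 65)
diffs = []
for i, t0 in enumerate(pulses):
    j = np.searchsorted(pulses, t0 + window, side="right")
    diffs.append(pulses[i + 1:j] - t0)
rossi_alpha, _ = np.histogram(np.concatenate(diffs), bins=bins)
# Flat level at large lags estimates accidentals; the exponential excess at
# short lags (decay constant ~ die_away) carries the real-coincidence signal.
```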
Hill, B.R.; Hill, J.R.; Nolan, K.M.
1988-01-01
Data were collected during a 4-yr study of sediment sources in four drainage basins tributary to Lake Tahoe, California-Nevada. The study areas include the Blackwood, General, Edgewood, and Logan House Creek basins. Data include changes in bank and bed positions at channel cross sections; results of stream-channel mapping; analyses of bank and bed material samples; tabulations of bed material point counts; measured rates of hillslope erosion; dimensions of gullies; suspended-sediment data collected during synoptic snowmelt sampling; and physiographic data for the four study basins. (USGS)
Hill, B.R.; Hill, J.R.; Nolan, K.M.
1990-01-01
Data were collected during a 5-year study of sediment sources in four drainage basins tributary to Lake Tahoe, California-Nevada. The study areas include the Blackwood Creek, General Creek, Edgewood Creek, and Logan House Creek basins. Data include changes in bank and bed positions at channel cross sections; results of stream-channel inventories; analyses of bank and bed material samples; tabulations of bed-material pebble counts; measured rates of hillslope erosion; dimensions of gullies; suspended-sediment data collected during synoptic snowmelt sampling; and physiographic data for the four study basins. (USGS)
Ultrabright femtosecond source of biphotons based on a spatial mode inverter.
Jarutis, Vygandas; Juodkazis, Saulius; Mizeikis, Vygantas; Sasaki, Keiji; Misawa, Hiroaki
2005-02-01
A method of enhancing the efficiency of entangled biphoton sources based on a type II femtosecond spontaneous parametric downconversion (SPDC) process is proposed and implemented experimentally. Enhancement is obtained by mode inversion of one of the SPDC output beams, which allows the beams to overlap completely, thus maximizing the number of SPDC photon pairs with optimum spatiotemporal overlap. By use of this method, biphoton count rates as high as 16 kHz from a single 0.5-mm-long beta-barium borate crystal pumped by second-harmonic radiation from a Ti:sapphire laser were obtained.
Radio Sources Toward Galaxy Clusters at 30 GHz
NASA Technical Reports Server (NTRS)
Coble, K.; Bonamente, M.; Carlstrom, J. E.; Dawson, K.; Hasler, N.; Holzapfel, W.; Joy, M.; LaRoque, S.; Marrone, D. P.; Reese, E. D.
2007-01-01
Extra-galactic radio sources are a significant contaminant in cosmic microwave background and Sunyaev-Zeldovich effect experiments. Deep interferometric observations with the BIMA and OVRO arrays are used to characterize the spatial, spectral, and flux distributions of radio sources toward massive galaxy clusters at 28.5 GHz. We compute counts of mJy source fluxes from 89 fields centered on known massive galaxy clusters and 8 non-cluster fields. We find that source counts in the inner regions of the cluster fields (within 0.5 arcmin of the cluster center) are a factor of 8.9 (+4.2 to -3.8) times higher than counts in the outer regions of the cluster fields (radius greater than 0.5 arcmin). Counts in the outer regions of the cluster fields are in turn a factor of 3.3 (+4.1 to -1.8) greater than those in the non-cluster fields. Counts in the non-cluster fields are consistent with extrapolations from the results of other surveys. We compute spectral indices of mJy sources in cluster fields between 1.4 and 28.5 GHz and find a mean spectral index of α = 0.66 with an rms dispersion of 0.36, where flux S varies as ν^(-α). The distribution is skewed, with a median spectral index of 0.72 and 25th and 75th percentiles of 0.51 and 0.92, respectively. This is steeper than the spectral indices of stronger field sources measured by other surveys.
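With the convention S ∝ ν^(−α) used above, the two-point spectral index between frequencies ν₁ and ν₂ is α = −ln(S₂/S₁)/ln(ν₂/ν₁). A minimal sketch with invented fluxes:

```python
import numpy as np

def spectral_index(s1_mjy: float, nu1_ghz: float, s2_mjy: float, nu2_ghz: float) -> float:
    """alpha such that S ~ nu^-alpha, from fluxes at two frequencies."""
    return -np.log(s2_mjy / s1_mjy) / np.log(nu2_ghz / nu1_ghz)

# A hypothetical source with 10 mJy at 1.4 GHz and 1.5 mJy at 28.5 GHz:
print(round(spectral_index(10.0, 1.4, 1.5, 28.5), 2))   # ~0.63, a fairly steep spectrum
```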
Study of a nTHGEM-based thermal neutron detector
NASA Astrophysics Data System (ADS)
Li, Ke; Zhou, Jian-Rong; Wang, Xiao-Dong; Xiong, Tao; Zhang, Ying; Xie, Yu-Guang; Zhou, Liang; Xu, Hong; Yang, Gui-An; Wang, Yan-Feng; Wang, Yan; Wu, Jin-Jie; Sun, Zhi-Jia; Hu, Bi-Tao
2016-07-01
With new generation neutron sources, traditional neutron detectors cannot satisfy the demands of the applications, especially under high flux. Furthermore, facing the global crisis in 3He gas supply, research on new types of neutron detector as an alternative to 3He is a research hotspot in the field of particle detection. GEM (Gaseous Electron Multiplier) neutron detectors have high counting rate, good spatial and time resolution, and could be one future direction of the development of neutron detectors. In this paper, the physical process of neutron detection is simulated with Geant4 code, studying the relations between thermal conversion efficiency, boron thickness and number of boron layers. Due to the special characteristics of neutron detection, we have developed a novel type of special ceramic nTHGEM (neutron THick GEM) for neutron detection. The performance of the nTHGEM working in different Ar/CO2 mixtures is presented, including measurements of the gain and the count rate plateau using a copper target X-ray source. A detector with a single nTHGEM has been tested for 2-D imaging using a 252Cf neutron source. The key parameters of the performance of the nTHGEM detector have been obtained, providing necessary experimental data as a reference for further research on this detector. Supported by National Natural Science Foundation of China (11127508, 11175199, 11205253, 11405191), Key Laboratory of Neutron Physics, CAEP (2013DB06, 2013BB04) and CAS (YZ201512)
Harris, Matthew; Marti, Joachim; Watt, Hillary; Bhatti, Yasser; Macinko, James; Darzi, Ara W
2017-11-01
Unconscious bias may interfere with the interpretation of research from some settings, particularly from lower-income countries. Most studies of this phenomenon have relied on indirect outcomes such as article citation counts and publication rates; few have addressed or proven the effect of unconscious bias in evidence interpretation. In this randomized, blinded crossover experiment in a sample of 347 English clinicians, we demonstrate that changing the source of a research abstract from a low- to a high-income country significantly improves how it is viewed, all else being equal. Using fixed-effects models, we measured differences in ratings for strength of evidence, relevance, and likelihood of referral to a peer. Having a high-income-country source had a significant overall impact on respondents' ratings of relevance and recommendation to a peer. Unconscious bias can have far-reaching implications for the diffusion of knowledge and innovations from low-income countries.
Evaluation of Pulse Counting for the Mars Organic Mass Analyzer (MOMA) Ion Trap Detection Scheme
NASA Technical Reports Server (NTRS)
Van Amerom, Friso H.; Short, Tim; Brinckerhoff, William; Mahaffy, Paul; Kleyner, Igor; Cotter, Robert J.; Pinnick, Veronica; Hoffman, Lars; Danell, Ryan M.; Lyness, Eric I.
2011-01-01
The Mars Organic Mass Analyzer is being developed at Goddard Space Flight Center to identify organics and possible biological compounds on Mars. In the process of characterizing mass spectrometer size, weight, and power consumption, the use of pulse counting was considered for ion detection. Pulse counting has advantages over analog-mode amplification of the electron multiplier signal. Some advantages are reduced size of electronic components, low power consumption, ability to remotely characterize detector performance, and avoidance of analog circuit noise. The use of pulse counting as a detection method with ion trap instruments is relatively rare. However, with the recent development of high performance electrical components, this detection method is quite suitable and can demonstrate significant advantages over analog methods. Methods A prototype quadrupole ion trap mass spectrometer with an internal electron ionization source was used as a test setup to develop and evaluate the pulse-counting method. The anode signal from the electron multiplier was preamplified. The amplified signal was fed into a fast comparator for pulse-level discrimination. The output of the comparator was fed directly into a Xilinx FPGA development board. Verilog HDL software was written to bin the counts at user-selectable intervals. This system was able to count pulses at rates in the GHz range. The stored ion count number per bin was transferred to custom ion trap control software. Pulse-counting mass spectra were compared with mass spectra obtained using the standard analog-mode ion detection. Preliminary Data Preliminary mass spectra have been obtained for both analog mode and pulse-counting mode under several sets of instrument operating conditions. Comparison of the spectra revealed better peak shapes for pulse-counting mode. Noise levels are as good as, or better than, analog-mode detection noise levels. To artificially force ion pile-up conditions, the ion trap was overfilled and ions were ejected at very high scan rates. Pile-up of ions was not significant for the ion trap under investigation even though the ions are ejected in so-called 'ion-micro packets'. It was found that pulse counting mode had higher dynamic range than analog mode, and that the first amplification stage in analog mode can distort mass peaks. The inherent speed of the pulse counting method also proved to be beneficial to ion trap operation and ion ejection characterization. Very high scan rates were possible with pulse counting since the digital circuitry response time is so much smaller than with the analog method. Careful investigation of the pulse-counting data also allowed observation of the applied resonant ejection frequency during mass analysis. Ejection of ion micro packets could be clearly observed in the binned data. A second oscillation frequency, much lower than the secular frequency, was also observed. Such an effect was earlier attributed to the oscillation of the total plasma cloud in the ion trap. While the components used to implement pulse counting are quite advanced, due to their prevalence in consumer electronics, the cost of this detection system is no more than that of an analog mode system. Total pulse-counting detection system electronics cost is under $250
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, A K; Koniczek, M; Antonuk, L E
Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit simulations and prototyping is expected. This work was partially supported by NIH grant R01-EB000558.
Optical Communications With A Geiger Mode APD Array
2016-02-09
spurious fires from numerous sources, including crosstalk from other detectors in the same array. Additionally, after a successful detection, the ... be combined into arrays with large numbers of detectors, allowing for scaling of dynamic range with relatively little overhead on space and power ... overall higher rate of dark counts than a single detector, this is more than compensated for by the extra detectors. A sufficiently large APD array could
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
A Multi-Contact, Low Capacitance HPGe Detector for High Rate Gamma Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Christopher
2014-12-04
The detection, identification and non-destructive assay of special nuclear materials and nuclear fission by-products are critically important activities in support of nuclear non-proliferation programs. Both national and international nuclear safeguard agencies recognize that current accounting methods for spent nuclear fuel are inadequate from a safeguards perspective. Radiation detection and analysis by gamma-ray spectroscopy is a key tool in this field, but no instrument exists that can deliver the required performance (energy resolution and detection sensitivity) in the presence of very high background count rates encountered in the nuclear safeguards arena. The work of this project addresses this critical need by developing a unique gamma-ray detector based on high purity germanium that has the previously unachievable property of operating in the 1 million counts-per-second range while achieving state-of-the-art energy resolution necessary to identify and analyze the isotopes of interest. The technical approach was to design and fabricate a germanium detector with multiple segmented electrodes coupled to multi-channel high rate spectroscopy electronics. Dividing the germanium detector’s signal electrode into smaller sections offers two advantages: firstly, the energy resolution of the detector is potentially improved, and secondly, the detector is able to operate at higher count rates. The design challenges included the following: determining the optimum electrode configuration to meet the stringent energy resolution and count rate requirements; determining the electronic noise (and therefore energy resolution) of the completed system after multiple signals are recombined; designing the germanium crystal housing and vacuum cryostat; and customizing electronics to perform the signal recombination function in real time. In this Phase I work, commercial off-the-shelf electrostatic modeling software was used to develop the segmented germanium crystal geometry, which underwent several iterations before an optimal electrode configuration was found. The model was tested and validated against real-world measurements with existing germanium detectors. Extensive modeling of electronic noise was conducted using established formulae, and real-world measurements were performed on candidate front-end electronic components. This initial work proved the feasibility of the design with respect to expected high count rate and energy resolution performance. Phase I also delivered the mechanical design of the detector housing and vacuum cryostat to be built in Phase II. Finally, a Monte Carlo simulation was created to show the response of the complete design to a Cs-137 source. This development presents a significant advance for nuclear safeguards instrumentation with increased speed and accuracy of detection and identification of special nuclear materials. Other significant applications are foreseen for a gamma-ray detector that delivers high energy resolution (1 keV FWHM noise) at high count rate (1 Mcps), especially in the areas of physics research and materials analysis.
Single Photon Counting Detectors for Low Light Level Imaging Applications
NASA Astrophysics Data System (ADS)
Kolb, Kimberly
2015-10-01
This dissertation presents the current state-of-the-art of semiconductor-based photon counting detector technologies. HgCdTe linear-mode avalanche photodiodes (LM-APDs), silicon Geiger-mode avalanche photodiodes (GM-APDs), and electron-multiplying CCDs (EMCCDs) are compared via their present and future performance in various astronomy applications. LM-APDs are studied in theory, based on work done at the University of Hawaii. EMCCDs are studied in theory and experimentally, with a device at NASA's Jet Propulsion Lab. The emphasis of the research is on GM-APD imaging arrays, developed at MIT Lincoln Laboratory and tested at the RIT Center for Detectors. The GM-APD research includes a theoretical analysis of SNR and various performance metrics, including dark count rate, afterpulsing, photon detection efficiency, and intrapixel sensitivity. The effects of radiation damage on the GM-APD were also characterized by introducing a cumulative dose of 50 krad(Si) via 60 MeV protons. Extensive development of Monte Carlo simulations and practical observation simulations was completed, including simulated astronomical imaging and adaptive optics wavefront sensing. Based on theoretical models and experimental testing, both the current state-of-the-art performance and projected future performance of each detector are compared for various applications. LM-APD performance is currently not competitive with other photon counting technologies, and are left out of the application-based comparisons. In the current state-of-the-art, EMCCDs in photon counting mode out-perform GM-APDs for long exposure scenarios, though GM-APDs are better for short exposure scenarios (fast readout) due to clock-induced-charge (CIC) in EMCCDs. In the long term, small improvements in GM-APD dark current will make them superior in both long and short exposure scenarios for extremely low flux. The efficiency of GM-APDs will likely always be less than EMCCDs, however, which is particularly disadvantageous for moderate to high flux rates where dark noise and CIC are insignificant noise sources. Research into decreasing the dark count rate of GM-APDs will lead to development of imaging arrays that are competitive for low light level imaging and spectroscopy applications in the near future.
Photon Counting Detectors for the 1.0 - 2.0 Micron Wavelength Range
NASA Technical Reports Server (NTRS)
Krainak, Michael A.
2004-01-01
We describe results on the development of greater than 200 micron diameter, single-element photon-counting detectors for the 1-2 micron wavelength range. The technical goals include quantum efficiency in the range 10-70%; detector diameter greater than 200 microns; dark count rate below 100 kilo counts-per-second (cps), and maximum count rate above 10 Mcps.
Walker, R.S.; Novare, A.J.; Nichols, J.D.
2000-01-01
Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step for any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that the p are the same for all times or places being compared (p= pi). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption of p= pi, or to incorporate the estimation of p into the study design.
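The point about the expected value of the count statistic (E[C] = N·p) can be made concrete with a short simulation: equal abundances with unequal detection probabilities produce raw counts that differ two-fold, while calibrated estimates (counts divided by p) agree. The numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two habitats with identical true abundance N but different detection probability p.
N_a, N_b = 400, 400          # true abundance (equal by construction)
p_a, p_b = 0.30, 0.15        # detection probabilities differ between habitats (assumed)

counts_a = rng.binomial(N_a, p_a, size=1000)
counts_b = rng.binomial(N_b, p_b, size=1000)
print(counts_a.mean(), counts_b.mean())               # ~120 vs ~60: raw counts imply a
                                                      # spurious two-fold abundance difference
print(counts_a.mean() / p_a, counts_b.mean() / p_b)   # calibrated estimates both ~400
```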
Inconsistencies in authoritative national paediatric workforce data sources.
Allen, Amy R; Doherty, Richard; Hilton, Andrew M; Freed, Gary L
2017-12-01
Objective National health workforce data are used in workforce projections, policy and planning. If data to measure the current effective clinical medical workforce are not consistent, accurate and reliable, policy options pursued may not be aligned with Australia's actual needs. The aim of the present study was to identify any inconsistencies and contradictions in the numerical count of paediatric specialists in Australia, and discuss issues related to the accuracy of collection and analysis of medical workforce data. Methods This study compared respected national data sources regarding the number of medical practitioners in eight fields of paediatric speciality medical (non-surgical) practice. It also counted the number of doctors listed on the websites of speciality paediatric hospitals and clinics as practicing in these eight fields. Results Counts of medical practitioners varied markedly for all specialties across the data sources examined. In some fields examined, the range of variability across data sources exceeded 450%. Conclusions The national datasets currently available from federal and speciality sources do not provide consistent or reliable counts of the number of medical practitioners. The lack of an adequate baseline for the workforce prevents accurate predictions of future needs to provide the best possible care of children in Australia. What is known about the topic? Various national data sources contain counts of the number of medical practitioners in Australia. These data are used in health workforce projections, policy and planning. What does this paper add? The present study found that the current data sources do not provide consistent or reliable counts of the number of practitioners in eight selected fields of paediatric speciality practice. There are several potential issues in the way workforce data are collected or analysed that cause the variation between sources to occur. What are the implications for practitioners? Without accurate data on which to base decision making, policy options may not be aligned with the actual needs of children with various medical needs, in various geographic areas or the nation as a whole.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA]; Snyderman, Neal J [Berkeley, CA]; Rowland, Mark S [Alamo, CA]
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
500-514 N. Peshtigo Ct, May 2018, Lindsay Light Radiological Survey
Maximum gamma count rate for each lift was recorded on the attached Radiation Survey Forms. Count rates in the excavation ranged from 1,800 cpm - 5,000 cpm. No count rates were found at any time that exceeded the instrument specific threshold limits.
550 E. Illinois, May 2018, Lindsay Light Radiological Survey
Maximum gamma count rate for each lift was recorded on the attached Radiation Survey Forms. Count rates in the excavation ranged from 1,250 cpm to 4,880 cpm. No count rates were found at any time that exceeded the instrument specific threshold limits.
Quantitative non-destructive assay of PuBe neutron sources
NASA Astrophysics Data System (ADS)
Lakosi, László; Bagi, János; Nguyen, Cong Tam
2006-02-01
PuBe neutron sources were assayed, using a combination of high resolution γ-spectrometry (HRGS) and neutron correlation technique. In a previous publication [J. Bagi, C. Tam Nguyen, L. Lakosi, Nucl. Instr. and Meth. B 222 (2004) 242] a passive neutron well-counter was reported with 3He tubes embedded in a polyamide (TERRAMID) moderator (lined inside with Cd) surrounding the sources to be measured. Gross and coincidence neutron counting was performed, and the Pu content of the sources was determined from isotope analysis and by adopting specific (α, n) reaction yields of the Pu isotopes and 241Am in Be, based on the supplier's information and literature data. The method was further developed and refined. The evaluation algorithm was worked out more precisely. The contribution of secondary (correlated) neutrons to the total neutron output was derived from the coincidence (doubles) count rate and taken into account in assessing the Pu content. A new evaluation of former results was performed. Assay was extended to other PuBe sources, and new results were added. In order to attain higher detection efficiency, a more efficient moderator was also applied, with and without Cd shielding around the assay chamber. Calibration seems possible using neutron measurements only (without γ-spectrometry), based on a correlation between the Pu amount and the coincidence-to-total ratio. It is expected that the method could be used for Pu accountancy and safeguards verification as well as identification and assay of seized, found, or undocumented PuBe neutron sources.
A Calibration of NICMOS Camera 2 for Low Count Rates
NASA Astrophysics Data System (ADS)
Rubin, D.; Aldering, G.; Amanullah, R.; Barbary, K.; Dawson, K. S.; Deustua, S.; Faccioli, L.; Fadeyev, V.; Fakhouri, H. K.; Fruchter, A. S.; Gladders, M. D.; de Jong, R. S.; Koekemoer, A.; Krechmer, E.; Lidman, C.; Meyers, J.; Nordin, J.; Perlmutter, S.; Ripoche, P.; Schlegel, D. J.; Spadafora, A.; Suzuki, N.
2015-05-01
NICMOS 2 observations are crucial for constraining distances to most of the existing sample of z > 1 SNe Ia. Unlike conventional calibration programs, these observations involve long exposure times and low count rates. Reciprocity failure is known to exist in HgCdTe devices and a correction for this effect has already been implemented for high and medium count rates. However, observations at faint count rates rely on extrapolations. Here instead, we provide a new zero-point calibration directly applicable to faint sources. This is obtained via inter-calibration of NIC2 F110W/F160W with the Wide Field Camera 3 (WFC3) in the low count-rate regime using z ∼ 1 elliptical galaxies as tertiary calibrators. These objects have relatively simple near-IR spectral energy distributions, uniform colors, and their extended nature gives a superior signal-to-noise ratio at the same count rate than stars would. The use of extended objects also allows greater tolerances on point-spread function profiles. We find space telescope magnitude zero points (after the installation of the NICMOS cooling system, NCS) of 25.296 ± 0.022 for F110W and 25.803 ± 0.023 for F160W, both in agreement with the calibration extrapolated from count rates ≳1000 times larger (25.262 and 25.799). Before the installation of the NCS, we find 24.843 ± 0.025 for F110W and 25.498 ± 0.021 for F160W, also in agreement with the high-count-rate calibration (24.815 and 25.470). We also check the standard bandpasses of WFC3 and NICMOS 2 using a range of stars and galaxies at different colors and find mild tension for WFC3, limiting the accuracy of the zero points. To avoid human bias, our cross-calibration was “blinded” in that the fitted zero-point differences were hidden until the analysis was finalized. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555, under programs SM2/NIC-7049, SM2/NIC-7152, CAL/NIC-7607, CAL/NIC-7691, CAL/NIC-7693, GO-7887, CAL/NIC-7902, CAL/NIC-7904, GO/DD-7941, SM3/NIC-8983, SM3/NIC-8986, GTO/ACS-9290, ENG/NIC-9324, CAL/NIC-9325, GO-9352, GO-9375, SNAP-9485, CAL/NIC-9639, GO-9717, GO-9834, GO-9856, CAL/NIC-9995, CAL/NIC-9997, GO-10189, GO-10258, CAL/NIC-10381, CAL/NIC-10454, GO-10496, CAL/NIC-10725, CAL/NIC-10726, GO-10886, CAL/NIC-11060, CAL/NIC-11061, GO-11135, GO-11143, GO-11202, CAL/NIC-11319, GO/DD-11359, SM4/WFC3-11439, SM4/WFC3-11451, GO-11557, GO-11591, GO-11600, GO/DD-11799, CAL/WFC3-11921, CAL/WFC3-11926, GO/DD-12051, GO-12061, GO-12062, GO-12177, CAL/WFC3-12333, CAL/WFC3-12334, CAL/WFC3-12341, GO-12443, GO-12444, GO-12445, CAL/WFC3-12698, CAL/WFC3-12699, GO-12874, CAL/WFC3-13088, and CAL/WFC3-13089.
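A photometric zero point of this kind converts a measured count rate into a magnitude via m = -2.5 log10(count rate) + ZP. The sketch below uses the post-NCS zero points quoted above to show how small the magnitude offset between the new low-count-rate calibration and the extrapolated high-count-rate values is; the 0.02 counts/s source rate is a made-up example, and the exact NICMOS count-rate unit conventions are not taken from the abstract.

```python
import math

def magnitude(count_rate, zero_point):
    """Magnitude from a measured count rate, given a photometric zero point."""
    return -2.5 * math.log10(count_rate) + zero_point

# Post-NCS zero points quoted in the abstract: new low-count-rate vs extrapolated calibration.
zero_points = {"F110W": (25.296, 25.262), "F160W": (25.803, 25.799)}

rate = 0.02  # hypothetical faint-source count rate
for band, (zp_low, zp_high) in zero_points.items():
    m_low, m_high = magnitude(rate, zp_low), magnitude(rate, zp_high)
    print(f"{band}: m = {m_low:.3f} (new cal) vs {m_high:.3f} (extrapolated), "
          f"offset {m_low - m_high:+.3f} mag")
```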
VizieR Online Data Catalog: ROSAT HRI Pointed Observations (1RXH) (ROSAT Team, 2000)
NASA Astrophysics Data System (ADS)
ROSAT Scientific Team
2000-05-01
The hricat.dat table contains a list of sources detected by the Standard Analysis Software System (SASS) in reprocessed, public High Resolution Imager (HRI) datasets. In addition to the parameters returned by SASS (like position, count rate, signal-to-noise, etc.) each source in the table has associated with it a set of source and sequence "flags". These flags are provided by the ROSAT data centers in the US, Germany and the UK to help the user of the ROSHRI database judge the reliability of a given source. These data have been screened by ROSAT data centers in the US, Germany, and the UK as a step in the production of the Rosat Results Archive (RRA). The RRA contains extracted source and associated products with an indication of reliability for the primary parameters. (3 data files).
NASA Astrophysics Data System (ADS)
Bártová, H.; Kučera, J.; Musílek, L.; Trojek, T.
2014-11-01
In order to evaluate the age from the equivalent dose and to obtain an optimized and efficient procedure for thermoluminescence (TL) dating, it is necessary to obtain the values of both the internal and the external dose rates from dated samples and from their environment. The measurements described and compared in this paper refer to bricks from historic buildings and a fine-grain dating method. The external doses are therefore negligible, if the samples are taken from a sufficient depth in the wall. However, both the alpha dose rate and the beta and gamma dose rates must be taken into account in the internal dose. The internal dose rate to fine-grain samples is caused by the concentrations of natural radionuclides 238U, 235U, 232Th and members of their decay chains, and by 40K concentrations. Various methods can be used for determining trace concentrations of these natural radionuclides and their contributions to the dose rate. The dose rate fraction from 238U and 232Th can be calculated, e.g., from the alpha count rate, or from the concentrations of 238U and 232Th, measured by neutron activation analysis (NAA). The dose rate fraction from 40K can be calculated from the concentration of potassium measured, e.g., by X-ray fluorescence analysis (XRF) or by NAA. Alpha counting and XRF are relatively simple and are accessible for an ordinary laboratory. NAA can be considered as a more accurate method, but it is more demanding regarding time and costs, since it needs a nuclear reactor as a neutron source. A comparison of these methods allows us to decide whether the time- and cost-saving simpler techniques introduce uncertainty that is still acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aima, M; Viscariello, N; Patton, T
Purpose: The aim of this work is to propose a method to optimize radioactive source localization (RSL) for non-palpable breast cancer surgery. RSL is commonly used as a guiding technique during surgery for excision of non-palpable tumors. A collimated hand-held detector is used to localize radioactive sources implanted in tumors. Incisions made by the surgeon are based on maximum observed detector counts, and tumors are subsequently resected based on an arbitrary estimate of the counts expected at the surgical margin boundary. This work focuses on building a framework to predict detector counts expected throughout the procedure to improve surgical margins. Methods: A gamma detection system called the Neoprobe GDS was used for this work. The probe consists of a cesium zinc telluride crystal and a collimator. For this work, an I-125 Best Medical model 2301 source was used. The source was placed in three different phantoms, a PMMA, a Breast (25% glandular tissue/75% adipose tissue) and a Breast (75-25) phantom with a backscatter thickness of 6 cm. Counts detected by the probe were recorded with varying amounts of phantom thicknesses placed on top of the source. A calibration curve was generated using MATLAB based on the counts recorded for the calibration dataset acquired with the PMMA phantom. Results: The observed detector counts data used as the validation set was accurately predicted to within ±3.2%, ±6.9%, ±8.4% for the PMMA, Breast (75-25), Breast (25–75) phantom respectively. The average difference between predicted and observed counts was −0.4%, 2.4%, 1.4% with a standard deviation of 1.2 %, 1.8%, 3.4% for the PMMA, Breast (75-25), Breast (25–75) phantom respectively. Conclusion: The results of this work provide a basis for characterization of a detector used for RSL. Counts were predicted to within ±9% for three different phantoms without the application of a density correction factor.
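A calibration curve of this kind can be illustrated with a simple attenuation fit of counts versus overlying tissue thickness. The sketch below is only a schematic stand-in for the MATLAB calibration described above: the depth/count values and the single-exponential model are assumptions made for illustration, not the study's data or its exact functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

def attenuation_model(depth_cm, c0, mu):
    """Exponential attenuation of detector counts with overlying tissue depth."""
    return c0 * np.exp(-mu * depth_cm)

# Hypothetical calibration data: counts recorded with increasing phantom thickness.
depth = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])          # cm of phantom above the source
counts = np.array([9100, 7400, 4900, 3300, 2200, 1450])    # detector counts (made up)

(c0, mu), _ = curve_fit(attenuation_model, depth, counts, p0=(10000, 0.4))
print(f"fitted C0 = {c0:.0f} counts, effective mu = {mu:.3f} 1/cm")

# Predict the counts expected at a planned surgical margin depth.
margin_depth = 2.5
print(f"predicted counts at {margin_depth} cm: {attenuation_model(margin_depth, c0, mu):.0f}")
```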
Compensated count-rate circuit for radiation survey meter
Todd, R.A.
1980-05-12
A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse rate current applied to a current meter calibrated to indicate dose rate, the compensation circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector so that the current flowing through the meter is varied in accordance with both the frequency and amplitude of the reference voltage pulses applied thereto so that the count rate is compensated ideally to indicate the true count rate to within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks true count rate indicative of the radiation dose rate.
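The counting losses being compensated here are the classic dead-time losses of a G-M tube. As a rough illustration (the standard non-paralyzable dead-time model with an assumed 100 μs dead time, not the specific transfer function of the patented circuit), the relation between measured and true count rates looks like this:

```python
def true_count_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    loss_fraction = measured_rate * dead_time
    if loss_fraction >= 1.0:
        raise ValueError("measured rate saturates the detector for this dead time")
    return measured_rate / (1.0 - loss_fraction)

tau = 100e-6  # assumed G-M dead time in seconds (illustrative value)
for m in (100.0, 1000.0, 5000.0):   # measured counts per second
    print(f"measured {m:7.1f} cps -> corrected {true_count_rate(m, tau):8.1f} cps "
          f"(duty cycle {m * tau:.1%})")
```

At a 50% duty cycle (5000 cps measured with a 100 μs dead time) the true rate is twice the measured rate, which is the regime the circuit above is designed to compensate.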
Multiparameter linear least-squares fitting to Poisson data one count at a time
NASA Technical Reports Server (NTRS)
Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.
1995-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when sqrt(n_i) instead of sqrt(n̄_i) is used to approximate the uncertainties sigma_i in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (≈1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
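The equivalence noted above (Poisson LLSQ done correctly is the exact-Poisson maximum-likelihood fit) can be demonstrated with a generic two-component fit. The sketch below is not the authors' analysis system; it simply minimizes the exact Poisson negative log-likelihood for a toy line-plus-background model with very low counts per bin, using made-up component shapes and rates.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Two-component linear model: counts_i ~ Poisson( a * line_i + b * background_i )
n_bins = 200
line_shape = np.exp(-0.5 * ((np.arange(n_bins) - 100) / 3.0) ** 2)  # narrow spectral line
background_shape = np.ones(n_bins)                                   # flat background
true_rates = np.array([0.8, 0.05])                                   # weak line, low background
model_matrix = np.column_stack([line_shape, background_shape])
counts = rng.poisson(model_matrix @ true_rates)                      # most bins hold 0 or 1 counts

def neg_log_likelihood(rates):
    mu = np.clip(model_matrix @ rates, 1e-12, None)   # guard against log(0)
    return np.sum(mu - counts * np.log(mu))           # exact Poisson -log L (up to a constant)

fit = minimize(neg_log_likelihood, x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)], method="L-BFGS-B")
print("true rates:", true_rates, " fitted rates:", np.round(fit.x, 3))
```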
Sources influencing patients in their HIV medication decisions.
Meredith, K L; Jeffe, D B; Mundy, L M; Fraser, V J
2001-02-01
The authors surveyed 202 patients (54.5% male; 62.4% African American) enrolled at St. Louis HIV clinics to identify the importance of various sources of influence in their HIV medication decisions. Physicians were the most important source for 122 (60.4%) respondents, whereas prayer was most important for 24 respondents (11.9%). In multivariate tests controlling for CD4 counts, Caucasian men were more likely than Caucasian women and African Americans of both genders to select a physician as the most important source. African Americans were more likely than Caucasians to mention prayer as the most important source. Caucasians and those rating physicians as the most important source were more likely to be using antiretroviral medications. Respondents identified multiple important influences-hence the potential for conflicting messages about HIV medications. These findings have implications for health education practices and behavioral research in the medical setting.
SCUBA-2 follow-up of Herschel-SPIRE observed Planck overdensities
NASA Astrophysics Data System (ADS)
MacKenzie, Todd P.; Scott, Douglas; Bianconi, Matteo; Clements, David L.; Dole, Herve A.; Flores-Cacho, Inés; Guery, David; Kneissl, Ruediger; Lagache, Guilaine; Marleau, Francine R.; Montier, Ludovic; Nesvadba, Nicole P. H.; Pointecouteau, Etienne; Soucail, Genevieve
2017-07-01
We present SCUBA-2 follow-up of 61 candidate high-redshift Planck sources. Of these, 10 are confirmed strong gravitational lenses and comprise some of the brightest such submm sources on the observed sky, while 51 are candidate proto-cluster fields undergoing massive starburst events. With the accompanying Herschel-Spectral and Photometric Imaging Receiver observations and assuming an empirical dust temperature prior of 34^{+13}_{-9} K, we provide photometric redshift and far-IR luminosity estimates for 172 SCUBA-2-selected sources within these Planck overdensity fields. The redshift distribution of the sources peaks between redshifts of 2 and 4, with one-third of the sources having S500/S350 > 1. For the majority of the sources, we find far-IR luminosities of approximately 1013 L⊙, corresponding to star formation rates of around 1000 M⊙ yr-1. For S850 > 8 mJy sources, we show that there is up to an order of magnitude increase in star formation rate density and an increase in uncorrected number counts by a factor of 6 when compared to typical cosmological survey fields. The sources detected with SCUBA-2 account for only approximately 5 per cent of the Planck flux at 353 GHz, and thus many more fainter sources are expected in these fields.
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
Probing Majorana bound states via counting statistics of a single electron transistor
NASA Astrophysics Data System (ADS)
Li, Zeng-Zhao; Lam, Chi-Hang; You, J. Q.
2015-06-01
We propose an approach for probing Majorana bound states (MBSs) in a nanowire via counting statistics of a nearby charge detector in the form of a single-electron transistor (SET). We consider the impacts on the counting statistics by both the local coupling between the detector and an adjacent MBS at one end of a nanowire and the nonlocal coupling to the MBS at the other end. We show that the Fano factor and the skewness of the SET current are minimized for a symmetric SET configuration in the absence of the MBSs or when coupled to a fermionic state. However, the minimum points of operation are shifted appreciably in the presence of the MBSs to asymmetric SET configurations with a higher tunnel rate at the drain than at the source. This feature persists even when varying the nonlocal coupling and the pairing energy between the two MBSs. We expect that these MBS-induced shifts can be measured experimentally with available technologies and can serve as important signatures of the MBSs.
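The quantities used as MBS signatures above are low-order cumulants of the detector's counting statistics. The sketch below is a generic illustration, unrelated to the specific SET model: it shows how a Fano factor and a normalized third cumulant would be estimated from a record of counts per time window, with a Poissonian reference (both ratios equal to 1) and a sub-Poissonian example.

```python
import numpy as np

rng = np.random.default_rng(7)

def counting_cumulants(counts):
    """Mean, Fano factor, and normalized third cumulant of a counts-per-window record."""
    c1 = counts.mean()
    c2 = counts.var()
    c3 = np.mean((counts - c1) ** 3)   # third central moment equals the third cumulant
    return c1, c2 / c1, c3 / c1

# Poissonian transport: Fano factor and normalized skewness are both 1.
poisson_counts = rng.poisson(50.0, size=200_000)
print("Poissonian   :", np.round(counting_cumulants(poisson_counts), 3))

# Sub-Poissonian example (binomial partitioning suppresses fluctuations): Fano < 1.
sub_poisson_counts = rng.binomial(100, 0.5, size=200_000)
print("Sub-Poissonian:", np.round(counting_cumulants(sub_poisson_counts), 3))
```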
Shokrollahi, Borhan; Mansouri, Marouf; Amanlou, Hamid
2013-06-01
Thirty male and female (n = 15 each) newborn Markhoz goat kids (aged 7 ± 3 days) were distributed in a randomized block design in a 2 × 2 + 1 factorial arrangement: two levels of sodium selenite as a source of selenium (0.2 or 0.3 ppm Se), two levels of α-tocopherol acetate as a source of vitamin E (150 or 200 IU Vit E), and one control treatment with six repetitions per treatment (each replicate included three male and three female kids). Animals were fed daily by Se-Vit E-enriched milk (Se-Vit E treatments) or non-enriched milk (control treatment). Growth rate, hematology, and serum biological parameters were measured. The levels of serum albumin (P < 0.01), serum globulin (P < 0.05), total serum protein levels (P < 0.01), erythrocyte counts (RBC) (P < 0.001), hemoglobin (P < 0.001), hematocrit (P < 0.001), leukocyte counts (WBC) (P < 0.001), IgA (P < 0.05), IgG (P < 0.01), and IgM (P < 0.01) significantly differed among treatments, while no significant differences were observed among treatments for calcium, lymphocytes, neutrophils, average daily gain and body weight. Kids fed milk enriched with 0.3 ppm Se and 200 IU Vit E had significantly higher serum total protein, globulin, RBC, IgA, IgG, and IgM compared to control, and those fed milk enriched with 0.2 ppm Se and 200 IU Vit E had significantly higher WBC counts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpert, B. K.; Horansky, R. D.; Bennett, D. A.
Microcalorimeter sensors operated near 0.1 K can measure the energy of individual x- and gamma-ray photons with significantly more precision than conventional semiconductor technologies. Both microcalorimeter arrays and higher per pixel count rates are desirable to increase the total throughput of spectrometers based on these devices. The millisecond recovery time of gamma-ray microcalorimeters and the resulting pulse pileup are significant obstacles to high per pixel count rates. Here, we demonstrate operation of a microcalorimeter detector at elevated count rates by use of convolution filters designed to be orthogonal to the exponential tail of a preceding pulse. These filters allow operation at 50% higher count rates than conventional filters while largely preserving sensor energy resolution.
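The key construction, a pulse filter whose inner product with an exponential tail is zero, can be sketched in a few lines. The pulse shape, time constants, and normalization below are toy values, not the filters used in the work above; the point is only that projecting the exponential component out of a matched template makes the filter output insensitive to the decaying tail of a preceding pulse.

```python
import numpy as np

def tail_orthogonal_filter(template, decay_time, dt):
    """Build a filter orthogonal to an exponential tail exp(-t/decay_time)."""
    t = np.arange(len(template)) * dt
    tail = np.exp(-t / decay_time)
    filt = template - (template @ tail) / (tail @ tail) * tail   # remove the tail component
    return filt / (filt @ template)                              # unit response to a clean pulse

# Toy microcalorimeter-like pulse: fast rise, slow exponential decay (illustrative shapes).
dt, rise, fall = 1e-5, 2e-4, 2e-3
t = np.arange(1024) * dt
template = np.exp(-t / fall) - np.exp(-t / rise)

filt = tail_orthogonal_filter(template, decay_time=fall, dt=dt)
pileup_tail = 0.7 * np.exp(-(t + 3e-3) / fall)   # tail of a pulse that arrived 3 ms earlier

print("response to clean pulse    :", round(float(filt @ template), 6))           # ~1
print("response to preceding tail :", round(float(filt @ pileup_tail), 6))        # ~0
print("response to pulse + tail   :", round(float(filt @ (template + pileup_tail)), 6))
```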
On the single-photon-counting (SPC) modes of imaging using an XFEL source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui
In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.
On the single-photon-counting (SPC) modes of imaging using an XFEL source
Wang, Zhehui
2015-12-14
In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.
Gruskin, Sofia; Coull, Brent A.
2017-01-01
Background Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to “legal intervention” in accordance with the International Classification of Diseases–10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. Methods and findings We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture–recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS only, 487 reported in both lists, and an estimated 44 (95% CI: 31, 62) not reported in either source. The NVSS documented 44.9% (95% CI: 44.2%, 45.4%) of the total number of deaths, and The Counted documented 93.1% (95% CI: 91.7%, 94.2%). In a multivariable mixed-effects logistic model that controlled for all individual- and county-level covariates, decedents injured by non-firearm mechanisms had higher odds of underreporting in the NVSS than those injured by firearms (odds ratio [OR]: 68.2; 95% CI: 15.7, 297.5; p < 0.01), and underreporting was also more likely outside of the highest-income-quintile counties (OR for the lowest versus highest income quintile: 10.1; 95% CI: 2.4, 42.8; p < 0.01). There was no statistically significant difference in the odds of underreporting in the NVSS for deaths certified by coroners compared to medical examiners, and the odds of underreporting did not vary by race/ethnicity. One limitation of our analyses is that we were unable to examine the characteristics of cases that were unreported in The Counted. Conclusions The media-based source, The Counted, reported a considerably higher proportion of law-enforcement-related deaths than the NVSS, which failed to report a majority of these incidents. For the NVSS, rates of underreporting were higher in lower income counties and for decedents killed by non-firearm mechanisms. There was no evidence suggesting that underreporting varied by death investigator type (medical examiner versus coroner) or race/ethnicity. 
PMID:29016598
Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy
2017-10-01
Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to "legal intervention" in accordance with the International Classification of Diseases-10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture-recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS only, 487 reported in both lists, and an estimated 44 (95% CI: 31, 62) not reported in either source. The NVSS documented 44.9% (95% CI: 44.2%, 45.4%) of the total number of deaths, and The Counted documented 93.1% (95% CI: 91.7%, 94.2%). In a multivariable mixed-effects logistic model that controlled for all individual- and county-level covariates, decedents injured by non-firearm mechanisms had higher odds of underreporting in the NVSS than those injured by firearms (odds ratio [OR]: 68.2; 95% CI: 15.7, 297.5; p < 0.01), and underreporting was also more likely outside of the highest-income-quintile counties (OR for the lowest versus highest income quintile: 10.1; 95% CI: 2.4, 42.8; p < 0.01). There was no statistically significant difference in the odds of underreporting in the NVSS for deaths certified by coroners compared to medical examiners, and the odds of underreporting did not vary by race/ethnicity. One limitation of our analyses is that we were unable to examine the characteristics of cases that were unreported in The Counted. The media-based source, The Counted, reported a considerably higher proportion of law-enforcement-related deaths than the NVSS, which failed to report a majority of these incidents. For the NVSS, rates of underreporting were higher in lower income counties and for decedents killed by non-firearm mechanisms. There was no evidence suggesting that underreporting varied by death investigator type (medical examiner versus coroner) or race/ethnicity.
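The headline totals above follow from a standard two-list capture-recapture calculation. The sketch below uses the Chapman bias-corrected estimator (the paper's exact estimator may differ) with the overlap counts quoted in the abstract, and reproduces the reported total of about 1,166 deaths and the coverage fractions of the two sources.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-source capture-recapture estimator."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Overlap counts reported in the abstract for 2015 law-enforcement-related deaths.
in_counted_only, in_nvss_only, in_both = 599, 36, 487
n_counted = in_counted_only + in_both   # 1086 deaths captured by The Counted
n_nvss = in_nvss_only + in_both         # 523 deaths captured by the NVSS

total = chapman_estimate(n_counted, n_nvss, in_both)
print(f"estimated total deaths: {total:.0f}")        # ~1166, as reported
print(f"NVSS coverage:    {n_nvss / total:.1%}")      # ~45%
print(f"Counted coverage: {n_counted / total:.1%}")   # ~93%
```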
NASA Technical Reports Server (NTRS)
Siemiginowska, Aneta
2001-01-01
The predicted counts for the ASCA observation were much higher than the counts actually observed from the quasar. However, there are three weak hard X-ray sources in the GIS field. We are adding them to the source counts in the modeling of the hard X-ray background. The work is in progress. We have published a paper in Ap.J. on the luminosity function and quasar evolution. Based on the theory described in this paper, we are predicting the number of sources and their contribution to the X-ray background at different redshifts. These model predictions will be compared to the observed data in the final paper.
A Prototype 212Pb Medical Dose Calibrator for Alpha Radioimmunotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, W.F.; Patil, A.; Russ, W.R.
AREVA Med, an AREVA group subsidiary, is developing innovative cancer-fighting therapies involving the use of 212Pb for alpha radioimmunotherapy. Canberra Industries, the nuclear measurement subsidiary of AREVA, has been working with AREVA Med to develop a prototype measurement system to assay syringes containing a 212Pb solution following production by an elution system. The relative fraction of emitted radiation from the source associated directly with the 212Pb remains dynamic for approximately 6 hours after the parent is chemically purified. A significant challenge for this measurement application is that the short half-life of the parent nuclide requires assay prior to reaching equilibrium with progeny nuclides. A gross counting detector was developed to minimize system costs and meet the large dynamic range of source activities. Prior to equilibrium, a gross counting system must include the period since the 212Pb was pure to calculate the count rate attributable to the parent rather than the progeny. The dynamic state is determined by solving the set of differential equations, or Bateman equations, describing the source decay behavior while also applying the component measurement efficiencies for each nuclide. The efficiencies were initially estimated using mathematical modeling (MCNP) but were then benchmarked with source measurements. The goal accuracy of the system was required to be within 5%. Independent measurements of the source using a high resolution spectroscopic detector have shown good agreement with the prototype system results. The prototype design was driven by cost, compactness and simplicity. The detector development costs were minimized by using existing electronics and firmware with a Geiger-Mueller tube derived from Canberra's EcoGamma environmental monitoring product. The acquisition electronics, communications and interface were controlled using Python with the EcoGamma software development kit on a Raspberry Pi Linux computer mounted inside a standard project box. The results of initial calibration measurements are presented. (authors)
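The time dependence described above is governed by the Bateman equations for the top of the 212Pb chain. The sketch below solves only the two-member 212Pb → 212Bi parent-daughter case to show how the daughter activity approaches transient equilibrium over roughly six hours; the half-lives are standard published values, and ignoring the short-lived 212Po/208Tl branch is a simplification made for illustration, not the prototype's actual model.

```python
import numpy as np

# Approximate half-lives (hours) for the top of the 212Pb decay chain.
T_HALF_PB212 = 10.64        # 212Pb
T_HALF_BI212 = 60.55 / 60   # 212Bi

lam1 = np.log(2) / T_HALF_PB212
lam2 = np.log(2) / T_HALF_BI212

def activities(t_hours, a_parent_0=1.0):
    """Bateman solution for an initially pure 212Pb source (daughter absent at t=0)."""
    a1 = a_parent_0 * np.exp(-lam1 * t_hours)
    a2 = a_parent_0 * lam2 / (lam2 - lam1) * (np.exp(-lam1 * t_hours) - np.exp(-lam2 * t_hours))
    return a1, a2

for t in (0.5, 1, 2, 4, 6, 8):
    a1, a2 = activities(t)
    print(f"t = {t:4.1f} h   A(212Pb) = {a1:.3f}   A(212Bi) = {a2:.3f}   ratio = {a2 / a1:.3f}")
```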
Penadés, M; Arnau-Bonachera, A; García-Quirós, A; Viana, D; Selva, L; Corpa, J M; Pascual, J J
2017-12-11
Genetic selection and nutrition management have played a central role in the development of the commercial rabbitry industry over the last few decades, being able to affect productive and immunological traits of the animals. However, the implication of different energy sources in animals from diverse genetic lines achieving such evolutionary success still remains unknown. Therefore, in this work, 203 female rabbits housed and bred in the same conditions were used from their first artificial insemination until their fifth weaning. The animals belonged to three genetic types diverging greatly in breeding goals (H line, hyper-prolific, n=66; LP line, robust, n=67; and R line, selected for growth rate, n=67) and were assigned to two experimental diets promoting major differences in energy source (cereal starch or animal fat). The aims of this work were to: (1) characterize and describe blood leucocyte populations of three lines of rabbit does in different physiological stages during their reproductive period: first artificial insemination, first weaning, second parturition and fifth weaning; and (2) study the possible influence of two different experimental diets on the leucocyte populations in peripheral blood. Flow cytometry analyses were performed on blood samples taken from females at each sampling stage. Lymphocyte populations at both weanings were characterized by significantly lower counts of total, CD5+ and CD8+ lymphocytes (-19.8, -21.7 and -44.6%; P<0.05), and higher counts of monocytes and granulocytes (+49.2 and +26.2%; P<0.05) than in the other stages. Females had higher blood counts of B lymphocytes, CD8+ and CD25+ cells and lower counts of CD4+ at first than at fifth weaning (+55.6, +85.8, +57.5, -14.5%; P<0.05). G/L ratio was higher at both weanings (P<0.05), and CD4+/CD8+ ratio increased progressively from the first artificial insemination to the fifth weaning (P<0.001). Regarding the effect of genetic type on blood leucocyte counts, LP animals presented the highest counts for total, B, CD5+ and CD8+ lymphocytes (+16.7, +31.8, +24.5 and +38.7; P<0.05), but R rabbits showed the highest counts for monocytes and granulocytes (+25.3 and +27.6; P<0.05). The type of diet given during the reproductive life did not affect the leucocyte population counts. These results indicate that there are detectable variations in the leucocyte profile depending on the reproductive stage of the animal (parturition, weaning or none of them). Moreover, a line founded on reproductive longevity criteria appears more capable of adapting to the challenges of the reproductive cycle from an immunological viewpoint.
Cryogenic, high-resolution x-ray detector with high count rate capability
Frank, Matthias; Mears, Carl A.; Labov, Simon E.; Hiller, Larry J.; Barfknecht, Andrew T.
2003-03-04
A cryogenic, high-resolution X-ray detector with high count rate capability has been invented. The new X-ray detector is based on superconducting tunnel junctions (STJs), and operates without thermal stabilization at or below 500 mK. The X-ray detector exhibits good resolution (~5-20 eV FWHM) for soft X-rays in the keV region, and is capable of counting at count rates of more than 20,000 counts per second (cps). Simple, FET-based charge amplifiers, current amplifiers, or conventional spectroscopy shaping amplifiers can provide the electronic readout of this X-ray detector.
Kurosaki, Hiromu; Mueller, Rebecca J.; Lambert, Susan B.; ...
2016-07-15
An alternate method of preparing actinide alpha counting sources was developed in place of electrodeposition or lanthanide fluoride micro-precipitation. The method uses lanthanide hydroxide micro-precipitation to avoid the use of hazardous hydrofluoric acid. It also provides a quicker, simpler, and safer way of preparing actinide alpha counting sources in routine, production-type laboratories that process many samples daily.
NICER discovers millisecond pulsations from the neutron star LMXB IGR J17379-3747
NASA Astrophysics Data System (ADS)
Strohmayer, T. E.; Ray, P. S.; Gendreau, K. C.; Bult, P. M.; Guillot, S.; Mahmoodifar, S.; Jaisawal, G. K.; Arzoumanian, Z.; Altamirano, D.; Bogdanov, S.; Chakrabarty, D.; Enoto, T.; Markwardt, C. B.; Ozel, F.; Ransom, S. M.
2018-04-01
Following a 2018 March 19 MAXI alert of a new outburst of the neutron star low-mass X-ray binary IGR J17379-3747 (ATel #11447), NICER has observed the source daily since 2018 March 29. From that date onward, the mean count rates detected each day through April 1 were 12.9, 11.0, 8.7, and 4.7 ct/s (0.5-12 keV), respectively.
Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging
Iwanczyk, Jan S.; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C.; Hartsough, Neal E.; Malakhov, Nail; Wessel, Jan C.
2009-01-01
The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm²/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a 57Co source. An output rate of 6×10⁶ counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown. PMID:19920884
Photon Counting Energy Dispersive Detector Arrays for X-ray Imaging.
Iwanczyk, Jan S; Nygård, Einar; Meirav, Oded; Arenson, Jerry; Barber, William C; Hartsough, Neal E; Malakhov, Nail; Wessel, Jan C
2009-01-01
The development of an innovative detector technology for photon-counting in X-ray imaging is reported. This new generation of detectors, based on pixellated cadmium telluride (CdTe) and cadmium zinc telluride (CZT) detector arrays electrically connected to application specific integrated circuits (ASICs) for readout, will produce fast and highly efficient photon-counting and energy-dispersive X-ray imaging. There are a number of applications that can greatly benefit from these novel imagers including mammography, planar radiography, and computed tomography (CT). Systems based on this new detector technology can provide compositional analysis of tissue through spectroscopic X-ray imaging, significantly improve overall image quality, and may significantly reduce X-ray dose to the patient. A very high X-ray flux is utilized in many of these applications. For example, CT scanners can produce ~100 Mphotons/mm²/s in the unattenuated beam. High flux is required in order to collect sufficient photon statistics in the measurement of the transmitted flux (attenuated beam) during the very short time frame of a CT scan. This high count rate combined with a need for high detection efficiency requires the development of detector structures that can provide a response signal much faster than the transit time of carriers over the whole detector thickness. We have developed CdTe and CZT detector array structures which are 3 mm thick with 16×16 pixels and a 1 mm pixel pitch. These structures, in the two different implementations presented here, utilize either a small pixel effect or a drift phenomenon. An energy resolution of 4.75% at 122 keV has been obtained with a 30 ns peaking time using discrete electronics and a 57Co source. An output rate of 6×10⁶ counts per second per individual pixel has been obtained with our ASIC readout electronics and a clinical CT X-ray tube. Additionally, the first clinical CT images, taken with several of our prototype photon-counting and energy-dispersive detector modules, are shown.
State traffic volume systems council estimation process.
DOT National Transportation Integrated Search
2004-10-01
The Kentucky Transportation Cabinet has an immense traffic data collection program that is an essential source for many other programs. The Division of Planning processes traffic volume counts annually. These counts are maintained in the Counts Datab...
A General Formulation of the Source Confusion Statistics and Application to Infrared Galaxy Surveys
NASA Astrophysics Data System (ADS)
Takeuchi, Tsutomu T.; Ishii, Takako T.
2004-03-01
Source confusion has been a long-standing problem in astronomical history. In the previous formulation of the confusion problem, sources are assumed to be distributed homogeneously on the sky. This fundamental assumption is, however, not realistic in many applications. In this work, by making use of the point field theory, we derive general analytic formulae for the confusion problem with arbitrary distribution and correlation functions. As a typical example, we apply these new formulae to the source confusion of infrared galaxies. We first calculate the confusion statistics for power-law galaxy number counts as a test case. When the slope of the differential number counts, γ, is steep, the confusion limits become much brighter and the probability distribution function (PDF) of the fluctuation field is strongly distorted. Then we estimate the PDF and confusion limits based on the realistic number count model for infrared galaxies. The gradual flattening of the slope of the source counts makes the clustering effect rather mild. Clustering effects result in an increase of the limiting flux density by ~10%. In this case, the peak probability of the PDF decreases by up to ~15% and its tail becomes heavier. Although the effects are relatively small, they will be strong enough to affect the estimation of galaxy evolution from number count or fluctuation statistics. We also comment on future submillimeter observations.
Increasing X-Ray Brightness of HBL Source 1ES 1727+650
NASA Astrophysics Data System (ADS)
Kapanadze, Bidzina
2017-02-01
The nearby TeV-detected HBL object 1ES 1727+502 (1Zw 187, z=0.055) has been targeted 111 times by X-ray Telescope (XRT) onboard Swift since 2010 April 2. During this monitoring, the 0.3-10 keV count rate varied by a factor of 17.4 (see http://www.swift.psu.edu/monitoring/source.php?source=QSOB1727+502) and showed a prolonged X-ray flaring activity during 2015 March - 2016 February, revealed mainly via the Target of Opportunity observations performed in the framework of our request of different urgencies (Request Number 6571, 6606, 6717, 6809, 6927, 7322, 7355, 7379, 7390, 7404, 7430, 7441, 7516, 7565; see Kapanadze et al. 2015, Atel #8224, #7342).
DC KIDS COUNT e-Databook Indicators
ERIC Educational Resources Information Center
DC Action for Children, 2012
2012-01-01
This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…
Sources and magnitude of sampling error in redd counts for bull trout
Jason B. Dunham; Bruce Rieman
2001-01-01
Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...
2015-07-17
This figure shows how the Alice instrument count rate changed over time during the sunset and sunrise observations. The count rate is largest when the line of sight to the sun is outside of the atmosphere at the start and end times. Molecular nitrogen (N2) starts absorbing sunlight in the upper reaches of Pluto's atmosphere, decreasing as the spacecraft approaches the planet's shadow. As the occultation progresses, atmospheric methane and hydrocarbons can also absorb the sunlight and further decrease the count rate. When the spacecraft is totally in Pluto's shadow the count rate goes to zero. As the spacecraft emerges from Pluto's shadow into sunrise, the process is reversed. By plotting the observed count rate in the reverse time direction, it is seen that the atmospheres on opposite sides of Pluto are nearly identical. http://photojournal.jpl.nasa.gov/catalog/PIA19716
NASA Astrophysics Data System (ADS)
Subashchandran, Shanthi; Okamoto, Ryo; Zhang, Labao; Tanaka, Akira; Okano, Masayuki; Kang, Lin; Chen, Jian; Wu, Peiheng; Takeuchi, Shigeki
2013-10-01
The realization of an ultralow-dark-count rate (DCR) along with the conservation of high detection efficiency (DE) is critical for many applications using single photon detectors in quantum information technologies, material sciences, and biological sensing. For this purpose, a fiber-coupled superconducting nanowire single-photon detector (SNSPD) with a meander-type niobium nitride nanowire (width: 50 nm) is studied. Precise measurements of the bias current dependence of DE are carried out for a wide spectral range (from 500 to 1650 nm in steps of 50 nm) using a white light source and a laser line Bragg tunable band-pass filter. An ultralow DCR (0.0015 cps) and high DE (32%) are simultaneously achieved by the SNSPD at a wavelength of 500 nm.
NASA Astrophysics Data System (ADS)
Hiratama, Hideo; Kondo, Kenjiro; Suzuki, Seishiro; Tanimura, Yoshihiko; Iwanaga, Kohei; Nagata, Hiroshi
2017-09-01
Pulse height distributions were measured using a CdZnTe detector inside a lead collimator to investigate the main source producing high dose rates above the shield plugs of Unit 3 at the Fukushima Daiichi Nuclear Power Station. It was confirmed that low-energy photons are dominant. Concentrations of Cs-137 beneath the 60 cm of concrete of the shield plug were estimated to be between 8.1E+9 and 5.7E+10 Bq/cm2 from the measured peak count rate of 0.662 MeV photons. If Cs-137 were distributed on the surfaces of the gaps within a radius of 6 m at the average concentration of the 5 measured points, 2.6E+10 Bq/cm2, the total amount of Cs-137 would be about 30 PBq.
The displacement of the Sun from the galactic plane using IRAS and FAUST source counts
NASA Technical Reports Server (NTRS)
Cohen, Martin
1995-01-01
I determine the displacement of the Sun from the Galactic plane by interpreting IRAS point-source counts at 12 and 25 microns in the Galactic polar caps using the latest version of the SKY model for the point-source sky (Cohen 1994). A solar offset of z_Sun = 15.5 +/- 0.7 pc north of the plane provides the best match to the ensemble of useful IRAS data. Shallow K counts in the north Galactic pole are also best fitted by this offset, while limited FAUST far-ultraviolet counts at 1660 A near the same pole favor a value near 14 pc. Combining the many IRAS determinations with the few FAUST values suggests that a value of z_Sun = 15.0 +/- 0.5 pc (internal error only) would satisfy these high-latitude sets of data in both wavelength regimes, within the context of the SKY model.
Vehicle and cargo container inspection system for drugs
NASA Astrophysics Data System (ADS)
Verbinski, Victor V.; Orphan, Victor J.
1999-06-01
A vehicle and cargo container inspection system has been developed which uses gamma-ray radiography to produce digital images useful for detection of drugs and other contraband. The system is comprised of a 1 Ci Cs137 gamma-ray source collimated into a fan beam which is aligned with a linear array of NaI gamma-ray detectors located on the opposite side of the container. The NaI detectors are operated in the pulse-counting mode. A digital image of the vehicle or container is obtained by moving the aligned source and detector array relative to the object. Systems have been demonstrated in which the object is stationary (source and detector array move on parallel tracks) and in which the object moves past a stationary source and detector array. Scanning speeds of ˜30 cm/s with a pixel size (at the object) of ˜1 cm have been achieved. Faster scanning speeds of ˜2 m/s have been demonstrated on railcars with more modest spatial resolution (4 cm pixels). Digital radiographic images are generated from the detector count rates. These images, recorded on a PC-based data acquisition and display system, are shown from several applications: 1) inspection of trucks and containers at a border crossing, 2) inspection of railcars at a border crossing, 3) inspection of outbound cargo containers for stolen automobiles, and 4) inspection of trucks and cars for terrorist bombs.
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞² ≈ 2.51 times the squared PSF width σ_PSF². While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
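The optimum can be checked numerically by scanning the on-region radius and evaluating Eq. (17) of Li & Ma (1983) on the expected on/off counts. The sketch below uses made-up values for the PSF width, on/off exposure ratio, signal and background (chosen so that counts are high and the signal is weak relative to the background, where the 2.51 result applies approximately); it is an illustration of the idea, not the paper's dynamic formula.

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. (17) of Li & Ma (1983) for on/off counting measurements."""
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

def expected_counts(theta, signal_total, bg_density, sigma_psf, alpha):
    """Expected on/off counts for a circular on region of radius theta (Gaussian PSF)."""
    contained = 1.0 - np.exp(-theta**2 / (2.0 * sigma_psf**2))   # 2D Gaussian containment
    n_on = signal_total * contained + bg_density * np.pi * theta**2
    n_off = bg_density * np.pi * theta**2 / alpha
    return n_on, n_off

sigma_psf, alpha = 0.07, 0.2            # deg; on/off exposure ratio (illustrative values)
signal_total, bg_density = 100.0, 20000.0   # total signal counts; background counts per deg^2

thetas = np.linspace(0.02, 0.30, 200)
sig = [li_ma_significance(*expected_counts(th, signal_total, bg_density, sigma_psf, alpha), alpha)
       for th in thetas]
best = thetas[int(np.argmax(sig))]
print(f"numerical optimum theta ~ {best:.3f} deg; "
      f"sqrt(2.51) * sigma_psf = {np.sqrt(2.51) * sigma_psf:.3f} deg")
```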
Kids Count in Indiana: 1996 Data Book.
ERIC Educational Resources Information Center
Erickson, Judith B.
This Kids Count report is the third in a series examining statewide trends in the well-being of Indiana's children. The report combines statistics of special concern in Indiana with 10 national Kids Count well-being indicators: (1) percent low birthweight; (2) infant mortality rate; (3) child death rate; (4) birth rate to unmarried teens ages 15…
The Hard X-Ray Emission from Scorpius X-1 Seen by INTEGRAL
NASA Technical Reports Server (NTRS)
Sturner, Steve; Shrader, C. R.
2008-01-01
We present the results of our hard X-ray and gamma-ray study of the LMXB Sco X-1 utilizing INTEGRAL data as well as contemporaneous RXTE PCA data. We have investigated the hard X-ray spectral properties of Sco X-1, including the nature of the high-energy, nonthermal component and its possible correlations with the location of the source on the soft X-ray color-color diagram. We find that Sco X-1 follows two distinct spectral tracks when the 20-40 keV count rate is greater than 130 counts/second. One state is a hard state which exhibits a significant high-energy, power-law tail to the lower energy thermal spectrum. The other state shows a much less significant high-energy component. We found suggestive evidence for a correlation of these hard and soft high-energy states with the position of Sco X-1 on the low-energy X-ray color-color diagram. We have searched for similar behavior in two other Z sources, GX 17+2 and GX 5-1, with negative results.
The Atacama Cosmology Telescope: Extragalactic Sources at 148 GHz in the 2008 Survey
NASA Technical Reports Server (NTRS)
Marriage, T. A.; Juin, J. B.; Lin, Y. T.; Marsden, D.; Nolta, M. R.; Partridge, B.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.;
2011-01-01
We report on extragalactic sources detected in a 455 square-degree map of the southern sky made with data at a frequency of 148 GHz from the Atacama Cosmology Telescope 2008 observing season. We provide a catalog of 157 sources with flux densities spanning two orders of magnitude: from 15 mJy to 1500 mJy. Comparison to other catalogs shows that 98% of the ACT detections correspond to sources detected at lower radio frequencies. Three of the sources appear to be associated with the brightest cluster galaxies of low redshift X-ray selected galaxy clusters. Estimates of the radio to mm-wave spectral indices and differential counts of the sources further bolster the hypothesis that they are nearly all radio sources, and that their emission is not dominated by re-emission from warm dust. In a bright (> 50 mJy) 148 GHz-selected sample with complete cross-identifications from the Australia Telescope 20 GHz survey, we observe an average steepening of the spectra between 5, 20, and 148 GHz, with median spectral indices of α(5-20) = -0.07 +/- 0.06, α(20-148) = -0.39 +/- 0.04, and α(5-148) = -0.20 +/- 0.03. When the measured spectral indices are taken into account, the 148 GHz differential source counts are consistent with previous measurements at 30 GHz in the context of a source count model dominated by radio sources. Extrapolating with an appropriately rescaled model for the radio source counts, the Poisson contribution to the spatial power spectrum from synchrotron-dominated sources with flux density less than 20 mJy is C^Sync = (2.8 +/- 0.3) x 10^-6 μK^2.
Schumm, Walter R
2006-01-01
Background Accurate reporting of adverse events occurring after vaccination is an important component of determining risk-benefit ratios for vaccinations. Controversy has developed over alleged underreporting of adverse events within U.S. military samples. This report examines the accuracy of adverse event rates recently published for headaches, and examines the issue of underreporting of headaches as a function of civilian or military sources and as a function of passive versus active surveillance. Methods A report by Sejvar et al was examined closely for accuracy with respect to the reporting of neurologic adverse events associated with smallpox vaccination in the United States. Rates for headaches were reported by several scholarly sources, in addition to Sejvar et al, permitting a comparison of reporting rates as a function of source and type of surveillance. Results Several major errors or omissions were identified in Sejvar et al. The count of civilian subjects vaccinated and the totals of both civilians and military personnel vaccinated were reported incorrectly by Sejvar et al. Counts of headaches reported in VAERS were lower (n = 95) for Sejvar et al than for Casey et al (n = 111) even though the former allegedly used 665,000 subjects while the latter used fewer than 40,000 subjects, with both using approximately the same civilian sources. Consequently, rates of nearly 20 neurologic adverse events reported by Sejvar et al were also incorrectly calculated. Underreporting of headaches after smallpox vaccination appears to increase for military samples and for passive adverse event reporting systems. Conclusion Until revised or corrected, the rates of neurologic adverse events after smallpox vaccination reported by Sejvar et al must be deemed invalid. The concept of determining overall rates of adverse events by combining small civilian samples with large military samples appears to be invalid. Reports of headaches as adverse events after smallpox vaccination appear to have occurred much less frequently in passive surveillance systems and among members of the U.S. military compared to civilians, especially those employed in healthcare occupations. Such concerns impact risk-benefit ratios associated with vaccines and weigh against making vaccinations mandatory, without informed consent, even among military members. Because of the issues raised here, adverse event rates derived solely or primarily from U.S. Department of Defense reporting systems, especially passive surveillance systems, should not be used, given better alternatives, for making public health policy decisions. PMID:17096855
NASA Astrophysics Data System (ADS)
Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bethermin, M.; Bhatia, R.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Chamballu, A.; Chary, R.-R.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colafrancesco, S.; Colombi, S.; Colombo, L. P. L.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Fosalba, P.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Jaffe, T. R.; Jaffe, A. H.; Jagemann, T.; Jones, W. C.; Juvela, M.; Keihänen, E.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Knox, L.; Kunz, M.; Kurinsky, N.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lawrence, C. R.; Leonardi, R.; Lilje, P. B.; López-Caniego, M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschènes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sajina, A.; Sandri, M.; Savini, G.; Scott, D.; Smoot, G. F.; Starck, J.-L.; Sudiwala, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Türler, M.; Valenziano, L.; Van Tent, B.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; White, M.; Yvon, D.; Zacchei, A.; Zonca, A.
2013-02-01
We make use of the Planck all-sky survey to derive number counts and spectral indices of extragalactic sources - infrared and radio sources - from the Planck Early Release Compact Source Catalogue (ERCSC) at 100 to 857 GHz (3 mm to 350 μm). Three zones (deep, medium and shallow) of approximately homogeneous coverage are used to permit a clean and controlled correction for incompleteness, which was explicitly not done for the ERCSC, as it was aimed at providing lists of sources to be followed up. Our sample, prior to the 80% completeness cut, contains between 217 sources at 100 GHz and 1058 sources at 857 GHz over about 12 800 to 16 550 deg² (31 to 40% of the sky). After the 80% completeness cut, between 122 and 452 sources remain, with flux densities above 0.3 and 1.9 Jy at 100 and 857 GHz, respectively. The sample so defined can be used for statistical analysis. Using the multi-frequency coverage of the Planck High Frequency Instrument, all the sources have been classified as either dust-dominated (infrared galaxies) or synchrotron-dominated (radio galaxies) on the basis of their spectral energy distributions (SED). Our sample is thus complete, flux-limited and color-selected to differentiate between the two populations. We find an approximately equal number of synchrotron and dusty sources between 217 and 353 GHz; at 353 GHz or higher (or 217 GHz and lower) frequencies, the number is dominated by dusty (synchrotron) sources, as expected. For most of the sources, the spectral indices are also derived. We provide for the first time counts of bright sources from 353 to 857 GHz and the contributions from dusty and synchrotron sources at all HFI frequencies in the key spectral range where these spectra are crossing. The observed counts are in the Euclidean regime. The number counts are compared to previously published data (from earlier Planck results, Herschel, BLAST, SCUBA, LABOCA, SPT, and ACT) and models taking into account both radio and infrared galaxies, and covering a large range of flux densities. We derive the multi-frequency Euclidean level - the plateau in the normalised differential counts at high flux density - and compare it to WMAP, Spitzer and IRAS results. The submillimetre number counts are not well reproduced by current evolution models of dusty galaxies, whereas the millimetre part appears reasonably well fitted by the most recent model for synchrotron-dominated sources. Finally we provide estimates of the local luminosity density of dusty galaxies, providing the first such measurements at 545 and 857 GHz. Appendices are available in electronic form at http://www.aanda.org
Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L
2014-01-01
To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates. Sensitivity analysis. A total of 124 New York hospitals in 2010. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pulse pile-up effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is derived by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to a unit impulse pulse shape, which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
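The paper's unit-impulse method is specific to its fast-slow channel hardware, but the general idea of recovering a true incoming rate from a measured rate can be illustrated with the classical non-paralyzable dead-time model. The sketch below shows that textbook correction, not the authors' algorithm; the dead-time value in the example is an arbitrary placeholder.

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Classical non-paralyzable dead-time correction.

    measured_rate -- observed counting rate (counts/s)
    dead_time     -- system dead-time per recorded pulse (s)
    Returns the estimated true incoming rate (counts/s).
    """
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate saturates this dead-time model")
    return measured_rate / (1.0 - loss)

# Example: 5e5 counts/s observed with a 0.5 microsecond dead-time
print(true_rate_nonparalyzable(5e5, 0.5e-6))  # ~6.7e5 counts/s incoming
```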
Bacteria in deep coastal plain sediments of Maryland: A possible source of CO2 to groundwater
NASA Astrophysics Data System (ADS)
Chapelle, Francis H.; Zelibor, Joseph L., Jr.; Grimes, D. Jay; Knobel, Leroy L.
1987-08-01
Nineteen cores of unconsolidated Coastal Plain sediments obtained from depths of 14 to 182 m below land surface near Waldorf, Maryland, were collected and examined for metabolically active bacteria. The age of the sediments cored ranges from Miocene to Early Cretaceous. Acridine orange direct counts of total (viable and nonviable) bacteria in core subsamples ranged from 10^8 to 10^4 bacteria/g of dry sediment. Direct counts of viable bacteria ranged from 10^6 to 10^3 bacteria/g of dry sediment. Three cores contained viable methanogenic bacteria, and seven cores contained viable sulfate-reducing bacteria. The observed presence of bacteria in these sediments suggests that heterotrophic bacterial metabolism, with lignitic organic material as the primary substrate, is a plausible source of CO2 to groundwater. However, the possibility that abiotic processes also produce CO2 cannot be ruled out. Estimated rates of CO2 production in the noncalcareous Magothy/Upper Patapsco and Lower Patapsco aquifers, based on mass balance of dissolved inorganic carbon, groundwater flow rates, and flow path segment lengths, are in the range 10^-3 to 10^-5 mmol L^-1 yr^-1. Isotope balance calculations suggest that aquifer-generated CO2 is much heavier isotopically (~ -10 to +5 per mil) than the lignite (~ -24 per mil) present in these sediments. This may reflect isotopic fractionation during methanogenesis and possibly other bacterially mediated processes.
A Persistent Disk Wind in GRS 1915+105 with NICER
NASA Astrophysics Data System (ADS)
Neilsen, J.; Cackett, E.; Remillard, R. A.; Homan, J.; Steiner, J. F.; Gendreau, K.; Arzoumanian, Z.; Prigozhin, G.; LaMarr, B.; Doty, J.; Eikenberry, S.; Tombesi, F.; Ludlam, R.; Kara, E.; Altamirano, D.; Fabian, A. C.
2018-06-01
The bright, erratic black hole X-ray binary GRS 1915+105 has long been a target for studies of disk instabilities, radio/infrared jets, and accretion disk winds, with implications that often apply to sources that do not exhibit its exotic X-ray variability. With the launch of the Neutron star Interior Composition Explorer (NICER), we have a new opportunity to study the disk wind in GRS 1915+105 and its variability on short and long timescales. Here we present our analysis of 39 NICER observations of GRS 1915+105 collected during five months of the mission data validation and verification phase, focusing on Fe XXV and Fe XXVI absorption. We report the detection of strong Fe XXVI in 32 (>80%) of these observations, with another four marginal detections; Fe XXV is less common, but both likely arise in the well-known disk wind. We explore how the properties of this wind depend on broad characteristics of the X-ray lightcurve: mean count rate, hardness ratio, and fractional rms variability. The trends with count rate and rms are consistent with an average wind column density that is fairly steady between observations but varies rapidly with the source on timescales of seconds. The line dependence on spectral hardness echoes the known behavior of disk winds in outbursts of Galactic black holes; these results clearly indicate that NICER is a powerful tool for studying black hole winds.
NASA Astrophysics Data System (ADS)
Hawdon, Aaron; McJannet, David; Wallace, Jim
2014-06-01
The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
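As a rough illustration of the correction and calibration chain described above, the sketch below applies the forms commonly quoted in the cosmic-ray neutron sensing literature: an exponential barometric correction, a linear water-vapour correction, a ratio correction for incoming neutron intensity, and a Desilets-type calibration function. The coefficients and the site parameter N0 are illustrative literature-style values, not the CosmOz network's calibrated numbers.

```python
import math

def correct_neutron_counts(n_raw, pressure, pressure_ref, abs_humidity,
                           abs_humidity_ref, intensity, intensity_ref,
                           beta=0.0077):
    """Correct raw neutron counts for pressure (hPa), absolute humidity (g/m^3)
    and incoming neutron intensity, using literature-standard forms."""
    f_pressure = math.exp(beta * (pressure - pressure_ref))
    f_vapour = 1.0 + 0.0054 * (abs_humidity - abs_humidity_ref)
    f_intensity = intensity_ref / intensity
    return n_raw * f_pressure * f_vapour * f_intensity

def soil_moisture(n_corrected, n0, a0=0.0808, a1=0.372, a2=0.115):
    """Generalised (Desilets-type) calibration function giving gravimetric
    water content from corrected counts; n0 is the site-specific dry count."""
    return a0 / (n_corrected / n0 - a1) - a2

# Illustrative numbers only
n = correct_neutron_counts(1450, 1005.0, 1013.25, 12.0, 10.0, 95.0, 100.0)
print(soil_moisture(n, n0=2400.0))
```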
The NSLS 100 element solid state array detector
Furenlid, L.R.; Kraner, H.W.; Rogers, L.C.; Cramer, S.P.; Stephani, D.; Beuttenmuller, R.H.; Beren, J.
2015-01-01
X-ray absorption studies of dilute samples require fluorescence detection techniques. Since signal-to-noise ratios are governed by the ratio of fluorescent to scattered photons counted by a detector, solid state detectors which can discriminate between fluorescence and scattered photons have become the instruments of choice for trace element measurements. Commercially available 13-element Ge array detectors permitting total count rates of up to 500 000 counts per second are now in routine use. Since X-ray absorption beamlines at high brightness synchrotron sources can already illuminate most dilute samples with enough flux to saturate the current generation of solid state detectors, the development of next-generation instruments with significantly higher total count rates is essential. We present the design and current status of the 100 element Si array detector being developed in a collaboration between the NSLS and the Instrumentation Division at Brookhaven National Laboratory. The detecting array consists of a 10×10 matrix of 4 mm×4 mm elements laid out on a single piece of ultrahigh purity silicon mounted at the front end of a liquid nitrogen dewar assembly. A matrix of charge-sensitive integrating preamplifiers feeds signals to an array of shaping amplifiers, single channel analyzers, and scalers. An electronic switch, delay amplifier, linear gate, digital scope, peak sensing A/D converter, and histogramming memory module provide for complete diagnostics and channel calibration. The entire instrument is controlled by a LabView 2 application on a Mac IIci; the software also provides full control over beamline hardware and performs the data collection. PMID:26722135
Critical values in hematology of 862 institutions in China.
Ye, Y Y; Zhao, H J; Fei, Y; Wang, W; He, F L; Zhong, K; Yuan, S; Wang, Z G
2017-10-01
A national survey on critical values in hematology in China laboratories was conducted to determine current practice and assess quality indicators, with the aim of guiding quality improvement. Participating laboratories were asked to submit general information, their practice of critical value reporting, and the status of timeliness of critical value reporting. A total of 862 laboratories submitted results. The majority of participants have included white blood cell count, blood platelet count, hemoglobin, prothrombin time, and activated partial thromboplastin time in their critical value lists. Many sources are used for establishing a critical value policy, and some of the laboratories consult with clinicians. The unreported critical value rate, late critical value reporting rate, and clinically unacknowledged rate in China are relatively low, and the median critical value reporting time is 8-9 minutes. There exists wide variation in critical value reporting practices in hematology in China. Laboratories should establish a policy of critical value reporting suited to their own situations and consult with clinicians to set critical value lists. Critical values are generally reported in a timely manner in China, but some measures should be taken to further improve the timeliness of critical value reporting. © 2017 John Wiley & Sons Ltd.
Tanaka, Naoaki; Papadelis, Christos; Tamilia, Eleonora; Madsen, Joseph R; Pearl, Phillip L; Stufflebeam, Steven M
2018-04-27
This study evaluates magnetoencephalographic (MEG) spike population as compared with intracranial electroencephalographic (IEEG) spikes using a quantitative method based on distributed source analysis. We retrospectively studied eight patients with medically intractable epilepsy who had an MEG and subsequent IEEG monitoring. Fifty MEG spikes were analyzed in each patient using minimum norm estimate. For individual spikes, each vertex in the source space was considered activated when its source amplitude at the peak latency was higher than a threshold, which was set at 50% of the maximum amplitude over all vertices. We mapped the total count of activation at each vertex. We also analyzed 50 IEEG spikes in the same manner over the intracranial electrodes and created the activation count map. The location of the electrodes was obtained in the MEG source space by coregistering postimplantation computed tomography to MRI. We estimated the MEG- and IEEG-active regions associated with the spike populations using the vertices/electrodes with a count over 25. The activation count maps of MEG spikes demonstrated the localization associated with the spike population by variable count values at each vertex. The MEG-active region overlapped with 65 to 85% of the IEEG-active region in our patient group. Mapping the MEG spike population is valid for demonstrating the trend of spikes clustering in patients with epilepsy. In addition, comparison of MEG and IEEG spikes quantitatively may be informative for understanding their relationship.
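A minimal sketch of the activation-count mapping described above is given below, assuming a per-spike array of source amplitudes at the peak latency; the 50% threshold and the count cutoff of 25 (out of 50 spikes) follow the text, while the array shapes and names are hypothetical.

```python
import numpy as np

def activation_count_map(amplitudes, threshold_fraction=0.5):
    """amplitudes: array of shape (n_spikes, n_vertices) holding each vertex's
    source amplitude at that spike's peak latency.
    Returns the per-vertex count of spikes in which the vertex was active."""
    peak = amplitudes.max(axis=1, keepdims=True)       # max over vertices, per spike
    active = amplitudes >= threshold_fraction * peak   # 50% threshold per spike
    return active.sum(axis=0)

# Example with 50 simulated spikes over 1000 vertices
rng = np.random.default_rng(0)
amps = rng.random((50, 1000))
counts = activation_count_map(amps)
active_region = counts > 25   # vertices active in more than half of the spikes
print(active_region.sum())
```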
Davis, Letitia; Wellman, Helen; Hart, James; Cleary, Robert; Gardstein, Betsey M; Sciuchetti, Paul
2004-09-01
This study examined whether a state surveillance system for work-related carpal tunnel syndrome (WR-CTS) based on workers' compensation claims (Sentinel Event Notification System for Occupational Risks, SENSOR) and the Annual Survey of Occupational Injuries and Illnesses (SOII) identified the same industries, occupations, sources of injury, and populations for intervention. Trends in counts, rates, and female/male ratios of WR-CTS during 1994-1997, and age distributions were compared across three data sources: SENSOR, Massachusetts SOII, and National SOII. SENSOR and National SOII data on WR-CTS were compared by industry, occupation, and injury source. Due to small sample size and subsequent gaps in available information, state SOII data on WR-CTS were of little use in identifying specific industries and occupations for intervention. SENSOR and National SOII data on the frequency of WR-CTS cases identified many similar occupations and industries, and both surveillance systems pointed to computer use as a risk factor for WR-CTS. Some high rate industries identified by SENSOR were not identified using National SOII rates even when national findings were restricted to take into account the distribution of the Massachusetts workforce. Use of national SOII data on rates of WR-CTS for identifying state industry priorities for WR-CTS prevention should be undertaken with caution. Options for improving state SOII data and use of other state data systems should be pursued.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
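A heavily simplified sketch of Bayesian inference on time-interval data is shown below: with a Gamma prior on the count rate and exponentially distributed intervals between pulses, the posterior updates in closed form. This conjugate toy model is for illustration only and is not the detection algorithm evaluated in the study; prior parameters and the decision threshold are placeholders.

```python
import random

def posterior_rate(intervals, prior_shape=1.0, prior_rate=1.0):
    """Gamma-exponential conjugate update for the count rate.

    intervals   -- observed times between consecutive pulses (s)
    prior_shape -- Gamma prior shape (alpha)
    prior_rate  -- Gamma prior rate (beta, in seconds)
    Returns the posterior (shape, rate); the posterior mean is shape/rate.
    """
    shape = prior_shape + len(intervals)
    rate = prior_rate + sum(intervals)
    return shape, rate

def prob_rate_above(threshold, shape, rate, n=200000):
    """Monte Carlo estimate of P(lambda > threshold) under the Gamma posterior."""
    return sum(random.gammavariate(shape, 1.0 / rate) > threshold
               for _ in range(n)) / n

shape, rate = posterior_rate([0.8, 0.5, 0.6, 0.4, 0.7], prior_shape=2.0, prior_rate=1.0)
print(shape / rate)                       # posterior mean rate (counts/s)
print(prob_rate_above(1.5, shape, rate))  # decision statistic vs. a background level
```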
The association of trail use with weather-related factors on an urban greenway.
Burchfield, Ryan A; Fitzhugh, Eugene C; Bassett, David R
2012-02-01
To study the association between weather-related measures and objectively measured trail use across 3 seasons. Weather has been reported as a barrier to outdoor physical activity (PA), but previous studies have explained only a small amount of the variance in PA using weather-related measures. The dependent variable of this study was trail use measured as mean hourly trail counts by an infrared trail counter located on a greenway. Each trail count represents 1 person breaking the infrared beam of the trail counter. Two sources of weather-related measures were obtained by a site-specific weather station and a public domain weather source. Temperature, relative humidity, and precipitation were significantly correlated with trail counts recorded during daylight hours. More precise hourly weather-related measures explained 42% of the variance in trail counts, regardless of the weather data source with temperature alone explaining 18% of the variance in trail counts. After controlling for all seasonal and weekly factors, every 1°F increase in temperature was associated with an increase of 1.1 trail counts/hr up to 76°F, at which point trail use began to slightly decrease. Weather-related factors have a moderate association with trail use along an urban greenway.
Takemoto, Kazuya; Nambu, Yoshihiro; Miyazawa, Toshiyuki; Sakuma, Yoshiki; Yamamoto, Tsuyoshi; Yorozu, Shinichi; Arakawa, Yasuhiko
2015-09-25
Advances in single-photon sources (SPSs) and single-photon detectors (SPDs) promise unique applications in the field of quantum information technology. In this paper, we report long-distance quantum key distribution (QKD) by using state-of-the-art devices: a quantum-dot SPS (QD SPS) emitting a photon in the telecom band of 1.5 μm and a superconducting nanowire SPD (SNSPD). At the distance of 100 km, we obtained the maximal secure key rate of 27.6 bps without using decoy states, which is at least threefold larger than the rate obtained in the previously reported 50-km-long QKD experiment. We also succeeded in transmitting secure keys at the rate of 0.307 bps over 120 km. This is the longest QKD distance yet reported by using known true SPSs. The ultralow multiphoton emissions of our SPS and ultralow dark count of the SNSPD contributed to this result. The experimental results demonstrate the potential applicability of QD SPSs to practical telecom QKD networks.
Pneumotachometer counts respiration rate of human subject
NASA Technical Reports Server (NTRS)
Graham, O.
1964-01-01
To monitor breaths per minute, two rate-to-analog converters are used alternately to read and count the respiratory rate from an impedance pneumograph; the rate is displayed sequentially as numerals on electroluminescent matrices.
Within-site variability in surveys of wildlife populations
Link, William A.; Barker, Richard J.; Sauer, John R.; Droege, Sam
1994-01-01
Most large-scale surveys of animal populations are based on counts of individuals observed during a sampling period, which are used as indexes to the population. The variability in these indexes not only reflects variability in population sizes among sites but also variability due to the inexactness of the counts. Repeated counts at survey sites can be used to document this additional source of variability and, in some applications, to mitigate its effects. We present models for evaluating the proportion of total variability in counts that is attributable to this within-site variability and apply them in the analysis of data from repeated counts on routes from the North American Breeding Bird Survey. We analyzed data on 98 species, obtaining estimates of these percentages, which ranged from 3.5 to 100% with a mean of 36.25%. For at least 14 of the species, more than half of the variation in counts was attributable to within-site sources. Counts for species with lower average counts had a higher percentage of within-site variability. We discuss the relative cost efficiency of replicating sites or initiating new sites for several objectives, concluding that it is frequently better to initiate new sites than to attempt to replicate existing sites.
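As an illustration of the variance partition described above, the sketch below gives a simple one-way, method-of-moments estimate of the fraction of count variance attributable to within-site variation from replicated counts; the paper's actual models for survey counts are more elaborate, and the example data are invented.

```python
import numpy as np

def within_site_fraction(counts_by_site):
    """counts_by_site: list of 1-D arrays, one array of replicate counts per site.
    Returns the fraction of total variance attributable to within-site variation
    (one-way random-effects, method-of-moments estimate)."""
    within = np.mean([np.var(c, ddof=1) for c in counts_by_site])
    site_means = np.array([np.mean(c) for c in counts_by_site])
    n_rep = np.mean([len(c) for c in counts_by_site])
    among = max(np.var(site_means, ddof=1) - within / n_rep, 0.0)
    return within / (within + among)

# Invented replicate counts at three sites
sites = [np.array([12, 15, 11]), np.array([40, 38, 44]), np.array([3, 5, 2])]
print(within_site_fraction(sites))
```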
Mbaeyi, Chukwuma; Mohamed, Abdinoor; Owino, Brian Ogola; Mengistu, Kumlachew F; Ehrhardt, Derek; Elsayed, Eltayeb Ahmed
2018-03-02
Surveillance for cases of acute flaccid paralysis (AFP) is a key strategy adopted for the eradication of polio. Detection of poliovirus circulation is often predicated on the ability to identify AFP cases and test their stool specimens for poliovirus infection in a timely manner. The Village Polio Volunteers (VPV) program was established in 2013 in a bid to strengthen polio eradication activities in Somalia, including AFP surveillance, given the country's vulnerability to polio outbreaks. To assess the impact of the VPV program on AFP surveillance, we determined case counts, case-reporting sources, and non-polio AFP rates in the years before and after program introduction, i.e., 2011-2016. We also compared the stool adequacy and timeliness of cases reported by VPVs to those reported by other sources. In the years following program introduction, VPVs accounted for a high proportion of AFP cases reported in Somalia. AFP case counts rose from 148 cases in 2012, the year before program introduction, to 279 cases in 2015, during which VPVs accounted for 40% of reported cases. Further, the non-polio AFP rate improved from 2.8 cases in 2012 to 4.8 cases per 100,000 persons <15 years by 2015. Stool adequacy rates have been consistently high and AFP cases have been detected in a timelier manner since the program was introduced. Given the impact of the VPV program on improving AFP surveillance indicators in Somalia, similar community-based programs could play a crucial role in enhancing surveillance activities in countries with limited healthcare infrastructure.
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vieira, J. D.; Crawford, T. M.; Switzer, E. R.
2010-08-10
We report the results of an 87 deg² point-source survey centered at R.A. 5h30m, decl. -55° taken with the South Pole Telescope at 1.4 and 2.0 mm wavelengths with arcminute resolution and milli-Jansky depth. Based on the ratio of flux in the two bands, we separate the detected sources into two populations, one consistent with synchrotron emission from active galactic nuclei and the other consistent with thermal emission from dust. We present source counts for each population from 11 to 640 mJy at 1.4 mm and from 4.4 to 800 mJy at 2.0 mm. The 2.0 mm counts are dominated by synchrotron-dominated sources across our reported flux range; the 1.4 mm counts are dominated by synchrotron-dominated sources above ~15 mJy and by dust-dominated sources below that flux level. We detect 141 synchrotron-dominated sources and 47 dust-dominated sources at signal-to-noise ratio S/N > 4.5 in at least one band. All of the most significantly detected members of the synchrotron-dominated population are associated with sources in previously published radio catalogs. Some of the dust-dominated sources are associated with nearby (z << 1) galaxies whose dust emission is also detected by the Infrared Astronomy Satellite. However, most of the bright, dust-dominated sources have no counterparts in any existing catalogs. We argue that these sources represent the rarest and brightest members of the population commonly referred to as submillimeter galaxies (SMGs). Because these sources are selected at longer wavelengths than in typical SMG surveys, they are expected to have a higher mean redshift distribution and may provide a new window on galaxy formation in the early universe.
Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses
Myers, Risa B.; Herskovic, Jorge R.
2011-01-01
Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review”, in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
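A minimal sketch of the visit-by-visit "Bayesian Chain" idea is shown below, treating each visit's billing as a diagnostic test with an assumed sensitivity and specificity; the numbers are placeholders, not values from the study.

```python
def update_probability(prior, billed, sensitivity, specificity):
    """One application of Bayes' theorem for a single visit."""
    if billed:
        likelihood_ratio = sensitivity / (1.0 - specificity)
    else:
        likelihood_ratio = (1.0 - sensitivity) / specificity
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def bayesian_chain(prior, billing_history, sensitivity, specificity):
    """Update the probability that a patient has the condition, visit by visit."""
    p = prior
    for billed in billing_history:
        p = update_probability(p, billed, sensitivity, specificity)
    return p

# Prevalence 10%, billing sensitivity 0.70, specificity 0.95,
# patient billed for the condition on two of three visits
print(bayesian_chain(0.10, [True, False, True], 0.70, 0.95))
```

Summing such per-patient posterior probabilities over a cohort gives a probabilistic patient count rather than a raw billing count.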
Accommodating Binary and Count Variables in Mediation: A Case for Conditional Indirect Effects
ERIC Educational Resources Information Center
Geldhof, G. John; Anthony, Katherine P.; Selig, James P.; Mendez-Luck, Carolyn A.
2018-01-01
The existence of several accessible sources has led to a proliferation of mediation models in the applied research literature. Most of these sources assume endogenous variables (e.g., M, and Y) have normally distributed residuals, precluding models of binary and/or count data. Although a growing body of literature has expanded mediation models to…
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
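A generic numerical sketch of the fitting ingredients is shown below: a placeholder dark-current peak-height PDF (an exponential tail standing in for the paper's weighted sum of McIntyre distributions) is convolved with Gaussian TIA noise, and the resulting CDF gives the probability of a false count above a chosen threshold. All numerical values are illustrative.

```python
import numpy as np

# Peak-height axis in arbitrary output units (e.g. electrons referred to the TIA input)
x = np.linspace(-5000.0, 20000.0, 5001)
dx = x[1] - x[0]

# Placeholder dark-current peak-height PDF (exponential tail, illustrative only)
dark_pdf = np.where(x >= 0.0, np.exp(-x / 2000.0) / 2000.0, 0.0)

# Gaussian kernel for the input-referred TIA noise, on a symmetric grid so that
# mode="same" keeps the convolution aligned with x
sigma_tia = 400.0
kx = np.arange(-5 * sigma_tia, 5 * sigma_tia + dx, dx)
noise_kernel = np.exp(-0.5 * (kx / sigma_tia) ** 2)
noise_kernel /= noise_kernel.sum()

# Distribution of the uncorrelated sum and its cumulative distribution
summed_pdf = np.convolve(dark_pdf, noise_kernel, mode="same")
cdf = np.cumsum(summed_pdf) * dx

def false_count_probability(threshold):
    """Probability that a dark/noise event exceeds the detection threshold."""
    return 1.0 - np.interp(threshold, x, cdf)

print(false_count_probability(3000.0))  # drops rapidly as the threshold is raised
```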
Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick
2015-05-01
Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings. Phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (applying four different counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the others included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. Users' initial impressions were positive, with little difference between the applications found. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods are the most reliable, outperforming the 60-second count. For children with raised respiratory rates the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.
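The Once-per-Breath calculation reduces to simple arithmetic on the tap timestamps; the sketch below shows one plausible form (rate from the elapsed time spanning the recorded breath intervals). The exact stopping rules and timing conventions of the study's applications may differ.

```python
def rate_from_taps(tap_times_s):
    """Breaths per minute from timestamps of one tap per breath.

    tap_times_s -- increasing timestamps (seconds) of at least two taps
    """
    n_intervals = len(tap_times_s) - 1
    if n_intervals < 1:
        raise ValueError("need at least two taps")
    elapsed = tap_times_s[-1] - tap_times_s[0]
    return 60.0 * n_intervals / elapsed

# Example: 21 taps (20 breath intervals) spread over 30 seconds -> 40 breaths/min
taps = [i * 1.5 for i in range(21)]
print(rate_from_taps(taps))
```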
Whitcher, R; Page, R D; Cole, P R
2014-06-01
The characteristics of alpha radiation have for decades been demonstrated in UK schools using small sealed (241)Am sources. There is a small but steady number of schools that report a considerable reduction in the alpha count rate detected by an end-window GM detector compared with when the source was new. This cannot be explained by incorrect apparatus or set-up, foil surface contamination, or degradation of the GM detector. The University of Liverpool and CLEAPSS collaborated to research the cause of this performance degradation. The aim was to find what was causing the performance degradation and the ramifications for both the useful and safe service life of the sources. The research shows that these foil sources have greater energy straggling with a corresponding reduction in spectral peak energy. A likely cause for this increase in straggling is a significant diffusion of the metals over time. There was no evidence to suggest the foils have become unsafe, but precautionary checks should be made on old sources.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
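For concreteness, a broken power-law parametrisation of the differential source-count distribution dN/dS, of the kind referred to above, can be written as in the sketch below; the parameter names and example values are illustrative and are not the fitted values of this analysis.

```python
import numpy as np

def dn_ds_broken(s, s_break, amplitude, index_bright, index_faint):
    """Broken power-law differential source-count distribution dN/dS.

    s            -- flux (same units as s_break)
    s_break      -- break flux
    amplitude    -- dN/dS at the break
    index_bright -- power-law index above the break (e.g. ~2.5, Euclidean-like)
    index_faint  -- power-law index below the break
    """
    s = np.asarray(s, dtype=float)
    index = np.where(s >= s_break, index_bright, index_faint)
    return amplitude * (s / s_break) ** (-index)

fluxes = np.logspace(-12, -8, 5)   # e.g. photon fluxes in cm^-2 s^-1 (illustrative)
print(dn_ds_broken(fluxes, 1e-10, 1e12, 2.5, 1.8))
```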
Calibration of the Accuscan II In Vivo System for I-125 Thyroid Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovard R. Perry; David L. Georgeson
2011-07-01
This report describes the March 2011 calibration of the Accuscan II HpGe In Vivo system for I-125 thyroid counting. The source used for the calibration was a DOE-manufactured Am-241/Eu-152 source contained in a 22 ml vial (BEA Am-241/Eu-152 RMC II-1), with energies from 26 keV to 344 keV. The center of the detector housing was positioned 64 inches from the vault floor. This position places the approximate center line of the detector housing at the center line of the source in the phantom thyroid tube. The energy and efficiency calibrations were performed using an RMC II phantom (Appendix J). Performance testing was conducted using source BEA Am-241/Eu-152 RMC II-1, and validation testing was performed using an I-125 source in a 30 ml vial (I-125 BEA Thyroid 002) and an ANSI N44.3 phantom (Appendix I). This report includes an overview introduction and records for the energy/FWHM and efficiency calibration, including performance verification and validation counting. The Accuscan II system was successfully calibrated for counting the thyroid for I-125 and verified in accordance with ANSI/HPS N13.30-1996 criteria.
NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.
Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron
2018-06-14
This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric. We show that the rate estimates systematically increase as more stringent local order parameters are used to count members of a cluster and provide tentative suggestions for appropriate clustering criteria.
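Seeded simulations are usually combined with classical nucleation theory to turn a measured critical size, driving force and attachment rate into a rate estimate. The sketch below shows those standard CNT expressions (barrier ΔG* = N*|Δμ|/2 and the Zeldovich factor), not the linear-regression analysis introduced in this work; all numbers in the example are order-of-magnitude placeholders.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def seeded_cnt_rate(n_star, delta_mu, attach_rate, density, temperature):
    """Classical-nucleation-theory rate estimate from a seeded simulation.

    n_star      -- critical nucleus size (formula units)
    delta_mu    -- driving force per formula unit (J), taken positive
    attach_rate -- attachment rate f+ at the critical size (1/s)
    density     -- number density of the metastable solution (1/m^3)
    temperature -- temperature (K)
    Returns the nucleation rate in nuclei per m^3 per second.
    """
    beta = 1.0 / (KB * temperature)
    barrier = 0.5 * n_star * delta_mu                           # CNT: dG* = N*|dmu|/2
    zeldovich = math.sqrt(delta_mu * beta / (6.0 * math.pi * n_star))
    return density * zeldovich * attach_rate * math.exp(-beta * barrier)

# Illustrative placeholder numbers only
print(seeded_cnt_rate(n_star=500, delta_mu=0.2 * KB * 298.15,
                      attach_rate=1e10, density=1e28, temperature=298.15))
```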
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, Jose A; Uckan, Taner; Gunning, John E
2010-01-01
The expected increased demand in fuel for nuclear power plants, combined with the fact that a significant portion of the current supply from the blend down of weapons-source material will soon be coming to an end, has led to the need for new sources of enriched uranium for nuclear fuel. As a result, a number of countries have announced plans, or are currently building, gaseous centrifuge enrichment plants (GCEPs) to supply this material. GCEPs have the potential to produce uranium at enrichments above the level necessary for nuclear fuel purposes: enrichments that make the uranium potentially usable for nuclear weapons. As a result, there is a critical need to monitor these facilities to ensure that nuclear material is not inappropriately enriched or diverted for unintended use. Significant advances have been made in instrument capability since the current International Atomic Energy Agency (IAEA) monitoring methods were developed. In numerous cases, advances have been made in other fields that have the potential, with modest development, to be applied in safeguards applications at enrichment facilities. A particular example of one of these advances is the flow and enrichment monitor (FEMO). (See Gunning, J. E. et al., 'FEMO: A Flow and Enrichment Monitor for Verifying Compliance with International Safeguards Requirements at a Gas Centrifuge Enrichment Facility,' Proceedings of the 8th International Conference on Facility Operations - Safeguards Interface, Portland, Oregon, March 30-April 4, 2008.) The FEMO is a conceptual instrument capable of continuously measuring, unattended, the enrichment and mass flow of 235U in pipes at a GCEP, and consequently increases the probability that the potential production of HEU and/or diversion of fissile material will be detected. The FEMO requires no piping penetrations and can be installed on pipes containing the flow of uranium hexafluoride (UF6) at a GCEP. The FEMO consists of two separate parts, a flow monitor (FM) and an enrichment monitor (EM). Development of the FM is primarily the responsibility of Oak Ridge National Laboratory, and development of the EM is primarily the responsibility of Los Alamos National Laboratory. The FM will measure the 235U mass flow rate by combining information from measuring the UF6 volumetric flow rate and the 235U density. The UF6 flow rate will be measured using characteristics of the process pumps used in product and tail UF6 header process lines of many GCEPs, and the 235U density will be measured using commercially available sodium iodide (NaI) gamma-ray scintillation detectors. This report describes the calibration of the portion of the FM that measures the 235U density. Research has been performed to define a methodology and collect data necessary to perform this calibration without the need for plant declarations. The 235U density detector is a commercially available system (GammaRad made by Amptek, www.amptek.com) that contains the NaI crystal, photomultiplier tube, signal conditioning electronics, and a multichannel analyzer (MCA). Measurements were made with the detector system installed near four 235U sources. Two of the sources were made of solid uranium, and the other two were in the form of UF6 gas in aluminum piping. One of the UF6 gas sources was located at ORNL and the other at LANL.
The ORNL source consisted of two pipe sections (schedule 40 aluminum pipe of 4-inch and 8-inch outside diameter) with 5.36% 235U enrichment, and the LANL source was a 4-inch schedule 40 aluminum pipe with 3.3% 235U enrichment. The configurations of the detector on these test sources, as well as on long straight pipe configurations expected to exist at GCEPs, were modeled using the computer code MCNP. The results of the MCNP calculations were used to define geometric correction factors between the test source and the GCEP application. Using these geometric correction factors, the experimental 186 keV counts in the test geometry were extrapolated to the expected GCEP geometry, and calibration curves were developed. A unique method to analyze the measurement was also developed that separated the detector spectrum into the five detectable decay gamma rays emitted by 235U in the 120 to 200 keV energy range. This analysis facilitated the assignment of a consistent value for the detector counts originating from 235U decays at 186 keV. This value is also more accurate because it includes the counts from gamma energies other than 186 keV, which results in increased counting statistics for the same measurement time. The 186 keV counts expected as a function of pressure and enrichment are presented in the body of this report. The main result of this research is a calibration factor for 4-inch and 8-inch schedule 40 aluminum pipes. For 4-inch pipes, the 235U density is 0.62 g of 235U per m^3 per each measured 186 keV count.
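Applying the reported 4-inch calibration factor is a one-line calculation; the sketch below assumes the net 186 keV counts are accumulated over the same counting interval used in the calibration, which is an assumption on my part.

```python
def u235_density_4inch(net_186keV_counts, calibration=0.62):
    """Estimate the 235U density (g/m^3) in a 4-inch schedule 40 aluminum pipe
    from the net 186 keV counts, using the report's calibration factor of
    0.62 g/m^3 per measured 186 keV count (counting interval as calibrated)."""
    return calibration * net_186keV_counts

# Example: 5 net 186 keV counts recorded in the calibrated counting interval
print(u235_density_4inch(5.0))  # ~3.1 g of 235U per cubic metre of gas
```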
Long-distance quantum key distribution with imperfect devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lo Piparo, Nicoló; Razavi, Mohsen
2014-12-04
Quantum key distribution over probabilistic quantum repeaters is addressed. We compare, under practical assumptions, two such schemes in terms of their secure key generation rate per memory, R_QKD. The two schemes under investigation are the one proposed by Duan et al. in [Nat. 414, 413 (2001)] and that of Sangouard et al. proposed in [Phys. Rev. A 76, 050301 (2007)]. We consider various sources of imperfections in the latter protocol, such as a nonzero double-photon probability for the source, dark count per pulse, channel loss and inefficiencies in photodetectors and memories, to find the rate for different nesting levels. We determine the maximum value of the double-photon probability beyond which it is not possible to share a secret key anymore. We find the crossover distance for up to three nesting levels. We finally compare the two protocols.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, Jeter C.; Aalseth, Craig E.; Bonicalzi, Ricco
Age dating groundwater and seawater using 39Ar/Ar ratios is an important tool to understand water mass flow rates and mean residence time. For modern or contemporary argon, the 39Ar activity is 1.8 mBq per liter of argon. Radiation measurements at these activity levels require ultra-low-background detectors. Low-background proportional counters have been developed at Pacific Northwest National Laboratory. These detectors use traditional mixtures of argon and methane as counting gas, and the residual 39Ar from commercial argon has become a predominant source of background activity in these detectors. We demonstrated sensitivity to 39Ar by using geological or ancient argon from gas wells in place of commercial argon. The low-level counting performance of these proportional counters is then demonstrated for sensitivities to 39Ar/Ar ratios sufficient to date water masses as old as 1000 years.
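Converting a measured 39Ar activity into a water-mass age follows directly from the decay law, using the modern-argon activity quoted above (1.8 mBq per liter of argon) and the 39Ar half-life of roughly 269 years; the sketch below is a generic illustration, not the laboratory's analysis procedure.

```python
import math

AR39_HALF_LIFE_YR = 269.0      # approximate 39Ar half-life
MODERN_ACTIVITY_MBQ = 1.8      # mBq per liter of argon for modern argon (from the text)

def ar39_age_years(sample_activity_mbq):
    """Groundwater age from the measured 39Ar activity per liter of argon."""
    if not 0.0 < sample_activity_mbq <= MODERN_ACTIVITY_MBQ:
        raise ValueError("activity must be positive and at most the modern value")
    mean_life = AR39_HALF_LIFE_YR / math.log(2.0)
    return mean_life * math.log(MODERN_ACTIVITY_MBQ / sample_activity_mbq)

# Example: a sample at 25% of modern 39Ar is roughly two half-lives old
print(ar39_age_years(0.45))   # ~538 years
```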
X-ray detection of the symbiotic star AG Draconis
NASA Technical Reports Server (NTRS)
Anderson, C. M.; Cassinelli, J. P.; Sanders, W. T.
1981-01-01
The detection of the yellow symbiotic star AG Draconis is reported. The star was detected by an imaging proportional counter at a count rate of 0.27 counts/s. The object is an intense source of very soft X-rays, and the X-ray luminosity is estimated to be approximately 10^32 erg/s, with a temperature of less than 2,000,000 K. If an interstellar column density of 3 x 10^20 cm^-2 is assumed, the emission measure is deduced to be greater than 3 x 10^55 cm^-3 from the X-ray data, and less than 3 x 10^59 cm^-3 from the optical data. The observation is discussed in the context of various models of the symbiotic stars.
Cerebellar pathology in childhood-onset vs. adult-onset essential tremor.
Louis, Elan D; Kuo, Sheng-Han; Tate, William J; Kelly, Geoffrey C; Faust, Phyllis L
2017-10-17
Although the incidence of ET increases with advancing age, the disease may begin at any age, including childhood. The question arises as to whether childhood-onset ET cases manifest the same sets of pathological changes in the cerebellum as those whose onset is during adult life. We quantified a broad range of postmortem features (Purkinje cell [PC] counts, PC axonal torpedoes, a host of associated axonal changes [PC axonal recurrent collateral count, PC thickened axonal profile count, PC axonal branching count], heterotopic PCs, and basket cell rating) in 60 ET cases (11 childhood-onset and 49 adult-onset) and 30 controls. Compared to controls, childhood-onset ET cases had lower PC counts, higher torpedo counts, higher heterotopic PC counts, higher basket cell plexus rating, and marginally higher PC axonal recurrent collateral counts. The median PC thickened axonal profile count and median PC axonal branching count were two to five times higher in childhood-onset ET than controls, but the differences did not reach statistical significance. Childhood-onset and adult-onset ET had similar PC counts, torpedo counts, heterotopic PC counts, basket cell plexus rating, PC axonal recurrent collateral counts, PC thickened axonal profile count and PC axonal branching count. In conclusion, we found that childhood-onset and adult-onset ET shared similar pathological changes in the cerebellum. The data suggest that pathological changes we have observed in the cerebellum in ET are a part of the pathophysiological cascade of events in both forms of the disease and that both groups seem to reach the same pathological endpoints at a similar age of death. Copyright © 2017 Elsevier B.V. All rights reserved.
XMM-Newton 13H deep field - I. X-ray sources
NASA Astrophysics Data System (ADS)
Loaring, N. S.; Dwelly, T.; Page, M. J.; Mason, K.; McHardy, I.; Gunn, K.; Moss, D.; Seymour, N.; Newsam, A. M.; Takata, T.; Sekguchi, K.; Sasseen, T.; Cordova, F.
2005-10-01
We present the results of a deep X-ray survey conducted with XMM-Newton, centred on the UK ROSAT 13H deep field area. This region covers 0.18 deg^2, and is the first of the two areas covered with XMM-Newton as part of an extensive multiwavelength survey designed to study the nature and evolution of the faint X-ray source population. We have produced detailed Monte Carlo simulations to obtain a quantitative characterization of the source detection procedure and to assess the reliability of the resultant source list. We use the simulations to establish a likelihood threshold, above which we expect less than seven (3 per cent) of our sources to be spurious. We present the final catalogue of 225 sources. Within the central 9 arcmin, 68 per cent of source positions are accurate to 2 arcsec, making optical follow-up relatively straightforward. We construct the N(>S) relation in four energy bands: 0.2-0.5, 0.5-2, 2-5 and 5-10 keV. In all but our highest energy band we find that the source counts can be represented by a double power law with a bright-end slope consistent with the Euclidean case and a break around 10^-14 erg cm^-2 s^-1. Below this flux, the counts exhibit a flattening. Our source counts reach densities of 700, 1300, 900 and 300 deg^-2 at fluxes of 4.1 × 10^-16, 4.5 × 10^-16, 1.1 × 10^-15 and 5.3 × 10^-15 erg cm^-2 s^-1 in the 0.2-0.5, 0.5-2, 2-5 and 5-10 keV energy bands, respectively. We have compared our source counts with those in the two Chandra deep fields and the Lockman Hole, and found our source counts to be amongst the highest of these fields in all energy bands. We resolve >51 per cent (>50 per cent) of the X-ray background emission in the 1-2 keV (2-5 keV) energy bands.
HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thorat, K.; Subrahmanyan, R.; Saripalli, L.
2013-01-01
The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam^-1 rms noise. The images (centered at R.A. 00h35m00s, decl. -67°00'00'' and R.A. 00h59m17s, decl. -67°00'00'', J2000 epoch) cover 8.42 deg^2 sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area correction, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists (as opposed to component lists) and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.
Recommended methods for monitoring change in bird populations by counting and capture of migrants
David J. T. Hussell; C. John Ralph
2005-01-01
Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...
Monitoring trends in bird populations: addressing background levels of annual variability in counts
Jared Verner; Kathryn L. Purcell; Jennifer G. Turner
1996-01-01
Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...
Alonso Roldán, Virginia; Bossio, Luisina; Galván, David E
2015-01-01
In species showing distributions attached to particular features of the landscape or conspicuous signs, counts are commonly made by making focal observations where animals concentrate. However, to obtain density estimates for a given area, independent searching for signs and occupancy rates of suitable sites is needed. In both cases, it is important to estimate detection probability and other possible sources of variation to avoid confounding effects on measurements of abundance variation. Our objective was to assess possible bias and sources of variation in a two-step protocol in which random designs were applied to search for signs while continuously recording video cameras were used to perform abundance counts where animals are concentrated, using mara (Dolichotis patagonum) as a case study. The protocol was successfully applied to maras within the Península Valdés protected area: it was logistically suitable, allowed warrens to be found and the associated adults to be counted, and enabled the detection probability to be estimated. Variability was documented in both components of the two-step protocol, and these sources of variation should be taken into account when applying the protocol. Warren detectability was approximately 80%, with little variation. Factors related to false positive detection were more important than imperfect detection. The detectability for individuals was approximately 90% using the entire day of observations. The shortest sampling period with a detection capacity similar to that of a full day was approximately 10 hours, and during this period the visiting dynamics showed no trend. For individual mara, the detection capacity of the camera was not significantly different from that of an observer during fieldwork. The presence of the camera did not affect the visiting behavior of adults to the warren. Application of this protocol will allow monitoring of the near-threatened mara, providing a minimum local population size and a baseline for measuring long-term trends.
Probing Majorana bound states via counting statistics of a single electron transistor
Li, Zeng-Zhao; Lam, Chi-Hang; You, J. Q.
2015-01-01
We propose an approach for probing Majorana bound states (MBSs) in a nanowire via counting statistics of a nearby charge detector in the form of a single-electron transistor (SET). We consider the impacts on the counting statistics by both the local coupling between the detector and an adjacent MBS at one end of a nanowire and the nonlocal coupling to the MBS at the other end. We show that the Fano factor and the skewness of the SET current are minimized for a symmetric SET configuration in the absence of the MBSs or when coupled to a fermionic state. However, the minimum points of operation are shifted appreciably in the presence of the MBSs to asymmetric SET configurations with a higher tunnel rate at the drain than at the source. This feature persists even when varying the nonlocal coupling and the pairing energy between the two MBSs. We expect that these MBS-induced shifts can be measured experimentally with available technologies and can serve as important signatures of the MBSs. PMID:26098973
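As a minimal sketch of the counting-statistics quantities mentioned above, the snippet below computes a Fano factor and a cumulant-normalised skewness from a record of counts; the Poisson data are a stand-in for real SET detector output, and the normalisation c3/c1 used for the skewness is one common convention, not necessarily the one adopted in the paper.

```python
import numpy as np

# Illustrative only: cumulant-based shot-noise diagnostics from a counting record.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=100_000)   # placeholder counting record

mean = counts.mean()                            # first cumulant
c2 = counts.var()                               # second cumulant
c3 = np.mean((counts - mean) ** 3)              # third cumulant

fano = c2 / mean        # 1 for Poisson statistics, <1 sub-Poissonian, >1 super-Poissonian
skewness = c3 / mean    # one common full-counting-statistics normalisation (assumption)

print(f"Fano factor ~ {fano:.3f}, skewness (c3/c1) ~ {skewness:.3f}")
```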
Quantifying Data Quality for Clinical Trials Using Electronic Data Capture
Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.
2008-01-01
Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958
Probing Jupiter's Radiation Environment with Juno-UVS
NASA Astrophysics Data System (ADS)
Kammer, J.; Gladstone, R.; Greathouse, T. K.; Hue, V.; Versteeg, M. H.; Davis, M. W.; Santos-Costa, D.; Becker, H. N.; Bolton, S. J.; Connerney, J. E. P.; Levin, S.
2017-12-01
While primarily designed to observe photon emission from the Jovian aurora, Juno's Ultraviolet Spectrograph (Juno-UVS) has also measured background count rates associated with penetrating high-energy radiation. These background counts are distinguishable from photon events, as they are generally spread evenly across the entire array of the Juno-UVS detector, and as the spacecraft spins, they set a baseline count rate higher than the sky background rate. During eight perijove passes, this background radiation signature has varied significantly on both short (spin-modulated) and longer (minutes to hours) timescales. We present comparisons of the Juno-UVS data across each of the eight perijove passes, with a focus on the count rate that can be clearly attributed to radiation effects rather than photon events. Once calibrated to determine the relationship between count rate and penetrating high-energy radiation (e.g., using existing GEANT models), these in situ measurements by Juno-UVS will provide additional constraints on radiation belt models close to the planet.
NASA Astrophysics Data System (ADS)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.
2018-04-01
The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
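To make the quoted broken power law concrete, the sketch below integrates S·dN/dS analytically between two flux limits using the break flux and indexes from the abstract; the normalisation A is purely illustrative, so only the shape of the calculation (not the 42% figure) is reproduced.

```python
# Sketch of integrating a broken power-law source-count distribution dN/dS to obtain
# the total flux contributed by point sources. Break flux and indexes are from the
# abstract; the normalisation A and the integration limits are illustrative only.

S_BREAK = 3.5e-11      # ph cm^-2 s^-1 (abstract)
GAMMA_HI = 2.09        # dN/dS index above the break (abstract)
GAMMA_LO = 1.07        # dN/dS index below the break (abstract)
A = 1.0                # normalisation of dN/dS at S_BREAK (NOT from the paper)

def flux_integral(s_lo, s_hi, gamma):
    """Analytic integral of S * A * (S / S_BREAK)**(-gamma) dS over [s_lo, s_hi]."""
    k = A * S_BREAK**gamma
    return k * (s_hi**(2.0 - gamma) - s_lo**(2.0 - gamma)) / (2.0 - gamma)

def total_flux(s_min, s_max):
    """Piecewise integral across the break of the broken power law."""
    return (flux_integral(s_min, S_BREAK, GAMMA_LO)
            + flux_integral(S_BREAK, s_max, GAMMA_HI))

print(total_flux(7.5e-12, 1e-8))   # arbitrary units because A is illustrative
```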
Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations
NASA Astrophysics Data System (ADS)
Lonsdale, Carol J.; Hacking, Perry B.
1989-04-01
Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape with those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.
SPITZER 70 AND 160 μm OBSERVATIONS OF THE COSMOS FIELD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frayer, D. T.; Huynh, M. T.; Bhattacharya, B.
2009-11-15
We present Spitzer 70 and 160 μm observations of the COSMOS Spitzer survey (S-COSMOS). The data processing techniques are discussed for the publicly released products consisting of images and source catalogs. We present accurate 70 and 160 μm source counts of the COSMOS field and find reasonable agreement with measurements in other fields and with model predictions. The previously reported counts for GOODS-North and the extragalactic First Look Survey are updated with the latest calibration, and counts are measured based on the large area SWIRE survey to constrain the bright source counts. We measure an extragalactic confusion noise level of σ_c = 9.4 ± 3.3 mJy (q = 5) for the MIPS 160 μm band based on the deep S-COSMOS data and report an updated confusion noise level of σ_c = 0.35 ± 0.15 mJy (q = 5) for the MIPS 70 μm band.
NASA Astrophysics Data System (ADS)
Altamirano, D.; Degenaar, N.; Heinke, C. O.; Homan, J.; Pooley, D.; Sivakoff, G. R.; Wijnands, R.
2011-10-01
Following the detection of an X-ray outburst in the direction of Terzan 5 (ATEL #3714), we obtained a Swift observation and additional RXTE observations. The XRT aboard Swift observed Terzan 5 on Oct. 26, 2011 in imaging mode for a total exposure time of 967 s. The source was detected at high count rates causing significant pile-up (the core is saturated), and a bad column intersects the point-spread function.
A new product for photon-limited imaging
NASA Astrophysics Data System (ADS)
Gonsiorowski, Thomas
1986-01-01
A new commercial low-light imaging detector, the Photon Digitizing Camera (PDC), is based on the PAPA detector developed at Harvard University. The PDC generates (x, y, t)-coordinate data of each detected photoevent. Because the positional address computation is performed optically, very high counting rates are achieved even at full spatial resolution. Careful optomechanical and electronic design results in a compact, rugged detector with superb performance. The PDC will be used for speckle imaging of astronomical sources and other astronomical and low-light applications.
Techniques for Microwave Near-Field Quantum Control of Trapped Ions
2013-01-31
counts. Each DDS (Analog Devices AD9858) can generate signals at frequencies up to 400 MHz with a frequency resolution of 0.233 Hz and phase resolution... A fast, two-channel DAC is used to generate arbitrary waveforms with a 50-MHz update rate, a voltage range from −10 V to 10 V, and a resolution of 0.305... mV. This DAC is programmed via USB and triggered by the data acquisition FPGA. We use three DDS modules as sources for three frequency octupling
The particle background observed by the X-ray detectors onboard Copernicus
NASA Technical Reports Server (NTRS)
Davison, P. J. N.
1974-01-01
The design and characteristics of the low-energy detectors on the Copernicus satellite are described, along with the functions of the sensors in obtaining data on the particle background. The procedure for processing the data obtained by the satellite is examined. The most significant positive deviations are caused by known weak X-ray sources in the field of view. In addition to small systematic effects, occasional random effects, in which the count rate increases suddenly and decreases within a few frames, are analyzed.
NASA Technical Reports Server (NTRS)
Boynton, W. V.; Droege, G. F.; Mitrofanov, I. G.; McClanahan, T. P.; Sanin, A. B.; Litvak, M. L.; Schaffner, M.; Chin, G.; Evans, L. G.; Garvin, J. B.;
2012-01-01
The data from the collimated sensors of the LEND instrument are shown to be of exceptionally high quality. Counting uncertainties are about 0.3% relative and are shown to be the only significant source of random error, thus conclusions based on small differences in count rates are valid. By comparison with the topography of Shoemaker crater, the spatial resolution of the instrument is shown to be consistent with the design value of 5 km for the radius of the circle over which half the counts from the lunar surface would be determined. The observed epithermal-neutron suppression factor due to the hydrogen deposit in Shoemaker crater of 0.25 plus or minus 0.04 cps is consistent with the collimated field-of-view rate of 1.7 cps estimated by Mitrofanov et al. (2010a). The statistical significance of the neutron suppressed regions (NSRs) relative to the larger surrounding polar region is demonstrated, and it is shown that they are not closely related to the permanently shadowed regions. There is a significant increase in H content in the polar regions independent of the H content of the NSRs. The non-NSR H content increases directly with latitude, and the rate of increase is virtually identical at both poles. There is little or no increase with latitude outside the polar region. Various mechanisms to explain this steep increase in the non-NSR polar H with latitude are investigated, and it is suggested that thermal volatilization is responsible for the increase because it is minimized at the low surface temperatures close to the poles.
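As a quick consistency check of the quoted counting statistics, a roughly 0.3% relative Poisson uncertainty corresponds to on the order of 1/0.003^2, or about 10^5, counts per map element; the count value in the snippet is illustrative, not a LEND data product.

```python
import math

# Poisson counting statistics: relative 1-sigma uncertainty of N counts is 1/sqrt(N).
def relative_uncertainty(counts):
    return 1.0 / math.sqrt(counts)

# ~110,000 counts gives ~0.3% relative uncertainty (illustrative count value).
print(f"{relative_uncertainty(110_000):.4f}")
```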
Results of the first continuous meteor head echo survey at polar latitudes
NASA Astrophysics Data System (ADS)
Schult, Carsten; Stober, Gunter; Janches, Diego; Chau, Jorge L.
2017-11-01
We present the first quasi-continuous meteor head echo measurements obtained during a period of over two years using the Middle Atmosphere ALOMAR Radar System (MAARSY). The measurements yield information on the altitude, trajectory, vector velocity, radar cross section, deceleration and dynamical mass of every single event. The large statistical sample of nearly one million meteor head echo detections provides an excellent overview of the elevation, altitude, velocity and daily count rate distributions during different times of the year at polar latitudes. Only 40% of the meteors were detected within the full width at half maximum of the specific sporadic meteor sources. Our observations of the sporadic meteors are compared to observations with other radar systems and a meteor input function (MIF). The best way to compare different radar systems is by comparing the radar cross section (RCS), which is the main detection criterion for each system. In this study we aim to compare our observations with a MIF, which provides information only about the meteoroid mass. Thus, we use a statistical approach for the elevation- and velocity-dependent visibility and a specific mass selection. The predicted absolute count rates from the MIF are in good agreement with the observations when it is assumed that the radar system is only sensitive to meteoroids with masses higher than one microgram. The analysis of the dynamical masses appears consistent with this assumption, since the count rate of events with smaller masses is low and decreases even further when only events with relatively small errors are used.
Cammin, Jochen; Xu, Jennifer; Barber, William C.; Iwanczyk, Jan S.; Hartsough, Neal E.; Taguchi, Katsuyuki
2014-01-01
Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors’ previous work [K. Taguchi , “Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects,” Med. Phys. 38(2), 1089–1102 (2011)]. The agreement between the x-ray spectra calculated by the cascaded SRE+PPE model and the measured spectra was evaluated for various levels of deadtime loss ratios (DLR) and incident spectral shapes, realized using different attenuators, in terms of the weighted coefficient of variation (COVW), i.e., the root mean square difference weighted by the statistical errors of the data and divided by the mean. Results: At low count rates, when DLR < 10%, the distorted spectra measured by the DXMCT-1 were in agreement with those calculated by SRE only, with COVW's less than 4%. At higher count rates, the measured spectra were also in agreement with the ones calculated by the cascaded SRE+PPE model; with PMMA as attenuator, COVW was 5.6% at a DLR of 22% and as small as 6.7% for a DLR as high as 55%. Conclusions: The x-ray spectra calculated by the proposed model agreed with the measured spectra over a wide range of count rates and spectral shapes. The SRE model predicted the distorted, recorded spectra with low count rates over various types and thicknesses of attenuators. The study also validated the hypothesis that the complex spectral distortions in a PCD can be adequately modeled by cascading the count-rate independent SRE and the count-rate dependent PPE. PMID:24694136
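The cascade idea (a count-rate-independent response followed by a count-rate-dependent pileup stage) can be caricatured as below. This is a toy Monte Carlo, not the paper's parameterised SRE/PPE model: the photopeak fraction, energy resolution, toy spectrum and deadtime are invented, and pileup is reduced to random pairwise energy summing.

```python
import numpy as np

# Toy two-stage model: (1) count-rate-independent spectral response (photopeak plus
# continuum tail), then (2) a crude paralyzable pileup stage. All parameters invented.
rng = np.random.default_rng(1)

def spectral_response(e_incident, peak_fraction=0.7, sigma_kev=5.0):
    """Recorded energy for each photon: Gaussian photopeak or uniform low-energy tail."""
    e_incident = np.asarray(e_incident, dtype=float)
    in_peak = rng.random(e_incident.size) < peak_fraction
    peak = rng.normal(e_incident, sigma_kev)
    tail = rng.uniform(0.0, e_incident)
    return np.where(in_peak, peak, tail)

def pileup(e_recorded, true_rate_cps, deadtime_s):
    """Crude pileup: with probability 1-exp(-rate*deadtime), sum with a random partner."""
    p_pile = 1.0 - np.exp(-true_rate_cps * deadtime_s)
    piled = rng.random(e_recorded.size) < p_pile
    partner = rng.permutation(e_recorded)
    return np.where(piled, e_recorded + partner, e_recorded)

incident = rng.choice([60.0, 80.0, 120.0], size=200_000)      # keV, toy incident spectrum
recorded_low_rate = spectral_response(incident)                # SRE stage only
recorded_high_rate = pileup(recorded_low_rate, 5e6, 1e-7)      # SRE cascaded with pileup
print((recorded_high_rate > 130.0).mean())  # fraction above the highest incident energy,
                                            # a signature of pileup summing
```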
X-Ray Observations of High-Energy Pulsars: PSR B1951+32 and Geminga
NASA Astrophysics Data System (ADS)
Ho, Cheng
Observations at frequencies across a wide range of the electromagnetic spectrum are key to understanding the origin and mechanisms of high-energy emission from pulsars. We propose to observe the high-energy pulsars PSR B1951+32 and Geminga with XTE. These two sources emit X-rays at a low enough count rate that we can acquire high-resolution timing and spectral data, allowing us to perform detailed analysis on the ground. A staring integration of 10 ksec for each source is requested. Data obtained in these observations, together with those from ROSAT, GRO and a planned project for optical counterpart study at Los Alamos, will provide crucial information to advance high-energy pulsar research.
2013 Kids Count in Colorado! Community Matters
ERIC Educational Resources Information Center
Colorado Children's Campaign, 2013
2013-01-01
"Kids Count in Colorado!" is an annual publication of the Children's Campaign, providing state and county level data on child well-being factors including child health, education, and economic status. Since its first release 20 years ago, "Kids Count in Colorado!" has become the most trusted source for data and information on…
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials, for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement, with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were also obtained using the MCNPX code, and the results for the FT8 CAP multiplicity tally option, with the values of ε, fd, and ft measured with a strong source, most closely match the measurement results, to within a 1% error. A preliminary calibration curve for the ASNC was generated using the point model equation relationship between 244Cm and 252Cf, and the calibration coefficient for a non-multiplying sample is 2.78×10^5 (Doubles counts/s per g 244Cm). The preliminary calibration curves for the ACP samples were also obtained using an MCNPX simulation. The influence of neutron multiplication on the increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards with a known 244Cm mass will be performed in the near future.
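Once such a calibration coefficient is in hand, its use for a non-multiplying sample reduces to a linear conversion from the measured Doubles rate to 244Cm mass, as in the sketch below; the example rate is invented and the multiplication corrections needed for metal ingots or UO2 powder are ignored.

```python
# Minimal sketch of using a point-model calibration coefficient: for a non-multiplying
# sample, the Doubles rate scales linearly with the 244Cm mass. The coefficient is the
# preliminary value quoted in the abstract; the example rate is invented.
K_DOUBLES_PER_G_CM244 = 2.78e5   # Doubles counts/s per g 244Cm (abstract, non-multiplying)

def cm244_mass_grams(doubles_rate_cps):
    """244Cm mass implied by a measured Doubles rate, non-multiplying sample only."""
    return doubles_rate_cps / K_DOUBLES_PER_G_CM244

print(cm244_mass_grams(1390.0))   # ~0.005 g (5 mg) of 244Cm for this illustrative rate
```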
NASA Astrophysics Data System (ADS)
Quarles, C. A.; Sheffield, Thomas; Stacy, Scott; Yang, Chun
2009-03-01
The uniformity of rubber-carbon black composite materials has been investigated with positron Doppler Broadening Spectroscopy (DBS). The number of grams of carbon black (CB) mixed into one hundred grams of rubber, phr, is used to characterize a sample. A typical concentration for rubber in tires is 50 phr. The S parameter measured by DBS has been found to depend on the phr of the sample as well as the type of rubber and carbon black. The variation in carbon black concentration within a surface area of about 5 mm diameter can be measured by moving a standard Na-22 or Ge-68 positron source over an extended sample. The precision of the concentration measurement depends on the dwell time at a point on the sample. The time required to determine uniformity over an extended sample can be reduced by running at a much higher counting rate than is typical in DBS and correcting for the systematic variation of the S parameter with counting rate. Variation in CB concentration with mixing time at the level of about 0.5% has been observed.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.
Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi
2010-04-01
The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers and other front-end electronics cause a baseline drift in a DC-coupled system, which degrades energy resolution and positioning accuracy. Event pileup is common in a high count-rate system, and baseline drift creates errors in the pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many BLR methods have been reported, from classic analog techniques to digital filter solutions. However, a single-channel analog BLR can only work below a count rate of 500 kcps, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to recover the original pulse with zero baseline drift. The method self-tracks the baseline without involving a microcontroller. The circuit consists of two digital counter/timers, one comparator, one register and one subtraction unit. Simulations show that a single channel works at a 30 Mcps count rate under pileup conditions. A total of 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
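A software caricature of the SOBLR idea is shown below: baseline samples are drawn only from pulse-free parts of the free-running ADC stream, averaged, and subtracted from the waveform. The gate threshold, window length and simulated waveform are invented; the actual circuit uses counters, a comparator and a register in an FPGA rather than floating-point arithmetic.

```python
import numpy as np

def restore_baseline(waveform, gate=20.0, window=256):
    """Subtract a baseline estimated only from samples the gate classifies as pulse-free."""
    quiet = waveform[np.abs(waveform - np.median(waveform)) < gate]   # exclude pulses
    baseline = quiet[-window:].mean() if quiet.size else waveform.mean()
    return waveform - baseline

rng = np.random.default_rng(2)
wave = 35.0 + rng.normal(0.0, 3.0, 10_000)   # slow DC offset plus electronic noise
wave[5000:5010] += 400.0                      # a gamma pulse that must not bias the baseline
restored = restore_baseline(wave)
print(restored.mean())                        # close to zero once the DC offset is removed
```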
Are the birch trees in Southern England a source of Betula pollen for North London?
Skjøth, C A; Smith, M; Brandt, J; Emberlin, J
2009-01-01
Birch pollen is highly allergenic. Knowledge of daily variations, atmospheric transport and source areas of birch pollen is important for exposure studies and for warnings to the public, especially for large cities such as London. Our results show that broad-leaved forests with high birch tree densities are located to the south and west of London. Bi-hourly Betula pollen concentrations for all the days included in the study, and for all available days with high birch pollen counts (daily average birch pollen counts>80 grains/m3), show that, on average, there is a peak between 1400 hours and 1600 hours. Back-trajectory analysis showed that, on days with high birch pollen counts (n=60), 80% of air masses arriving at the time of peak diurnal birch pollen count approached North London from the south in a 180 degree arc from due east to due west. Detailed investigations of three Betula pollen episodes, with distinctly different diurnal patterns compared to the mean daily cycle, were used to illustrate how night-time maxima (2200-0400 hours) in Betula pollen counts could be the result of transport from distant sources or long transport times caused by slow moving air masses. We conclude that the Betula pollen recorded in North London could originate from sources found to the west and south of the city and not just trees within London itself. Possible sources outside the city include Continental Europe and the Betula trees within the broad-leaved forests of Southern England.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
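The core of the simulation described above can be sketched in a few lines: place birds uniformly in a disc of radius w, let the per-occasion detection probability fall off as a half-normal function of distance with g(0) = 1, and compare the fraction counted over four occasions with the true number present. The specific sigma values below are illustrative.

```python
import numpy as np

# Distance-related heterogeneity in point counts: half-normal per-occasion detection.
rng = np.random.default_rng(3)

def detected_fraction(sigma, w=100.0, occasions=4, n=100_000):
    r = w * np.sqrt(rng.random(n))                 # uniform bird density within the disc
    p_occ = np.exp(-r**2 / (2.0 * sigma**2))       # half-normal detection, g(0) = 1
    p_any = 1.0 - (1.0 - p_occ) ** occasions       # detected on at least one occasion
    return (rng.random(n) < p_any).mean()

for sigma in (25.0, 50.0, 100.0):                  # illustrative scale parameters
    print(f"sigma/w = {sigma/100:.2f}: fraction counted ~ {detected_fraction(sigma):.2f}")
```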
NASA Technical Reports Server (NTRS)
Gridley, D. S.; Pecaut, M. J.; Miller, G. M.; Moyers, M. F.; Nelson, G. A.
2001-01-01
The goal of part II of this study was to evaluate the effects of gamma-radiation on circulating blood cells, functional characteristics of splenocytes, and cytokine expression after whole-body irradiation at varying total doses and at low- and high-dose-rates (LDR, HDR). Young adult C57BL/6 mice (n = 75) were irradiated with either 1 cGy/min or 80 cGy/min photons from a 60Co source to cumulative doses of 0.5, 1.5, and 3.0 Gy. The animals were euthanized at 4 days post-exposure for in vitro assays. Significant dose- (but not dose-rate-) dependent decreases were observed in erythrocyte and blood leukocyte counts, hemoglobin, hematocrit, lipopolysaccharide (LPS)-induced 3H-thymidine incorporation, and interleukin-2 (IL-2) secretion by activated spleen cells when compared to sham-irradiated controls (p < 0.05). Basal proliferation of leukocytes in the blood and spleen increased significantly with increasing dose (p < 0.05). Significant dose rate effects were observed only in thrombocyte counts. Plasma levels of transforming growth factor-beta 1 (TGF-beta 1) and splenocyte secretion of tumor necrosis factor-alpha (TNF-alpha) were not affected by either the dose or dose rate of radiation. The data demonstrate that the responses of blood and spleen were largely dependent upon the total dose of radiation employed and that an 80-fold difference in the dose rate was not a significant factor in the great majority of measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W; Kappadath, S
2014-06-01
Purpose: Some common methods to solve for deadtime are (1) the dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) the lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative for calculating the deadtime of a paralyzable gamma camera. Methods: The deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates, alpha = N2/N1, is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system of two equations with two unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4 GBq 99mTc) were acquired on a Siemens Symbia S multiple times over 48 hours. Each projection has >100 kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of the uncertainty in T on the uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in the deadtime estimate of ~0.076 μs (95% CI: -0.049 μs, 0.103 μs) between the 2-acquisition and model-fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256 μs. The uncertainty in the deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter-fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
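A hedged reconstruction of the 2-acquisition algebra (our own working, not a quote of the authors' derivation): for a paralyzable detector M = N·exp(-N·T), so the ratio of the two observed rates gives the product N1·T in closed form, and T follows directly.

```python
import math

# For a paralyzable detector, M = N * exp(-N*T). With observed rates M1, M2 and a known
# TRUE-rate ratio alpha = N2/N1 (e.g. from a dose calibrator or radioactive decay),
# M2/M1 = alpha * exp(-(alpha - 1) * N1 * T), which gives N1*T and hence T analytically.
def paralyzable_deadtime(m1, m2, alpha):
    """Deadtime T (inverse units of the rates) from two observed rates and known alpha."""
    x = math.log(alpha * m1 / m2) / (alpha - 1.0)   # x = N1 * T
    n1 = m1 * math.exp(x)                            # true rate of the first acquisition
    return x / n1

# Round trip with invented values: generate observed rates from known N1, T, alpha,
# then recover T.
n1, t, alpha = 2.0e5, 1.0e-6, 0.5
m1 = n1 * math.exp(-n1 * t)
m2 = alpha * n1 * math.exp(-alpha * n1 * t)
print(paralyzable_deadtime(m1, m2, alpha))           # ~1e-6
```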
2011-01-01
Background It is unclear whether antiretroviral (ART) naive HIV-positive individuals with high CD4 counts have a raised mortality risk compared with the general population, but this is relevant for considering earlier initiation of antiretroviral therapy. Methods Pooling data from 23 European and North American cohorts, we calculated country-, age-, sex-, and year-standardised mortality ratios (SMRs), stratifying by risk group. Included patients had at least one pre-ART CD4 count above 350 cells/mm3. The association between CD4 count and death rate was evaluated using Poisson regression methods. Findings Of 40,830 patients contributing 80,682 person-years of follow up with CD4 count above 350 cells/mm3, 419 (1.0%) died. The SMRs (95% confidence interval) were 1.30 (1.06-1.58) in homosexual men, and 2.94 (2.28-3.73) and 9.37 (8.13-10.75) in the heterosexual and IDU risk groups respectively. CD4 count above 500 cells/mm3 was associated with a lower death rate than 350-499 cells/mm3: adjusted rate ratios (95% confidence intervals) for 500-699 cells/mm3 and above 700 cells/mm3 were 0.77 (0.61-0.95) and 0.66 (0.52-0.85) respectively. Interpretation In HIV-infected ART-naive patients with high CD4 counts, death rates were raised compared with the general population. In homosexual men this was modest, suggesting that a proportion of the increased risk in other groups is due to confounding by other factors. Even in this high CD4 count range, lower CD4 count was associated with raised mortality. PMID:20638118
A Prescription for List-Mode Data Processing Conventions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef
There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error values associated with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we will present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.
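The practical consequence of the processed-time convention is easy to see in a toy example: the same number of recorded counts analysed with two slightly different values of tp yields different rates and different Poisson uncertainties. The numbers below are invented.

```python
import math

# Converting counts to rates with two different "processed time" conventions.
def rate_with_error(counts, t_p):
    """Counting rate and its 1-sigma Poisson uncertainty for processed time t_p."""
    return counts / t_p, math.sqrt(counts) / t_p

for t_p in (598.0, 600.0):                     # two slightly different t_p conventions, seconds
    r, dr = rate_with_error(30_000, t_p)
    print(f"t_p = {t_p:.0f} s -> {r:.2f} +/- {dr:.2f} cps")
```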
Investigations of Wind/WAVES Dust Impacts
NASA Astrophysics Data System (ADS)
St Cyr, O. C.; Wilson, L. B., III; Rockcliffe, K.; Mills, A.; Nieves-Chinchilla, T.; Adrian, M. L.; Malaspina, D.
2017-12-01
The Wind spacecraft launched in November 1994 with a primary goal to observe and understand the interaction between the solar wind and Earth's magnetosphere. The waveform capture detector, TDS, of the radio and plasma wave investigation, WAVES [Bougeret et al., 1995], onboard Wind incidentally detected micron-sized dust as electric field pulses from the recollection of the impact plasma clouds (an unintended objective). TDS has detected over 100,000 dust impacts spanning almost two solar cycles; a dataset of these impacts has been created and was described in Malaspina & Wilson [2016]. The spacecraft continues to collect data about plasma, energetic particles, and interplanetary dust impacts. Here we report on two investigations recently conducted on the Wind/WAVES TDS database of dust impacts. One possible source of dust particles is the annually-recurring meteor showers. Using the nine major showers defined by the American Meteor Society, we compared dust count rates before, during, and after the peak of the showers using averaging windows of varying duration. However, we found no statistically significant change in the dust count rates due to major meteor showers. This appears to be an expected result since smaller grains, like the micron particles that Wind is sensitive to, are affected by electromagnetic interactions and Poynting-Robertson drag, and so are scattered away from their initial orbits. Larger grains tend to be more gravitationally dominated and stay on the initial trajectory of the parent body so that only the largest dust grains (those that create streaks as they burn up in the atmosphere) are left in the orbit of the parent body. Ragot and Kahler [2003] predicted that coronal mass ejections (CMEs) near the Sun could effectively scatter dust grains of comparable size to those observed by Wind. Thus, we examined the dust count rates immediately before, during, and after the passage of the 350 interplanetary CMEs observed by Wind over its 20+ year lifetime. We found a statistically significant and consistent trend of count rate deficits during the ICMEs compared to the periods immediately before and after the ICMEs. These preliminary results suggest that ICMEs may scatter micron-sized dust, or that they may exclude it during their initiation.
NASA Astrophysics Data System (ADS)
Singh, Arvind; Desai, Shraddha; Kumar, Arvind; Topkar, Anita
2018-05-01
A novel approach using thin epitaxial silicon PIN detectors for thermal neutron measurements with reduced γ sensitivity is presented. Monte Carlo simulations showed a significant reduction in the gamma sensitivity for thin detectors with thicknesses of 10-25 μm compared to a detector of 300 μm thickness. Epitaxial PIN silicon detectors with thicknesses of 10 μm, 15 μm and 25 μm were fabricated using a custom process. The detectors exhibited low leakage currents of a few nanoamperes. The gamma sensitivity of the detectors was experimentally studied using a 33 μCi, 662 keV 137Cs source. Considering the count rates, compared to a 300 μm thick detector, the gamma sensitivity of the 10 μm, 15 μm and 25 μm thick detectors was reduced by factors of 1874, 187 and 18 respectively. The detector performance for thermal neutrons was subsequently investigated with a thermal neutron beam using an enriched 10B film as a neutron converter layer. The thermal neutron spectra for all three detectors exhibited three distinct regions corresponding to the 4He and 7Li charged products released in the 10B-n reaction. With a 10B converter, the count rates were 1466 cps, 3170 cps and 2980 cps for the detectors with thicknesses of 10 μm, 25 μm and 300 μm respectively. The thermal neutron response of the thin detectors with 10 μm and 25 μm thickness showed a significant reduction in gamma sensitivity compared to that observed for the 300 μm thick detector. Relative to the total count rate obtained for thermal neutrons with a 10B converter film, the count rates without the converter layer were about 4%, 7% and 36% for the detectors with thicknesses of 10 μm, 25 μm and 300 μm respectively. The detector with 10 μm thickness showed a negligible gamma sensitivity of 4 cps, but higher electronic noise and reduced pulse heights. The detector with 25 μm thickness demonstrated the best performance with respect to electronic noise, thermal neutron response and gamma sensitivity.
NASA Technical Reports Server (NTRS)
Smith, S. J.; Adams, J. S.; Bandler, S. R.; Betancourt-Martinez, G. L.; Chervenak, J. A.; Chiao, M. P.; Eckart, M. E.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.;
2016-01-01
The focal plane of the X-ray integral field unit (X-IFU) for ESA's Athena X-ray observatory will consist of approximately 4000 transition edge sensor (TES) x-ray microcalorimeters optimized for the energy range of 0.2 to 12 kiloelectronvolts. The instrument will provide unprecedented spectral resolution of approximately 2.5 electronvolts at energies of up to 7 kiloelectronvolts and will accommodate photon fluxes of 1 milliCrab (90 counts per second) for point source observations. The baseline configuration is a uniform large pixel array (LPA) of 4.28-arcsecond pixels that is read out using frequency domain multiplexing (FDM). However, an alternative configuration under study incorporates an 18 × 18 small pixel array (SPA) of 2-arcsecond pixels in the central approximately 36-arcsecond region. This hybrid array configuration could be designed to accommodate higher fluxes of up to 10 milliCrabs (900 counts per second) or, alternatively, for improved spectral performance (less than 1.5 electronvolts) at low count rates. In this paper we report on the TES pixel designs that are being optimized to meet these proposed LPA and SPA configurations. In particular, we describe details of how important TES parameters are chosen to meet the specific mission criteria such as energy resolution, count rate and quantum efficiency, and highlight performance trade-offs between designs. The basis of the pixel parameter selection is discussed in the context of existing TES arrays that are being developed for solar and x-ray astronomy applications. We describe the latest results on DC-biased diagnostic arrays as well as large-format kilopixel arrays and discuss the technical challenges associated with integrating different array types onto a single detector die.
Population demographics of two local South Carolina mourning dove populations
McGowan, D.P.; Otis, D.L.
1998-01-01
The mourning dove (Zenaida macroura) call-count index had a significant (P 2,300 doves and examined >6,000 individuals during harvest bag checks. An age-specific band recovery model with time- and area-specific recovery rates, and constant survival rates, was chosen for estimation via Akaike's Information Criterion (AIC), likelihood ratio, and goodness-of-fit criteria. After-hatching-year (AHY) annual survival rate was 0.359 (SE = 0.056), and hatching-year (HY) annual survival rate was 0.118 (SE = 0.042). Average estimated recruitment per adult female into the prehunting season population was 3.40 (SE = 1.25) and 2.32 (SE = 0.46) for the 2 study areas. Our movement data support earlier hypotheses of nonmigratory breeding and harvested populations in South Carolina. Low survival rates and estimated population growth rate in the study areas may be representative only of small-scale areas that are heavily managed for dove hunting. Source-sink theory was used to develop a model of region-wide populations that is composed of source areas with positive growth rates and sink areas of declining growth. We suggest management of mourning doves in the Southeast might benefit from improved understanding of local population dynamics, as opposed to regional-scale population demographics.
Rausch, Ivo; Cal-González, Jacobo; Dapra, David; Gallowitsch, Hans Jürgen; Lind, Peter; Beyer, Thomas; Minear, Gregory
2015-12-01
The purpose of the study is to evaluate the physical performance of a Biograph mCT Flow 64-4R PET/CT system (Siemens Healthcare, Germany) and to compare clinical image quality in step-and-shoot (SS) and continuous table motion (CTM) acquisitions. The spatial resolution, sensitivity, count rate curves, and Image Quality (IQ) parameters following the National Electrical Manufacturers Association (NEMA) NU2-2012 standard were evaluated. For resolution measurements, an (18)F point source inside a glass capillary tube was used. Sensitivity measurements were based on a 70-cm-long polyethylene tube, filled with 4.5 MBq of FDG. Scatter fraction and count rates were measured using a 70-cm-long polyethylene cylinder with a diameter of 20 cm and a line source (1.04 GBq of FDG) inserted axially into the cylinder 4.5 cm off-centered. A NEMA IQ phantom containing six spheres (10- to 37-mm diameter) was used for the evaluation of the image quality. First, a single-bed scan was acquired (NEMA standard), followed by a two-bed scan (4 min each) of the IQ phantom with the image plane containing the spheres centered in the overlap region of the two bed positions. In addition, a scan of the same region in CTM mode was performed with a table speed of 0.6 mm/s. Furthermore, two patient scans were performed in CTM and SS mode. Image contrasts and patient images were compared between SS and CTM acquisitions. Full Width Half Maximum (FWHM) of the spatial resolution ranged from 4.3 to 7.8 mm (radial distance 1 to 20 cm). The measured sensitivity was 9.6 kcps/MBq, both at the center of the FOV and 10 cm off-center. The measured noise equivalent count rate (NECR) peak was 185 kcps at 29.0 kBq/ml. The scatter fraction was 33.5 %. Image contrast recovery values (sphere-to-background of 8:1) ranged from 42 % (10-mm sphere) to 79 % (37-mm sphere). The background variability was between 2.1 and 5.3 % (SS) and between 2.4 and 6.9 % (CTM). No significant difference in image quality was observed between SS and CTM mode. The spatial resolution, sensitivity, scatter fraction, and count rates were in concordance with the published values for the predecessor system, the Biograph mCT. Contrast recovery values as well as image quality obtained in SS and CTM acquisition modes were similar.
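For readers unfamiliar with the NEMA noise-equivalent count rate figure quoted above, the following minimal Python sketch illustrates the standard relation NECR = T²/(T + S + R) between true (T), scattered (S) and random (R) coincidence rates. The rate models and numbers are illustrative assumptions, not the measured Biograph mCT Flow data.

```python
import numpy as np

def necr(trues, scatter, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + R).

    trues, scatter, randoms: count rates (cps) of true, scattered and
    random coincidences at a given activity concentration.
    """
    return trues**2 / (trues + scatter + randoms)

# Illustrative rate models only (not the paper's measurements):
activity = np.linspace(1, 60, 120)                    # kBq/ml
trues = 8.0e3 * activity / (1 + 0.02 * activity)      # simple saturation model
scatter = 0.5 * trues                                 # ~33% scatter fraction
randoms = 12.0 * activity**2                          # randoms grow ~quadratically

curve = necr(trues, scatter, randoms)
peak_activity = activity[np.argmax(curve)]
print(f"peak NECR {curve.max() / 1e3:.1f} kcps at {peak_activity:.1f} kBq/ml")
```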
On-demand generation of background-free single photons from a solid-state source
NASA Astrophysics Data System (ADS)
Schweickert, Lucas; Jöns, Klaus D.; Zeuner, Katharina D.; Covre da Silva, Saimon Filipe; Huang, Huiying; Lettner, Thomas; Reindl, Marcus; Zichi, Julien; Trotta, Rinaldo; Rastelli, Armando; Zwiller, Val
2018-02-01
True on-demand high-repetition-rate single-photon sources are highly sought after for quantum information processing applications. However, any coherently driven two-level quantum system suffers from a finite re-excitation probability under pulsed excitation, causing undesirable multi-photon emission. Here, we present a solid-state source of on-demand single photons yielding a raw second-order coherence of g^(2)(0) = (7.5 ± 1.6) × 10^{-5} without any background subtraction or data processing. To this date, this is the lowest value of g^(2)(0) reported for any single-photon source even compared to the previously reported best background subtracted values. We achieve this result on GaAs/AlGaAs quantum dots embedded in a low-Q planar cavity by employing (i) a two-photon excitation process and (ii) a filtering and detection setup featuring two superconducting single-photon detectors with ultralow dark-count rates of (0.0056 ± 0.0007) s^{-1} and (0.017 ± 0.001) s^{-1}, respectively. Re-excitation processes are dramatically suppressed by (i), while (ii) removes false coincidences resulting in a negligibly low noise floor.
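As background to the g^(2)(0) figure above: for a pulsed source, the raw second-order coherence is commonly estimated as the ratio of coincidence counts in the zero-delay peak to the mean counts in the side peaks at multiples of the laser repetition period. A minimal sketch under that assumption (function and variable names are hypothetical, not the paper's analysis code):

```python
import numpy as np

def g2_zero_pulsed(bin_centers, coincidence_counts, rep_period, half_window):
    """Estimate g2(0) from a pulsed-excitation coincidence histogram as the
    zero-delay peak area divided by the mean area of nearby side peaks."""
    tau = np.asarray(bin_centers)
    counts = np.asarray(coincidence_counts)
    zero_peak = counts[np.abs(tau) < half_window].sum()
    side_peaks = [counts[np.abs(tau - k * rep_period) < half_window].sum()
                  for k in (-3, -2, -1, 1, 2, 3)]
    return zero_peak / np.mean(side_peaks)
```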
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
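The paper's implementation is in MATLAB and is not reproduced here; as an illustration of the kind of preprocessing it describes, the sketch below (Python, with assumed parameters) thresholds a flood image to locate the active detector area, crops it as the useful field of view (UFOV), and takes the central 75% as the central field of view (CFOV), following the usual NEMA convention. The threshold fraction and CFOV fraction are assumptions, not values from the paper.

```python
import numpy as np

def extract_ufov_cfov(flood, threshold_fraction=0.1, cfov_fraction=0.75):
    """Crop a 2-D flood-source image to its UFOV and CFOV.

    Pixels below threshold_fraction of the image maximum are treated as
    background outside the detector; the CFOV is the central cfov_fraction
    of the UFOV in each dimension.
    """
    flood = np.asarray(flood, dtype=float)
    mask = flood > threshold_fraction * flood.max()
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    ufov = flood[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

    h, w = ufov.shape
    dh = int(round(h * (1 - cfov_fraction) / 2))
    dw = int(round(w * (1 - cfov_fraction) / 2))
    cfov = ufov[dh:h - dh, dw:w - dw]
    return ufov, cfov
```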
Setup and Calibration of SLAC's Peripheral Monitoring Stations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooper, C.
2004-09-03
The goals of this project were to troubleshoot, repair, calibrate, and establish documentation regarding SLAC's (Stanford Linear Accelerator Center's) PMS (Peripheral Monitoring Station) system. The PMS system consists of seven PMSs that continuously monitor skyshine (neutron and photon) radiation levels in SLAC's environment. Each PMS consists of a boron trifluoride (BF3) neutron detector (model RS-P1-0802-104 or NW-G-20-12) and a Geiger Moeller (GM) gamma ray detector (model TGM N107 or LND 719) together with their respective electronics. Electronics for each detector are housed in Nuclear Instrument Modules (NIMs) and are plugged into a NIM bin in the station. All communication lines from the stations to the Main Control Center (MCC) were tested prior to troubleshooting. To test communication with MCC, a pulse generator (Systron Donner model 100C) was connected to each channel in the PMS and data at MCC was checked for consistency. If MCC displayed no data, the communication cables to MCC or the CAMAC (Computer Automated Measurement and Control) crates were in need of repair. If MCC did display data, then it was known that the communication lines were intact. All electronics from each station were brought into the lab for troubleshooting. Troubleshooting usually consisted of connecting an oscilloscope or scaler (Ortec model 871 or 775) at different points in the circuit of each detector to record simulated pulses produced by a pulse generator; the input and output pulses were compared to establish the location of any problems in the circuit. Once any problems were isolated, repairs were done accordingly. The detectors and electronics were then calibrated in the field using radioactive sources. Calibration is a process that determines the response of the detector. Detector response is defined as the ratio of the number of counts per minute interpreted by the detector to the amount of dose equivalent rate (in mrem per hour, either calculated or measured). Detector response for both detectors is dependent upon the energy of the incident radiation; this trend had to be accounted for in the calibration of the BF3 detector. Energy dependence did not have to be taken into consideration when calibrating the GM detectors since GM detector response is only dependent on radiation energy below 100 keV; SLAC only produces a spectrum of gamma radiation above 100 keV. For the GM detector, calibration consisted of bringing a 137Cs source and a NIST-calibrated RADCAL Radiation Monitor Controller (model 9010) out to the field; the absolute dose rate was determined by the RADCAL device while simultaneously irradiating the GM detector to obtain a scaler reading corresponding to counts per minute. Detector response was then calculated. Calibration of the BF3 detector was done using NIST certified neutron sources of known emission rates and energies. Five neutron sources (238PuBe, 238PuB, 238PuF4, 238PuLi and 252Cf) with different energies were used to account for the energy dependence of the response. The actual neutron dose rate was calculated by date-correcting NIST source data and considering the direct dose rate and scattered dose rate. Once the total dose rate (sum of the direct and scattered dose rates) was known, the response vs. energy curve was plotted.
The first station calibrated (PMS6) was calibrated with these five neutron sources; all subsequent stations were calibrated with one neutron source and the energy dependence was assumed to be the same.
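The calibration quantity described above, detector response as the ratio of count rate to dose-equivalent rate, can be expressed in a few lines. The numbers below are purely illustrative placeholders, not SLAC calibration data.

```python
def detector_response(counts_per_minute, dose_rate_mrem_per_hr):
    """Response = observed count rate divided by the known dose-equivalent rate."""
    return counts_per_minute / dose_rate_mrem_per_hr

def dose_rate_from_counts(counts_per_minute, response):
    """Invert the calibration: infer dose-equivalent rate from a field reading."""
    return counts_per_minute / response

# Hypothetical calibration point and field reading (illustrative values only):
resp = detector_response(counts_per_minute=1200.0, dose_rate_mrem_per_hr=2.5)
print(dose_rate_from_counts(counts_per_minute=300.0, response=resp))  # mrem/h
```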
Grant, Evan H. Campbell; Zipkin, Elise; Scott, Sillett T.; Chandler, Richard; Royle, J. Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales.
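For context on the class of models being extended, below is a minimal sketch of the basic single-season N-mixture likelihood (Royle 2004), in which site abundance is Poisson and counts are binomial given abundance. This is not the multistate, open-population model developed in the paper; names and the latent-abundance bound K are assumptions for illustration.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmixture_neg_loglik(params, counts, K=100):
    """Negative log-likelihood of the basic N-mixture model:
    N_i ~ Poisson(lam), y_it | N_i ~ Binomial(N_i, p), marginalized over N.

    counts: array of shape (n_sites, n_visits); K: upper bound on abundance.
    """
    log_lam, logit_p = params
    lam = np.exp(log_lam)
    p = 1.0 / (1.0 + np.exp(-logit_p))
    N = np.arange(K + 1)
    prior = poisson.pmf(N, lam)                       # P(N_i = N)
    loglik = 0.0
    for y in np.atleast_2d(counts):                   # loop over sites
        y = np.asarray(y)
        like_given_N = np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0)
        loglik += np.log(np.sum(like_given_N * prior) + 1e-300)
    return -loglik
```

The function can be handed to scipy.optimize.minimize to recover the abundance rate lambda and detection probability p from repeated count data.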
Coates, Peter S.; Prochazka, Brian G.; Ricca, Mark A.; Halstead, Brian J.; Casazza, Michael L.; Blomberg, Erik J.; Brussee, Brianne E.; Wiechman, Lief; Tebbenkamp, Joel; Gardner, Scott C.; Reese, Kerry P.
2018-01-01
Consideration of ecological scale is fundamental to understanding and managing avian population growth and decline. Empirically driven models for population dynamics and demographic processes across multiple spatial scales can be powerful tools to help guide conservation actions. Integrated population models (IPMs) provide a framework for better parameter estimation by unifying multiple sources of data (e.g., count and demographic data). Hierarchical structure within such models that include random effects allows for varying degrees of data sharing across different spatiotemporal scales. We developed an IPM to investigate Greater Sage-Grouse (Centrocercus urophasianus) on the border of California and Nevada, known as the Bi-State Distinct Population Segment. Our analysis integrated 13 years of lek count data (n > 2,000) and intensive telemetry (VHF and GPS; n > 350 individuals) data across 6 subpopulations. Specifically, we identified the most parsimonious models among varying random effects and density-dependent terms for each population vital rate (e.g., nest survival). Using a joint likelihood process, we integrated the lek count data with the demographic models to estimate apparent abundance and refine vital rate parameter estimates. To investigate effects of climatic conditions, we extended the model to fit a precipitation covariate for instantaneous rate of change (r). At a metapopulation extent (i.e. Bi-State), annual population rate of change λ (= e^r) did not favor an overall increasing or decreasing trend through the time series. However, annual changes in λ were driven by changes in precipitation (one-year lag effect). At subpopulation extents, we identified substantial variation in λ and demographic rates. One subpopulation clearly decoupled from the trend at the metapopulation extent and exhibited relatively high risk of extinction as a result of low egg fertility. These findings can inform localized, targeted management actions for specific areas, and status of the species for the larger Bi-State.
Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...
2018-03-29
Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^{-12} ph cm^{-2} s^{-1}. With this method, we detect a flux break at (3.5 ± 0.4) × 10^{-11} ph cm^{-2} s^{-1} with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^{-11} ph cm^{-2} s^{-1}. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
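To make the quoted quantities concrete: the flux resolved into sources follows from integrating S·dN/dS over the measured flux range for a broken power law. A minimal sketch using the quoted break flux and indices, where the normalization A and the upper integration limit are placeholder assumptions rather than values from the paper:

```python
from scipy.integrate import quad

def dnds(S, A, Sb, a1, a2):
    """Broken power-law differential source-count distribution dN/dS."""
    index = a1 if S > Sb else a2
    return A * (S / Sb) ** (-index)

A, Sb = 1.0e9, 3.5e-11            # A is a hypothetical normalization
a1, a2 = 2.09, 1.07               # indices above / below the break (from the abstract)

# Flux contributed by sources above the analysis sensitivity (per unit solid angle):
flux_resolved, _ = quad(lambda S: S * dnds(S, A, Sb, a1, a2), 7.5e-12, 1.0e-8)
print(flux_resolved)              # ph cm^-2 s^-1, given the assumed A
```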
The Herschel-ATLAS data release 1 - I. Maps, catalogues and number counts
NASA Astrophysics Data System (ADS)
Valiante, E.; Smith, M. W. L.; Eales, S.; Maddox, S. J.; Ibar, E.; Hopwood, R.; Dunne, L.; Cigan, P. J.; Dye, S.; Pascale, E.; Rigby, E. E.; Bourne, N.; Furlanetto, C.; Ivison, R. J.
2016-11-01
We present the first major data release of the largest single key-project in area carried out in open time with the Herschel Space Observatory. The Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS) is a survey of 600 deg2 in five photometric bands - 100, 160, 250, 350 and 500 μm - with the Photoconductor Array Camera and Spectrometer and Spectral and Photometric Imaging Receiver (SPIRE) cameras. In this paper and the companion Paper II, we present the survey of three fields on the celestial equator, covering a total area of 161.6 deg2 and previously observed in the Galaxy and Mass Assembly (GAMA) spectroscopic survey. This paper describes the Herschel images and catalogues of the sources detected on the SPIRE 250 μm images. The 1σ noise for source detection, including both confusion and instrumental noise, is 7.4, 9.4 and 10.2 mJy at 250, 350 and 500 μm. Our catalogue includes 120 230 sources in total, with 113 995, 46 209 and 11 011 sources detected at >4σ at 250, 350 and 500 μm. The catalogue contains detections at >3σ at 100 and 160 μm for 4650 and 5685 sources, and the typical noise at these wavelengths is 44 and 49 mJy. We include estimates of the completeness of the survey and of the effects of flux bias and also describe a novel method for determining the true source counts. The H-ATLAS source counts are very similar to the source counts from the deeper HerMES survey at 250 and 350 μm, with a small difference at 500 μm. Appendix A provides a quick start in using the released data sets, including instructions and cautions on how to use them.
NASA Astrophysics Data System (ADS)
Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.
2011-09-01
We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented. Appendices are available in electronic form at http://www.aanda.org
NO TIME FOR DEAD TIME: TIMING ANALYSIS OF BRIGHT BLACK HOLE BINARIES WITH NuSTAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachetti, Matteo; Barret, Didier; Harrison, Fiona A.
Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
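The cospectrum described above amounts to taking the real part of the cross spectrum of the two simultaneously recorded light curves, so that noise terms uncorrelated between the focal planes (including dead-time-distorted Poisson noise) average to zero. A minimal numpy sketch, with a Leahy-like normalization chosen for illustration rather than taken from the paper:

```python
import numpy as np

def cospectrum(lc_a, lc_b, dt):
    """Real part of the cross power spectrum of two evenly binned,
    simultaneous light curves (counts per bin of width dt)."""
    lc_a = np.asarray(lc_a, dtype=float)
    lc_b = np.asarray(lc_b, dtype=float)
    fa, fb = np.fft.rfft(lc_a), np.fft.rfft(lc_b)
    freqs = np.fft.rfftfreq(lc_a.size, d=dt)
    # Leahy-like normalization; the exact normalization is a convention choice.
    cross = 2.0 * (fa * np.conj(fb)).real / np.sqrt(lc_a.sum() * lc_b.sum())
    return freqs[1:], cross[1:]   # drop the zero-frequency (DC) term
```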
[Study on effect of 3 types of drinking water emergent disinfection models in flood/waterlog areas].
Ban, Haiqun; Li, Jin; Li, Xinwu; Zhang, Liubo
2010-09-01
To establish 3 drinking water emergent disinfection processing models (separated medicate dispensing, specific-duty medicate dispensing, and centralized filtering) in flood/waterlog areas, and to compare the effects of these 3 models on drinking water disinfection. From October to December 2008, 18 villages in Yanglinwei town, Xiantao city, Hubei province were selected as the trial field and divided into three groups: separated medicate dispensing, specific-duty medicate dispensing, and centralized filtering. Every 2 weeks, drinking water source water, water yielded by the emergency central filtrate water equipment (ECFWE), and container water in the kitchen were sampled, and microbial indices of the water samples (standard plate-count bacteria, total coliforms, thermotolerant coliform bacteria, and Escherichia coli) were measured. The microbial pollution of the source water in these 3 groups was heavy; all samples failed. The elimination rate of standard plate-count bacteria by the drinking water emergent centralized processing equipment was 99.95%; those of the separated medicate dispensing, specific-duty medicate dispensing and centralized filtering groups were 81.93%, 99.67%, and 98.28%, respectively. The passing rates of the microbial indices of the residents' container water were 13.33%, 70.00%, and 43.33%, respectively, and the differences were statistically significant. The drinking water disinfection effects of the centralized filtering model and of the specific-duty medicate dispensing model were better than that of the separated medicate dispensing model in the flood/waterlog areas.
Performance evaluation of the Ingenuity TF PET/CT scanner with a focus on high count-rate conditions
NASA Astrophysics Data System (ADS)
Kolthammer, Jeffrey A.; Su, Kuan-Hao; Grover, Anu; Narayanan, Manoj; Jordan, David W.; Muzic, Raymond F.
2014-07-01
This study evaluated the positron emission tomography (PET) imaging performance of the Ingenuity TF 128 PET/computed tomography (CT) scanner which has a PET component that was designed to support a wider radioactivity range than is possible with those of Gemini TF PET/CT and Ingenuity TF PET/MR. Spatial resolution, sensitivity, count rate characteristics and image quality were evaluated according to the NEMA NU 2-2007 standard and ACR phantom accreditation procedures; these were supplemented by additional measurements intended to characterize the system under conditions that would be encountered during quantitative cardiac imaging with 82Rb. Image quality was evaluated using a hot spheres phantom, and various contrast recovery and noise measurements were made from replicated images. Timing and energy resolution, dead time, and the linearity of the image activity concentration, were all measured over a wide range of count rates. Spatial resolution (4.8-5.1 mm FWHM), sensitivity (7.3 cps kBq-1), peak noise-equivalent count rate (124 kcps), and peak trues rate (365 kcps) were similar to those of the Gemini TF PET/CT. Contrast recovery was higher with a 2 mm, body-detail reconstruction than with a 4 mm, body reconstruction, although the precision was reduced. The noise equivalent count rate peak was broad (within 10% of peak from 241-609 MBq). The activity measured in phantom images was within 10% of the true activity for count rates up to those observed in 82Rb cardiac PET studies.
The SWIFT AGN and Cluster Survey. I. Number Counts of AGNs and Galaxy Clusters
NASA Astrophysics Data System (ADS)
Dai, Xinyu; Griffin, Rhiannon D.; Kochanek, Christopher S.; Nugent, Jenna M.; Bregman, Joel N.
2015-05-01
The Swift active galactic nucleus (AGN) and Cluster Survey (SACS) uses 125 deg2 of Swift X-ray Telescope serendipitous fields with variable depths surrounding γ-ray bursts to provide a medium depth (4 × 10^{-15} erg cm^{-2} s^{-1}) and area survey filling the gap between deep, narrow Chandra/XMM-Newton surveys and wide, shallow ROSAT surveys. Here, we present a catalog of 22,563 point sources and 442 extended sources and examine the number counts of the AGN and galaxy cluster populations. SACS provides excellent constraints on the AGN number counts at the bright end with negligible uncertainties due to cosmic variance, and these constraints are consistent with previous measurements. We use Wide-field Infrared Survey Explorer mid-infrared (MIR) colors to classify the sources. For AGNs we can roughly separate the point sources into MIR-red and MIR-blue AGNs, finding roughly equal numbers of each type in the soft X-ray band (0.5-2 keV), but fewer MIR-blue sources in the hard X-ray band (2-8 keV). The cluster number counts, with 5% uncertainties from cosmic variance, are also consistent with previous surveys but span a much larger continuous flux range. Deep optical or IR follow-up observations of this cluster sample will significantly increase the number of higher-redshift (z > 0.5) X-ray-selected clusters.
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power-law fits the data, with an index of 2.2^{+0.7}_{-0.3} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83^{+7}_{-13}% (81^{+52}_{-19}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Gravitational wave source counts at high redshift and in models with extra dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es
2016-07-01
Gravitational wave (GW) source counts have been recently shown to be able to test how gravitational radiation propagates with the distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high redshift regime, and we discuss the complications of applying this methodology to high redshift sources. We also allow for models with compactified extra dimensions like in the Kaluza-Klein model. Furthermore, we also consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_{m,0} of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher order corrections of the source counts in terms of the signal-to-noise S/N. We then forecast the sensitivity of future observations in constraining GW physics but also the underlying cosmology by simulating sources distributed over a finite range of signal-to-noise with a number of sources ranging from 10 to 500 sources as expected from future detectors. We find that with 500 events it will be possible to provide constraints on the matter density parameter at present Ω_{m,0} on the order of a few percent and with the precision growing fast with the number of events. In the case of extra dimensions we find that depending on the degeneracies of the model, with 500 events it may be possible to provide stringent limits on the existence of the extra dimensions if the aforementioned degeneracies can be broken.
Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S
2016-04-15
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Resolving the Extragalactic γ -Ray Background above 50 GeV with the Fermi Large Area Telescope
Ackermann, M.; Ajello, M.; Albert, A.; ...
2016-04-14
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10^{-12}, 1.5 × 10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_2 ∈ [1.60, 1.75] and α_1 = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86^{+16}_{-14}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
NASA Technical Reports Server (NTRS)
Kraft, Ralph P.; Burrows, David N.; Nousek, John A.
1991-01-01
Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
Schmitz, Christoph; Eastwood, Brian S.; Tappan, Susan J.; Glaser, Jack R.; Peterson, Daniel A.; Hof, Patrick R.
2014-01-01
Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) “cell counting” approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38 and 99% and false-positive rates from 3.6 to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections. PMID:24847213
Background Conditions for the October 29, 2003 Solar Flare by the AVS-F Apparatus Data
NASA Astrophysics Data System (ADS)
Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Lyapin, A. R.; Troitskaya, E. V.
The background model for the AVS-F apparatus onboard the CORONAS-F satellite for the October 29, 2003 X10-class solar flare is discussed in this work. The background model was developed for the AVS-F count rates in the low- and high-energy spectral ranges, both in individual channels and summed. Count rates were approximated by high-order polynomials, taking into account the mean count rate in the geomagnetic equatorial region at different parts of the orbits and the Kp-index averaged over 5 bins in the time interval from -24 to -12 hours before the geomagnetic equator crossing. The observed average count rates at the equator, within ±5° geomagnetic latitude, and the estimated minimum count rate values agree within statistical errors for all orbit segments selected for background modeling. This model will be used to refine the estimated energies of the spectral features registered during the solar flare and for detailed analysis of their temporal profiles, both in the corresponding energy bands and in the summed energy range.
Determining X-ray source intensity and confidence bounds in crowded fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Primini, F. A.; Kashyap, V. L., E-mail: fap@head.cfa.harvard.edu
We present a rigorous description of the general problem of aperture photometry in high-energy astrophysics photon-count images, in which the statistical noise model is Poisson, not Gaussian. We compute the full posterior probability density function for the expected source intensity for various cases of interest, including the important cases in which both source and background apertures contain contributions from the source, and when multiple source apertures partially overlap. A Bayesian approach offers the advantages of allowing one to (1) include explicit prior information on source intensities, (2) propagate posterior distributions as priors for future observations, and (3) use Poisson likelihoods, making the treatment valid in the low-counts regime. Elements of this approach have been implemented in the Chandra Source Catalog.
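A minimal numerical sketch of the simplest case discussed above: the posterior for the expected source counts with flat priors and Poisson likelihoods for a source aperture and a separate background aperture, marginalized over the background rate. The aperture fractions and grids are placeholder assumptions; this is not the Chandra Source Catalog implementation.

```python
import numpy as np
from scipy.stats import poisson

def source_posterior(c_src, c_bkg, src_frac, bkg_area_ratio, s_grid, b_grid):
    """Unnormalized-then-normalized posterior for the expected source counts s.

    c_src: counts in the source aperture, modeled as Poisson(src_frac*s + b)
    c_bkg: counts in the background aperture, modeled as Poisson(bkg_area_ratio*b)
    Flat priors are assumed for both s and the background rate b.
    """
    s, b = np.meshgrid(s_grid, b_grid, indexing="ij")
    likelihood = (poisson.pmf(c_src, src_frac * s + b) *
                  poisson.pmf(c_bkg, bkg_area_ratio * b))
    posterior = likelihood.sum(axis=1)            # marginalize over b
    return posterior / np.trapz(posterior, s_grid)

# Hypothetical low-count example: 9 counts in the source aperture, 20 in a
# background region with 10x the area.
s_grid = np.linspace(0.0, 30.0, 301)
b_grid = np.linspace(0.0, 10.0, 201)
post = source_posterior(9, 20, src_frac=0.9, bkg_area_ratio=10.0,
                        s_grid=s_grid, b_grid=b_grid)
print(s_grid[np.argmax(post)])                    # posterior mode for s
```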
Neutron Detection with Centrifugally-Tensioned Metastable Fluid Detectors (CMTFD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Y.; Smagacz, P.; Lapinskas, J.
2006-07-01
Tensioned metastable liquid states at room temperature were utilized to display sensitivity to impinging nuclear radiation, which manifests itself via audio-visual signals that one can see and hear. A centrifugally-tensioned metastable fluid detector (CTMFD), a diamond shaped spinning device rotating about its axis, was used to induce tension states, i.e. negative (sub-vacuum) pressures in liquids. In this device, radiation induced cavitation is audible due to liquid fracture and is visible from formed bubbles, so called hearing and seeing radiation. This type of detector is selectively insensitive to Gamma rays and associated indication devices could be extremely simple, reliable and inexpensive. Furthermore, any liquids with large neutron interaction cross sections could be good candidates. Two liquids, isopentane and methanol, were tested with three neutron sources of Cf-252, PuBe and Pulsed Neutron Generator (PNG) under various configurations of neutron spectra and fluxes. The neutron count rates were measured using a liquid scintillation detector. The CTMFD was operated at preset values of rotating frequency and a response time was recorded when a cavitation occurred. Other parameters, including ambient temperature, ramp rate, delay time between two consecutive cavitations, were kept constant. The distance between the menisci of the liquid in the CTMFD was measured before and after each experiment. In general, the response of liquid molecules in a CTMFD varies with the neutron spectrum and flux. The response time follows an exponential trend with negative pressures for a given neutron count rate and spectra conditions. Isopentane was found to exhibit lower tension thresholds than methanol. On the other hand, methanol offered a larger tension metastability state variation for the various types of neutron sources, indicating the potential for offering significantly better energy resolution abilities for spectroscopic applications.
Crater Age and Hydrogen Content in Lunar Regolith from LEND Neutron Data
NASA Astrophysics Data System (ADS)
Sanin, Anton; Starr, Richard; Litvak, Maxim; Petro, Noah; Mitrofanov, Igor
2017-04-01
We present an analysis of Lunar Exploration Neutron Detector (LEND) epithermal neutron count rates for a large set of mid-latitude craters. Epithermal neutron count rates for crater interiors measured by the LEND Sensor for Epithermal Neutrons (SETN) were compared to crater exteriors for 322 craters. An increase in relative count rate at about the 9-sigma confidence level was found, consistent with a lower hydrogen content. A smaller subset of 31 craters, all located near three Copernican era craters, Jackson, Tycho, and Necho, also shows a significant increase in the Optical Maturity parameter, implying an immature regolith. The increase in SETN count rate for these craters is greater than the increase for the full set of craters by more than a factor of two.
Crater Age and Hydrogen Content in Lunar Regolith from LEND Neutron Data
NASA Technical Reports Server (NTRS)
Starr, Richard D.; Litvak, Maxim L.; Petro, Noah E.; Mitrofanov, Igor G.; Boynton, William V.; Chin, Gordon; Livengood, Timothy A.; McClanahan, Timothy P.; Sanin, Anton B.; Sagdeev, Roald Z.;
2017-01-01
Analysis of Lunar Exploration Neutron Detector (LEND) neutron count rates for a large set of mid-latitude craters provides evidence for lower hydrogen content in the crater interiors compared to typical highland values. Epithermal neutron count rates for crater interiors measured by the LEND Sensor for Epithermal Neutrons (SETN) were compared to crater exteriors for 301 craters and displayed an increase in mean count rate at the approx. 9-sigma confidence level, consistent with a lower hydrogen content. A smaller subset of 31 craters also shows a significant increase in Optical Maturity parameter implying an immature regolith. The increase in SETN count rate for these craters is greater than the increase for the full set of craters by more than a factor of two.
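As an illustration of how a count-rate contrast like the roughly 9-sigma figure quoted above can be assessed: for two independent Poisson-dominated rate measurements, the significance of the difference is the rate difference divided by the quadrature sum of the rate uncertainties. A minimal sketch with hypothetical aggregate counts and exposure times, not LEND data:

```python
import numpy as np

def rate_difference_sigma(counts_a, time_a, counts_b, time_b):
    """Significance (in sigma) of the difference between two count rates,
    assuming Poisson counting errors only."""
    rate_a, rate_b = counts_a / time_a, counts_b / time_b
    err_a, err_b = np.sqrt(counts_a) / time_a, np.sqrt(counts_b) / time_b
    return (rate_a - rate_b) / np.hypot(err_a, err_b)

# Illustrative numbers only (interior vs. exterior accumulated counts):
print(rate_difference_sigma(412_000, 2.0e4, 202_000, 1.0e4))
```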
Alisauskas, R.T.; Drake, K.L.; Nichols, J.D.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
We consider use of recoveries of marked birds harvested by hunters, in conjunction with continental harvest estimates, for drawing inferences about continental abundance of a select number of goose species. We review assumptions of this method, a version of the Lincoln–Petersen approach, and consider its utility as a tool for making decisions about harvest management in comparison to current sources of information. Finally, we compare such estimates with existing count data, photographic estimates, or other abundance estimates. In most cases, Lincoln estimates are far higher than abundances assumed or perhaps accepted by many waterfowl biologists and managers. Nevertheless, depending on the geographic scope of inference, we suggest that this approach for abundance estimation of arctic geese may have usefulness for retrospective purposes or to assist with harvest management decisions for some species. Lincoln's estimates may be as close or closer to truth than count, index, or photo data, and can be used with marking efforts currently in place for estimation of survival and harvest rates. Although there are bias issues associated with estimates of both harvest and harvest rate, some of the latter can be addressed with proper allocation of marks to spatially structured populations if subpopulations show heterogeneity in harvest rates.
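For reference, the Lincoln estimator sketched below divides an estimate of total harvest by the harvest rate inferred from direct band recoveries, corrected for band-reporting rate. All numbers are hypothetical and only illustrate the arithmetic, not the paper's data or its exact estimator.

```python
def lincoln_abundance(total_harvest, banded, direct_recoveries, reporting_rate=1.0):
    """Lincoln estimator of pre-harvest abundance:
        N_hat = total_harvest / harvest_rate,
    where harvest_rate = direct_recoveries / (banded * reporting_rate)."""
    harvest_rate = direct_recoveries / (banded * reporting_rate)
    return total_harvest / harvest_rate

# Hypothetical continental harvest, banding and recovery totals:
print(lincoln_abundance(total_harvest=500_000, banded=2_000,
                        direct_recoveries=120, reporting_rate=0.8))
```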
Relationship of milking rate to somatic cell count.
Brown, C A; Rischette, S J; Schultz, L H
1986-03-01
Information on milking rate, monthly bucket somatic cell counts, mastitis treatment, and milk production was obtained from 284 lactations of Holstein cows separated into three lactation groups. Significant correlations between somatic cell count (linear score) and other parameters included production in lactation 1 (-.185), production in lactation 2 (-.267), and percent 2-min milk in lactation 2 (.251). Somatic cell count tended to increase with maximum milking rate in all lactations, but correlations were not statistically significant. Twenty-nine percent of cows with milking rate measurements were treated for clinical mastitis. Treated cows in each lactation group produced less milk than untreated cows. In the second and third lactation groups, treated cows had a shorter total milking time and a higher percent 2-min milk than untreated cows, but differences were not statistically significant. Overall, the data support the concept that faster milking cows tend to have higher cell counts and more mastitis treatments, particularly beyond first lactation. However, the magnitude of the relationship was small.
Bellei, Francesco; Cartwright, Alyssa P; McCaughan, Adam N; Dane, Andrew E; Najafi, Faraz; Zhao, Qingyuan; Berggren, Karl K
2016-02-22
This paper describes the construction of a cryostat and an optical system with a free-space coupling efficiency of 56.5% ± 3.4% to a superconducting nanowire single-photon detector (SNSPD) for infrared quantum communication and spectrum analysis. A 1K pot decreases the base temperature to T = 1.7 K from the 2.9 K reached by the cold head cooled by a pulse-tube cryocooler. The minimum spot size coupled to the detector chip was 6.6 ± 0.11 µm starting from a fiber source at wavelength, λ = 1.55 µm. We demonstrated photon counting on a detector with an 8 × 7.3 µm2 area. We measured a dark count rate of 95 ± 3.35 kcps and a system detection efficiency of 1.64% ± 0.13%. We explain the key steps that are required to improve further the coupling efficiency.
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
NASA Technical Reports Server (NTRS)
Degnan, John J.; Smith, David E. (Technical Monitor)
2000-01-01
We consider the optimum design of photon-counting microlaser altimeters operating from airborne and spaceborne platforms under both day and night conditions. Extremely compact Q-switched microlaser transmitters produce trains of low energy pulses at multi-kHz rates and can easily generate subnanosecond pulse-widths for precise ranging. To guide the design, we have modeled the solar noise background and developed simple algorithms, based on Post-Detection Poisson Filtering (PDPF), to optimally extract the weak altimeter signal from a high noise background during daytime operations. Practical technology issues, such as detector and/or receiver dead times, have also been considered in the analysis. We describe an airborne prototype, being developed under NASA's Instrument Incubator Program, which is designed to operate at a 10 kHz rate from aircraft cruise altitudes up to 12 km with laser pulse energies on the order of a few microjoules. We also analyze a compact and power efficient system designed to operate from Mars orbit at an altitude of 300 km and sample the Martian surface at rates up to 4.3 kHz using a 1 watt laser transmitter and an 18 cm telescope. This yields a Power-Aperture Product of 0.24 W-square meter, corresponding to a value almost 4 times smaller than the Mars Orbiter Laser Altimeter (0.88 W-square meter), yet the sampling rate is roughly 400 times greater (4 kHz vs 10 Hz). Relative to conventional high power laser altimeters, advantages of photon-counting laser altimeters include: (1) a more efficient use of available laser photons providing up to two orders of magnitude greater surface sampling rates for a given laser power-telescope aperture product; (2) a simultaneous two order of magnitude reduction in the volume, cost and weight of the telescope system; (3) the unique ability to spatially resolve the source of the surface return in a photon counting mode through the use of pixellated or imaging detectors; and (4) improved vertical and transverse spatial resolution resulting from both (1) and (3). Furthermore, because of significantly lower laser pulse energies, the microaltimeter is inherently more eyesafe to observers on the ground and less prone to internal optical damage, which can terminate a space mission prematurely.
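The design trade described above is governed by Poisson statistics: the per-pulse probability of detecting at least one signal photoelectron, and the expected number of solar-background and dark counts falling within the range gate. A minimal sketch with assumed numbers (not the paper's link budget):

```python
import numpy as np

def detection_and_noise(mean_signal_pe, noise_rate_hz, gate_s, pulse_rate_hz):
    """Poisson statistics for a photon-counting altimeter channel.

    mean_signal_pe: mean detected signal photoelectrons per laser pulse
    noise_rate_hz:  solar background plus dark count rate at the detector
    gate_s:         range-gate duration opened for each pulse
    pulse_rate_hz:  laser fire rate
    Returns the per-pulse probability of >=1 signal count and the expected
    noise counts per second falling inside the gate.
    """
    p_signal = 1.0 - np.exp(-mean_signal_pe)
    noise_counts_per_second = noise_rate_hz * gate_s * pulse_rate_hz
    return p_signal, noise_counts_per_second

# Illustrative daytime case: 0.5 pe/pulse signal, 2 Mcps background,
# 1 microsecond gate, 10 kHz fire rate.
print(detection_and_noise(mean_signal_pe=0.5, noise_rate_hz=2e6,
                          gate_s=1e-6, pulse_rate_hz=1e4))
```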
10C survey of radio sources at 15.7 GHz - II. First results
NASA Astrophysics Data System (ADS)
AMI Consortium; Davies, Mathhew L.; Franzen, Thomas M. O.; Waldram, Elizabeth M.; Grainge, Keith J. B.; Hobson, Michael P.; Hurley-Walker, Natasha; Lasenby, Anthony; Olamaie, Malak; Pooley, Guy G.; Riley, Julia M.; Rodríguez-Gonzálvez, Carmen; Saunders, Richard D. E.; Scaife, Anna M. M.; Schammel, Michel P.; Scott, Paul F.; Shimwell, Timothy W.; Titterington, David J.; Zwart, Jonathan T. L.
2011-08-01
In a previous paper (Paper I), the observational, mapping and source-extraction techniques used for the Tenth Cambridge (10C) Survey of Radio Sources were described. Here, the first results from the survey, carried out using the Arcminute Microkelvin Imager Large Array (LA) at an observing frequency of 15.7 GHz, are presented. The survey fields cover an area of ≈27 deg2 to a flux-density completeness of 1 mJy. Results for some deeper areas, covering ≈12 deg2, wholly contained within the total areas and complete to 0.5 mJy, are also presented. The completeness for both areas is estimated to be at least 93 per cent. The 10C survey is the deepest radio survey of any significant extent (≳0.2 deg2) above 1.4 GHz. The 10C source catalogue contains 1897 entries and is available online. The source catalogue has been combined with that of the Ninth Cambridge Survey to calculate the 15.7-GHz source counts. A broken power law is found to provide a good parametrization of the differential count between 0.5 mJy and 1 Jy. The measured source count has been compared with that predicted by de Zotti et al. - the model is found to display good agreement with the data at the highest flux densities. However, over the entire flux-density range of the measured count (0.5 mJy to 1 Jy), the model is found to underpredict the integrated count by ≈30 per cent. Entries from the source catalogue have been matched with those contained in the catalogues of the NRAO VLA Sky Survey and the Faint Images of the Radio Sky at Twenty-cm survey (both of which have observing frequencies of 1.4 GHz). This matching provides evidence for a shift in the typical 1.4-GHz spectral index to 15.7-GHz spectral index of the 15.7-GHz-selected source population with decreasing flux density towards sub-mJy levels - the spectra tend to become less steep. Automated methods for detecting extended sources, developed in Paper I, have been applied to the data; ≈5 per cent of the sources are found to be extended relative to the LA-synthesized beam of ≈30 arcsec. Investigations using higher resolution data showed that most of the genuinely extended sources at 15.7 GHz are classical doubles, although some nearby galaxies and twin-jet sources were also identified.
NASA Astrophysics Data System (ADS)
Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua
2016-03-01
Pixelated photon counting detectors with energy discrimination capabilities are of increasing clinical interest for x-ray imaging. Such detectors, presently in clinical use for mammography and under development for breast tomosynthesis and spectral CT, usually employ in-pixel circuits based on crystalline silicon - a semiconductor material that is generally not well-suited for economic manufacture of large-area devices. One interesting alternative semiconductor is polycrystalline silicon (poly-Si), a thin-film technology capable of creating very large-area, monolithic devices. Similar to crystalline silicon, poly-Si allows implementation of the type of fast, complex, in-pixel circuitry required for photon counting - operating at processing speeds that are not possible with amorphous silicon (the material currently used for large-area, active matrix, flat-panel imagers). The pixel circuits of two-dimensional photon counting arrays are generally comprised of four stages: amplifier, comparator, clock generator and counter. The analog front-end (in particular, the amplifier) strongly influences performance and is therefore of interest to study. In this paper, the relationship between incident and output count rate of the analog front-end is explored under diagnostic imaging conditions for a promising poly-Si based design. The input to the amplifier is modeled in the time domain assuming a realistic input x-ray spectrum. Simulations of circuits based on poly-Si thin-film transistors are used to determine the resulting output count rate as a function of input count rate, energy discrimination threshold and operating conditions.
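The incident-versus-output count-rate behavior of a counting channel is often summarized with the classical dead-time models sketched below; this is offered only as general context for the analog front-end study described above, and the dead-time value is an arbitrary assumption, not a property of the poly-Si circuits in the paper.

```python
import numpy as np

def output_rate_nonparalyzable(incident_rate, dead_time):
    """Non-paralyzable dead-time model: m = n / (1 + n*tau)."""
    return incident_rate / (1.0 + incident_rate * dead_time)

def output_rate_paralyzable(incident_rate, dead_time):
    """Paralyzable dead-time model: m = n * exp(-n*tau)."""
    return incident_rate * np.exp(-incident_rate * dead_time)

n = np.logspace(3, 7, 50)    # incident counts per second per pixel (illustrative)
tau = 2e-6                   # assumed effective dead time in seconds (hypothetical)
print(output_rate_paralyzable(n, tau).max())   # peak observable rate for this tau
```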
The Chandra Source Catalog: Source Variability
NASA Astrophysics Data System (ADS)
Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
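As an illustration of the single-observation variability test described above, the following sketch applies a one-sample Kolmogorov-Smirnov test to photon arrival times, using the fact that a constant source yields arrival times uniformly distributed over the exposure; this is a generic version of the idea, not the CSC pipeline implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated arrival times (seconds) within a 10 ks observation: a constant source
# would give arrival times uniformly distributed over the exposure.
exposure = 10_000.0
t = np.sort(rng.uniform(0.0, exposure, size=200))   # hypothetical event list

# One-sample KS test of the arrival times against a uniform distribution.
ks_stat, p_value = stats.kstest(t, stats.uniform(loc=0.0, scale=exposure).cdf)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
# A small p-value would flag the source as variable within the observation.
```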
The Chandra Source Catalog: Source Variability
NASA Astrophysics Data System (ADS)
Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY ...
VIEW OF A BODY COUNTING ROOM IN BUILDING 122. BODY COUNTING MEASURES RADIOACTIVE MATERIAL IN THE BODY. DESIGNED TO MINIMIZE EXTERNAL SOURCES OF RADIATION, BODY COUNTING ROOMS ARE CONSTRUCTED OF PRE-WORLD WAR II (WWII) STEEL. PRE-WWII STEEL, WHICH HAS NOT BEEN AFFECTED BY NUCLEAR FALLOUT, IS LOWER IN RADIOACTIVITY THAN STEEL CREATED AFTER WWII. (10/25/85) - Rocky Flats Plant, Emergency Medical Services Facility, Southwest corner of Central & Third Avenues, Golden, Jefferson County, CO
General relativistic corrections in density-shear correlations
NASA Astrophysics Data System (ADS)
Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena
2018-06-01
We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.
Savu, Anamaria; Schopflocher, Donald; Scholnick, Barry; Kaul, Padma
2016-01-13
We examined the association between personal bankruptcy filing and acute myocardial infarction (AMI) rates in Canada. Between 2002 and 2009, aggregate and yearly bankruptcy and AMI rates were estimated for 1,155 forward sortation areas of Canada. A scatter plot and correlations were used to assess the association of the aggregate rates. Cross-lagged structural equation models were used to explore the longitudinal relationship between bankruptcy and AMI after adjustment for socio-economic factors. A cross-lagged structural equation model estimated that, on average, an increase of 100 in bankruptcy filing count is associated with an increase of 1.5 (p = 0.02) in AMI count in the following year, and an increase of 100 in AMI count is associated with an increase of 7 (p < 0.01) in bankruptcy filing count. We found that regions with higher rates of AMI corresponded to those with higher levels of economic and financial stress, as indicated by personal bankruptcy rate, and vice versa.
Single-photon emitting diode in silicon carbide.
Lohrmann, A; Iwamoto, N; Bodrog, Z; Castelletto, S; Ohshima, T; Karle, T J; Gali, A; Prawer, S; McCallum, J C; Johnson, B C
2015-07-23
Electrically driven single-photon emitting devices have immediate applications in quantum cryptography, quantum computation and single-photon metrology. Mature device fabrication protocols and the recent observations of single defect systems with quantum functionalities make silicon carbide an ideal material to build such devices. Here, we demonstrate the fabrication of bright single-photon emitting diodes. The electrically driven emitters display fully polarized output, superior photon statistics (with a count rate of >300 kHz) and stability in both continuous and pulsed modes, all at room temperature. The atomic origin of the single-photon source is proposed. These results provide a foundation for the large scale integration of single-photon sources into a broad range of applications, such as quantum cryptography or linear optics quantum computing.
High count-rate study of two TES x-ray microcalorimeters with different transition temperatures
NASA Astrophysics Data System (ADS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.; Sadleir, John E.; Smith, Stephen J.; Wassell, Edward J.
2017-10-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures (Tc). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different Tc values had very different thermal decay times, approximately one order of magnitude different. In our earlier report, we achieved energy resolutions of 1.6 and 2.3 eV at 6 keV from lower and higher Tc devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96% throughput for 6 keV x-rays of 1025 Hz per pixel with the higher Tc (faster) device, and 5.8 eV FWHM with 97% throughput with the lower Tc (slower) device at 722 Hz.
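For reference, the sketch below shows the standard frequency-domain optimal-filter amplitude estimate used in the low-flux limit mentioned above; it assumes a known pulse template and noise power spectrum and is not the authors' event-grade scheme.

```python
import numpy as np

def optimal_filter_amplitude(data, template, noise_psd):
    """Noise-weighted (optimal-filter) amplitude estimate of a single pulse record.

    data, template : time-domain records of equal length
    noise_psd      : noise power spectral density, same length as the rFFT of the record
    """
    D = np.fft.rfft(data)
    S = np.fft.rfft(template)
    num = np.sum((np.conj(S) * D).real / noise_psd)
    den = np.sum((np.abs(S) ** 2) / noise_psd)
    return num / den   # amplitude in units of the template height

# Toy example with an exponential pulse template and white noise.
n = 1024
t = np.arange(n)
template = np.exp(-t / 100.0) - np.exp(-t / 10.0)
rng = np.random.default_rng(1)
data = 0.8 * template + rng.normal(0.0, 0.05, n)
noise_psd = np.full(len(np.fft.rfft(data)), 0.05 ** 2 * n)  # flat-PSD assumption
print(f"estimated amplitude ~ {optimal_filter_amplitude(data, template, noise_psd):.3f}")
```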
Takemoto, Kazuya; Nambu, Yoshihiro; Miyazawa, Toshiyuki; Sakuma, Yoshiki; Yamamoto, Tsuyoshi; Yorozu, Shinichi; Arakawa, Yasuhiko
2015-01-01
Advances in single-photon sources (SPSs) and single-photon detectors (SPDs) promise unique applications in the field of quantum information technology. In this paper, we report long-distance quantum key distribution (QKD) by using state-of-the-art devices: a quantum-dot SPS (QD SPS) emitting a photon in the telecom band of 1.5 μm and a superconducting nanowire SPD (SNSPD). At the distance of 100 km, we obtained the maximal secure key rate of 27.6 bps without using decoy states, which is at least threefold larger than the rate obtained in the previously reported 50-km-long QKD experiment. We also succeeded in transmitting secure keys at the rate of 0.307 bps over 120 km. This is the longest QKD distance yet reported by using known true SPSs. The ultralow multiphoton emissions of our SPS and ultralow dark count of the SNSPD contributed to this result. The experimental results demonstrate the potential applicability of QD SPSs to practical telecom QKD networks. PMID:26404010
Kaur, S; Nieuwenhuijsen, M J
2009-07-01
Short-term human exposure concentrations to PM2.5, ultrafine particle counts (particle range: 0.02-1 microm), and carbon monoxide (CO) were investigated at and around a street canyon intersection in Central London, UK. During a four-week field campaign, groups of four volunteers collected samples at three times of day (morning, lunch, and afternoon), along two different routes (a heavily trafficked route and a backstreet route) via five modes of transport (walking, cycling, bus, car, and taxi). This was followed by an investigation into the determinants of exposure using a regression technique which incorporated the site-specific traffic counts, meteorological variables (wind speed and temperature) and the mode of transport used. The analyses explained 9, 62, and 43% of the variability observed in the exposure concentrations to PM2.5, ultrafine particle counts, and CO in this study, respectively. The mode of transport was a statistically significant determinant of personal exposure to PM2.5, ultrafine particle counts, and CO, and for PM2.5 and ultrafine particle counts it was the most important determinant. Traffic count explained little of the variability in the PM2.5 concentrations, but it had a greater influence on ultrafine particle count and CO concentrations. The analyses showed that temperature had a statistically significant impact on ultrafine particle count and CO concentrations. Wind speed also had a statistically significant, though smaller, effect. The small proportion of variability explained by the model for PM2.5, compared with the larger proportions for ultrafine particle counts and CO, may be due to the effect of long-range transboundary sources, whereas for ultrafine particle counts and CO, local traffic is the main source.
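A minimal sketch of the kind of multiple regression described above is given below; the variable names and the synthetic data are illustrative only and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for a personal-exposure dataset; column names are illustrative.
rng = np.random.default_rng(0)
n = 240
df = pd.DataFrame({
    "traffic_count": rng.poisson(800, n),
    "wind_speed": rng.gamma(2.0, 1.5, n),
    "temperature": rng.normal(15.0, 4.0, n),
    "mode": rng.choice(["walk", "cycle", "bus", "car", "taxi"], n),
})
mode_effect = df["mode"].map({"walk": 0, "cycle": 5, "bus": 15, "car": 10, "taxi": 12})
df["ufp_count"] = (20_000 + 30 * df["traffic_count"] - 800 * df["wind_speed"]
                   + 150 * df["temperature"] + 1_000 * mode_effect
                   + rng.normal(0, 5_000, n))

# Regress exposure on traffic, meteorology and transport mode,
# with mode entering as a categorical factor (as in the study's analysis).
model = smf.ols("ufp_count ~ traffic_count + wind_speed + temperature + C(mode)",
                data=df).fit()
print(model.rsquared)   # the study reports 62% of variability explained for UFP counts
```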
Radio Source Contributions to the Microwave Sky
NASA Astrophysics Data System (ADS)
Boughn, S. P.; Partridge, R. B.
2008-03-01
Cross-correlations of the Wilkinson Microwave Anisotropy Probe (WMAP) full sky K-, Ka-, Q-, V-, and W-band maps with the 1.4 GHz NVSS source count map and the HEAO I A2 2-10 keV full sky X-ray flux map are used to constrain rms fluctuations due to unresolved microwave sources in the WMAP frequency range. In the Q band (40.7 GHz), a lower limit, taking account of only those fluctuations correlated with the 1.4 GHz radio source counts and X-ray flux, corresponds to an rms Rayleigh-Jeans temperature of ˜2 μK for a solid angle of 1 deg2 assuming that the cross-correlations are dominated by clustering, and ˜1 μK if dominated by Poisson fluctuations. The correlated fluctuations at the other bands are consistent with a β = -2.1 ± 0.4 frequency spectrum. If microwave sources are distributed similarly in redshift to the radio and X-ray sources and are similarly clustered, then the implied total rms microwave fluctuations correspond to ˜5 μK. While this value should be considered no more than a plausible estimate, it is similar to that implied by the excess, small angular scale fluctuations observed in the Q band by WMAP and is consistent with estimates made by extrapolating low-frequency source counts.
Kumar, J Vijay; Baghirath, P Venkat; Naishadham, P Parameswar; Suneetha, Sujai; Suneetha, Lavanya; Sreedevi, P
2015-01-01
To determine whether long-term highly active antiretroviral therapy (HAART) alters salivary flow rate, and to compare the relation of CD4 count with unstimulated and stimulated whole saliva. A cross-sectional study was performed on 150 individuals divided into three groups: Group I (50 human immunodeficiency virus (HIV) seropositive patients not on HAART), Group II (50 HIV-infected subjects on HAART for less than 3 years, termed short-term HAART), and Group III (50 HIV-infected subjects on HAART for 3 years or more, termed long-term HAART). The spitting method proposed by Navazesh and Kumar was used to measure unstimulated and stimulated salivary flow rates. The chi-square test and analysis of variance (ANOVA) were used for statistical analysis. The mean CD4 count was 424.78 ± 187.03, 497.82 ± 206.11 and 537.6 ± 264.00 in the respective groups. The majority of patients in all groups had a CD4 count between 401 and 600. Both unstimulated and stimulated whole salivary (UWS and SWS) flow rates in Group I were found to be significantly higher than in Group II (P < 0.05). The difference in unstimulated salivary flow rate between Group II and Group III subjects was also statistically significant (P < 0.05). ANOVA between CD4 count and unstimulated and stimulated whole saliva in each group demonstrated a statistically significant relationship in Group II (P < 0.05). No significant relationship was found between CD4 count and stimulated whole saliva in any group. The reduction in CD4 cell count was significantly associated with the salivary flow rates of HIV-infected individuals on long-term HAART.
Emission Features and Source Counts of Galaxies in Mid-Infrared
NASA Technical Reports Server (NTRS)
Xu, C.; Hacking, P. B.; Fang, F.; Shupe, D. L.; Lonsdale, C. J.; Lu, N. Y.; Helou, G.; Stacey, G. J.; Ashby, M. L. N.
1998-01-01
In this work we incorporate the newest ISO results on the mid-infrared spectral-energy-distributions (MIR SEDs) of galaxies into models for the number counts and redshift distributions of MIR surveys.
Goktekin, Mehmet C; Yilmaz, Mustafa
2018-06-01
The aim of this study was to compare hematological data for differentiating subarachnoid hemorrhage (SAH), migraine attack, and other headache syndromes (HS) at presentation to the emergency service. In this retrospective case-control study, hematological parameters (WBC, HgB, HCT, PLT, lymphocyte and neutrophil counts, and neutrophil/lymphocyte ratios) were analysed for patients presenting to the emergency service with SAH or migraine, and for other patients presenting mainly with headache and a normal cranial CT. Sixty migraine attack patients (F/M: 47/13), 57 SAH patients (F/M: 30/27), and 53 non-migraine patients with a normal brain CT (F/M: 36/17) who presented to the emergency service with headache were included. WBC, Hct, HgB, MCV, PLT, MPV, lymphocyte and neutrophil counts, and neutrophil/lymphocyte ratios were found to differentiate between SAH and migraine. WBC, PLT, MPV, lymphocyte, and neutrophil values were found to differentiate between SAH and HS patients. Only Hct, HgB, MCV, and neutrophil/lymphocyte ratios differed meaningfully between SAH and migraine patients, and these did not differ meaningfully between SAH and HS patients. In ROC analysis, an increase in WBC counts and neutrophil/lymphocyte ratios and a decrease in MPV were found to be more specific for SAH. WBC, HgB, HCT, PLT, lymphocyte and neutrophil counts, and neutrophil/lymphocyte ratios can help distinguish SAH from migraine; WBC, HgB, HCT, PLT, and lymphocyte and neutrophil counts can help the clinician differentiate SAH from other headache syndromes.
Artifact reduction in the CSPAD detectors used for LCLS experiments.
Pietrini, Alberto; Nettelblad, Carl
2017-09-01
The existence of noise and column-wise artifacts in the CSPAD-140K detector and in a module of the CSPAD-2.3M large camera, respectively, is reported for the L730 and L867 experiments performed at the CXI Instrument at the Linac Coherent Light Source (LCLS), in a low-flux, low signal-to-noise ratio regime. Possible remedies are discussed and an additional step in the preprocessing of data is introduced, which consists of performing a median subtraction along the columns of the detector modules. Thus, we reduce the overall variation in the photon count distribution, lowering the mean false-positive photon detection rate by about 4% (from 5.57 × 10^-5 to 5.32 × 10^-5 photon counts pixel^-1 frame^-1 in L867, cxi86715) and 7% (from 1.70 × 10^-3 to 1.58 × 10^-3 photon counts pixel^-1 frame^-1 in L730, cxi73013), and the standard deviation in false-positive photon count per shot by 15% and 35%, while not making our average photon detection threshold more stringent. Such improvements in detector noise reduction and artifact removal constitute a step forward in the development of flash X-ray imaging techniques for high-resolution, low-signal imaging and serial nanocrystallography experiments at X-ray free-electron laser facilities.
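The preprocessing step described above amounts to subtracting the median of each detector column; a minimal numpy sketch of that operation (with an illustrative frame size, not the actual pipeline code) follows.

```python
import numpy as np

def subtract_column_medians(frame):
    """Remove column-wise pedestal/common-mode artifacts from one module frame.

    frame : 2-D array (rows x columns) of pedestal-corrected ADU values.
    Returns the frame with the median of each column subtracted.
    """
    col_medians = np.median(frame, axis=0, keepdims=True)   # one value per column
    return frame - col_medians

# Toy frame: 185 x 388 pixels (roughly one CSPAD module) with a column-wise offset.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 3.0, size=(185, 388)) + rng.normal(0.0, 1.0, size=(1, 388))
cleaned = subtract_column_medians(frame)
print(frame.std(), cleaned.std())   # the column-correlated spread is reduced
```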
Multianode cylindrical proportional counter for high count rates
Hanson, J.A.; Kopp, M.K.
1980-05-23
A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.
Multianode cylindrical proportional counter for high count rates
Hanson, James A.; Kopp, Manfred K.
1981-01-01
A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.
A garage sale bargain: A leaking 2.2 GBq Ra-226 source, Phase II - Internal dose assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toohey, R.E.; Goans, R.E.
1996-06-01
The Radiation Emergency Assistance Center and Training Site (REAC/TS) at the Oak Ridge Institute for Science and Education (ORISE) in Oak Ridge was asked by the Department of Energy to assist the Tennessee Division of Radiological Health in assessing the potential health consequences of this incident. The purchaser of the radium source and his wife visited the REAC/TS facility on 8 May 1995, approximately 50 d after the purchase. Medical histories were taken and physical exams were performed by the REAC/TS physician, and blood samples were collected for complete blood counts (CBC), differentials, chemistry panels, and cytogenetic testing. The clinical results were normal, and a chromosome analysis of cultured peripheral lymphocytes showed no aberrations (rings or dicentrics) above background levels found in unexposed controls. A whole-body count was performed on the purchaser in the ORISE facility, but his wife declined because of discomfort with the enclosed space within the shield. In the energy band from 1.61 to 1.87 MeV, bracketing the 1.76-MeV peak from ²¹⁴Bi, the subject had a net count rate of 0.15 ± 0.04 counts per second, corresponding to a ²¹⁴Bi body content of 400 ± 100 Bq. With the assumption that the ²²²Rn retention fraction was 0.37, this figure corresponded to a ²²⁶Ra content of 1.1 ± 0.3 kBq. With the further assumption that the primary intake route was inhalation of 1.0-micron AMAD particles of class W ²²⁶Ra, the intake was computed to be 13 ± 3 kBq. The annual limit on intake by inhalation for class W ²²⁶Ra is based on the stochastic limit and is 20 kBq; therefore, the committed effective dose equivalent for this subject was 30 ± 7 mSv. A separate whole-body count of the subject's wife was performed with an unshielded detector at the REAC/TS facility, with negative results.
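The dose chain quoted above can be reproduced with simple arithmetic; in the sketch below the 50 mSv dose corresponding to one annual limit on intake is an assumption used for illustration.

```python
# Worked example reproducing the dose chain quoted above (values from the abstract;
# the 50 mSv stochastic dose corresponding to one ALI is an assumption here).
bi214_body_content_bq = 400.0          # from the 1.76 MeV whole-body count
rn222_retention_fraction = 0.37        # assumed retention of radon progeny support
ra226_content_bq = bi214_body_content_bq / rn222_retention_fraction
print(f"Ra-226 body content ~ {ra226_content_bq / 1e3:.1f} kBq")     # ~1.1 kBq

intake_kbq = 13.0                      # computed with the class W inhalation model
ali_kbq = 20.0                         # annual limit on intake, class W Ra-226
stochastic_limit_msv = 50.0            # assumed dose corresponding to one ALI
cede_msv = intake_kbq / ali_kbq * stochastic_limit_msv
print(f"Committed effective dose equivalent ~ {cede_msv:.0f} mSv")   # ~30 mSv
```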
Dark-count-less photon-counting x-ray computed tomography system using a YAP-MPPC detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Sato, Yuich; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2012-10-01
A high-sensitivity X-ray computed tomography (CT) system is useful for decreasing the absorbed dose to patients, and a dark-count-less photon-counting CT system was developed. X-ray photons are detected using a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single crystal scintillator and an MPPC (multipixel photon counter). Photocurrents are amplified by a high-speed current-voltage amplifier, and smooth event pulses from an integrator are sent to a high-speed comparator. Then, logical pulses are produced from the comparator and are counted by a counter card. Tomography is accomplished by repeated linear scans and rotations of an object, and projection curves of the object are obtained by the linear scans. The image contrast of the gadolinium medium fell slightly with increasing lower-level voltage (Vl) of the comparator. The dark count rate was 0 cps, and the count rate for the CT was approximately 250 kcps.
Bright X-ray transient in the LMC
NASA Astrophysics Data System (ADS)
Saxton, R.; Read, A. M.; Li, D. Y.
2018-01-01
We report a bright X-ray transient in the LMC from an XMM-Newton slew made on 5th January 2018. The source, XMMSL2 J053629.4-675940, had a soft X-ray (0.2-2 keV) count rate in the EPIC-pn detector (medium filter) of 1.82 +/- 0.56 c/s, equivalent to a flux Fx = 2.3 +/- 0.7 E-12 ergs/s/cm2 for a nominal spectrum of a power law of slope 2 absorbed by a column NH = 3E20 cm^-2.
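For illustration, the count-rate-to-flux scaling implied by the quoted numbers can be written as below; the conversion factor is simply the ratio of the two reported values, not an independent XMM-Newton calibration.

```python
# Minimal sketch of a count-rate-to-flux scaling. The conversion factor below is
# the ratio implied by the numbers quoted above (2.3e-12 / 1.82), not an
# independently derived calibration value.
rate_cps = 1.82                          # EPIC-pn 0.2-2 keV count rate, medium filter
ecf = 2.3e-12 / 1.82                     # erg/s/cm^2 per count/s for the quoted spectrum
flux = rate_cps * ecf
print(f"Fx ~ {flux:.1e} erg/s/cm^2")     # ~2.3e-12, as reported
```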
NASA Astrophysics Data System (ADS)
Cramer, S. N.; Roussin, R. W.
1981-11-01
A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.
Simplifying Chandra aperture photometry with srcflux
NASA Astrophysics Data System (ADS)
Glotfelty, Kenny
2014-11-01
This poster will highlight some of the features of the srcflux script in CIAO. This script combines many threads and tools together to compute photometric properties for sources: counts, rates, various fluxes, and confidence intervals or upper limits. Beginning and casual X-ray astronomers greatly benefit from the simple interface: just specify the event file and a celestial location, while power users and X-ray astronomy experts can take advantage of all the parameters to automatically produce catalogs for entire fields. Current limitations and future enhancements of the script will also be presented.
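The srcflux script itself is not reproduced here, but the net-rate arithmetic at the heart of aperture photometry can be sketched as follows (a simplified Gaussian-error version; real low-count work needs the confidence-interval or upper-limit treatment the script provides).

```python
import numpy as np

def net_count_rate(src_counts, bkg_counts, src_area, bkg_area, exposure):
    """Background-subtracted source rate from circular-aperture photometry.

    src_counts : counts inside the source aperture
    bkg_counts : counts inside a background region (e.g. an annulus)
    src_area   : geometric area of the source aperture (same units as bkg_area)
    bkg_area   : geometric area of the background region
    exposure   : effective exposure time in seconds
    """
    scale = src_area / bkg_area
    net = src_counts - bkg_counts * scale
    rate = net / exposure
    # Simple Gaussian error propagation; not valid in the very-low-count regime.
    err = np.sqrt(src_counts + bkg_counts * scale ** 2) / exposure
    return rate, err

rate, err = net_count_rate(src_counts=57, bkg_counts=120,
                           src_area=np.pi * 3.0 ** 2,
                           bkg_area=np.pi * (15.0 ** 2 - 8.0 ** 2),
                           exposure=25_000.0)
print(f"net rate = {rate:.2e} +/- {err:.2e} count/s")
```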
NASA Astrophysics Data System (ADS)
Fenske, Roger; Näther, Dirk U.; Dennis, Richard B.; Smith, S. Desmond
2010-02-01
Commercial fluorescence lifetime spectrometers have long suffered from the lack of a simple, compact and relatively inexpensive broad-spectral-band light source that can be flexibly employed for both quasi-steady-state and time-resolved measurements (using Time Correlated Single Photon Counting [TCSPC]). This paper reports the integration of an optically pumped photonic crystal fibre supercontinuum source [1] (Fianium model SC400PP) as a light source in fluorescence lifetime spectrometers (Edinburgh Instruments FLS920 and Lifespec II), with single-photon-counting detectors (a micro-channel plate photomultiplier and a near-infrared photomultiplier) covering the UV to NIR range. An innovative method of spectral selection of the supercontinuum source involving wedge interference filters is also discussed.
Montana Kids Count Data Book and County Profiles, 1994.
ERIC Educational Resources Information Center
Healthy Mothers, Healthy Babies--The Montana Coalition, Helena.
This Kids Count publication is the first to examine statewide trends in the well-being of Montana's children. The statistical portrait is based on 13 indicators of well-being: (1) low birthweight rate; (2) infant mortality rate; (3) child death rate; (4) teen violent death rate; (5) percent of public school enrollment in Chapter 1 programs; (6)…
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice. Copyright © 2017 Elsevier Inc. All rights reserved.
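A very rough sketch of the spectral-transition-measure idea is given below: frame the signal, measure frame-to-frame spectral change, and treat prominent peaks as candidate phoneme boundaries. This is an illustrative approximation, not the authors' algorithm, and all parameter values are placeholders.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_phoneme_rate(signal, fs, frame_ms=25.0, hop_ms=10.0):
    """Very rough speaking-rate estimate: count peaks of a spectral transition measure."""
    frame = int(fs * frame_ms / 1000.0)
    hop = int(fs * hop_ms / 1000.0)
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    # Log-magnitude short-time spectra.
    spectra = np.array([
        np.log1p(np.abs(np.fft.rfft(signal[i * hop:i * hop + frame] * window)))
        for i in range(n_frames)
    ])
    stm = np.sum(np.diff(spectra, axis=0) ** 2, axis=1)     # frame-to-frame spectral change
    peaks, _ = find_peaks(stm, prominence=np.median(stm))   # candidate phoneme boundaries
    duration_s = len(signal) / fs
    return (len(peaks) + 1) / duration_s                    # phonemes per second

# Usage (hypothetical): rate = estimate_phoneme_rate(audio_samples, fs=16_000)
```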
HgCdTe APD-based linear-mode photon counting components and ladar receivers
NASA Astrophysics Data System (ADS)
Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.
2011-05-01
Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger Mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. A metric of photon counting technology is dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process and readout operation, enabling a >10x reduction in dark count rate to ~10,000 cps and a >10^4x reduction in surface dark current, enabling long 10 ms integration times. Our analysis of key dark current contributors suggests that substantial further reduction in DCR to ~1/sec or less can be achieved by optimizing wavelength, operating voltage and temperature.
Events and the Ontology of Individuals: Verbs as a Source of Individuating Mass and Count Nouns
ERIC Educational Resources Information Center
Barner, David; Wagner, Laura; Snedeker, Jesse
2008-01-01
What does mass-count syntax contribute to the interpretation of noun phrases (NPs), and how much of NP meaning is contributed by lexical items alone? Many have argued that count syntax specifies reference to countable individuals (e.g., "cats") while mass syntax specifies reference to unindividuated entities (e.g., "water"). We evaluated this…
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2010 CFR
2010-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false What earned income do we not count? 418.3325... Subsidies Income § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
20 CFR 418.3325 - What earned income do we not count?
Code of Federal Regulations, 2011 CFR
2011-04-01
... percentage of your total earned income per month. The amount we exclude will be equal to the average... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false What earned income do we not count? 418.3325... Subsidies Income § 418.3325 What earned income do we not count? (a) While we must know the source and amount...
Battaile, Brian C; Trites, Andrew W
2013-01-01
We propose a method to model the physiological link between somatic survival and reproductive output that reduces the number of parameters that need to be estimated by models designed to determine combinations of birth and death rates that produce historic counts of animal populations. We applied our Reproduction and Somatic Survival Linked (RSSL) method to the population counts of three species of North Pacific pinnipeds (harbor seals, Phoca vitulina richardii (Gray, 1864); northern fur seals, Callorhinus ursinus (L., 1758); and Steller sea lions, Eumetopias jubatus (Schreber, 1776))--and found our model outperformed traditional models when fitting vital rates to common types of limited datasets, such as those from counts of pups and adults. However, our model did not perform as well when these basic counts of animals were augmented with additional observations of ratios of juveniles to total non-pups. In this case, the failure of the ratios to improve model performance may indicate that the relationship between survival and reproduction is redefined or disassociated as populations change over time or that the ratio of juveniles to total non-pups is not a meaningful index of vital rates. Overall, our RSSL models show advantages to linking survival and reproduction within models to estimate the vital rates of pinnipeds and other species that have limited time-series of counts.
Automated food microbiology: potential for the hydrophobic grid-membrane filter.
Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L
1978-01-01
Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054
Color quench correction for low level Cherenkov counting.
Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B
2009-05-01
The Cherenkov counting efficiency varies strongly with color quenching; thus, correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench-indicative parameter based on a spectral area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectral indicators is its usefulness for low-level Cherenkov counting as well.
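As an illustration of how a quench correction curve is applied, the sketch below interpolates counting efficiency from a quench-indicative parameter; the curve values are invented for the example and are not the published calibration.

```python
import numpy as np

# Hypothetical color-quench correction curve: Cherenkov counting efficiency versus
# the external-source quench-indicative parameter (spectral area ratio).
quench_param = np.array([0.60, 0.70, 0.80, 0.90, 1.00])
efficiency   = np.array([0.25, 0.35, 0.45, 0.55, 0.62])   # illustrative values only

def corrected_activity_bq(net_cpm, sample_quench_param):
    """Convert a net Cherenkov count rate to activity using the quench curve."""
    eff = np.interp(sample_quench_param, quench_param, efficiency)
    return net_cpm / 60.0 / eff            # counts/min -> counts/s -> Bq

print(f"{corrected_activity_bq(120.0, 0.83):.2f} Bq")
```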
Schein, Stan; Ahmad, Kareem M
2006-11-01
A rod transmits absorption of a single photon by what appears to be a small reduction in the small number of quanta of neurotransmitter (Q(count)) that it releases within the integration period (approximately 0.1 s) of a rod bipolar dendrite. Due to the quantal and stochastic nature of release, discrete distributions of Q(count) for darkness versus one isomerization of rhodopsin (R*) overlap. We suggested that release must be regular to narrow these distributions, reduce overlap, reduce the rate of false positives, and increase transmission efficiency (the fraction of R* events that are identified as light). Unsurprisingly, higher quantal release rates (Q(rate)) yield higher efficiencies. Focusing here on the effect of small changes in Q(rate), we find that a slightly higher Q(rate) yields greatly reduced efficiency, due to a necessarily fixed quantal-count threshold. To stabilize efficiency in the face of drift in Q(rate), the dendrite needs to regulate the biochemical realization of its quantal-count threshold with respect to its Q(count). These considerations reveal the mathematical role of calcium-based negative feedback and suggest a helpful role for spontaneous R*. In addition, to stabilize efficiency in the face of drift in degree of regularity, efficiency should be approximately 50%, similar to measurements.
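The threshold trade-off described above can be illustrated with a simple calculation comparing count distributions for darkness and for one R* against a fixed quantal-count threshold; Poisson statistics are used here purely for simplicity (the paper argues release must be more regular than Poisson), and the rates are illustrative.

```python
from scipy.stats import poisson

# Quantal-count distributions over a ~0.1 s integration window (illustrative rates).
q_rate_dark = 40.0          # mean quanta released in darkness during the window
reduction_per_rstar = 6.0   # assumed drop in quantal count caused by one R*
q_rate_light = q_rate_dark - reduction_per_rstar

threshold = 34              # fixed quantal-count threshold: call "light" if count <= threshold
false_positive = poisson.cdf(threshold, q_rate_dark)    # darkness mistaken for light
efficiency = poisson.cdf(threshold, q_rate_light)       # one R* correctly detected
print(f"false-positive rate = {false_positive:.3f}, efficiency = {efficiency:.3f}")

# Shifting the threshold, or letting the dark rate drift by a few quanta, changes both
# numbers sharply, which is the sensitivity to Q(rate) discussed above.
```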
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
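A broken power-law dN/dS of the kind fitted above can be written down and integrated as in the sketch below; the normalization and break flux are placeholders, and only the below-break index echoes the range quoted in the abstract.

```python
import numpy as np
from scipy.integrate import quad

def dnds_broken_powerlaw(s, a_norm, s_break, n1, n2):
    """Broken power-law differential source-count distribution dN/dS.

    n1 : index below the break, n2 : index above the break (dN/dS ~ S^-n).
    """
    return np.where(s < s_break,
                    a_norm * (s / s_break) ** (-n1),
                    a_norm * (s / s_break) ** (-n2))

# Placeholder parameters (normalization and break flux are NOT from the paper;
# the below-break index echoes the ~1.95-2.0 range quoted above).
a_norm, s_break, n1, n2 = 1.0e12, 1.0e-10, 1.97, 2.6   # fluxes in ph cm^-2 s^-1

# Number of sources brighter than a threshold flux S_min:
s_min = 1.0e-11
n_above, _ = quad(lambda s: float(dnds_broken_powerlaw(s, a_norm, s_break, n1, n2)),
                  s_min, 1.0e-6, points=[s_break])
print(f"N(>S_min) ~ {n_above:.3e} sources (illustrative numbers only)")
```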
Statistical measurement of the gamma-ray source-count distribution as a function of energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7/-0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7/-13)% (81 (+52/-19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
NASA Astrophysics Data System (ADS)
Nocente, M.; Tardocchi, M.; Olariu, A.; Olariu, S.; Pereira, R. C.; Chugunov, I. N.; Fernandes, A.; Gin, D. B.; Grosso, G.; Kiptily, V. G.; Neto, A.; Shevelev, A. E.; Silva, M.; Sousa, J.; Gorini, G.
2013-04-01
High resolution γ-ray spectroscopy measurements at MHz counting rates were carried out at nuclear accelerators, combining a LaBr3(Ce) detector with dedicated hardware and software solutions based on digitization and off-line analysis. Spectra were measured at counting rates up to 4 MHz, with little or no degradation of the energy resolution, adopting a pile-up rejection algorithm. The reported results represent a step forward towards the final goal of high resolution γ-ray spectroscopy measurements on a burning plasma device.
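A minimal sketch of a simple pile-up rejection criterion on an event-time list is shown below; it flags events whose neighbours fall within a resolving window and is only a schematic stand-in for the pulse-processing algorithm actually used.

```python
import numpy as np

def reject_pileup(event_times_s, window_s=1.0e-6):
    """Flag events whose nearest neighbour is closer than the resolving window.

    Returns a boolean mask that is True for accepted (isolated) events.
    """
    t = np.sort(np.asarray(event_times_s))
    gap_prev = np.diff(t, prepend=-np.inf)
    gap_next = np.diff(t, append=np.inf)
    return (gap_prev > window_s) & (gap_next > window_s)

# At 4 MHz the mean spacing is 250 ns, so a 100 ns window keeps roughly half the events:
rng = np.random.default_rng(2)
times = np.cumsum(rng.exponential(1 / 4.0e6, size=100_000))
accepted = reject_pileup(times, window_s=100e-9)
print(f"throughput = {accepted.mean():.2%}")   # expected ~exp(-2*rate*window) ~ 45%
```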
Radiation detectors and sources enhanced with micro/nanotechnology
NASA Astrophysics Data System (ADS)
Whitney, Chad Michael
The ongoing threat of nuclear terrorism presents major challenges to maintaining national security. Currently, only a small percentage of the cargo containers that enter America are searched for fissionable bomb-making materials. This work reports on a multi-channel radiation detection platform enabled with nanoparticles that is capable of detecting and discriminating all types of radiation emitted from fissionable bomb-making materials. Typical Geiger counters are limited to detecting only beta and gamma radiation. The micro-Geiger counter reported here detects all species of radiation, including beta particles, gamma/X-rays, alpha particles, and neutrons. The multi-species-detecting micro-Geiger counter contains a hermetically sealed and electrically biased fill gas. Impinging radiation interacts with tailored nanoparticles to release secondary charged particles that ionize the fill gas. The ionized particles collect on respectively biased electrodes, resulting in a characteristic electrical pulse. Pulse height spectroscopy and radiation energy binning techniques can then be used to analyze the pulses to determine the specific radiation isotope. The ideal voltage range of operation for energy discrimination was found to be in the proportional region at 1000 VDC. In this region, specific pulse heights for different radiation species resulted. The amplification region strength, which determines the device sensitivity to radiation energy, can be tuned with the electrode separation distance. Considerable improvements in count rates were achieved by using the charge-conversion nanoparticles with the highest cross sections for particular radiation species. The addition of tungsten nanoparticles made the micro-Geiger counter four times more efficient at detecting low-level beta particles with a dose rate of 3.2 uR/hr (micro-roentgen per hour), and just under three times more efficient than an off-the-shelf Geiger counter. The addition of lead nanoparticles enabled the gamma/X-ray micro-Geiger counter channel to be 28 times more efficient at detecting low-level gamma rays with a dose rate of 10 uR/hr when compared to a device without nanoparticles. The addition of 10B nanoparticles enabled the neutron micro-Geiger counter channel to be 17 times more efficient at detecting neutrons. The device achieved a neutron count rate of 9,866 counts per minute, compared with a BF3 tube which yielded a count rate of 9,000 counts per minute. By using a novel micro-injection ceramic molding and low-temperature (950°C) silver paste metallizing process, the batch fabrication of essentially disposable micro-devices can be achieved. This novel fabrication technique was then applied to a MEMS neutron gun and a water spectroscopy device that also utilizes the high-voltage/temperature insulating packaging.
Study of gamma spectrometry laboratory measurement in various sediment and vulcanic rocks
NASA Astrophysics Data System (ADS)
Nurhandoko, Bagus Endar B.; Kurniadi, Rizal; Rizka Asmara Hadi, Muhammad; Rizal Komara, Insan
2017-01-01
Gamma-ray spectroscopy is the quantitative study of the energy spectra of gamma-ray sources. This method is powerful for characterizing some minerals, especially for differentiating rocks by their content of potassium, uranium, and thorium. Rock contains radioactive material which produces gamma rays of various energies and intensities. When these emissions are detected and analyzed with a spectroscopy system, the gamma-ray energy spectrum can be used as an indicator of the mineral content of the rock. Sedimentary and volcanic rock samples were collected from the East Java Basin, ranging from andesitic volcanics, tuff, and shale to various volcanic clays and alluvial clay. We present some unique gamma spectrometry characteristics of various sedimentary and volcanic rocks of the East Java Basin. The detailed content of the gamma-ray spectra helps characterize the sedimentary and volcanic samples from East Java. Weathered volcanic clay has a lower gamma-ray counting rate than alluvial-deltaic clay. Therefore, gamma spectrometry can be used as a tool for characterizing the environment of a clay, whether volcanic or alluvial-deltaic. This phenomenon indicates that gamma-ray spectrometry can also serve as a tool for characterizing whether a clay tends toward smectite or illite.
Modeling structured population dynamics using data from unmarked individuals
Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew
2014-01-01
The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
High sensitivity pulse-counting mass spectrometer system for noble gas analysis
NASA Technical Reports Server (NTRS)
Hohenberg, C. M.
1980-01-01
A pulse-counting mass spectrometer is described which comprises a new ion source of cylindrical geometry with exceptional optical properties (the Baur source), dual focal-plane externally adjustable collector slits, and a 17-stage Allen-type electron multiplier, all housed in a metal, 21 cm radius, 90 deg magnetic-sector flight tube. Mass discrimination of the instrument is less than 1 per mil per mass unit; the optical transmission is more than 90%; the source sensitivity (Faraday collection) is 4 ma/torr at 250 micron emission; and the abundance sensitivity is 30,000.
Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior
ERIC Educational Resources Information Center
Liu, Hui; Powers, Daniel A.
2007-01-01
This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…
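A minimal sketch of the zero-inflated Poisson likelihood underlying such models (intercept-only, without the growth-curve covariates) is given below; it is illustrative rather than the authors' specification.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson with inflation prob pi and mean lam."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))      # logit-link inflation probability
    lam = np.exp(params[1])                    # log-link Poisson mean
    p_zero = pi + (1.0 - pi) * np.exp(-lam)    # mixture probability of observing a zero
    ll = np.where(y == 0, np.log(p_zero), np.log(1.0 - pi) + poisson.logpmf(y, lam))
    return -np.sum(ll)

# Toy data: 70% structural zeros mixed with Poisson(3) counts.
rng = np.random.default_rng(4)
y = np.where(rng.random(500) < 0.7, 0, rng.poisson(3.0, 500))
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
print(fit.x)   # recovered logit(pi) and log(lam)
```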
Kids Count Data Book, 2003: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
O'Hare, William P.
This Kids Count data book examines national and statewide trends in the well being of the nation's children. Statistical portraits are based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide, and suicide; (5) teen birth rate; (6)…
KIDS COUNT Data Book, 2002: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
O'Hare, William P.
This KIDS COUNT data book examines national and statewide trends in the well being of the nations children. Statistical portraits are based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide, and suicide; (5) teen birth rate; (6)…
KIDS COUNT Data Book, 2001: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
Annie E. Casey Foundation, Baltimore, MD.
This Kids Count report examines national and statewide trends in the well-being of the nation's children. The statistical portrait is based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide and suicide; (5) teen birth rate; (6)…
Waveguide integrated low noise NbTiN nanowire single-photon detectors with milli-Hz dark count rate
Schuck, Carsten; Pernice, Wolfram H. P.; Tang, Hong X.
2013-01-01
Superconducting nanowire single-photon detectors are an ideal match for integrated quantum photonic circuits due to their high detection efficiency for telecom wavelength photons. Quantum optical technology also requires single-photon detection with low dark count rate and high timing accuracy. Here we present very low noise superconducting nanowire single-photon detectors based on NbTiN thin films patterned directly on top of Si3N4 waveguides. We systematically investigate a large variety of detector designs and characterize their detection noise performance. Milli-Hz dark count rates are demonstrated over the entire operating range of the nanowire detectors which also feature low timing jitter. The ultra-low dark count rate, in combination with the high detection efficiency inherent to our travelling wave detector geometry, gives rise to a measured noise equivalent power at the 10^-20 W/Hz^(1/2) level. PMID:23714696
NASA Astrophysics Data System (ADS)
Cooper, R. J.; Amman, M.; Vetter, K.
2018-04-01
High-resolution gamma-ray spectrometers are required for applications in nuclear safeguards, emergency response, and fundamental nuclear physics. To overcome one of the shortcomings of conventional High Purity Germanium (HPGe) detectors, we have developed a prototype device capable of achieving high event throughput and high energy resolution at very high count rates. This device, the design of which we have previously reported on, features a planar HPGe crystal with a reduced-capacitance strip electrode geometry. This design is intended to provide good energy resolution at the short shaping or digital filter times that are required for high rate operation and which are enabled by the fast charge collection afforded by the planar geometry crystal. In this work, we report on the initial performance of the system at count rates up to and including two million counts per second.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
Evaluating potential sources of variation in Chironomidae catch rates on sticky traps
Smith, Joshua T.; Muehlbauer, Jeffrey D.; Kennedy, Theodore A.
2016-01-01
Sticky traps are a convenient tool for assessing adult aquatic insect population dynamics, but there are many practical questions about how trap sampling artefacts may affect observed results. Utilising study sites on the Colorado River and two smaller streams in northern Arizona, USA, we evaluated whether catch rates and sex ratios of Chironomidae, a ubiquitous aquatic insect, were affected by spraying traps with insecticide, placing traps at different heights above ground, and placing traps at different locations within a terrestrial habitat patch. We also evaluated temporal variation in Chironomidae counts monthly over a 9-month growing season. We found no significant variation in catch rates or sex ratios between traps treated versus untreated with insecticide, nor between traps placed at the upstream or downstream end of a terrestrial habitat patch. Traps placed near ground level did have significantly higher catch rates than traps placed at 1.5 m, although sex ratios were similar across heights. Chironomidae abundance and sex ratios also varied from month-to-month and seemed to be related to climatic conditions. Our results inform future sticky trap studies by demonstrating that trap height, but not insecticide treatment or precise trap placement within a habitat patch, is an important source of variation influencing catch rates.
Material screening with HPGe counting station for PandaX experiment
NASA Astrophysics Data System (ADS)
Wang, X.; Chen, X.; Fu, C.; Ji, X.; Liu, X.; Mao, Y.; Wang, H.; Wang, S.; Xie, P.; Zhang, T.
2016-12-01
A gamma counting station based on a high-purity germanium (HPGe) detector was set up for the material screening of the PandaX dark matter experiments in the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min within the energy range of 20 to 2700 keV is achieved due to the well-designed passive shield. The sensitivities of the HPGe detector reach the mBq/kg level for isotopes like K, U, and Th, and are even better for Co and Cs, resulting from the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark-matter experiment are presented. The upgrading plan of the counting station is also discussed.
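As an illustration of how screening sensitivity at the mBq/kg level can be estimated, the sketch below applies a Currie-style minimum detectable activity formula; the background rate, efficiency, branching ratio, counting time and sample mass are placeholders, not PandaX parameters.

```python
import numpy as np

def currie_mda_bq_per_kg(bkg_rate_cps, live_time_s, efficiency, branching, mass_kg):
    """Currie-style minimum detectable activity for one gamma line.

    MDA = (2.71 + 4.65 * sqrt(B)) / (t * eff * branching * mass),
    with B the expected background counts in the peak region over the live time.
    """
    bkg_counts = bkg_rate_cps * live_time_s
    ld = 2.71 + 4.65 * np.sqrt(bkg_counts)          # detection limit in counts
    return ld / (live_time_s * efficiency * branching * mass_kg)

# Placeholder screening scenario (not actual PandaX parameters):
mda = currie_mda_bq_per_kg(bkg_rate_cps=1.0e-4,     # background under the peak
                           live_time_s=7 * 86400.0, # one-week count
                           efficiency=0.03, branching=0.85, mass_kg=2.0)
print(f"MDA ~ {mda * 1e3:.2f} mBq/kg")
```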
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of the effective interaction depth (EID) principle for parametric normalization of full-energy peak efficiencies at different counting positions, originally developed for quasi-point sources, has been extended to bulky sources (within ∅30 mm×40 mm) with arbitrary matrices. It is also shown that the EID function for a quasi-point source can be directly used for cylindrical bulky sources (within ∅30 mm×40 mm), with the geometric center acting as the effective point source, for low atomic number (Z), low density (D) media and high-energy γ-rays. It is also found that, in general, the EID for bulky sources depends on the Z and D of the medium and on the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations. Copyright © 2012 Elsevier Ltd. All rights reserved.
Illinois Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Illinois' Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…
Dorazio, Robert M.; Martin, Juulien; Edwards, Holly H.
2013-01-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
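The basic mixture that this extension builds on can be written down compactly. The sketch below is illustrative only (not the authors' code; function and parameter names are mine): it evaluates the standard single-season N-mixture likelihood for one site by marginalizing the latent abundance N out of the repeated counts. The hurdle prior on abundance and the beta-binomial detection model described above would replace the Poisson and binomial terms.

```python
# Minimal sketch of the basic N-mixture likelihood (not the authors' extension):
# P(y | lambda, p) = sum_N Poisson(N; lambda) * prod_j Binomial(y_j; N, p)
import numpy as np
from scipy import stats

def site_likelihood(y, lam, p, n_max=200):
    y = np.asarray(y)
    total = 0.0
    for N in range(int(y.max()), n_max + 1):        # N cannot be smaller than the largest count
        prior = stats.poisson.pmf(N, lam)           # abundance model (Poisson here)
        detect = np.prod(stats.binom.pmf(y, N, p))  # independent binomial detection per survey
        total += prior * detect
    return total

# Three repeated counts at one hypothetical site
print(site_likelihood([3, 5, 4], lam=8.0, p=0.5))
```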
Pitch Counts in Youth Baseball and Softball: A Historical Review.
Feeley, Brian T; Schisel, Jessica; Agel, Julie
2018-07-01
Pitching injuries are getting increased attention in the mass media. Many references are made to pitch counts and the role they play in injury prevention. The original purpose of regulating the pitch count in youth baseball was to reduce injury and fatigue to pitchers. This article reviews the history and development of the pitch count limit in baseball, the effect it has had on injury, and the evidence regarding injury rates on softball windmill pitching. Literature search through PubMed, mass media, and organizational Web sites through June 2015. Pitch count limits and rest recommendations were introduced in 1996 after a survey of 28 orthopedic surgeons and baseball coaches showed injuries to baseball pitchers' arms were believed to be from the number of pitches thrown. Follow-up research led to revised recommendations with more detailed guidelines in 2006. Since that time, data show a relationship between innings pitched and upper extremity injury, but pitch type has not clearly been shown to affect injury rates. Current surveys of coaches and players show that coaches, parents, and athletes often do not adhere to these guidelines. There are no pitch count guidelines currently available in softball. The increase in participation in youth baseball and softball with an emphasis on early sport specialization in youth sports activities suggests that there will continue to be a rise in injury rates to young throwers. The published pitch counts are likely to positively affect injury rates but must be adhered to by athletes, coaches, and parents.
NASA Astrophysics Data System (ADS)
Nishizawa, Yukiyasu; Sugita, Takeshi; Sanada, Yukihisa; Torii, Tatsuo
2015-04-01
Since 2011, MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan) has been conducting aerial monitoring to investigate the distribution of radioactive cesium dispersed into the atmosphere after the accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) of the Tokyo Electric Power Company. Distribution maps of the air dose rate at 1 m above the ground and of the radioactive cesium deposition concentration on the ground are prepared using spectra obtained by aerial monitoring. The radioactive cesium deposition is derived from its dose rate, which is calculated by excluding the dose rate of the background radiation due to natural radionuclides from the air dose rate at 1 m above the ground. The first step of the current method of calculating the dose rate due to natural radionuclides is to calculate, for areas where no radioactive cesium is detected, the ratio of the total count rate to the count rate in the energy region of 1,400 keV or higher (the BG-Index). The next step is to estimate the natural-background contribution in the area where radioactive cesium is distributed by multiplying the BG-Index by the integrated count rate of 1,400 keV or higher; subtracting this contribution yields the air dose rate due to radioactive cesium. In high dose-rate areas, however, the count rate of the 1,365-keV peak of Cs-134, though small, is included in the integrated count rate of 1,400 keV or higher, which could cause an overestimation of the air dose rate of natural radionuclides. We developed a method for accurately evaluating the distribution maps of the natural air dose rate by excluding the effect of radioactive cesium even in contaminated areas, and obtained an accurate air dose-rate map attributed to the radioactive cesium deposition on the ground. Furthermore, the natural dose-rate distribution throughout Japan has been obtained by this method.
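A minimal numerical sketch of the two-step BG-Index idea described above (function and variable names are mine, the rates are invented, and the conversion from count rate to dose rate is glossed over):

```python
import numpy as np

def bg_index(total_rate_ref, high_e_rate_ref):
    """Step 1: in cesium-free reference areas, ratio of the total count rate
    to the count rate above 1,400 keV."""
    return np.mean(np.asarray(total_rate_ref) / np.asarray(high_e_rate_ref))

def cesium_count_rate(total_rate, high_e_rate, bg_idx):
    """Step 2: in a contaminated area, estimate the natural-background contribution
    from the >1,400 keV counts and subtract it from the total."""
    natural = bg_idx * high_e_rate
    return total_rate - natural

idx = bg_index([1200.0, 1100.0], [60.0, 55.0])   # illustrative reference-area rates (cps)
print(cesium_count_rate(total_rate=2500.0, high_e_rate=70.0, bg_idx=idx))
```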
Radiation Discrimination in LiBaF3 Scintillator Using Digital Signal Processing Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Bowyer, Sonya M.; Reeder, Paul L.
2002-11-01
The new scintillator material LiBaF3:Ce offers the possibility of measuring neutron or alpha count rates and energy spectra simultaneously while measuring gamma count rates and spectra using a single detector.
259 E Ohio, April 2014, Lindsay Light Radiological Survey
The count rates for the sidewalk ranged from 5,600 cpm to 16,800 cpm. There were two locations along Ontario St. with elevated count rates approaching the threshold limit correlating to 7.1 pCi/g total thorium.
Simulations of a micro-PET system based on liquid xenon
NASA Astrophysics Data System (ADS)
Miceli, A.; Glister, J.; Andreyev, A.; Bryman, D.; Kurchaninov, L.; Lu, P.; Muennich, A.; Retiere, F.; Sossi, V.
2012-03-01
The imaging performance of a high-resolution preclinical micro-positron emission tomography (micro-PET) system employing liquid xenon (LXe) as the gamma-ray detection medium was simulated. The arrangement comprises a ring of detectors consisting of trapezoidal LXe time projection ionization chambers and two arrays of large area avalanche photodiodes for the measurement of ionization charge and scintillation light. A key feature of the LXePET system is the ability to identify individual photon interactions with high energy resolution and high spatial resolution in three dimensions and determine the correct interaction sequence using Compton reconstruction algorithms. The simulated LXePET imaging performance was evaluated by computing the noise equivalent count rate, the sensitivity and point spread function for a point source according to the NEMA-NU4 standard. The image quality was studied with a micro-Derenzo phantom. Results of these simulation studies included noise equivalent count rate peaking at 1326 kcps at 188 MBq (705 kcps at 184 MBq) for an energy window of 450-600 keV and a coincidence window of 1 ns for mouse (rat) phantoms. The absolute sensitivity at the center of the field of view was 12.6%. Radial, tangential and axial resolutions of 22Na point sources reconstructed with a list-mode maximum likelihood expectation maximization algorithm were ⩽0.8 mm (full-width at half-maximum) throughout the field of view. Hot-rod inserts of <0.8 mm diameter were resolvable in the transaxial image of a micro-Derenzo phantom. The simulations show that a LXe system would provide new capabilities for significantly enhancing PET images.
Performance of large area x-ray proportional counters in a balloon experiment
NASA Astrophysics Data System (ADS)
Roy, J.; Agrawal, P. C.; Dedhia, D. K.; Manchanda, R. K.; Shah, P. B.; Chitnis, V. R.; Gujar, V. M.; Parmar, J. V.; Pawar, D. M.; Kurhade, V. B.
2016-10-01
ASTROSAT is India's first satellite fully devoted to astronomical observations covering a wide spectral band from optical to hard X-rays by a complement of 4 co-aligned instruments and a Scanning Sky X-ray Monitor. One of the instruments is Large Area X-ray Proportional Counter with 3 identical detectors. In order to assess the performance of this instrument, a balloon experiment with two prototype Large Area X-ray Proportional Counters (LAXPC) was carried out on 2008 April 14. The design of these LAXPCs was similar to those on the ASTROSAT except that their field of view (FOV) was 3° × 3° versus FOV of 1° × 1° for the LAXPCs on the ASTROSAT. The LAXPCs are aimed at the timing and spectral studies of X-ray sources in the 3-80 keV region. In the balloon experiment, the LAXPC, associated electronics and support systems were mounted on an oriented platform which could be pre-programmed to track any source in the sky. A brief description of the LAXPC design, laboratory tests, calibration and the detector characteristics is presented here. The details of the experiment and background counting rates of the 2 LAXPCs at the float altitude of about 41 km are presented in different energy bands. The bright black hole X-ray binary Cygnus X-1 (Cyg X-1) was observed in the experiment for ~3 hours. Details of Cyg X-1 observations, count rates measured from it in different energy intervals and the intensity variations of Cyg X-1 detected during the observations are presented and briefly discussed.
Dynamic time-correlated single-photon counting laser ranging
NASA Astrophysics Data System (ADS)
Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang
2018-03-01
We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improves the counting rate to more than 4×10^7 cps, which makes distance measurement possible even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot easily extract the signal when fast-moving targets are submerged in a strong background. We propose a dynamic TCSPC method for measuring fast-moving targets by varying the coincidence window in real time. In the experiment, we show that targets with a velocity of 5 km/s can be detected with this method, at an echo rate of 20% and with background counts of more than 1.2×10^7 cps.
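A rough sketch of what "varying the coincidence window in real time" can mean in practice (the assumptions, parameter values, and function names below are mine, not the authors' implementation): for a target with an approximately known range rate, the expected time of flight drifts predictably from shot to shot, so the coincidence gate is re-centered each shot rather than held fixed as in conventional TCSPC histogramming.

```python
import numpy as np

C = 2.998e8  # speed of light, m/s

def gated_hits(arrival_times, shot_index, t0, range_rate, rep_period, half_window):
    """Keep only detections whose arrival time falls inside a gate that drifts with
    the predicted time-of-flight change of a target receding at range_rate (m/s)."""
    expected = t0 + 2.0 * range_rate * shot_index * rep_period / C
    arrival_times = np.asarray(arrival_times)
    return arrival_times[np.abs(arrival_times - expected) < half_window]

# One laser shot: 5 km/s target, 10 kHz repetition rate, 2 ns wide gate
hits = np.array([1.2e-6, 6.669e-6, 7.9e-6])   # photon arrival times in this shot (s)
print(gated_hits(hits, shot_index=1000, t0=3.333e-6,
                 range_rate=5000.0, rep_period=1e-4, half_window=1e-9))
```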
Absolute activity measurements with the windowless 4π-CsI(Tl)-sandwich spectrometer
NASA Astrophysics Data System (ADS)
Denecke, B.
1994-01-01
The windowless 4π-CsI(Tl)-sandwich spectrometer consists of two scintillation crystals sandwiching radioactive sources deposited on thin plastic foils. This configuration has a solid angle very close to 4π sr. The detectors are sensitive to charged particles with energies > 15 keV and measure photons of 15-200 keV with a probability > 98%. Disintegration rates of samples of radionuclides with complex decay modes can be determined directly from the measured count rates with uncertainties below 0.3%. Radionuclide solutions of 57Co, 109Cd, 125I, 152Eu and 192Ir were standardised, partly in the framework of international comparisons. A detailed description of the spectrometer and the measurement procedure is given.
Multi-channel photon counting DOT system based on digital lock-in detection technique
NASA Astrophysics Data System (ADS)
Wang, Tingting; Zhao, Huijuan; Wang, Zhichao; Hou, Shaohua; Gao, Feng
2011-02-01
Relying on the deeper penetration of light in tissue, Diffuse Optical Tomography (DOT) achieves organ-level tomographic diagnosis, which can provide information on anatomical and physiological features. DOT has been widely used for imaging of the breast, neonatal cerebral oxygen status, and blood oxygen kinetics, owing to its non-invasiveness, safety, and other advantages. Continuous-wave DOT image reconstruction algorithms require measurement of the surface distribution of the output photon flux excited by more than one driving source, which means that source coding is necessary. The source coding most commonly used in DOT is time-division multiplexing (TDM), which uses an optical switch to route light into optical fibers at different locations. However, when there are many source locations or multiple wavelengths are used, the TDM measurement time and the interval between different locations within the same measurement period become too long to capture dynamic changes in real time. In this paper, a frequency-division multiplexing source coding technique is developed, in which light sources modulated by sine waves of different frequencies illuminate the imaging chamber simultaneously. The signal corresponding to an individual source is extracted from the mixed output light using digital lock-in detection at the detection end. A digital lock-in detection circuit for the photon counting measurement system is implemented on an FPGA development platform. A preliminary dual-channel photon counting DOT experimental system is established, including two continuous-wave lasers, photon counting detectors, the digital lock-in detection control circuit, and software to control the hardware and display the results. A series of experimental measurements was taken to validate the feasibility of the system. The method developed in this paper greatly accelerates DOT measurement and can also provide simultaneous measurements at multiple source-detector locations.
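The digital lock-in step described above can be illustrated in a few lines of numerical code. This is only a sketch under assumed modulation frequencies and sample rate, not the FPGA firmware: each source is modulated at its own frequency, and multiplying the mixed detector signal by in-phase and quadrature references followed by averaging recovers each source's amplitude.

```python
import numpy as np

fs = 100_000.0                         # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)
f1, f2 = 170.0, 230.0                  # modulation frequencies of the two sources
signal = 3.0 * np.sin(2 * np.pi * f1 * t) + 1.5 * np.sin(2 * np.pi * f2 * t)
signal += 0.5 * np.random.default_rng(0).normal(size=t.size)   # detector noise

def lock_in_amplitude(sig, f, t):
    i = np.mean(sig * np.sin(2 * np.pi * f * t))   # in-phase component
    q = np.mean(sig * np.cos(2 * np.pi * f * t))   # quadrature component
    return 2.0 * np.hypot(i, q)                    # recovered modulation amplitude

print(lock_in_amplitude(signal, f1, t), lock_in_amplitude(signal, f2, t))
```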
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-26
The source-count distribution as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
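For reference, the broken power-law form of dN/dS quoted above can be evaluated as follows. The break flux and indices are taken from the abstract; the normalization A is a placeholder, not a fitted value.

```python
import numpy as np

def dnds(S, A=1.0, S_b=2.1e-8, n1=3.1, n2=1.97):
    """Differential source counts: index n1 for fluxes above the break S_b, n2 below it."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b, A * (S / S_b) ** (-n1), A * (S / S_b) ** (-n2))

print(dnds([1e-9, 1e-7]))   # faint-end and bright-end examples (cm^-2 s^-1)
```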
Support of selected X-ray studies to be performed using data from the Uhuru (SAS-A) satellite
NASA Technical Reports Server (NTRS)
Garmire, G. P.
1976-01-01
A new measurement of the diffuse X-ray emission sets more stringent upper limits on the fluctuations of the background and on the number counts of X-ray sources with absolute Galactic latitude |b| above 20 deg than previous measurements. A random sample of background data from the Uhuru satellite gives a relative fluctuation in excess of statistics of 2.0% between 2.4 and 6.9 keV. The hypothesis that the relative fluctuation exceeds 2.9% can be rejected at the 90% confidence level. No discernible energy dependence is evident in the fluctuations in the pulse height data, when separated into three energy channels of nearly equal width from 1.8 to 10.0 keV. The probability distribution of fluctuations was convolved with the photon noise and cosmic ray background deviation (obtained from the earth-viewing data) to yield the differential source count distribution for high latitude sources. Results imply that a maximum of 160 sources could lie between 1.7 and 5.1 × 10^-11 erg/cm^2/s (1-3 Uhuru counts).
A double-observer method for reducing bias in faecal pellet surveys of forest ungulates
Jenkins, K.J.; Manly, B.F.J.
2008-01-01
1. Faecal surveys are used widely to study variations in abundance and distribution of forest-dwelling mammals when direct enumeration is not feasible. The utility of faecal indices of abundance is limited, however, by observational bias and variation in faecal disappearance rates that obscure their relationship to population size. We developed methods to reduce variability in faecal surveys and improve reliability of faecal indices. 2. We used double-observer transect sampling to estimate observational bias of faecal surveys of Roosevelt elk Cervus elaphus roosevelti and Columbian black-tailed deer Odocoileus hemionus columbianus in Olympic National Park, Washington, USA. We also modelled differences in counts of faecal groups obtained from paired cleared and uncleared transect segments as a means to adjust standing crop faecal counts for a standard accumulation interval and to reduce bias resulting from variable decay rates. 3. Estimated detection probabilities of faecal groups ranged from <0.2 to 1.0 depending upon the observer, whether the faecal group was from elk or deer, faecal group size, distance of the faecal group from the sampling transect, ground vegetation cover, and the interaction between faecal group size and distance from the transect. 4. Models of plot-clearing effects indicated that standing crop counts of deer faecal groups required 34% reduction on flat terrain and 53% reduction on sloping terrain to represent faeces accumulated over a standard 100-day interval, whereas counts of elk faecal groups required 0% and 46% reductions on flat and sloping terrain, respectively. 5. Synthesis and applications. Double-observer transect sampling provides a cost-effective means of reducing observational bias and variation in faecal decay rates that obscure the interpretation of faecal indices of large mammal abundance. Given the variation we observed in observational bias of faecal surveys and persistence of faeces, we emphasize the need for future researchers to account for these comparatively manageable sources of bias before comparing faecal indices spatially or temporally. Double-observer sampling methods are readily adaptable to study variations in faecal indices of large mammals at the scale of the large forest reserve, natural area, or other forested regions when direct estimation of populations is problematic. © 2008 The Authors.
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Enqvist, Andreas
2017-09-01
Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and substantially high pulse shape discrimination performance. A disadvantage of CLYC detectors is the long scintillation decay times, which causes pulse pile-up at moderate input count rate. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rate. The algorithms were first tested using low-rate data. They exhibit a pulse-shape discrimination performance comparable to that of the charge comparison method, at low rate. Then, they were evaluated at high count rate. Neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm developed using the triangular filter exhibits discrimination capability marginally higher than that of the trapezoidal filter based algorithm irrespective of low or high rate. The algorithms exhibit low computational complexity and are executable on an FPGA in real-time. They are also suitable for application to other radiation detectors whose pulses are piled-up at high rate owing to long scintillation decay times.
Discovery of 3.6-s X-ray pulsations from 4U0115+63
NASA Technical Reports Server (NTRS)
Cominsky, L.; Clark, G. W.; Li, F.; Mayer, W.; Rappaport, S.
1978-01-01
SAS 3 observations reveal a pulsation period of 3.61 sec for the transient X-ray source 4U0115+63. Positional measurement is accurate to approximately 30 arc s, and has led to the likely identification of an optical counterpart. The intensity of the pulses, as reported on 5.9 January 1978, is given as approximately 1.7 times that of the Crab Nebula (1-27 keV). Spectral information was also obtained from the ratios of counting rates in the first three energy channels of the center slat collimator detector (1-27 keV). Two classes of models are proposed to explain the transient nature of the X-ray sources: (1) episodic mass transfer in a binary system, and (2) eccentric binary orbits.
Web-based encyclopedia on physical effects
NASA Astrophysics Data System (ADS)
Papliatseyeu, Andrey; Repich, Maryna; Ilyushonak, Boris; Hurbo, Aliaksandr; Makarava, Katerina; Lutkovski, Vladimir M.
2004-07-01
Web-based learning applications open new horizons for educators. In this work we present a computer encyclopedia designed to overcome drawbacks of traditional paper information sources such as awkward search, low update rates, a limited number of copies, and high cost. Moreover, we intended to improve access and search functions in comparison with some Internet sources in order to make it more convenient. The system is developed using modern Java technologies (Java Servlets, Java Server Pages) and contains systemized information about the most important and well-studied physical effects. It may also be used in other fields of science. The system is accessible via Intranet/Internet networks by means of any up-to-date Internet browser. It may be used for general learning purposes and as a study guide or tutorial for performing laboratory works.
Experimental Ten-Photon Entanglement.
Wang, Xi-Lin; Chen, Luo-Kan; Li, W; Huang, H-L; Liu, C; Chen, C; Luo, Y-H; Su, Z-E; Wu, D; Li, Z-D; Lu, H; Hu, Y; Jiang, X; Peng, C-Z; Li, L; Liu, N-L; Chen, Yu-Ao; Lu, Chao-Yang; Pan, Jian-Wei
2016-11-18
We report the first experimental demonstration of quantum entanglement among ten spatially separated single photons. A near-optimal entangled photon-pair source was developed with simultaneously a source brightness of ∼12 MHz/W, a collection efficiency of ∼70%, and an indistinguishability of ∼91% between independent photons, which was used for a step-by-step engineering of multiphoton entanglement. Under a pump power of 0.57 W, the ten-photon count rate was increased by about 2 orders of magnitude compared to previous experiments, while maintaining a state fidelity sufficiently high for proving the genuine ten-particle entanglement. Our work created a state-of-the-art platform for multiphoton experiments, and enabled technologies for challenging optical quantum information tasks, such as the realization of Shor's error correction code and high-efficiency scattershot boson sampling.
Poisson Regression Analysis of Illness and Injury Surveillance Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frome E.L., Watkins J.P., Ellis E.D.
2012-12-12
The Department of Energy (DOE) uses illness and injury surveillance to monitor morbidity and assess the overall health of the work force. Data collected from each participating site include health events and a roster file with demographic information. The source data files are maintained in a relational data base, and are used to obtain stratified tables of health event counts and person time at risk that serve as the starting point for Poisson regression analysis. The explanatory variables that define these tables are age, gender, occupational group, and time. Typical response variables of interest are the number of absences due to illness or injury, i.e., the response variable is a count. Poisson regression methods are used to describe the effect of the explanatory variables on the health event rates using a log-linear main effects model. Results of fitting the main effects model are summarized in a tabular and graphical form and interpretation of model parameters is provided. An analysis of deviance table is used to evaluate the importance of each of the explanatory variables on the event rate of interest and to determine if interaction terms should be considered in the analysis. Although Poisson regression methods are widely used in the analysis of count data, there are situations in which over-dispersion occurs. This could be due to lack-of-fit of the regression model, extra-Poisson variation, or both. A score test statistic and regression diagnostics are used to identify over-dispersion. A quasi-likelihood method of moments procedure is used to evaluate and adjust for extra-Poisson variation when necessary. Two examples are presented using respiratory disease absence rates at two DOE sites to illustrate the methods and interpretation of the results. In the first example the Poisson main effects model is adequate. In the second example the score test indicates considerable over-dispersion and a more detailed analysis attributes the over-dispersion to extra-Poisson variation. The R open source software environment for statistical computing and graphics is used for analysis. Additional details about R and the data that were used in this report are provided in an Appendix. Information on how to obtain R and utility functions that can be used to duplicate results in this report are provided.
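The report's analysis was carried out in R; the sketch below is a Python analogue (toy data, not the surveillance data set) of the same log-linear Poisson main-effects model with person-time at risk entering as an offset, together with a crude Pearson chi-square check for over-dispersion.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({                     # toy stratified table of event counts
    "events":     [4, 7, 12, 9, 15, 21],
    "persontime": [120.0, 150.0, 200.0, 110.0, 160.0, 210.0],
    "age":        ["<40", "<40", "<40", "40+", "40+", "40+"],
    "gender":     ["F", "M", "F", "M", "F", "M"],
})

# Log-linear main-effects Poisson model; log(person-time) enters as an offset
model = smf.glm("events ~ age + gender", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["persontime"])).fit()
print(model.summary())

# Over-dispersion indicator: Pearson chi-square per residual degree of freedom >> 1
# suggests extra-Poisson variation and the need for a quasi-likelihood adjustment
print(model.pearson_chi2 / model.df_resid)
```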
355 E Riverwalk, February 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
230 E. Ontario, May 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,366 cpm.
Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners.
Schmidtlein, C Ross; Kirov, Assen S; Nehmeh, Sadek A; Erdi, Yusuf E; Humm, John L; Amols, Howard I; Bidaut, Luc M; Ganin, Alex; Stearns, Charles W; McDaniel, David L; Hamacher, Klaus A
2006-01-01
The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 +/- 1.4% and the simulated scatter fraction is 43.5 +/- 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the Block and Coincidence levels, respectively. The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 kBq/cc were matched by the simulated results to within 0.5% and 0.1%, respectively. The simulated count rate curves also resulted in a peak NECR of 35.2 kcps at 10.8 kBq/cc compared to 37.6 kcps at 10.0 kBq/cc from averaged experimental values. The spatial resolution of the simulated scanner matched the experimental results to within 0.2 mm.
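The idea of matching count-rate curves by treating dead time as a fit parameter can be illustrated generically. This is not the paper's GATE configuration; the non-paralyzable model and the numbers below are assumptions made for the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def nonparalyzable(true_rate, tau):
    """Observed count rate for a non-paralyzable system with dead time tau (s)."""
    return true_rate / (1.0 + true_rate * tau)

true_rate = np.linspace(1e4, 5e5, 20)     # ideal, dead-time-free rate (cps)
rng = np.random.default_rng(1)
observed = nonparalyzable(true_rate, 600e-9) * rng.normal(1.0, 0.01, true_rate.size)

tau_fit, _ = curve_fit(nonparalyzable, true_rate, observed, p0=[1e-7])
print(f"fitted dead time: {tau_fit[0] * 1e9:.0f} ns")
```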
Ianakiev, Kiril D [Los Alamos, NM; Hsue, Sin Tao [Santa Fe, NM; Browne, Michael C [Los Alamos, NM; Audia, Jeffrey M [Abiquiu, NM
2006-07-25
The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver converts the light pulse signals to current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.
500-MHz x-ray counting with a Si-APD and a fast-pulse processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishimoto, Shunji; Taniguchi, Takashi; Tanaka, Manobu
2010-06-23
We introduce a counting system of up to 500 MHz for synchrotron x-ray high-rate measurements. A silicon avalanche photodiode detector was used in the counting system. The fast-pulse circuit of the amplifier was designed with hybrid ICs in preparation for an ASIC system for a large-scale pixel array detector in the near future. The fast amplifier consists of two cascading emitter-followers using 10-GHz band transistors. A count rate of 3.25×10^8 s^-1 was achieved using the system for 8-keV x-rays. However, a baseline shift caused by the AC-coupling in the amplifier prevented us from observing the maximum count rate of 4.49×10^8 s^-1, determined by the electron-bunch filling of the ring accelerator. We also report that an amplifier with a baseline restorer was tested in order to keep the baseline level at 0 V even at high input rates.
The Chandra Source Catalog: Source Properties and Data Products
NASA Astrophysics Data System (ADS)
Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.
2009-09-01
The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).
Estimating the mass variance in neutron multiplicity counting - A comparison of approaches
NASA Astrophysics Data System (ADS)
Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.
2017-12-01
In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α, n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.
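Of the three methods compared, the bootstrap is the simplest to sketch. The code below is illustrative only: the point-model equations mapping factorial moments to effective mass are omitted and replaced by a placeholder, and the synthetic cycle data are not from the measurements described above.

```python
import numpy as np

def factorial_moments(counts):
    """First three sampled factorial moments of an event-triggered count distribution."""
    n = np.asarray(counts, dtype=float)
    return (np.mean(n),
            np.mean(n * (n - 1)) / 2.0,
            np.mean(n * (n - 1) * (n - 2)) / 6.0)

def mass_from_moments(m1, m2, m3):
    return m1 + 0.1 * m2 - 0.01 * m3    # placeholder, NOT the real point-model solution

def bootstrap_mass_sd(cycles, n_boot=1000, seed=0):
    """Resample cycles with replacement and take the spread of the derived mass."""
    rng = np.random.default_rng(seed)
    cycles = np.asarray(cycles)
    masses = [mass_from_moments(*factorial_moments(
                  rng.choice(cycles, size=cycles.size, replace=True)))
              for _ in range(n_boot)]
    return np.std(masses, ddof=1)

print(bootstrap_mass_sd(np.random.default_rng(2).poisson(3.0, size=5000)))
```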
Relationship between salivary flow rates and Candida counts in subjects with xerostomia.
Torres, Sandra R; Peixoto, Camila Bernardo; Caldas, Daniele Manhães; Silva, Eline Barboza; Akiti, Tiyomi; Nucci, Márcio; de Uzeda, Milton
2002-02-01
This study evaluated the relationship between salivary flow and Candida colony counts in the saliva of patients with xerostomia. Sialometry and Candida colony-forming unit (CFU) counts were taken from 112 subjects who reported xerostomia in a questionnaire. Chewing-stimulated whole saliva was collected and streaked in Candida plates and counted in 72 hours. Species identification was accomplished under standard methods. There was a significant inverse relationship between salivary flow and Candida CFU counts (P =.007) when subjects with high colony counts were analyzed (cutoff point of 400 or greater CFU/mL). In addition, the median sialometry of men was significantly greater than that of women (P =.003), even after controlling for confounding variables like underlying disease and medications. Sjögren's syndrome was associated with low salivary flow rate (P =.007). There was no relationship between the median Candida CFU counts and gender or age. There was a high frequency (28%) of mixed colonization. Candida albicans was the most frequent species, followed by C parapsilosis, C tropicalis, and C krusei. In subjects with high Candida CFU counts there was an inverse relationship between salivary flow and Candida CFU counts.
Corsi, Steven R.; Walker, John F.; Graczyk, D.J.; Greb, S.R.; Owens, D.W.; Rappold, K.F.
1995-01-01
A special study was done to determine the effect of holding time on fecal coliform colony counts. A linear regression indicated that the mean decrease in colony counts over 72 hours was 8.2 percent per day. Results after 24 hours showed that colony counts increased in some samples and decreased in others.
Predicting Attack-Prone Components with Source Code Static Analyzers
2009-05-01
models to determine if additional metrics are required to increase the accuracy of the model: non-security SCSA warnings, code churn and size, the count...code churn and size, the count of faults found manually during development, and the measure of coupling between components. The dependent variable...is the count of vulnerabilities reported by testing and those found in the field. We evaluated our model on three commercial telecommunications
ERIC Educational Resources Information Center
Annie E. Casey Foundation, Baltimore, MD.
Data from the 50 United States are listed for 1997 from Kids Count in an effort to track state-by-state the status of children in the United States and to secure better futures for all children. Data include percent low birth weight babies; infant mortality rate; child death rate; rate of teen deaths by accident, homicide, and suicide; teen birth…
Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…
Miami-Dade Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Miami-Dade's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…
Powerful model for the point source sky: Far-ultraviolet and enhanced midinfrared performance
NASA Technical Reports Server (NTRS)
Cohen, Martin
1994-01-01
I report further developments of the Wainscoat et al. (1992) model originally created for the point source infrared sky. The already detailed and realistic representation of the Galaxy (disk, spiral arms and local spur, molecular ring, bulge, spheroid) has been improved, guided by CO surveys of local molecular clouds, and by the inclusion of a component to represent Gould's Belt. The newest version of the model is very well validated by Infrared Astronomy Satellite (IRAS) source counts. A major new aspect is the extension of the same model down to the far ultraviolet. I compare predicted and observed far-ultraviolet source counts from the Apollo 16 'S201' experiment (1400 Å) and the TD1 satellite (for the 1565 Å band).
Richard L. Hutto; Sallie J. Hejl; Jeffrey F. Kelly; Sandra M. Pletschet
1995-01-01
We conducted a series of 275 paired (on- and off-road) point counts within 4 distinct vegetation cover types in northwestern Montana. Roadside counts generated a bird list that was essentially the same as the list generated from off-road counts within the same vegetation cover type. Species that were restricted to either on- or off-road counts were rare, suggesting...
Radionuclide counting technique for measuring wind velocity and direction
NASA Technical Reports Server (NTRS)
Singh, J. J. (Inventor)
1984-01-01
An anemometer utilizing a radionuclide counting technique for measuring both the velocity and the direction of wind is described. A pendulum consisting of a wire and a ball, with a source of radiation on the lower surface of the ball, is positioned by the wind. The detectors are located in a plane perpendicular to the undisturbed (no wind) pendulum; they lie on the circumference of a circle and are equidistant from each other as well as from the undisturbed (no wind) source-ball position.
Flow rate calibration to determine cell-derived microparticles and homogeneity of blood components.
Noulsri, Egarit; Lerdwana, Surada; Kittisares, Kulvara; Palasuwan, Attakorn; Palasuwan, Duangdao
2017-08-01
Cell-derived microparticles (MPs) are currently of great interest for screening transfusion donors and blood components. However, the current approach to counting MPs is not affordable for routine laboratory use due to its high cost. The current study aimed to investigate the potential use of flow-rate calibration for counting MPs in whole blood, packed red blood cells (PRBCs), and platelet concentrates (PCs). The accuracy of flow-rate calibration was investigated by comparing the platelet counts of an automated counter and a flow-rate calibrator. The concentration of MPs and their origins in whole blood (n=100), PRBCs (n=100), and PCs (n=92) were determined using a FACSCalibur. The MPs' fold-changes were calculated to assess the homogeneity of the blood components. Comparing the platelet counts conducted by automated counting and flow-rate calibration showed an r² of 0.6 (y=0.69x+97,620). The CVs of the within-run and between-run variations of flow-rate calibration were 8.2% and 12.1%, respectively. The Bland-Altman plot showed a mean bias of -31,142 platelets/μl. MP enumeration revealed both the difference in MP levels and their origins in whole blood, PRBCs, and PCs. Screening the blood components demonstrated high heterogeneity of the MP levels in PCs when compared to whole blood and PRBCs. The results of the present study suggest the accuracy and precision of flow-rate calibration for enumerating MPs. This flow-rate approach is affordable for assessing the homogeneity of MPs in blood components in routine laboratory practice. Copyright © 2017 Elsevier Ltd. All rights reserved.
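The arithmetic behind flow-rate calibration is straightforward; a minimal sketch follows (illustrative numbers and function names, not the study's data): the cytometer's volumetric flow rate is inferred from a sample whose platelet concentration is known from an automated counter, and that flow rate then converts microparticle event counts into absolute concentrations.

```python
def calibrated_flow_rate(platelet_events, acquisition_s, platelet_conc_per_ul):
    """Flow rate (uL/s) = events counted / (known concentration * acquisition time)."""
    return platelet_events / (platelet_conc_per_ul * acquisition_s)

def mp_concentration(mp_events, acquisition_s, flow_rate_ul_per_s):
    """Microparticles per uL of sample."""
    return mp_events / (flow_rate_ul_per_s * acquisition_s)

flow = calibrated_flow_rate(platelet_events=60000, acquisition_s=60.0,
                            platelet_conc_per_ul=250000.0)
print(flow, mp_concentration(mp_events=4500, acquisition_s=60.0, flow_rate_ul_per_s=flow))
```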
OpenCFU, a new free and open-source software to count cell colonies and other circular objects.
Geissmann, Quentin
2013-01-01
Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net.
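For readers who want a starting point, the task OpenCFU automates can be sketched with generic image-processing steps. This is not OpenCFU's algorithm; the file name, size limits, and roundness threshold are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("plate.jpg")                       # placeholder plate image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
count = 0
for i in range(1, n_labels):                        # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    w, h = stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT]
    circular_enough = 0.6 < w / float(h) < 1.7      # crude roundness proxy
    if 30 < area < 5000 and circular_enough:        # plausible colony size range (pixels)
        count += 1
print(f"colonies counted: {count}")
```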
371 E. Lower Wacker Drive, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
220 E. Illinois St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,500 cpm to 5,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
8-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,200 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.
429 E. Grand Ave, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
201-211 E. Grand Ave, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,900 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
230 N. Michigan Ave, April 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,400 cpm to 3,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,542 cpm.
36 W. Illinois St, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
1-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,000 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.
211 E. Ohio St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,300 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.
140-200 E. Grand Ave, February 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.
430 N. Michigan Ave, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,100 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.
401-599 N. Dearborn St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 5,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.
Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas
2014-09-30
A condition to predict unbiased estimated breeding values by best linear unbiased prediction is to use simultaneously all available data. However, this condition is not often fully met. For example, in dairy cattle, internal (i.e. local) populations lead to evaluations based only on internal records while widely used foreign sires have been selected using internally unavailable external records. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e. external estimated breeding values and associated reliabilities, with internal records to improve accuracy of internal genetic evaluations. Two issues of these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend simultaneously several sources of information into an internal genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that integrate and blend simultaneously several sources of information and avoid double-counting of contributions due to relationships and due to records. The performance of the developed equations was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation. The developed equations also avoided double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated information and, therefore, more reliable estimated breeding values were obtained. The proposed unified method integrated and blended several sources of information well into a genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. The unified method can also be extended to other types of situations such as single-step genomic or multi-trait evaluations, combining information across different traits.
Point Count Length and Detection of Forest Neotropical Migrant Birds
Deanna K. Dawson; David R. Smith; Chandler S. Robbins
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences...
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, however future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources is examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
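The flight hardware implements the CCSDS Discrete Wavelet Transform and Bit Plane Encoder in an ASIC; the sketch below only illustrates the generic idea of wavelet-domain truncation on a vector of counts (PyWavelets, synthetic Poisson data, and an arbitrary keep fraction are assumptions, not the mission's compression chain).

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
counts = rng.poisson(lam=50.0, size=512).astype(float)    # stand-in for one sweep of plasma counts

coeffs = pywt.wavedec(counts, "db4", level=4)             # multi-level discrete wavelet transform
flat = np.concatenate(coeffs)
cutoff = np.quantile(np.abs(flat), 0.75)                   # keep the largest 25% of coefficients
thresholded = [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

recon = pywt.waverec(thresholded, "db4")[: counts.size]
kept = sum(int(np.count_nonzero(c)) for c in thresholded)
print(f"coefficients kept: {kept}/{flat.size}, "
      f"max abs reconstruction error: {np.max(np.abs(recon - counts)):.2f}")
```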
Zipkin, Elise F; Sillett, T Scott; Grant, Evan H Campbell; Chandler, Richard B; Royle, J Andrew
2014-01-01
Wildlife populations consist of individuals that contribute disproportionately to growth and viability. Understanding a population's spatial and temporal dynamics requires estimates of abundance and demographic rates that account for this heterogeneity. Estimating these quantities can be difficult, requiring years of intensive data collection. Often, this is accomplished through the capture and recapture of individual animals, which is generally only feasible at a limited number of locations. In contrast, N-mixture models allow for the estimation of abundance, and spatial variation in abundance, from count data alone. We extend recently developed multistate, open population N-mixture models, which can additionally estimate demographic rates based on an organism's life history characteristics. In our extension, we develop an approach to account for the case where not all individuals can be assigned to a state during sampling. Using only state-specific count data, we show how our model can be used to estimate local population abundance, as well as density-dependent recruitment rates and state-specific survival. We apply our model to a population of black-throated blue warblers (Setophaga caerulescens) that have been surveyed for 25 years on their breeding grounds at the Hubbard Brook Experimental Forest in New Hampshire, USA. The intensive data collection efforts allow us to compare our estimates to estimates derived from capture–recapture data. Our model performed well in estimating population abundance and density-dependent rates of annual recruitment/immigration. Estimates of local carrying capacity and per capita recruitment of yearlings were consistent with those published in other studies. However, our model moderately underestimated annual survival probability of yearling and adult females and severely underestimated survival probabilities for both of these male stages. The most accurate and precise estimates will necessarily require some amount of intensive data collection efforts (such as capture–recapture). Integrated population models that combine data from both intensive and extensive sources are likely to be the most efficient approach for estimating demographic rates at large spatial and temporal scales. PMID:24634726
A compact 7-cell Si-drift detector module for high-count rate X-ray spectroscopy.
Hansen, K; Reckleben, C; Diehl, I; Klär, H
2008-05-01
A new Si-drift detector module for fast X-ray spectroscopy experiments was developed and realized. The Peltier-cooled module comprises a sensor with 7 × 7-mm² active area, an integrated circuit for amplification, shaping and detection, storage, and derandomized readout of signal pulses in parallel, and amplifiers for line driving. The compactness and hexagonal shape of the module with a wrench size of 16 mm allow very short distances to the specimen and multi-module arrangements. The power dissipation is 186 mW. At a shaper peaking time of 190 ns and an integration time of 450 ns an electronic rms noise of ~11 electrons was achieved. When operated at 7 °C, FWHM line widths around 260 and 460 eV (Cu-Kα) were obtained at low rates and at sum-count rates of 1.7 MHz, respectively. The peak shift is below 1% for a broad range of count rates. At 1.7-MHz sum-count rate the throughput loss amounts to 30%.
Acoustic Emission Parameters of Three Gorges Sandstone during Shear Failure
NASA Astrophysics Data System (ADS)
Xu, Jiang; Liu, Yixin; Peng, Shoujian
2016-12-01
In this paper, an experimental investigation of sandstone samples from the Three Gorges during shear failure was conducted using acoustic emission (AE) and direct shear tests. The AE count rate, cumulative AE count, AE energy, and amplitude of the sandstone samples were determined. Then, the relationships between the AE signals and the shearing behaviors of the samples were analyzed in order to detect micro-crack initiation and propagation and reflect shear failure. The results indicated that both the shear strength and displacement exhibited a logarithmic relationship with the displacement rate at peak levels of stress. In addition, the AE signals exhibited distinct characteristics under different test conditions. The AE signals corresponded with the shear stress under different displacement rates. As the displacement rate increased, the cumulative damage to each specimen decreased, while the AE energy peaked earlier and more significantly. The cumulative AE count primarily increased during the post-peak period. Furthermore, the AE count rate and amplitude exhibited two peaks during the peak shear stress period due to crack coalescence and rock bridge breakage. These isolated cracks later formed larger fractures and eventually caused ruptures.
Lithium and boron based semiconductors for thermal neutron counting
NASA Astrophysics Data System (ADS)
Kargar, Alireza; Tower, Joshua; Hong, Huicong; Cirignano, Leonard; Higgins, William; Shah, Kanai
2011-09-01
Thermal neutron detectors in planar configuration were fabricated from LiInSe2 and B2Se3 crystals grown at RMD Inc. All fabricated semiconductor devices were characterized by current-voltage (I-V) measurements and neutron counting measurements. Pulse height spectra were collected for all samples using a 241AmBe neutron source, as well as 137Cs and 60Co gamma-ray sources. In this study, the resistivity of all crystals is reported and the collected pulse height spectra are presented for the fabricated devices. Note that the 241AmBe neutron source was custom designed with polyethylene around the source as a neutron moderator, mainly to thermalize the fast neutrons before they reach the detectors. Both LiInSe2 and B2Se3 devices showed a response to thermal neutrons from the 241AmBe source.
An acoustic emission study of plastic deformation in polycrystalline aluminium
NASA Technical Reports Server (NTRS)
Bill, R. C.; Frederick, J. R.; Felbeck, D. K.
1979-01-01
Acoustic emission experiments were performed on polycrystalline and single crystal 99.99% aluminum while undergoing tensile deformation. It was found that acoustic emission counts as a function of grain size showed a maximum value at a particular grain size. Furthermore, the slip area associated with this particular grain size corresponded to the threshold level of detectability of single dislocation slip events. The rate of decline in acoustic emission activity as grain size is increased beyond the peak value suggests that grain boundary associated dislocation sources are giving rise to the bulk of the detected acoustic emissions.
Karanfil, C; Bunker, G; Newville, M; Segre, C U; Chapman, D
2012-05-01
Third-generation synchrotron radiation sources pose difficult challenges for energy-dispersive detectors for XAFS because of their count rate limitations. One solution to this problem is the bent crystal Laue analyzer (BCLA), which removes most of the undesired scatter and fluorescence before it reaches the detector, effectively eliminating detector saturation due to background. In this paper, experimental measurements of BCLA performance in conjunction with a 13-element germanium detector are presented, along with a quantitative analysis of the signal-to-noise improvement provided by BCLAs. The performance of BCLAs is compared with that of filters and slits.
Epidemiology of Tuberculosis in Young Children in the United States
Pang, Jenny; Teeter, Larry D.; Katz, Dolly J.; Davidow, Amy L.; Miranda, Wilson; Wall, Kirsten; Ghosh, Smita; Stein-Hart, Trudy; Restrepo, Blanca I.; Reves, Randall; Graviss, Edward A.
2016-01-01
OBJECTIVES To estimate tuberculosis (TB) rates among young children in the United States by children’s and parents’ birth origins and describe the epidemiology of TB among young children who are foreign-born or have at least 1 foreign-born parent. METHODS Study subjects were children <5 years old diagnosed with TB in 20 US jurisdictions during 2005–2006. TB rates were calculated from jurisdictions’ TB case counts and American Community Survey population estimates. An observational study collected demographics, immigration and travel histories, and clinical and source case details from parental interviews and health department and TB surveillance records. RESULTS Compared with TB rates among US-born children with US-born parents, rates were 32 times higher in foreign-born children and 6 times higher in US-born children with foreign-born parents. Most TB cases (53%) were among the 29% of children who were US born with foreign-born parents. In the observational study, US-born children with foreign-born parents were more likely than foreign-born children to be infants (30% vs 7%), Hispanic (73% vs 37%), diagnosed through contact tracing (40% vs 7%), and have an identified source case (61% vs 19%); two-thirds of children were exposed in the United States. CONCLUSIONS Young children who are US born of foreign-born parents have relatively high rates of TB and account for most cases in this age group. Prompt diagnosis and treatment of adult source cases, effective contact investigations prioritizing young contacts, and targeted testing and treatment of latent TB infection are necessary to reduce TB morbidity in this population. PMID:24515517
Injecting drug users in Scotland, 2006: Listing, number, demography, and opiate-related death-rates.
King, Ruth; Bird, Sheila M; Overstall, Antony; Hay, Gordon; Hutchinson, Sharon J
2013-06-01
Using Bayesian capture-recapture analysis, we estimated the number of current injecting drug users (IDUs) in Scotland in 2006 from the cross-counts of 5670 IDUs listed on four data-sources: social enquiry reports (901 IDUs listed), hospital records (953), drug treatment agencies (3504), and recent Hepatitis C virus (HCV) diagnoses (827 listed as IDU-risk). Further, we accessed exact numbers of opiate-related drugs-related deaths (DRDs) in 2006 and 2007 to improve estimation of Scotland's DRD rates per 100 current IDUs. Using all four data-sources, and model-averaging of standard hierarchical log-linear models to allow for pairwise interactions between data-sources and/or demographic classifications, Scotland had an estimated 31700 IDUs in 2006 (95% credible interval: 24900-38700); but 25000 IDUs (95% CI: 20700-35000) by excluding recent HCV diagnoses whose IDU-risk can refer to past injecting. Only in the younger age-group (15-34 years) were Scotland's opiate-related DRD rates significantly lower for females than males. Older males' opiate-related DRD rate was 1.9 (1.24-2.40) per 100 current IDUs without or 1.3 (0.94-1.64) with inclusion of recent HCV diagnoses. If, indeed, Scotland had only 25000 current IDUs in 2006, with only 8200 of them aged 35+ years, the opiate-related DRD rate is higher among this older age group than has been appreciated hitherto. There is counter-balancing good news for the public health: the hitherto sharp increase in older current IDUs had stalled by 2006.
Novis, David A; Walsh, Molly; Wilkinson, David; St Louis, Mary; Ben-Ezra, Jonathon
2006-05-01
Automated laboratory hematology analyzers are capable of performing differential counts on peripheral blood smears with greater precision and more accurate detection of distributional and morphologic abnormalities than those performed by manual examinations of blood smears. Manual determinations of blood morphology and leukocyte differential counts are time-consuming, expensive, and may not always be necessary. The frequency with which hematology laboratory workers perform manual screens despite the availability of labor-saving features of automated analyzers is unknown. To determine the normative rates with which manual peripheral blood smears were performed in clinical laboratories, to examine laboratory practices associated with higher or lower manual review rates, and to measure the effects of manual smear review on the efficiency of generating complete blood count (CBC) determinations. From each of 3 traditional shifts per day, participants were asked to serially select 10 automated CBC specimens and to indicate whether manual scans and/or reviews with complete differential counts were performed on blood smears prepared from those specimens. Sampling continued until a total of 60 peripheral smears were reviewed manually. For each specimen on which a manual review was performed, participants indicated the patient's age, hemoglobin value, white blood cell count, platelet count, and the primary reason why the manual review was performed. Participants also submitted data concerning their institutions' demographic profiles and their laboratories' staffing, work volume, and practices regarding CBC determinations. The rates of manual reviews and estimations of efficiency in performing CBC determinations were obtained from the data. A total of 263 hospitals and independent laboratories, predominantly located in the United States, participated in the College of American Pathologists Q-Probes Program. There were 95,141 CBC determinations examined in this study; participants reviewed 15,423 (16.2%) peripheral blood smears manually. In the median institution (50th percentile), manual reviews of peripheral smears were performed on 26.7% of specimens. Manual differential count review rates were inversely associated with the magnitude of platelet counts that were required by laboratory policy to trigger smear reviews and with the efficiency of generating CBC reports. Lower manual differential count review rates were associated with laboratory policies that allowed manual reviews solely on the basis of abnormal automated red cell parameters and that precluded performing repeat manual reviews within designated time intervals. The manual scan rate increased with the number of hospital beds. In more than one third (35.7%) of the peripheral smears reviewed manually, participants claimed to have learned additional information beyond what was available on automated hematology analyzer printouts alone. By adopting certain laboratory practices, it may be possible to reduce the rates of manual reviews of peripheral blood smears and increase the efficiency of generating CBC results.
Linear-log counting-rate meter uses transconductance characteristics of a silicon planar transistor
NASA Technical Reports Server (NTRS)
Eichholz, J. J.
1969-01-01
Counting rate meter compresses a wide range of data values, or decades of current. Silicon planar transistor, operating in the zero collector-base voltage mode, is used as a feedback element in an operational amplifier to obtain the log response.
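For context, the sketch below evaluates the ideal logarithmic relation that such a transistor feedback element exploits, V_BE = (kT/q)·ln(I_C/I_S); the saturation current and the span of input currents are assumed values for illustration, not figures from the brief.

```python
# Hedged illustration of the principle: with a BJT feedback element at zero
# collector-base voltage, I_C ≈ I_S·exp(q·V_BE/kT), so the op-amp output
# tracks the logarithm of the input current (~60 mV per decade at room
# temperature). I_S and the current span are assumed values.
import numpy as np

kT_over_q = 0.02585                       # thermal voltage at ~300 K (volts)
I_S = 1e-14                               # assumed transistor saturation current (A)

input_current = np.logspace(-9, -3, 7)    # six decades of "count rate" current
v_out = -kT_over_q * np.log(input_current / I_S)   # inverting log response
for i_in, v in zip(input_current, v_out):
    print(f"I = {i_in:.0e} A  ->  Vout = {v:+.3f} V")
```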
The use of noise equivalent count rate and the NEMA phantom for PET image quality evaluation.
Yang, Xin; Peng, Hao
2015-03-01
PET image quality is directly associated with two important parameters among others: count-rate performance and image signal-to-noise ratio (SNR). The framework of noise equivalent count rate (NECR) was developed back in the 1990s and has been widely used since then to evaluate count-rate performance for PET systems. The concept of NECR is not entirely straightforward, however, and among the issues requiring clarification are its original definition, its relationship to image quality, and its consistency among different derivation methods. In particular, we try to answer whether a higher NECR measurement using a standard NEMA phantom actually corresponds to better imaging performance. The paper includes the following topics: 1) revisiting the original analytical model for NECR derivation; 2) validating three methods for NECR calculation based on the NEMA phantom/standard; and 3) studying the spatial dependence of NECR and quantitative relationship between NECR and image SNR. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
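As a minimal numeric illustration of the quantity under discussion, the snippet below evaluates the commonly quoted NECR expression NECR = T²/(T + S + kR); whether k is 1 or 2 depends on how randoms are estimated, and the rates used here are invented.

```python
# Minimal sketch of the usual noise-equivalent count rate expression,
# NECR = T^2 / (T + S + k*R), with T, S, R the true, scattered and random
# coincidence rates; k is 1 or 2 depending on how randoms are estimated
# (k = 2 for delayed-window subtraction). The rates below are invented.
def necr(trues, scatter, randoms, k=2.0):
    denom = trues + scatter + k * randoms
    return trues ** 2 / denom if denom > 0 else 0.0

print(f"NECR ~ {necr(trues=120e3, scatter=40e3, randoms=60e3):.0f} counts/s")
```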
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hollister, R
2009-08-26
Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
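A minimal sketch of the kind of simulation and dispersion diagnostic the tutorial covers: counts are generated as events per interval, and a variance-to-mean ratio far from 1 flags over- or underdispersion. The rates and the lognormal spread used below are arbitrary choices for illustration.

```python
# A minimal sketch of simulating counts per interval and checking dispersion:
# for a Poisson model the variance equals the mean, so a variance/mean ratio
# well above 1 indicates overdispersion. Rates below are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_intervals = 500, 10

# Homogeneous case: every subject shares the same event rate per interval.
homogeneous = rng.poisson(lam=2.0, size=(n_subjects, n_intervals)).sum(axis=1)

# Heterogeneous case: subject-specific rates produce overdispersed totals.
subject_rates = rng.lognormal(mean=np.log(2.0), sigma=0.6, size=n_subjects)
heterogeneous = rng.poisson(subject_rates[:, None] * np.ones(n_intervals)).sum(axis=1)

for name, x in [("homogeneous", homogeneous), ("subject-varying", heterogeneous)]:
    print(f"{name:15s} mean={x.mean():5.1f}  variance/mean={x.var() / x.mean():.2f}")
```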
Reducing the Teen Death Rate. KIDS COUNT Indicator Brief
ERIC Educational Resources Information Center
Shore, Rima; Shore, Barbara
2009-01-01
Life continues to hold considerable risk for adolescents in the United States. In 2006, the teen death rate stood at 64 deaths per 100,000 teens (13,739 teens) (KIDS COUNT Data Center, 2009). Although it has declined by 4 percent since 2000, the rate of teen death in this country remains substantially higher than in many peer nations, based…
Chen, Jinghua; Ren, Yichao; Li, Yuquan; Xia, Bin
2018-06-01
Bioflocs are not only a source of supplemental nutrition but also provide substantial probiotic bacteria and bioactive compounds, which play an important role in improving the physiological health of aquatic organisms. A 60-day experiment was conducted to investigate the growth, intestinal microbiota, non-specific immune response and disease resistance of sea cucumber in biofloc systems with different carbon sources (glucose, sucrose and starch). A control (no biofloc) and three biofloc systems were set up, and each group had three replicates. The results showed that biofloc volume (BFV) and total suspended solids (TSS) increased in the sequences of glucose > sucrose > starch and green sea cucumber > white sea cucumber during the experiment. The highest specific growth rates (SGRs) were observed in the biofloc system with glucose as carbon source, which also had relatively lower glucose, lactate and cortisol levels in coelomic fluid and higher glycogen content in muscle compared to other groups. There were significantly increased Bacillus and Lactobacillus counts in the sea cucumber intestine in the biofloc systems, and the activities of superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) also showed clear ascending trends. Significant increases in total coelomocyte counts (TCC), phagocytosis, respiratory burst, complement C3 content and lysozyme (LSZ) and acid phosphatase (ACP) activities of sea cucumber were all found in the biofloc system (glucose). The expression patterns of most immune-related genes (i.e. Hsp90, Hsp70, c-type lectin (CL), toll-like receptor (TLR)) were up-regulated, suggesting that biofloc promotes pathogen recognition ability and the activation of immune signaling pathways. Furthermore, green and white sea cucumber had significantly higher survival rates in biofloc systems during the 14-day challenge test. In conclusion, biofloc technology could improve the growth and physiological health of A. japonicus by optimizing intestinal microbiota, strengthening antioxidant ability, and enhancing non-specific immune response and disease resistance against pathogens; glucose is recommended as the optimal carbon source in biofloc systems for sea cucumber culture. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Matsuura, Hideharu
2015-04-01
High-resolution silicon X-ray detectors with a large active area are required for effectively detecting traces of hazardous elements in food and soil through the measurement of the energies and counts of X-ray fluorescence photons radially emitted from these elements. The thicknesses and areas of commercial silicon drift detectors (SDDs) are up to 0.5 mm and 1.5 cm2, respectively. We describe 1.5-mm-thick gated SDDs (GSDDs) that can detect photons with energies up to 50 keV. We simulated the electric potential distributions in GSDDs with a Si thickness of 1.5 mm and areas from 0.18 to 168 cm2 at a single high reverse bias. The area of a GSDD could be enlarged simply by increasing all the gate widths by the same multiple, and the capacitance of the GSDD remained small and its X-ray count rate remained high.
The Hard X-ray Emission from Scorpius X-1 as Seen by INTEGRAL
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.
2008-01-01
We present the results of our hard X-ray and gamma-ray study of the LMXB Sco X-1 utilizing INTEGRAL data as well as contemporaneous RXTE PCA data. We have concentrated on investigating the hard X-ray spectral properties of Sco X-1 including the nature of the high-energy, nonthermal component of the spectrum and its possible correlations with the location of the source on the X-ray color-color diagram. We find that Sco X-1 has two distinct spectral states when the 20-40 keV count rate is greater than 140 counts/second. One state is a hard state which exhibits a significant high-energy, power-law tail to the lower energy thermal spectrum. The other state shows no evidence for a power-law tail whatsoever. We found suggestive evidence for a correlation of these hard and soft high-energy states with the position of Sco X-1 on the low-energy X-ray color-color diagram.
A method to stabilise the performance of negatively fed KM3NeT photomultipliers
NASA Astrophysics Data System (ADS)
Adrián-Martínez, S.; Ageron, M.; Aiello, S.; Albert, A.; Ameli, F.; Anassontzis, E. G.; Andre, M.; Androulakis, G.; Anghinolfi, M.; Anton, G.; Ardid, M.; Avgitas, T.; Barbarino, G.; Barbarito, E.; Baret, B.; Barrios-Martí, J.; Belias, A.; Berbee, E.; van den Berg, A.; Bertin, V.; Beurthey, S.; van Beveren, V.; Beverini, N.; Biagi, S.; Biagioni, A.; Billault, M.; Bondì, M.; Bormuth, R.; Bouhadef, B.; Bourlis, G.; Bourret, S.; Boutonnet, C.; Bouwhuis, M.; Bozza, C.; Bruijn, R.; Brunner, J.; Buis, E.; Buompane, R.; Busto, J.; Cacopardo, G.; Caillat, L.; Calamai, M.; Calvo, D.; Capone, A.; Caramete, L.; Cecchini, S.; Celli, S.; Champion, C.; Cherubini, S.; Chiarella, V.; Chiarelli, L.; Chiarusi, T.; Circella, M.; Classen, L.; Cobas, D.; Cocimano, R.; Coelho, J. A. B.; Coleiro, A.; Colonges, S.; Coniglione, R.; Cordelli, M.; Cosquer, A.; Coyle, P.; Creusot, A.; Cuttone, G.; D'Amato, C.; D'Amico, A.; D'Onofrio, A.; De Bonis, G.; De Sio, C.; Di Capua, F.; Di Palma, I.; Distefano, C.; Donzaud, C.; Dornic, D.; Dorosti-Hasankiadeh, Q.; Drakopoulou, E.; Drouhin, D.; Durocher, M.; Eberl, T.; Eichie, S.; van Eijk, D.; El Bojaddaini, I.; Elsaesser, D.; Enzenhöfer, A.; Favaro, M.; Fermani, P.; Ferrara, G.; Frascadore, G.; Furini, M.; Fusco, L. A.; Gal, T.; Galatà, S.; Garufi, F.; Gay, P.; Gebyehu, M.; Giacomini, F.; Gialanella, L.; Giordano, V.; Gizani, N.; Gracia, R.; Graf, K.; Grégoire, T.; Grella, G.; Grmek, A.; Guerzoni, M.; Habel, R.; Hallmann, S.; van Haren, H.; Harissopulos, S.; Heid, T.; Heijboer, A.; Heine, E.; Henry, S.; Hernández-Rey, J. J.; Hevinga, M.; Hofestädt, J.; Hugon, C. M. F.; Illuminati, G.; James, C. W.; Jansweijer, P.; Jongen, M.; de Jong, M.; Kadler, M.; Kalekin, O.; Kappes, A.; Katz, U. F.; Keller, P.; Kieft, G.; Kießling, D.; Koffeman, E. N.; Kooijman, P.; Kouchner, A.; Kreter, M.; Kulikovskiy, V.; Lahmann, R.; Lamare, P.; Leisos, A.; Leonora, E.; Clark, M. Lindsey; Liolios, A.; Llorens Alvarez, C. D.; Lo Presti, D.; Löhner, H.; Lonardo, A.; Lotze, M.; Loucatos, S.; Maccioni, E.; Mannheim, K.; Manzali, M.; Margiotta, A.; Margotti, A.; Marinelli, A.; Mariš, O.; Markou, C.; Martínez-Mora, J. A.; Martini, A.; Marzaioli, F.; Mele, R.; Melis, K. W.; Michael, T.; Migliozzi, P.; Migneco, E.; Mijakowski, P.; Miraglia, A.; Mollo, C. M.; Mongelli, M.; Morganti, M.; Moussa, A.; Musico, P.; Musumeci, M.; Nicolau, C. A.; Olcina, I.; Olivetto, C.; Orlando, A.; Orzelli, A.; Pancaldi, G.; Paolucci, A.; Papaikonomou, A.; Papaleo, R.; Păvălaš, G. E.; Peek, H.; Pellegrini, G.; Pellegrino, C.; Perrina, C.; Pfutzner, M.; Piattelli, P.; Pikounis, K.; Poma, G. E.; Popa, V.; Pradier, T.; Pratolongo, F.; Pühlhofer, G.; Pulvirenti, S.; Quinn, L.; Racca, C.; Raffaelli, F.; Randazzo, N.; Real, D.; Resvanis, L.; Reubelt, J.; Riccobene, G.; Rossi, C.; Rovelli, A.; Saldaña, M.; Salvadori, I.; Samtleben, D. F. E.; Sánchez García, A.; Sánchez Losa, A.; Sanguineti, M.; Santangelo, A.; Santonocito, D.; Sapienza, P.; Schimmel, F.; Schmelling, J.; Schnabel, J.; Sciacca, V.; Sedita, M.; Seitz, T.; Sgura, I.; Simeone, F.; Sipala, V.; Spisso, B.; Spurio, M.; Stavropoulos, G.; Steijger, J.; Stellacci, S. M.; Stransky, D.; Taiuti, M.; Tayalati, Y.; Terrasi, F.; Tézier, D.; Theraube, S.; Timmer, P.; Töonnis, C.; Trasatti, L.; Travaglini, R.; Trovato, A.; Tsirigotis, A.; Tzamarias, S.; Tzamariudaki, E.; Vallage, B.; Van Elewyck, V.; Vermeulen, J.; Versari, F.; Vicini, P.; Viola, S.; Vivolo, D.; Volkert, M.; Wiggers, L.; Wilms, J.; de Wolf, E.; Zachariadou, K.; Zani, S.; Zornoza, J. D.; Zúñiga, J.
2016-12-01
The KM3NeT research infrastructure, currently under construction in the Mediterranean Sea, will host neutrino telescopes for the identification of neutrino sources in the Universe and for studies of the neutrino mass hierarchy. These telescopes will house hundreds of thousands of photomultiplier tubes that will have to be operated in a stable and reliable fashion. In this context, the stability of the dark counts has been investigated for photomultiplier tubes with negative high voltage on the photocathode and held in insulating support structures made of 3D printed nylon material. Small gaps between the rigid support structure and the photomultiplier tubes in the presence of electric fields can lead to discharges that produce dark count rates that are highly variable. A solution was found by applying the same insulating varnish as used for the high voltage bases directly to the outside of the photomultiplier tubes. This transparent conformal coating provides a convenient and inexpensive method of insulation.
A XMM-Newton Observation of Nova LMC 1995, a Bright Supersoft X-ray Source
NASA Technical Reports Server (NTRS)
Orio, Marina; Hartmann, Wouter; Still, Martin; Greiner, Jochen
2003-01-01
Nova LMC 1995, previously detected during 1995-1998 with ROSAT, was observed again as a luminous supersoft X-ray source with XMM-Newton in December of 2000. This nova offers the possibility to observe the spectrum of a hot white dwarf, burning hydrogen in a shell and not obscured by a wind or by nebular emission as in other supersoft X-ray sources. Notwithstanding uncertainties in the calibration of the EPIC instruments at energy E < 0.5 keV, using atmospheric models in non-local thermodynamic equilibrium we derived an effective temperature in the range 400,000-450,000 K and a bolometric luminosity Lbol ≈ 2.3 × 10^37 erg s^-1, and we verified that the abundance of carbon is not significantly enhanced in the X-ray emitting shell. The RGS grating spectra do not show the emission lines (originating in a nebula or a wind) observed for some other supersoft X-ray sources. The crowded atmospheric absorption lines of the white dwarf cannot be resolved. There is no hard component (expected from a wind, a surrounding nebula or an accretion disk), with no counts above the background at E > 0.6 keV, and an upper limit Fx,hard = 10^-14 erg s^-1 cm^-2 on the X-ray flux above this energy. The background-corrected count rate measured by the EPIC instruments was variable on time scales of minutes and hours, but without the flares or sudden obscurations observed for other novae. The power spectrum shows a peak at 5.25 hours, possibly due to a modulation with the orbital period. We also briefly discuss the scenarios in which this nova may become a type Ia supernova progenitor.
Fever in trauma patients: evaluation of risk factors, including traumatic brain injury.
Bengualid, Victoria; Talari, Goutham; Rubin, David; Albaeni, Aiham; Ciubotaru, Ronald L; Berger, Judith
2015-03-01
The role of fever in trauma patients remains unclear. Fever occurs as a response to release of cytokines and prostaglandins by white blood cells. Many factors, including trauma, can trigger release of these factors. To determine whether (1) fever in the first 48 hours is related to a favorable outcome in trauma patients and (2) fever is more common in patients with head trauma. Retrospective study of trauma patients admitted to the intensive care unit for at least 2 days. Data were analyzed by using multivariate analysis. Of 162 patients studied, 40% had fever during the first 48 hours. Febrile patients had higher mortality rates than did afebrile patients. When adjusted for severity of injuries, fever did not correlate with mortality. Neither the incidence of fever in the first 48 hours after admission to the intensive care unit nor the number of days febrile in the unit differed between patients with and patients without head trauma (traumatic brain injury). About 70% of febrile patients did not have a source found for their fever. Febrile patients without an identified source of infection had lower peak white blood cell counts, lower maximum body temperature, and higher minimum platelet counts than did febrile patients who had an infectious source identified. The most common infection was pneumonia. No relationship was found between the presence of fever during the first 48 hours and mortality. Patients with traumatic brain injury did not have a higher incidence of fever than did patients without traumatic brain injury. About 30% of febrile patients had an identifiable source of infection. Further studies are needed to understand the origin and role of fever in trauma patients. ©2015 American Association of Critical-Care Nurses.
CASA-Mot technology: how results are affected by the frame rate and counting chamber.
Bompart, Daznia; García-Molina, Almudena; Valverde, Anthony; Caldeira, Carina; Yániz, Jesús; Núñez de Murga, Manuel; Soler, Carles
2018-04-04
For over 30 years, CASA-Mot technology has been used for kinematic analysis of sperm motility in different mammalian species, but insufficient attention has been paid to the technical limitations of commercial computer-aided sperm analysis (CASA) systems. Counting chamber type and frame rate are two of the most important aspects to be taken into account. Counting chambers can be disposable or reusable, with different depths. In human semen analysis, reusable chambers with a depth of 10 µm are the most frequently used, whereas for most farm animal species it is more common to use disposable chambers with a depth of 20 µm. The frame rate was previously limited by the hardware, although changes in the number of images collected could lead to significant variations in some kinematic parameters, mainly in curvilinear velocity (VCL). A frame rate of 60 frames s⁻¹ is widely considered to be the minimum necessary for satisfactory results. However, the frame rate is species specific and must be defined in each experimental condition. In conclusion, we show that the optimal combination of frame rate and counting chamber type and depth should be defined for each species and experimental condition in order to obtain reliable results.
Plastic Scintillator Based Detector for Observations of Terrestrial Gamma-ray Flashes.
NASA Astrophysics Data System (ADS)
Barghi, M. R., Sr.; Delaney, N.; Forouzani, A.; Wells, E.; Parab, A.; Smith, D.; Martinez, F.; Bowers, G. S.; Sample, J.
2017-12-01
We present an overview of the concept and design of the Light and Fast TGF Recorder (LAFTR), a balloon-borne gamma-ray detector designed to observe Terrestrial Gamma-Ray Flashes (TGFs): extremely bright, sub-millisecond bursts of gamma rays observed to originate inside thunderclouds coincident with lightning. LAFTR is a joint institutional project built by undergraduates at the University of California Santa Cruz and Montana State University. It consists of a detector system fed into analog front-end electronics and digital processing. The presentation focuses specifically on the UCSC components, which consist of the detector system and analog front-end electronics. Because of the extremely high count rates observed during TGFs, speed is essential for both the detector and the electronics of the instrument. The detector employs a fast plastic scintillator (BC-408) read out by a SensL Silicon Photomultiplier (SiPM). BC-408 was chosen for its speed (~4 ns decay time), low cost, and availability. Furthermore, GEANT3 simulations confirm the scintillator is sensitive to 500 counts at 7 km horizontal distance from the TGF source (for a 13 km source altitude and 26 km balloon altitude) and to 5 counts out to 20 km. The signal from the SiPM has a long exponential decay tail and is sent to a custom shaping circuit board that amplifies and shapes the signal into a semi-Gaussian pulse with a 40 ns FWHM. The signal is then input to a 6-channel discriminator board that clamps the signal and outputs a Low Voltage Differential Signal (LVDS) for processing by the digital electronics.
Chornobyl 30 years later: Radiation, pregnancies, and developmental anomalies in Rivne, Ukraine.
Wertelecki, Wladimir; Chambers, Christina D; Yevtushok, Lyubov; Zymak-Zakutnya, Natalya; Sosyniuk, Zoriana; Lapchenko, Serhiy; Ievtushok, Bogdana; Akhmedzhanova, Diana; Komov, Oleksandr
2017-01-01
In the 30 years since the Chornobyl nuclear power plant disaster, there is evidence of persistent levels of incorporated ionizing radiation in adults, children and pregnant women in the surrounding area. Measured levels of Cesium-137 vary by region, and may be influenced by dietary and water sources as well as proximity to nuclear power plants. Since 2000, comprehensive, population-based birth defects monitoring has been performed in selected regions of Ukraine to evaluate trends and to generate hypotheses regarding potential causes of unexplained variations in defect rates. Significantly higher rates of microcephaly, neural tube defects, and microphthalmia have been identified in selected regions of Ukraine collectively known as Polissia compared to adjacent regions collectively termed non-Polissia, and these significantly higher rates were evident particularly in the years 2000-2009. The Polissia regions have also demonstrated higher mean whole body counts of Cesium-137 compared to values in individuals residing in other non-Polissia regions. The potential causal relationship between persistent ionizing radiation pollution and selected congenital anomaly rates supports the need for a more thorough, targeted investigation of the sources of persistent ionizing radiation and the biological plausibility of a potential teratogenic effect. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Budy, Phaedra; Bowerman, Tracy; Al-Chokhachy, Robert K.; Conner, Mary; Schaller, Howard
2017-01-01
Temporal symmetry models (TSM) represent advances in the analytical application of mark–recapture data to population status assessments. For a population of char, we employed 10 years of active and passive mark–recapture data to quantify population growth rates using different data sources and analytical approaches. Estimates of adult population growth rate were 1.01 (95% confidence interval = 0.84–1.20) using a temporal symmetry model (λTSM), 0.96 (0.68–1.34) based on logistic regressions of annual snorkel data (λA), and 0.92 (0.77–1.11) from redd counts (λR). Top-performing TSMs included an increasing time trend in recruitment (f) and changes in capture probability (p). There was only a 1% chance the population decreased ≥50%, and a 10% chance it decreased ≥30% (λMCMC; based on Bayesian Markov chain Monte Carlo procedure). Size structure was stable; however, the adult population was dominated by small adults, and over the study period there was a decline in the contribution of large adults to total biomass. Juvenile condition decreased with increasing adult densities. Utilization of these different information sources provided a robust weight-of-evidence approach to identifying population status and potential mechanisms driving changes in population growth rates.
Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transitions smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras does not allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results.
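A hedged sketch of the counting mode described above: at low rate, individual conversion events show up as isolated bright spots in a camera frame and can be counted by thresholding and labelling connected regions. Frame size, spot amplitude, blur, and threshold below are assumptions for illustration, not parameters of the actual instrument.

```python
# Hedged sketch of low-rate event counting with an imaging camera: isolated
# bright deposits are counted by thresholding a frame and labelling connected
# regions. Frame size, deposit amplitude, blur and threshold are assumptions.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

def simulated_frame(n_events, shape=(256, 256), noise_sigma=5.0, amplitude=400.0):
    frame = rng.normal(0.0, noise_sigma, size=shape)
    ys = rng.integers(0, shape[0], n_events)
    xs = rng.integers(0, shape[1], n_events)
    frame[ys, xs] += amplitude                         # point-like energy deposits
    return ndimage.gaussian_filter(frame, sigma=1.5)   # optics/scintillator blur

frame = simulated_frame(n_events=40)
threshold = 8.0 * frame.std()                          # crude global threshold
labels, n_found = ndimage.label(frame > threshold)
print(f"events counted: {n_found} (40 generated)")
```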
NASA Astrophysics Data System (ADS)
Enderlein, Joerg; Ruhlandt, Daja; Chithik, Anna; Ebrecht, René; Wouters, Fred S.; Gregor, Ingo
2016-02-01
Fluorescence lifetime microscopy has become an important method of bioimaging, allowing one to record not only intensity and spectral information but also lifetime information across an image. One of the most widely used methods of FLIM is based on Time-Correlated Single Photon Counting (TCSPC). In TCSPC, one determines the fluorescence decay curve by exciting molecules with a periodic train of short laser pulses and then measuring the time delay between each exciting laser pulse and the first fluorescence photon recorded after it. An important technical detail of TCSPC measurements is the fact that the delay times are always measured between a laser pulse and the first fluorescence photon which is detected after that pulse. At high count rates, this leads to so-called pile-up: "early" photons eclipse long-delay photons, resulting in heavily skewed TCSPC histograms. To avoid pile-up, a rule of thumb is to perform TCSPC measurements at photon count rates which are at least one hundred times smaller than the laser-pulse excitation rate. The downside of this approach is that the fluorescence-photon count rate is restricted to a value below one hundredth of the laser-pulse excitation rate, reducing the overall speed with which a fluorescence signal can be measured. We present a new data evaluation method which provides pile-up corrected fluorescence decay estimates from TCSPC measurements at high count rates, and we demonstrate our method on FLIM of fluorescently labeled cells.
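For reference, the sketch below shows the classical single-stop pile-up correction often attributed to Coates, which is one simple alternative to, and not the authors' new, evaluation method: the observed histogram is distorted because only cycles with no earlier photon can contribute to a given bin, and that distortion can be inverted bin by bin. The decay time, bin count, and photons-per-pulse rate are invented for the example.

```python
# Sketch of the classical single-stop pile-up correction (often attributed to
# Coates), shown here as background; it is not the new method of the paper.
# Observed counts: n_i = N * q_i * prod_{j<i}(1 - q_j); inverting gives
# q_i = n_i / (N - sum_{j<i} n_j).
import numpy as np

def pileup_distort(q, n_cycles):
    """Expected histogram when only the first photon of each cycle is kept."""
    survive = np.concatenate(([1.0], np.cumprod(1.0 - q)[:-1]))
    return n_cycles * q * survive

def pileup_correct(hist, n_cycles):
    """Recover per-cycle detection probabilities from a piled-up histogram."""
    earlier = np.concatenate(([0.0], np.cumsum(hist)[:-1]))
    return hist / (n_cycles - earlier)

n_cycles, n_bins, tau, mu = 1_000_000, 256, 40.0, 0.5   # 0.5 photons/pulse: heavy pile-up
t = np.arange(n_bins)
decay = np.exp(-t / tau)
q_true = mu * decay / decay.sum()            # true per-cycle probability per bin

observed = pileup_distort(q_true, n_cycles)
q_recovered = pileup_correct(observed, n_cycles)
print("max relative error after correction:",
      float(np.max(np.abs(q_recovered - q_true) / q_true)))
```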
200-300 N. Stetson, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates throughout the grading ranged from 4,500 cpm to 8,000 cpm. No count rates were found at any time that exceeded the threshold limits of 17,246 cpm and 18,098 cpm.
0 - 36 W. Illinois St., January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limits of 6,738 cpm and 7,029 cpm.
400-449 N. State St, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limits of 6,338 cpm and 7,038 cpm.
Database crime to crime match rate calculation.
Buckleton, John; Bright, Jo-Anne; Walsh, Simon J
2009-06-01
Guidance exists on how to count matches between samples in a crime sample database but we are unable to locate a definition of how to estimate a match rate. We propose a method that does not proceed from the match counting definition but which has a strong logic.
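Purely as an illustration of the ambiguity the authors point out (and not their proposed estimator), one naive definition divides the number of matching pairs by the number of pairwise comparisons n(n-1)/2; the profile labels below are hypothetical.

```python
# Illustration only (not the estimator proposed in the paper): a naive
# crime-to-crime match rate, i.e. matching pairs divided by the number of
# pairwise comparisons n(n-1)/2. Profiles below are hypothetical labels.
from itertools import combinations

def naive_match_rate(profiles):
    pairs = list(combinations(profiles, 2))
    matches = sum(1 for a, b in pairs if a == b)
    return matches / len(pairs) if pairs else 0.0

crime_db = ["P1", "P2", "P3", "P1", "P4", "P2", "P1"]
print(f"naive match rate = {naive_match_rate(crime_db):.3f}")   # 4 matches / 21 pairs
```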
The Complete Z-diagram of LMC X-2
NASA Technical Reports Server (NTRS)
White, Nicholas E. (Technical Monitor); Smale, A. P.; Homan, J.; Kuulkers, E.
2003-01-01
We present results from four Rossi X-ray Timing Explorer (RXTE) observations of the bright low mass X-ray binary LMC X-2. During these observations, which span a year and include over 160 hrs of data, the source exhibits clear evolution through three branches on its hardness-intensity and color-color diagrams, consistent with the flaring, normal, and horizontal branches (FB, NB, HB) of a Z-source, and remarkably similar to Z-tracks derived for GX 17+2, Sco X-1 and GX 349+2. LMC X-2 was observed in the FB, NB, and HB for roughly 30%, 40%, and 30%, respectively, of the total time covered. The source traces out the full extent of the Z in approximately 1 day, and the Z-track shows evidence for secular shifts on a timescale in excess of a few days. Although the count rate of LMC X-2 is low compared with the other known Z-sources due to its greater distance, the power density spectra selected by branch show very-low-frequency noise characteristics at least consistent with those from other Z-sources. We thus confirm the identification of LMC X-2 as a Z-source, the first identified outside our Galaxy.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 2 2011-10-01 2011-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 2 2013-10-01 2012-10-01 true Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 2 2012-10-01 2012-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; ...
2015-04-09
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
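A much simplified Monte Carlo sketch in the same spirit (not the PILATUS3 firmware model): photons arrive as a Poisson process and a paralyzable counter with dead time τ registers an event only if it arrives at least τ after the previous photon. The dead time value below is an assumption chosen for illustration.

```python
# Simplified Monte Carlo sketch of dead-time loss (not the PILATUS3 firmware
# model): photons arrive as a Poisson process and a paralyzable counter only
# registers a photon arriving at least `dead_time` after the previous one.
import numpy as np

def observed_rate(true_rate, dead_time, duration=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(true_rate * duration)
    if n == 0:
        return 0.0
    arrivals = np.sort(rng.uniform(0.0, duration, n))
    # paralyzable model: any photon (counted or not) restarts the dead period
    counted = 1 + int(np.sum(np.diff(arrivals) >= dead_time))
    return counted / duration

tau = 120e-9                                  # assumed dead time (seconds)
for rate in (1e5, 1e6, 5e6, 1e7):
    print(f"true {rate:9.0f} /s  ->  observed {observed_rate(rate, tau):9.0f} /s")
```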
Silva, H G; Lopes, I
Heliospheric modulation of galactic cosmic rays links solar cycle activity with neutron monitor count rate on earth. A less direct relation holds between neutron monitor count rate and atmospheric electric field because different atmospheric processes, including fluctuations in the ionosphere, are involved. Although a full quantitative model is still lacking, this link is supported by solid statistical evidence. Thus, a connection between the solar cycle activity and atmospheric electric field is expected. To gain a deeper insight into these relations, sunspot area (NOAA, USA), neutron monitor count rate (Climax, Colorado, USA), and atmospheric electric field (Lisbon, Portugal) are presented here in a phase space representation. The period considered covers two solar cycles (21, 22) and extends from 1978 to 1990. Two solar maxima were observed in this dataset, one in 1979 and another in 1989, as well as one solar minimum in 1986. Two main observations of the present study were: (1) similar short-term topological features of the phase space representations of the three variables, (2) a long-term phase space radius synchronization between the solar cycle activity, neutron monitor count rate, and potential gradient (confirmed by absolute correlation values above ~0.8). Finally, the methodology proposed here can be used for obtaining the relations between other atmospheric parameters (e.g., solar radiation) and solar cycle activity.
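One way such a phase-space representation can be built is sketched below, assuming a simple time-delay embedding of standardized monthly series; the lag, the synthetic "sunspot" and "neutron monitor" proxies, and the noise levels are all invented for illustration and are not the datasets used in the study.

```python
# Hedged sketch of a phase-space construction, assuming a simple time-delay
# embedding of standardized monthly series; lag, the synthetic proxies and
# the noise levels are all invented here.
import numpy as np

def phase_space_radius(series, lag=12):
    x = (series - series.mean()) / series.std()
    return np.hypot(x[:-lag], x[lag:])        # radius in the (x(t), x(t+lag)) plane

rng = np.random.default_rng(5)
months = np.arange(12 * 13)                   # roughly one solar cycle of monthly data
sunspot_like = 80 + 70 * np.sin(2 * np.pi * months / 132) + rng.normal(0, 10, months.size)
neutron_like = 100 - 0.5 * sunspot_like + rng.normal(0, 5, months.size)  # anticorrelated proxy

r_sun = phase_space_radius(sunspot_like)
r_neu = phase_space_radius(neutron_like)
print("phase-space radius correlation:", round(float(np.corrcoef(r_sun, r_neu)[0, 1]), 2))
```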
NASA Astrophysics Data System (ADS)
Scott, K. S.; Yun, M. S.; Wilson, G. W.; Austermann, J. E.; Aguilar, E.; Aretxaga, I.; Ezawa, H.; Ferrusca, D.; Hatsukade, B.; Hughes, D. H.; Iono, D.; Giavalisco, M.; Kawabe, R.; Kohno, K.; Mauskopf, P. D.; Oshima, T.; Perera, T. A.; Rand, J.; Tamura, Y.; Tosaki, T.; Velazquez, M.; Williams, C. C.; Zeballos, M.
2010-07-01
We present the first results from a confusion-limited map of the Great Observatories Origins Deep Survey-South (GOODS-S) taken with the AzTEC camera on the Atacama Submillimeter Telescope Experiment. We imaged a field to a 1σ depth of 0.48-0.73 mJy beam⁻¹, making this one of the deepest blank-field surveys at mm wavelengths ever achieved. Although by traditional standards our GOODS-S map is extremely confused due to a sea of faint underlying sources, we demonstrate through simulations that our source identification and number counts analyses are robust, and the techniques discussed in this paper are relevant for other deeply confused surveys. We find a total of 41 dusty starburst galaxies with signal-to-noise ratios S/N ≥ 3.5 within this uniformly covered region, where only two are expected to be false detections, and an additional seven robust source candidates located in the noisier (1σ ~ 1 mJy beam⁻¹) outer region of the map. We derive the 1.1 mm number counts from this field using two different methods: a fluctuation or "P(d)" analysis and a semi-Bayesian technique, and find that both methods give consistent results. Our data are well fit by a Schechter function model. Given the depth of this survey, we put the first tight constraints on the 1.1 mm number counts at S1.1mm = 0.5 mJy, and we find evidence that the faint end of the number counts from various SCUBA surveys towards lensing clusters is biased high. In contrast to the 870 μm survey of this field with the LABOCA camera, we find no apparent underdensity of sources compared to previous surveys at 1.1 mm; the estimates of the number counts of SMGs at flux densities >1 mJy determined here are consistent with those measured from the AzTEC/SHADES survey. Additionally, we find a significant number of SMGs not identified in the LABOCA catalogue. We find that, in contrast to observations at λ ≤ 500 μm, MIPS 24 μm sources do not resolve the total energy density in the cosmic infrared background at 1.1 mm, demonstrating that a population of z ≳ 3 dust-obscured galaxies that is unaccounted for at these shorter wavelengths potentially contributes a large fraction (~2/3) of the infrared background at 1.1 mm.
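For orientation, the snippet below evaluates a Schechter-type differential number-counts model of the general form used in such surveys, dN/dS = N′(S/S′)^(−α) exp(−S/S′), and integrates it to cumulative counts N(>S); the parameter values are placeholders, not the fit reported in the paper.

```python
# Sketch of a Schechter-type differential number-counts model of the general
# form dN/dS = N' * (S/S')**(-alpha) * exp(-S/S'), integrated to cumulative
# counts N(>S). Parameter values are placeholders, not the published fit.
import numpy as np

def schechter_counts(s, n_prime, s_prime, alpha):
    return n_prime * (s / s_prime) ** (-alpha) * np.exp(-s / s_prime)

s = np.linspace(0.5, 10.0, 500)                                    # flux density (mJy)
dnds = schechter_counts(s, n_prime=200.0, s_prime=1.3, alpha=2.0)  # per mJy per deg^2
ds = np.gradient(s)
n_gt_s = np.flip(np.cumsum(np.flip(dnds * ds)))                    # N(>S), bright to faint
print(f"placeholder model: N(>1 mJy) ~ {np.interp(1.0, s, n_gt_s):.0f} per deg^2")
```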
Radiocarbon-based ages and growth rates of bamboo corals from the Gulf of Alaska
NASA Astrophysics Data System (ADS)
Roark, E. Brendan; Guilderson, Thomas P.; Flood-Page, Sarah; Dunbar, Robert B.; Ingram, B. Lynn; Fallon, Stewart J.; McCulloch, Malcolm
2005-02-01
Deep-sea coral communities have long been recognized by fisherman as areas that support large populations of commercial fish. As a consequence, many deep-sea coral communities are threatened by bottom trawling. Successful management and conservation of this widespread deep-sea habitat requires knowledge of the age and growth rates of deep-sea corals. These organisms also contain important archives of intermediate and deep-water variability, and are thus of interest in the context of decadal to century-scale climate dynamics. Here, we present Δ14C data that suggest that bamboo corals from the Gulf of Alaska are long-lived (75-126 years) and that they acquire skeletal carbon from two distinct sources. Independent verification of our growth rate estimates and coral ages is obtained by counting seasonal Sr/Ca cycles and probable lunar cycle growth bands.
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
Ahmed, Anwar E; Ali, Yosra Z; Al-Suliman, Ahmad M; Albagshi, Jafar M; Al Salamah, Majid; Elsayid, Mohieldin; Alanazi, Wala R; Ahmed, Rayan A; McClish, Donna K; Al-Jahdali, Hamdan
2017-01-01
High white blood cell (WBC) count is an indicator of sickle cell disease (SCD) severity; however, there are limited studies on WBC counts in Saudi Arabian patients with SCD. The aim of this study was to estimate the prevalence of abnormal leukocyte count (either low or high) and identify factors associated with high WBC counts in a sample of Saudi patients with SCD. A cross-sectional and retrospective chart review study was carried out on 290 SCD patients who were routinely treated at King Fahad Hospital in Hofuf, Saudi Arabia. An interview was conducted to assess clinical presentations, and we reviewed patient charts to collect data on blood test parameters for the previous 6 months. Almost half (131 [45.2%]) of the sample had abnormal leukocyte counts: low WBC counts in 15 (5.2%) and high in 116 (40%). High WBC counts were associated with shortness of breath (P = 0.022), tiredness (P = 0.039), swelling in hands/feet (P = 0.020), and back pain (P = 0.007). The mean hemoglobin was higher in patients with normal WBC counts (P = 0.024), while the mean hemoglobin S was higher in patients with high WBC counts (P = 0.003). After adjustment for potential confounders, predictors of high WBC counts were male gender (adjusted odds ratio [aOR] = 3.63), cough (aOR = 2.18), low hemoglobin (aOR = 0.76), and low heart rate (aOR = 0.97). Abnormal leukocyte count was common: approximately five in ten Saudi SCD patients assessed in this sample. Male gender, cough, low hemoglobin, and low heart rate were associated with high WBC count. Strategies targeting high WBC count could prevent disease complications and thus could be beneficial for SCD patients.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
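A minimal sketch of the standard overdispersion construction discussed above: a Poisson count whose rate is gamma-distributed is exactly negative binomial, which the paper contrasts with the inverse Gaussian mixture. The parameters below are arbitrary.

```python
# Minimal sketch of the gamma-Poisson construction: a Poisson count with a
# gamma-distributed rate is exactly negative binomial. Parameters arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
r, theta = 3.0, 2.0                              # gamma shape and scale of the random rate
rates = rng.gamma(shape=r, scale=theta, size=200_000)
counts = rng.poisson(rates)

nb = stats.nbinom(n=r, p=1.0 / (1.0 + theta))    # equivalent negative binomial
print("sample   mean, var:", round(counts.mean(), 2), round(counts.var(), 2))
print("neg-bin  mean, var:", round(float(nb.mean()), 2), round(float(nb.var()), 2))
```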
Hantavirus pulmonary syndrome, United States, 1993-2009.
MacNeil, Adam; Ksiazek, Thomas G; Rollin, Pierre E
2011-07-01
Hantavirus pulmonary syndrome (HPS) is a severe respiratory illness identified in 1993. Since its identification, the Centers for Disease Control and Prevention has obtained standardized information about and maintained a registry of all laboratory-confirmed HPS cases in the United States. During 1993-2009, a total of 510 HPS cases were identified. Case counts have varied from 11 to 48 per year (case-fatality rate 35%). However, there were no trends suggesting increasing or decreasing case counts or fatality rates. Although cases were reported in 30 states, most cases occurred in the western half of the country; annual case counts varied most in the southwestern United States. Increased hematocrits, leukocyte counts, and creatinine levels were more common in HPS case-patients who died. HPS is a severe disease with a high case-fatality rate, and cases continue to occur. The greatest potential for high annual HPS incidence exists in the southwestern United States.
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Bybee, R. L.
1978-01-01
The paper describes a new type of continuous channel multiplier (CEM) fabricated from a low-resistance glass to produce a high-conductivity channel section and thereby obtain a high count-rate capability. The flat-cone cathode configuration of the CEM is specifically designed for the detection of astigmatic exit images from grazing-incidence spectrometers at the optimum angle of illumination for high detection efficiencies at XUV wavelengths. Typical operating voltages are in the range of 2500-2900 V with stable counting plateau slopes in the range 3-6% per 100-V increment. The modal gain at 2800 V was typically in the range (50-80) million. The modal gain falls off at count rates in excess of about 20,000 per sec. The detection efficiency remains essentially constant to count rates in excess of 2 million per sec. Higher detection efficiencies (better than 20%) are obtained by coating the CEM with MgF2. In life tests of coated CEMs, no measurable change in detection efficiency was observed up to a total accumulated signal of 2 × 10^11 counts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofstetter, K.J.; Sigg, R.
1990-12-31
A number of concrete culverts used to retrievably store drummed, dry, radioactive waste at the Savannah River Site (SRS), were suspected of containing ambiguous quantities of transuranic (TRU) nuclides. These culverts were assayed in place for Pu-239 content using thermal and fast neutron counting techniques. High resolution gamma-ray spectroscopy on 17 culverts, having neutron emission rates several times higher than expected, showed characteristic gamma-ray signatures of neutron emitters other than Pu-239 (e.g., Pu-238, Pu/Be, or Am/Be neutron sources). This study confirmed the Pu-239 content of the culverts with anomalous neutron rates and established limits on the Pu-239 mass in each of the 17 suspect culverts by in-field, non-intrusive gamma-ray measurements.
ERIC Educational Resources Information Center
University of South Florida, Tampa. Florida Center for Children and Youth.
This Kids Count report investigates statewide trends in the well-being of Florida's children. The statistical report is based on 19 indicators of child well-being: (1) low birth weight infants; (2) infant mortality rate; (3) child death rate; (4) births to single teens; (5) juvenile violent crime arrest rate; (6) percent graduating from high…
1-99 W. Hubbard St, May 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,100 cpm to 4,200 cpm. No count rates were found at any time that exceeded the instrument-specific threshold limits of 7,366 and 6,415 cpm.
Is Parenting Child's Play? Kids Count in Missouri Report on Adolescent Pregnancy.
ERIC Educational Resources Information Center
Citizens for Missouri's Children, St. Louis.
This Kids Count report presents current information on adolescent pregnancy rates in Missouri. Part 1, "Overview of Adolescent Pregnancy in Missouri," discusses the changing pregnancy, abortion, and birth rates for 15- to 19-year-old adolescents, racial differences in pregnancy risk, regional differences suggesting a link between…
A very deep IRAS survey - Constraints on the evolution of starburst galaxies
NASA Astrophysics Data System (ADS)
Hacking, Perry; Condon, J. J.; Houck, J. R.
1987-05-01
Counts of sources (primarily starburst galaxies) from a deep 60-micron IRAS survey published by Hacking and Houck (1987) are compared with four evolutionary models. The counts below 100 mJy are higher than expected if no evolution has taken place out to a redshift of approximately 0.2. Redshift measurements of the survey sources should be able to distinguish between luminosity-evolution and density-evolution models and detect as little as a 20 percent brightening or increase in the density of infrared sources per billion years (H0 = 100 km/s per Mpc). Starburst galaxies cannot account for the reported 100-micron background without extreme evolution at high redshifts.
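For orientation only, the sketch below contrasts the classic Euclidean integral source-count law, N(>S) ∝ S^(-3/2), which a static, uniformly distributed population would follow, with hypothetical observed counts that exceed that baseline at faint fluxes. The flux limits and counts are made up for illustration and are not the Hacking and Houck (1987) survey data or the paper's evolutionary models.

```python
import numpy as np

# Hypothetical flux limits (mJy) and a hypothetical count at the brightest limit.
flux_mjy = np.array([50.0, 100.0, 200.0, 400.0])
n_at_400 = 10.0

# No-evolution Euclidean baseline, normalized to the brightest bin:
# N(>S) proportional to S**(-1.5).
euclidean = n_at_400 * (flux_mjy / 400.0) ** -1.5

# Hypothetical "observed" counts exceeding the baseline at faint fluxes.
observed = np.array([95.0, 32.0, 11.0, 10.0])

for s, ne, no in zip(flux_mjy, euclidean, observed):
    print(f"S > {s:6.1f} mJy: no-evolution ~{ne:6.1f}, observed {no:6.1f}, "
          f"ratio {no / ne:4.2f}")
```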