Sample records for coincidence counting method

  1. Primary Standardization of 152Eu by 4πβ(LS)–γ(NaI) coincidence counting and the CIEMAT-NIST method

    NASA Astrophysics Data System (ADS)

    Ruzzarin, A.; da Cruz, P. A. L.; Ferreira Filho, A. L.; Iwahara, A.

    2018-03-01

    The 4πβ-γ coincidence counting and CIEMAT/NIST liquid scintillation methods were used in the standardization of a solution of 152Eu. In the CIEMAT/NIST method, measurements were performed in a Wallac 1414 liquid scintillation counter. In the 4πβ-γ coincidence counting, the solution was standardized using the coincidence method with beta-efficiency extrapolation. A simple 4πβ-γ coincidence system was used, with an acrylic scintillation cell coupled to two photomultipliers facing each other at 180°, together with a NaI(Tl) detector. The activity concentrations obtained were 156.934 ± 0.722 and 157.403 ± 0.113 kBq/g for the CIEMAT/NIST and 4πβ-γ coincidence counting methods, respectively.
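
    For illustration of the extrapolation step only (a sketch of the general technique, not the authors' analysis), the fragment below fits the apparent activity NβNγ/Nc against (1−εβ)/εβ and reads the activity off the intercept at perfect beta efficiency; all rates are invented.

```python
import numpy as np

# Hypothetical background-corrected rates (s^-1) at several
# discrimination settings that vary the beta detection efficiency.
n_beta  = np.array([5210.0, 4980.0, 4730.0, 4450.0, 4150.0])  # beta channel
n_gamma = np.array([2105.0, 2102.0, 2099.0, 2101.0, 2098.0])  # gamma channel
n_coinc = np.array([1890.0, 1805.0, 1712.0, 1611.0, 1502.0])  # coincidences

eff_beta = n_coinc / n_gamma        # apparent beta efficiency
x = (1.0 - eff_beta) / eff_beta     # extrapolation variable
y = n_beta * n_gamma / n_coinc      # apparent activity (Bq)

# Linear fit; the intercept at x -> 0 (eff_beta -> 1) estimates the
# activity free of the efficiency-dependent bias.
slope, intercept = np.polyfit(x, y, 1)
print(f"extrapolated activity: {intercept:.1f} Bq")
```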

  2. Standardization of iodine-129 by the TDCR liquid scintillation method and 4π β-γ coincidence counting

    NASA Astrophysics Data System (ADS)

    Cassette, P.; Bouchard, J.; Chauvenet, B.

    1994-01-01

    Iodine-129 is a long-lived fission product, with physical and chemical properties that make it a good candidate for evaluating the environmental impact of the nuclear energy fuel cycle. To avoid solid source preparation problems, liquid scintillation has been used to standardize this nuclide for a EUROMET intercomparison. Two methods were used to measure the iodine-129 activity: triple-to-double-coincidence ratio liquid scintillation counting and 4π β-γ coincidence counting; the results are in good agreement.
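
    A minimal sketch of the TDCR principle, assuming a mono-energetic emitter and three identical photomultipliers (published analyses integrate over the beta spectrum and the scintillator response): the measured triple-to-double ratio fixes the mean light yield, which then gives the double-coincidence efficiency. All numbers are invented.

```python
import math

def rates(m):
    # m: mean number of photoelectrons shared over three identical PMTs.
    p = 1.0 - math.exp(-m / 3.0)        # per-PMT firing probability
    t = p ** 3                          # triple-coincidence efficiency
    d = 3.0 * p ** 2 - 2.0 * p ** 3     # logical sum of the doubles
    return t, d

def solve_m(tdcr_measured, lo=1e-6, hi=100.0):
    # Bisect for the light yield m whose model T/D matches the data
    # (T/D is monotonically increasing in m).
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        t, d = rates(mid)
        if t / d < tdcr_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tdcr = 0.92                  # hypothetical measured triple/double ratio
t_eff, d_eff = rates(solve_m(tdcr))
double_rate = 4500.0         # hypothetical measured double rate (s^-1)
print(f"activity = {double_rate / d_eff:.0f} Bq (eff_D = {d_eff:.3f})")
```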

  3. A novel algorithm for solving the true coincident counting issues in Monte Carlo simulations for radiation spectroscopy.

    PubMed

    Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A

    2015-06-01

    Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity germanium detector for ⁶⁰Co gamma-ray spectroscopy was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
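
    A toy illustration of why cascade emitters defeat single-primary bookkeeping (a sketch of the general idea, not the authors' Geant4 algorithm): letting the two gammas of each simulated decay deposit into the same pulse creates the sum peak that a one-gamma-at-a-time simulation cannot produce.

```python
import random

# Toy cascade model (placeholder probabilities, not a Geant4 transport):
# each decay emits two gammas (1173 and 1332 keV, as in 60Co); each gamma
# either escapes, deposits its full energy, or deposits a random Compton
# fraction. Deposits from the same decay arrive together and pile up.
random.seed(1)

def deposit(e_gamma, p_interact=0.30, p_photo=0.25):
    if random.random() > p_interact:
        return 0.0                                   # gamma escapes
    if random.random() < p_photo:
        return e_gamma                               # full-energy deposit
    return random.uniform(0.05, 0.95) * e_gamma      # Compton deposit

spectrum = {}
for _ in range(200_000):
    e = deposit(1173.0) + deposit(1332.0)            # same-decay pile-up
    if e > 0.0:
        bin_ = int(e // 10) * 10                     # 10 keV bins
        spectrum[bin_] = spectrum.get(bin_, 0) + 1

# The 2505 keV sum peak exists only because both cascade gammas were
# tracked as part of one primary event.
print("sum-peak (2500-2509 keV) counts:", spectrum.get(2500, 0))
```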

  4. Evaluation of absolute measurement using a 4π plastic scintillator for the 4πβ-γ coincidence counting method.

    PubMed

    Unno, Y; Sanami, T; Sasaki, S; Hagiwara, M; Yunoki, A

    2018-04-01

    Absolute measurement by the 4πβ-γ coincidence counting method was performed using two photomultipliers facing each other across a plastic scintillator, with emphasis on the β-ray counting efficiency. The scintillator assembly was mounted in a through-hole-type NaI(Tl) detector. The results include the absolutely determined activity and its uncertainty, in particular the component arising from the efficiency extrapolation. A comparison between the obtained and known activities showed agreement within their uncertainties.

  5. Quantification of 235U and 238U activity concentrations for undeclared nuclear materials by a digital gamma-gamma coincidence spectroscopy.

    PubMed

    Zhang, Weihua; Yi, Jing; Mekarski, Pawel; Ungar, Kurt; Hauck, Barry; Kramer, Gary H

    2011-06-01

    The purpose of this study is to investigate the possibility of verifying depleted uranium (DU), natural uranium (NU), low enriched uranium (LEU) and high enriched uranium (HEU) with a newly developed digital gamma-gamma coincidence spectroscopy system. The system consists of two NaI(Tl) scintillators and an XIA LLC Digital Gamma Finder (DGF)/Pixie-4 card and software package. The results demonstrate that the system provides an effective method of (235)U and (238)U quantification based on the count rate of their gamma-gamma coincidence counting signatures. The main advantages of this approach over conventional gamma spectrometry include the low background continuum near the coincident signatures of (235)U and (238)U, reduced interference from other radionuclides, and region-of-interest (ROI) image analysis for uranium enrichment determination. The method also offers the advantage of requiring minimal calibration for (235)U and (238)U quantification at different sample geometries.
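
    A hypothetical sketch of the ROI step: given a list of coincident energy pairs, the count inside a two-dimensional region of interest around a coincident line pair tracks the activity of the emitting isotope. The energies below are placeholders, not the actual uranium signatures.

```python
import numpy as np

# Synthetic coincident event list: (E1, E2) in keV, signal + background.
rng = np.random.default_rng(0)
signal = rng.normal([150.0, 300.0], [2.0, 2.0], size=(2000, 2))
background = rng.uniform(0.0, 400.0, size=(50_000, 2))
events = np.vstack([signal, background])

def roi_count(ev, c1, c2, half_width=4.0):
    # Count events falling in a square ROI centred on (c1, c2).
    in_roi = (np.abs(ev[:, 0] - c1) < half_width) & \
             (np.abs(ev[:, 1] - c2) < half_width)
    return int(in_roi.sum())

# The ROI count rate is the quantification signature.
print("counts in coincidence ROI:", roi_count(events, 150.0, 300.0))
```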

  6. Standardization of Ga-68 by coincidence measurements, liquid scintillation counting and 4πγ counting.

    PubMed

    Roteta, Miguel; Peyres, Virginia; Rodríguez Barquero, Leonor; García-Toraño, Eduardo; Arenillas, Pablo; Balpardo, Christian; Rodrígues, Darío; Llovera, Roberto

    2012-09-01

    The radionuclide (68)Ga is one of the few positron emitters that can be prepared in-house without the use of a cyclotron. It disintegrates to the ground state of (68)Zn partially by positron emission (89.1%), with a maximum energy of 1899.1 keV, and partially by electron capture (10.9%). This nuclide has been standardized within the framework of a cooperation project between the Radionuclide Metrology laboratories of CIEMAT (Spain) and CNEA (Argentina). Measurements involved several techniques: 4πβ-γ coincidences, integral gamma counting, and liquid scintillation counting using the triple-to-double coincidence ratio and CIEMAT/NIST methods. Given the short half-life of the radionuclide assayed, a direct comparison between results from both laboratories was excluded, and a comparison of the experimental efficiencies of similar NaI detectors was used instead.

  7. Measuring Transmission Efficiencies Of Mass Spectrometers

    NASA Technical Reports Server (NTRS)

    Srivastava, Santosh K.

    1989-01-01

    Coincidence counts yield absolute efficiencies. The system measures the mass-dependent transmission efficiencies of mass spectrometers using coincidence-counting techniques reminiscent of those long used to calibrate detectors for subatomic particles. Coincidences between detected ions and the electrons that produce them are counted during operation of the mass spectrometer. Under certain assumptions regarding inelastic scattering of electrons, the electron/ion coincidence count is a direct measure of the transmission efficiency of the spectrometer. When fully developed, the system will be compact and portable and could be used routinely to calibrate mass spectrometers.
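
    The underlying arithmetic is a one-line ratio; in a hypothetical run (invented counts):

```python
# Each detected electron tags one ion created at the source, so the
# fraction of tagged ions that reach the ion detector estimates the
# spectrometer transmission. Numbers below are placeholders.
electrons_detected = 120_000      # electrons counted (ion "tags")
coincident_ions = 54_000          # ions detected in coincidence
transmission = coincident_ions / electrons_detected
print(f"transmission efficiency = {transmission:.2%}")
```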

  8. Digital coincidence counting

    NASA Astrophysics Data System (ADS)

    Buckman, S. M.; Ius, D.

    1996-02-01

    This paper reports on the development of a digital coincidence-counting system which comprises a custom-built data acquisition card and associated PC software. The system has been designed to digitise the pulse-trains from two radiation detectors at a rate of 20 MSamples/s with 12-bit resolution. Through hardware compression of the data, the system can continuously record both individual pulse-shapes and the time intervals between pulses. Software-based circuits are used to process the stored pulse trains. These circuits are constructed simply by linking together icons representing various components such as coincidence mixers, time delays, single-channel analysers, deadtimes and scalers. This system enables a pair of pulse trains to be processed repeatedly using any number of different methods. Some preliminary results are presented in order to demonstrate the versatility and efficiency of this new method.
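
    A software coincidence mixer of the kind described can be sketched in a few lines: scan two stored, time-ordered pulse trains and count the events of one train that have a partner in the other within a resolving window. Because the trains are recorded, the same data can be re-run with any window or delay; the trains and window below are invented.

```python
# Times in microseconds; both trains must be sorted, as they would be
# when replayed from stored acquisition data.
def coincidences(train_a, train_b, window=1.0):
    matches, j = 0, 0
    for t in train_a:
        while j < len(train_b) and train_b[j] < t - window:
            j += 1                                    # drop stale B pulses
        if j < len(train_b) and abs(train_b[j] - t) <= window:
            matches += 1
    return matches

a = [10.0, 55.2, 103.7, 180.4, 256.9]
b = [10.4, 56.0, 150.2, 257.3]
print("coincidences:", coincidences(a, b))            # -> 3
```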

  9. Compton suppression gamma-counting: The effect of count rate

    USGS Publications Warehouse

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or the variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rate, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates.

  10. The effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting

    DOE PAGES

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.; ...

    2016-01-13

    In neutron coincidence counting using the shift register autocorrelation technique, a predelay is inserted before the opening of the (R+A)-gate. Operationally the purpose of the predelay is to ensure that the (R+A)- and A-gates have matched effectiveness, otherwise a bias will result when the difference between the gates is used to calculate the accidentals corrected net reals coincidence rate. The necessity for the predelay was established experimentally in the early practical development and deployment of the coincidence counting method. The choice of predelay for a given detection system is usually made experimentally, but even today long standing traditional values (e.g., 4.5 µs) are often used. This, at least in part, reflects the fact that a deep understanding of why a finite predelay setting is needed and how to control the underlying influences has not been fully worked out. We attempt, in this paper, to gain some insight into the problem. One aspect we consider is the slowing down, thermalization, and diffusion of neutrons in the detector moderator. The other is the influence of deadtime and electronic transients. These may be classified as non-ideal detector behaviors because they are not included in the conventional model used to interpret measurement data. From improved understanding of the effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting, the performance of both future and current coincidence counters may be improved.

  11. The effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.; Goddard, Braden; Stewart, Scott

    2016-04-01

    In neutron coincidence counting using the shift register autocorrelation technique, a predelay is inserted before the opening of the (R+A)-gate. Operationally the purpose of the predelay is to ensure that the (R+A)- and A-gates have matched effectiveness, otherwise a bias will result when the difference between the gates is used to calculate the accidentals corrected net reals coincidence rate. The necessity for the predelay was established experimentally in the early practical development and deployment of the coincidence counting method. The choice of predelay for a given detection system is usually made experimentally, but even today long standing traditional values (e.g., 4.5 μs) are often used. This, at least in part, reflects the fact that a deep understanding of why a finite predelay setting is needed and how to control the underlying influences has not been fully worked out. In this paper we attempt to gain some insight into the problem. One aspect we consider is the slowing down, thermalization, and diffusion of neutrons in the detector moderator. The other is the influence of deadtime and electronic transients. These may be classified as non-ideal detector behaviors because they are not included in the conventional model used to interpret measurement data. From improved understanding of the effect of deadtime and electronic transients on the predelay bias in neutron coincidence counting, the performance of both future and current coincidence counters may be improved.
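
    A minimal timestamp sketch of the quantities involved, assuming an idealized dead-time-free detector and an invented pulse train: each pulse opens an (R+A) gate after the predelay and an A gate after a long delay, and the accidentals-corrected reals are the difference of the two totals.

```python
import bisect

def gate_counts(times, predelay=4.5, gate=64.0, long_delay=1024.0):
    # times: sorted pulse arrival times in microseconds.
    # For each trigger, count pulses inside the (R+A) gate, which opens
    # after the predelay, and inside the A gate, which opens after a
    # long delay so it samples only accidentals.
    ra = a = 0
    for t in times:
        ra += bisect.bisect_right(times, t + predelay + gate) \
            - bisect.bisect_right(times, t + predelay)
        a  += bisect.bisect_right(times, t + long_delay + gate) \
            - bisect.bisect_right(times, t + long_delay)
    return ra, a

train = sorted([0.0, 2.1, 3.0, 70.5, 71.2, 500.0, 1024.5, 1091.0])
ra, a = gate_counts(train)
print("R+A =", ra, " A =", a, " Reals =", ra - a)
```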

  12. Low-Noise Free-Running High-Rate Photon-Counting for Space Communication and Ranging

    NASA Technical Reports Server (NTRS)

    Lu, Wei; Krainak, Michael A.; Yang, Guangning; Sun, Xiaoli; Merritt, Scott

    2016-01-01

    We present performance data for a low-noise free-running high-rate photon counting method for space optical communication and ranging. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We successfully measured real-time communication performance using both the two-detected-photon threshold and logic AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects without resorting to time gating. The HgCdTe APD array routinely demonstrated very high photon detection efficiencies (>50%) at near-infrared wavelengths. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output. NASA GSFC has tested both detectors for their potential application in space communications and ranging. We developed and compared their performance using both the two-detected-photon threshold and coincidence methods.

  13. Low-Noise Free-Running High-Rate Photon-Counting for Space Communication and Ranging

    NASA Technical Reports Server (NTRS)

    Lu, Wei; Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Merritt, Scott

    2016-01-01

    We present performance data for a low-noise free-running high-rate photon counting method for space optical communication and ranging. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We successfully measured real-time communication performance using both the two-detected-photon threshold and logic AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects without resorting to time gating. The HgCdTe APD array routinely demonstrated very high photon detection efficiencies (>50%) at near-infrared wavelengths. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output. NASA GSFC has tested both detectors for their potential application in space communications and ranging. We developed and compared their performance using both the two-detected-photon threshold and coincidence methods.

  14. The IAEA neutron coincidence counting (INCC) and the DEMING least-squares fitting programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krick, M.S.; Harker, W.C.; Rinard, P.M.

    1998-12-01

    Two computer programs are described: (1) the INCC (IAEA or International Neutron Coincidence Counting) program and (2) the DEMING curve-fitting program. The INCC program is an IAEA version of the Los Alamos NCC (Neutron Coincidence Counting) code. The DEMING program is an upgrade of earlier Windows® and DOS codes with the same name. The versions described are INCC 3.00 and DEMING 1.11. The INCC and DEMING codes provide inspectors with the software support needed to perform calibration and verification measurements with all of the neutron coincidence counting systems used in IAEA inspections for the nondestructive assay of plutonium and uranium.

  15. FPGA-based gating and logic for multichannel single photon counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pooser, Raphael C; Earl, Dennis Duncan; Evans, Philip G

    2012-01-01

    We present results characterizing multichannel InGaAs single photon detectors utilizing gated passive quenching circuits (GPQC), self-differencing techniques, and field programmable gate array (FPGA)-based logic for both diode gating and coincidence counting. Utilizing FPGAs for the diode gating frontend and the logic counting backend has the advantage of low cost compared to custom-built logic circuits and current off-the-shelf detector technology. Further, FPGA logic counters have been shown to work well in quantum key distribution (QKD) test beds. Our setup combines multiple independent detector channels in a reconfigurable manner via an FPGA backend and post-processing in order to perform coincidence measurements between any two or more detector channels simultaneously. Using this method, states from a multi-photon polarization entangled source are detected and characterized via coincidence counting on the FPGA. Photon detection events are also processed by the quantum information toolkit for application testing (QITKAT).

  16. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, as well as the existence of long energy resolution tails in crystal scintillation. Neither effect has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and from their poor robustness against emission-transmission inconsistency. Two physical effects not considered up to now in nuclear medicine were thus identified as the origin of the unusual 90Y true coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was explained theoretically by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed that succeeds in cleaning the spurious extrahepatic counts in non-TOF PET; further studies are needed to assess its robustness.

  17. Means and method for calibrating a photon detector utilizing electron-photon coincidence

    NASA Technical Reports Server (NTRS)

    Srivastava, S. K. (Inventor)

    1984-01-01

    An arrangement for calibrating a photon detector, particularly applicable to the ultraviolet and vacuum ultraviolet regions, is based on electron-photon coincidence utilizing crossed electron beam-atom beam collisions. Atoms are excited by electrons, which lose a known amount of energy and scatter with a known remaining energy, while the excited atoms emit photons of known radiation. Electrons of the known remaining energy are separated from other electrons and are counted. Photons emitted in a direction related to the particular direction of the scattered electrons are detected to serve as a standard. Each of the electrons is used to initiate the measurement of a time interval which terminates with the arrival of a photon at the photon detector. Only time intervals consistent with the coincidence correlation are counted, together with the electrons scattered in the particular direction with the known remaining energy and the photons of a particular radiation level emitted due to the collisions of such scattered electrons. The detector calibration is related to the number of counted electrons and photons.

  18. Simulation of HLNC and NCC measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.; De Ridder, P.

    1994-03-01

    This report discusses an automatic method of simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements to facilitate safeguards inspectors' understanding and use of these instruments under realistic conditions. This would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. The simulation must thus include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way of accomplishing the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.

  19. Dynamic time-correlated single-photon counting laser ranging

    NASA Astrophysics Data System (ADS)

    Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang

    2018-03-01

    We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improves the counting rate to more than 4×10⁷ cps, which makes distance measurement possible even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot easily distill the signal when fast-moving targets are submerged in a strong background. We propose a dynamic TCSPC method for measuring fast-moving targets by varying the coincidence window in real time. In the experiment, we show that targets with a velocity of 5 km/s can be detected with this method, at an echo rate of 20% against background counts of more than 1.2×10⁷ cps.
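
    A toy version of the dynamic-window idea, with invented numbers: the software coincidence window is re-centred each pulse on the predicted round-trip delay of the receding target, instead of being held fixed.

```python
# Placeholder geometry: a target receding at 5 km/s observed with a
# 1 kHz pulse train from an initial range of 300 km.
C = 299_792_458.0                    # speed of light, m/s
prf = 1000.0                         # pulse repetition frequency, Hz
v = 5000.0                           # target radial velocity, m/s
r0 = 300_000.0                       # initial range, m

def predicted_delay(pulse_idx):
    # Expected round-trip time for this pulse, given the target motion.
    r = r0 + v * (pulse_idx / prf)
    return 2.0 * r / C

def in_window(arrival, pulse_idx, half_window=50e-9):
    # Accept a photon only if it lands inside the moving gate.
    return abs(arrival - predicted_delay(pulse_idx)) <= half_window

echo = predicted_delay(100) + 12e-9  # true echo at pulse 100, 12 ns jitter
print(in_window(echo, 100))          # True: kept by the dynamic gate
print(in_window(echo, 0))            # False: a static gate set at pulse 0
                                     # would already have drifted off
```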

  20. SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahlka, R; Kappadath, S; Mawlawi, O

    2016-06-15

    Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade were consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since the sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window in the range of 200-500 ns maximizes the NECR at clinically used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically relevant activities. Work is ongoing to assess useful clinical applications.
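
    The trade-off being optimized can be sketched with placeholder rates: trues grow with the window as the 85 ns intermediate state is integrated over, randoms grow linearly as 2*tau*r1*r2 for uncorrelated singles, and the scatter-free NECR T²/(T+R) therefore peaks at an intermediate window.

```python
import math

r1 = r2 = 50_000.0                   # singles rates in each head (cps)
true_pair_rate = 1_200.0             # emitted coincident-pair rate (cps)
half_life = 85e-9                    # In-111 intermediate-state half-life

def necr(tau):
    # Fraction of second gammas arriving within the window tau,
    # given exponential decay of the intermediate state.
    trues = true_pair_rate * (1.0 - math.exp(-tau * math.log(2) / half_life))
    randoms = 2.0 * tau * r1 * r2    # accidental coincidences
    return trues ** 2 / (trues + randoms)

for tau in (100e-9, 200e-9, 500e-9, 1000e-9):
    print(f"window {tau * 1e9:4.0f} ns: NECR = {necr(tau):7.1f} cps")
```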

  1. Experimental study on the measurement of uranium casting enrichment by time-dependent coincidence method

    NASA Astrophysics Data System (ADS)

    Xie, Wen-Xiong; Li, Jian-Sheng; Gong, Jian; Zhu, Jian-Yu; Huang, Po

    2013-10-01

    Based on the time-dependent coincidence method, a preliminary experiment was performed on uranium metal castings of similar mass (about 8-10 kg) and shape (hemispherical shell) but different enrichments, using neutrons from a 252Cf fast fission chamber and a timed DT accelerator. Groups of related parameters can be obtained by analyzing the features of the time-dependent coincidence counts between source and detector and between two detectors, in order to characterize the fission signal. These parameters are highly sensitive to the enrichment; the sensitivity coefficient (defined as (ΔR/Δm)/R̄) can reach 19.3% per kg of 235U. Uranium castings of different enrichments can thus be distinguished in support of nuclear weapon verification.

  2. High-performance reconfigurable coincidence counting unit based on a field programmable gate array.

    PubMed

    Park, Byung Kwon; Kim, Yong-Su; Kwon, Osung; Han, Sang-Wook; Moon, Sung

    2015-05-20

    We present a high-performance reconfigurable coincidence counting unit (CCU) using a low-end field programmable gate array (FPGA) and peripheral circuits. Because of the flexibility guaranteed by the FPGA program, we can easily change system parameters, such as internal input delays, coincidence configurations, and the coincidence time window. In spite of a low-cost implementation, the proposed CCU architecture outperforms previous ones in many aspects: it has 8 logic inputs and 4 coincidence outputs that can measure up to eight-fold coincidences. The minimum coincidence time window and the maximum input frequency are 0.47 ns and 163 MHz, respectively. The CCU will be useful in various experimental research areas, including the field of quantum optics and quantum information.

  3. Detection of dependence patterns with delay.

    PubMed

    Chevallier, Julien; Laloë, Thomas

    2015-11-01

    The Unitary Events (UE) method is a popular and efficient method, used over the last decade to detect dependence patterns of joint spike activity among simultaneously recorded neurons. The method as first introduced is based on the binned coincidence count (Grün, 1996) and can be applied to two or more simultaneously recorded neurons. Among the improvements of the method, a transposition to the continuous framework has recently been proposed by Muiño and Borgelt (2014) and fully investigated by Tuleau-Malot et al. (2014) for two neurons. The goal of the present paper is to extend this study to more than two neurons. The main result is the determination of the limit distribution of the coincidence count. This leads to the construction of an independence test between L≥2 neurons. Finally, we propose a multiple test procedure via a Benjamini and Hochberg approach (Benjamini and Hochberg, 1995). All the theoretical results are illustrated by a simulation study and compared to the UE method proposed by Grün et al. (2002). Furthermore, our method is applied to real data.
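
    A minimal sketch of the binned coincidence count underlying the UE approach, on synthetic data: discretize both spike trains into bins of width delta, count the bins where both neurons fire, and compare with the expectation under independence (the paper's test statistics and multiple-testing step are more elaborate than this).

```python
import numpy as np

# Synthetic spike trains (seconds) for two neurons over 10 s.
rng = np.random.default_rng(42)
t_max, delta = 10.0, 0.005
spikes1 = np.sort(rng.uniform(0.0, t_max, 300))
spikes2 = np.sort(rng.uniform(0.0, t_max, 300))

bins = np.arange(0.0, t_max + delta, delta)
b1 = np.histogram(spikes1, bins)[0] > 0      # neuron 1 fired in bin?
b2 = np.histogram(spikes2, bins)[0] > 0      # neuron 2 fired in bin?
coincidence_count = int(np.sum(b1 & b2))

# Under independence the count is approximately binomial with
# p = p1 * p2; a large excess signals a dependence pattern.
n_bins = len(bins) - 1
expected = b1.mean() * b2.mean() * n_bins
print(coincidence_count, f"expected under independence = {expected:.1f}")
```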

  4. Direct fissile assay of enriched uranium using random self-interrogation and neutron coincidence response

    DOEpatents

    Menlove, Howard O.; Stewart, James E.

    1986-01-01

    Apparatus and method for the direct, nondestructive evaluation of the 235U nuclide content of samples containing UF6, UF4, or UO2, utilizing the passive neutron self-interrogation of the sample resulting from the intrinsic production of neutrons therein. The ratio of the emitted neutron coincidence count rate to the total emitted neutron count rate is determined and yields a measure of the bulk fissile mass. The accuracy of the method is 6.8% (1σ) for cylinders containing UF6 with enrichments ranging from 6% to 98%, with measurement times varying from 3-6 min. The samples contained from below 1 kg to greater than 16 kg. Since the subject invention relies on fast neutron self-interrogation, complete sampling of the UF6 takes place, reducing difficulties arising from inhomogeneity of the sample which adversely affects other assay procedures.

  5. Direct fissile assay of enriched uranium using random self-interrogation and neutron coincidence response

    DOEpatents

    Menlove, H.O.; Stewart, J.E.

    1985-02-04

    Apparatus and method for the direct, nondestructive evaluation of the 235U nuclide content of samples containing UF6, UF4, or UO2, utilizing the passive neutron self-interrogation of the sample resulting from the intrinsic production of neutrons therein. The ratio of the emitted neutron coincidence count rate to the total emitted neutron count rate is determined and yields a measure of the bulk fissile mass. The accuracy of the method is 6.8% (1σ) for cylinders containing UF6 with enrichments ranging from 6% to 98%, with measurement times varying from 3-6 min. The samples contained from below 1 kg to greater than 16 kg. Since the subject invention relies on fast neutron self-interrogation, complete sampling of the UF6 takes place, reducing difficulties arising from inhomogeneity of the sample which adversely affects other assay procedures. 4 figs., 1 tab.

  6. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste however may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory, which will be referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
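
    A one-dimensional Rossi-alpha histogram is straightforward to build from a timestamp list: for every trigger pulse, histogram the delays to all subsequent pulses up to a maximum delay. On the synthetic train below, correlated pairs produce a short-delay excess over the flat accidental floor.

```python
import numpy as np

# Synthetic 1 s pulse train: uncorrelated background plus correlated
# pairs whose partner arrives ~50 µs later (invented parameters).
rng = np.random.default_rng(7)
bg = rng.uniform(0.0, 1.0, 5_000)
first = rng.uniform(0.0, 1.0, 2_000)
second = first + rng.exponential(50e-6, 2_000)
t = np.sort(np.concatenate([bg, first, second]))

max_delay, nbins = 500e-6, 50
hist = np.zeros(nbins)
for i, trig in enumerate(t):
    # Delays from this trigger to every later pulse within max_delay.
    j = np.searchsorted(t, trig + max_delay, side="right")
    hist += np.histogram(t[i + 1:j] - trig,
                         bins=nbins, range=(0.0, max_delay))[0]

# Correlated excess at short delays vs the flat accidental floor:
print("first bin:", int(hist[0]), " last bin:", int(hist[-1]))
```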

  7. A Maximum NEC Criterion for Compton Collimation to Accurately Identify True Coincidences in PET

    PubMed Central

    Chinn, Garry; Levin, Craig S.

    2013-01-01

    In this work, we propose a new method to increase the accuracy of identifying true coincidence events for positron emission tomography (PET). This approach requires 3-D detectors with the ability to position each photon interaction in multi-interaction photon events. When multiple interactions occur in the detector, the incident direction of the photon can be estimated using the Compton scatter kinematics (Compton Collimation). If the difference between the estimated incident direction of the photon relative to a second, coincident photon lies within a certain angular range around colinearity, the line of response between the two photons is identified as a true coincidence and used for image reconstruction. We present an algorithm for choosing the incident photon direction window threshold that maximizes the noise equivalent counts of the PET system. For simulated data, the direction window removed 56%–67% of random coincidences while retaining > 94% of true coincidences from image reconstruction as well as accurately extracted 70% of true coincidences from multiple coincidences. PMID:21317079

  8. Recovering the triple coincidence of non-pure positron emitters in preclinical PET

    NASA Astrophysics Data System (ADS)

    Lin, Hsin-Hon; Chuang, Keh-Shih; Chen, Szu-Yu; Jan, Meei-Ling

    2016-03-01

    Non-pure positron emitters, with their long half-lives, allow for the tracing of slow biochemical processes which cannot be adequately examined by the commonly used short-lived positron emitters. Most of these isotopes emit high-energy cascade gamma rays in addition to positron decay that can be detected and create a triple coincidence with annihilation photons. Triple coincidence is discarded in most scanners, however, the majority of the triple coincidence contains true photon pairs that can be recovered. In this study, we propose a strategy for recovering triple coincidence events to raise the sensitivity of PET imaging for non-pure positron emitters. To identify the true line of response (LOR) from a triple coincidence, a framework utilizing geometrical, energy and temporal information is proposed. The geometrical criterion is based on the assumption that the LOR with the largest radial offset among the three sub pairs of triple coincidences is least likely to be a true LOR. Then, a confidence time window is used to test the valid LOR among those within triple coincidence. Finally, a likelihood ratio discriminant rule based on the energy probability density distribution of cascade and annihilation gammas is established to identify the true LOR. An Inveon preclinical PET scanner was modeled with GATE (GEANT4 application for tomographic emission) Monte Carlo software. We evaluated the performance of the proposed method in terms of identification fraction, noise equivalent count rates (NECR), and image quality on various phantoms. With the inclusion of triple coincidence events using the proposed method, the NECR was found to increase from 11% to 26% and 19% to 29% for I-124 and Br-76, respectively, when 7.4-185 MBq of activity was used. Compared to the reconstructed images using double coincidence, this technique increased the SNR by 5.1-7.3% for I-124 and 9.3-10.3% for Br-76 within the activity range of 9.25-74 MBq, without compromising the spatial resolution or contrast. We conclude that the proposed method can improve the counting statistics of PET imaging for non-pure positron emitters and is ready to be implemented on current PET systems. Parts of this work were presented at the 2012 Annual Congress of the European Association of Nuclear Medicine.

  9. Quantitative NDA of isotopic neutron sources.

    PubMed

    Lakosi, L; Nguyen, C T; Bagi, J

    2005-01-01

    A non-destructive method for assaying transuranic neutron sources was developed, using a combination of gamma-spectrometry and neutron correlation techniques. The source strength or actinide content of a number of PuBe, AmBe, AmLi, (244)Cm, and (252)Cf sources was assessed, both as a safety issue and with respect to combating illicit trafficking. A passive neutron coincidence collar was designed with (3)He counters embedded in a polyethylene moderator (lined with Cd) surrounding the sources to be measured. The electronics consist of independent channels of pulse amplifiers and discriminators as well as a shift register for coincidence counting. The neutron output of the sources was determined by gross neutron counting, and the actinide content was determined by adopting specific spontaneous fission and (alpha,n) reaction yields of individual isotopes from the literature. Identification of an unknown source type and its constituents can be made by gamma-spectrometry. The coincidences are due to spontaneous fission in the case of Cm and Cf sources, while they are mostly due to neutron-induced fission of the Pu isotopes (i.e. self-multiplication) and the (9)Be(n,2n)(8)Be reaction in Be-containing sources. Recording the coincidence rate offers a potential for calibration, exploiting a correlation between the Pu amount and the coincidence-to-total ratio. The method and the equipment were tested in an in-field demonstration exercise, with the participation of national public authorities and foreign observers. Seizure of an illicitly transported PuBe source was simulated in the exercise, and the Pu content of the source was determined. It is expected that the method could be used for identification and assay of illicit, found, or undocumented neutron sources.

  10. Method and apparatus for detecting dilute concentrations of radioactive xenon in samples of xenon extracted from the atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warburton, William K.; Hennig, Wolfgang G.

    A method and apparatus for measuring the concentrations of radioxenon isotopes in a gaseous sample wherein the sample cell is surrounded by N sub-detectors that are sensitive to both electrons and to photons from radioxenon decays. Signal processing electronics are provided that can detect events within the sub-detectors, measure their energies, determine whether they arise from electrons or photons, and detect coincidences between events within the same or different sub-detectors. The energies of detected two or three event coincidences are recorded as points in associated two- or three-dimensional histograms. Counts within regions of interest in the histograms are then used to compute estimates of the radioxenon isotope concentrations. The method achieves lower backgrounds and lower minimum detectable concentrations by using smaller detector crystals, eliminating interference between double and triple coincidence decay branches, and segregating double coincidences within the same sub-detector from those occurring between different sub-detectors.

  11. A new NIST primary standardization of 18F.

    PubMed

    Fitzgerald, R; Zimmerman, B E; Bergeron, D E; Cessna, J C; Pibida, L; Moreira, D S

    2014-02-01

    A new primary standardization of (18)F by NIST is reported. The standard is based on live-timed beta-gamma anticoincidence counting with confirmatory measurements by three other methods: (i) liquid scintillation (LS) counting using CIEMAT/NIST (3)H efficiency tracing; (ii) triple-to-double coincidence ratio (TDCR) counting; and (iii) NaI integral counting and HPGe γ-ray spectrometry. The results are reported as calibration factors for NIST-maintained ionization chambers (including some "dose calibrators"). The LS-based methods reveal evidence for cocktail instability for one LS cocktail. Using an ionization chamber to link this work with previous NIST results, the new value differs from the previous reports by about 4%, but appears to be in good agreement with the key comparison reference value (KCRV) of 2005.

  12. Patient-dependent count-rate adaptive normalization for PET detector efficiency with delayed-window coincidence events

    NASA Astrophysics Data System (ADS)

    Niu, Xiaofeng; Ye, Hongwei; Xia, Ting; Asma, Evren; Winkler, Mark; Gagnon, Daniel; Wang, Wenli

    2015-07-01

    Quantitative PET imaging is widely used in clinical diagnosis in oncology and neuroimaging. Accurate normalization correction for the efficiency of each line-of-response is essential for accurate quantitative PET image reconstruction. In this paper, we propose a normalization calibration method using the delayed-window coincidence events from the scanned phantom or patient. The proposed method can dramatically reduce the 'ring' artifacts caused by mismatched system count-rates between the calibration and phantom/patient datasets. Moreover, a modified algorithm for mean detector efficiency estimation is proposed, which generates crystal efficiency maps with more uniform variance. Both phantom and real patient datasets are used for evaluation. The results show that the proposed method leads to better uniformity in reconstructed images by removing ring artifacts, and to more uniform axial variance profiles, especially around the axial edge slices of the scanner. The proposed method also has the potential benefit of simplifying the normalization calibration procedure, since the calibration can be performed using the on-the-fly acquired delayed-window dataset.

  13. γγ coincidence spectrometer for instrumental neutron-activation analysis

    NASA Astrophysics Data System (ADS)

    Tomlin, B. E.; Zeisler, R.; Lindstrom, R. M.

    2008-05-01

    Neutron-activation analysis (NAA) is an important technique for the accurate and precise determination of trace and ultra-trace elemental compositions. The application of γγ coincidence counting to NAA in order to enhance specificity was first explored over 40 years ago but has not evolved into a regularly used technique. A γγ coincidence spectrometer has been constructed at the National Institute of Standards and Technology, using two HPGe γ-ray detectors and an all-digital data-acquisition system, for the purpose of exploring coincidence NAA and its value in characterizing reference materials. This paper describes the initial evaluation of the quantitative precision of coincidence counting versus singles spectrometry, based upon a sample of neutron-irradiated bovine liver material.

  14. Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.

  15. Estimating the Effective System Dead Time Parameter for Correlated Neutron Counting

    DOE PAGES

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea; ...

    2017-04-29

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. In addition, this latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.

  16. Estimating the effective system dead time parameter for correlated neutron counting

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Cleveland, Steve; Favalli, Andrea; McElroy, Robert D.; Simone, Angela T.

    2017-11-01

    Neutron time correlation analysis is one of the main technical nuclear safeguards techniques used to verify declarations of, or to independently assay, special nuclear materials. Quantitative information is generally extracted from the neutron-event pulse train, collected from moderated assemblies of 3He proportional counters, in the form of correlated count rates that are derived from event-triggered coincidence gates. These count rates, most commonly referred to as singles, doubles and triples rates etc., when extracted using shift-register autocorrelation logic, are related to the reduced factorial moments of the time correlated clusters of neutrons emerging from the measurement items. Correcting these various rates for dead time losses has received considerable attention recently. The dead time losses for the higher moments in particular, and especially for large mass (high rate and highly multiplying) items, can be significant. Consequently, even in thoughtfully designed systems, accurate dead time treatments are needed if biased mass determinations are to be avoided. In support of this effort, in this paper we discuss a new approach to experimentally estimate the effective system dead time of neutron coincidence counting systems. It involves counting a random neutron source (e.g. AmLi is a good approximation to a source without correlated emission) and relating the second and higher moments of the neutron number distribution recorded in random triggered interrogation coincidence gates to the effective value of the dead time parameter. We develop the theoretical basis of the method and apply it to the Oak Ridge Large Volume Active Well Coincidence Counter using sealed AmLi radionuclide neutron sources and standard multiplicity shift register electronics. The method is simple to apply compared to the predominant present approach, which involves using a set of 252Cf sources of wide emission rate; it gives excellent precision in a conveniently short time, and it yields consistent results as a function of the order of the moment used to extract the dead time parameter. This latter observation is reassuring in that it suggests the assumptions underpinning the theoretical analysis are fit for practical application purposes. However, we found that the effective dead time parameter obtained is not constant, as might be expected for a parameter that in the dead time model is characteristic of the detector system, but rather varies systematically with gate width.
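
    As a much-simplified stand-in for the estimator described above (which works from the second and higher factorial moments of the gate counts), the sketch below applies a non-paralyzable dead time to a synthetic random pulse train and recovers τ from the first-moment rate suppression m = n/(1 + nτ), assuming the incident rate is known; it only illustrates how dead time distorts the statistics of a random source.

```python
import numpy as np

# Synthetic random (AmLi-like, uncorrelated) pulse train.
rng = np.random.default_rng(3)
true_rate, tau_true = 50_000.0, 2e-6             # cps, seconds
t = np.cumsum(rng.exponential(1.0 / true_rate, 500_000))

# Apply a non-paralyzable dead time: drop pulses arriving within
# tau_true of the last accepted pulse.
kept, last = 0, -np.inf
for ti in t:
    if ti - last >= tau_true:
        kept += 1
        last = ti
measured_rate = kept / t[-1]

# Invert m = n / (1 + n * tau), with the incident rate n assumed known
# (the published method avoids this assumption via higher moments).
tau_est = (true_rate - measured_rate) / (true_rate * measured_rate)
print(f"estimated dead time = {tau_est * 1e6:.2f} us "
      f"(true {tau_true * 1e6:.1f} us)")
```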

  17. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high purity germanium (HPGe) detector equipped with a low background shielding system, and has been evaluated numerically using summing peaks. It is found that the FEP efficiencies for 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts in the summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve, to within an accuracy of 3%.
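
    The correction itself is a single multiplicative step; the sketch below applies a hypothetical TSCF to an observed peak area before converting to activity (all values are placeholders, not the paper's data).

```python
# True-summing-coincidence correction: the observed full-energy peak is
# depressed because cascade gammas detected together migrate out of the
# peak, so the area is scaled up by the simulated factor.
peak_area = 15_230.0        # observed FEP counts (placeholder)
tscf = 1.12                 # simulated correction factor (placeholder)
emission_prob = 0.2858      # gamma emission probability (placeholder)
fep_efficiency = 0.0121     # FEP efficiency from calibration (placeholder)
live_time = 3600.0          # acquisition live time, s

activity = peak_area * tscf / (fep_efficiency * emission_prob * live_time)
print(f"activity = {activity:.0f} Bq")
```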

  18. Real Time Coincidence Detection Engine for High Count Rate Timestamp Based PET

    NASA Astrophysics Data System (ADS)

    Tetrault, M.-A.; Oliver, J. F.; Bergeron, M.; Lecomte, R.; Fontaine, R.

    2010-02-01

    Coincidence engines follow two main implementation approaches: timestamp based systems and AND-gate based systems. The latter have been more widespread in recent years because of their lower cost and high efficiency. However, they are highly dependent on the selected electronic components, they have limited flexibility once assembled, and they are customized to fit a specific scanner's geometry. Timestamp based systems have been gathering more attention lately, especially with high channel count fully digital systems. These new systems must however cope with substantial singles count rates. One option is to record every detected event and postpone coincidence detection offline. For daily use systems, a real time engine is preferable because it dramatically reduces data volume and hence image preprocessing time and raw data management. This paper presents the timestamp based coincidence engine for the LabPET™, a small animal PET scanner with up to 4608 individual readout avalanche photodiode channels. The engine can handle up to 100 million single events per second and has extensive flexibility because it resides in programmable logic devices. It can be adapted to any detector geometry or channel count, can be ported to newer, faster programmable devices, and can have extra modules added to take advantage of scanner-specific features. Finally, the user can select between a full processing mode for imaging protocols and a minimum processing mode to study different approaches for coincidence detection with offline software.

  19. Towards component-based validation of GATE: aspects of the coincidence processor.

    PubMed

    Moraes, Eder R; Poon, Jonathan K; Balakrishnan, Karthikayan; Wang, Wenli; Badawi, Ramsey D

    2015-02-01

    GATE is public domain software widely used for Monte Carlo simulation in emission tomography. Validations of GATE have primarily been performed on a whole-system basis, leaving the possibility that errors in one sub-system may be offset by errors in others. We assess the accuracy of the GATE PET coincidence generation sub-system in isolation, focusing on the options most closely modeling the majority of commercially available scanners. Independent coincidence generators were coded by teams at Toshiba Medical Research Unit (TMRU) and UC Davis. A model similar to the Siemens mCT scanner was created in GATE. Annihilation photons interacting with the detectors were recorded. Coincidences were generated using GATE, TMRU and UC Davis code and results compared to "ground truth" obtained from the history of the photon interactions. GATE was tested twice, once with every qualified single event opening a time window and initiating a coincidence check (the "multiple window method"), and once where a time window is opened and a coincidence check initiated only by the first single event to occur after the end of the prior time window (the "single window method"). True, scattered and random coincidences were compared. Noise equivalent count rates were also computed and compared. The TMRU and UC Davis coincidence generators agree well with ground truth. With GATE, reasonable accuracy can be obtained if the single window method option is chosen and random coincidences are estimated without use of the delayed coincidence option. However in this GATE version, other parameter combinations can result in significant errors.
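
    The two pairing policies can be mimicked on a plain timestamp list. The sketch below is an interpretation of the options described, not GATE code; both functions keep only unambiguous pairs, and the invented event list shows the two policies disagreeing on the same data.

```python
# Timestamps in seconds (invented). "Multiple window": every single
# opens its own window. "Single window": a new window opens only after
# the previous one closes.
def multiple_window(times, tau):
    pairs = []
    for i, t in enumerate(times):
        partners = [u for u in times[i + 1:] if u - t <= tau]
        if len(partners) == 1:              # keep unambiguous pairs only
            pairs.append((t, partners[0]))
    return pairs

def single_window(times, tau):
    pairs, window_end, i = [], -1.0, 0
    while i < len(times):
        if times[i] < window_end:           # still inside an open window
            i += 1
            continue
        window_end = times[i] + tau         # this single opens the window
        partners = [u for u in times[i + 1:] if u <= window_end]
        if len(partners) == 1:
            pairs.append((times[i], partners[0]))
        i += 1
    return pairs

events = [0.0, 2e-9, 3e-9, 10e-9, 11e-9]
print(len(multiple_window(events, 4e-9)),   # -> 2 accepted pairs
      len(single_window(events, 4e-9)))     # -> 1 accepted pair
```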

  19. Gamma-gamma coincidence performance of LaBr3:Ce scintillation detectors vs HPGe detectors in high count-rate scenarios

    DOE PAGES

    Drescher, A.; Yoho, M.; Landsberger, S.; ...

    2017-01-15

    In this study, a radiation detection system consisting of two cerium-doped lanthanum bromide (LaBr3:Ce) scintillation detectors in a gamma-gamma coincidence configuration has been used to demonstrate the advantages that coincident detection provides relative to a single detector, and the advantages that LaBr3:Ce detectors provide relative to high-purity germanium (HPGe) detectors. Signal-to-noise ratios of select photopeak pairs for these detectors have been compared to HPGe detectors in both single and coincident detector configurations in order to quantify the performance of each detector configuration. The efficiency and energy resolution of LaBr3:Ce detectors have been determined and compared to HPGe detectors. Coincident gamma-ray pairs from the radionuclides 152Eu and 133Ba have been identified in a sample that is dominated by 137Cs. Gamma-gamma coincidence successfully reduced the Compton continuum from the large 137Cs peak, revealed several coincident gamma energies characteristic of these nuclides, and improved the signal-to-noise ratio relative to single detector measurements. LaBr3:Ce detectors performed at count rates multiple times higher than can be achieved with HPGe detectors. The standard background spectrum consisting of peaks associated with transitions within the LaBr3:Ce crystal has also been significantly reduced. Finally, it is shown that LaBr3:Ce detectors have the unique capability to perform gamma-gamma coincidence measurements in very high count-rate scenarios, which can potentially benefit nuclear safeguards in situ measurements of spent nuclear fuel.

  1. Gamma-gamma coincidence performance of LaBr3:Ce scintillation detectors vs HPGe detectors in high count-rate scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drescher, A.; Yoho, M.; Landsberger, S.

    In this study, a radiation detection system consisting of two cerium-doped lanthanum bromide (LaBr3:Ce) scintillation detectors in a gamma-gamma coincidence configuration has been used to demonstrate the advantages that coincident detection provides relative to a single detector, and the advantages that LaBr3:Ce detectors provide relative to high-purity germanium (HPGe) detectors. Signal-to-noise ratios of select photopeak pairs for these detectors have been compared to HPGe detectors in both single and coincident detector configurations in order to quantify the performance of each detector configuration. The efficiency and energy resolution of LaBr3:Ce detectors have been determined and compared to HPGe detectors. Coincident gamma-ray pairs from the radionuclides 152Eu and 133Ba have been identified in a sample that is dominated by 137Cs. Gamma-gamma coincidence successfully reduced the Compton continuum from the large 137Cs peak, revealed several coincident gamma energies characteristic of these nuclides, and improved the signal-to-noise ratio relative to single detector measurements. LaBr3:Ce detectors performed at count rates multiple times higher than can be achieved with HPGe detectors. The standard background spectrum consisting of peaks associated with transitions within the LaBr3:Ce crystal has also been significantly reduced. Finally, it is shown that LaBr3:Ce detectors have the unique capability to perform gamma-gamma coincidence measurements in very high count rate scenarios, which can potentially benefit nuclear safeguards in situ measurements of spent nuclear fuel.

  2. Recovery and normalization of triple coincidences in PET.

    PubMed

    Lage, Eduardo; Parot, Vicente; Moore, Stephen C; Sitek, Arkadiusz; Udías, Jose M; Dave, Shivang R; Park, Mi-Ae; Vaquero, Juan J; Herraiz, Joaquin L

    2015-03-01

    Triple coincidences in positron emission tomography (PET) are events in which three γ-rays are detected simultaneously. These events, though potentially useful for enhancing the sensitivity of PET scanners, are discarded or processed without special consideration in current systems, because there is no clear criterion for assigning them to a unique line-of-response (LOR). Methods proposed for recovering such events usually rely on the use of highly specialized detection systems, hampering general adoption, and/or are based on Compton-scatter kinematics and, consequently, are limited in accuracy by the energy resolution of standard PET detectors. In this work, the authors propose a simple and general solution for recovering triple coincidences, which does not require specialized detectors or additional energy resolution requirements. To recover triple coincidences, the authors' method distributes such events among their possible LORs using the relative proportions of double coincidences in these LORs. The authors show analytically that this assignment scheme represents the maximum-likelihood solution for the triple-coincidence distribution problem. The PET component of a preclinical PET/CT scanner was adapted to enable the acquisition and processing of triple coincidences. Since the efficiencies for detecting double and triple events were found to be different throughout the scanner field-of-view, a normalization procedure specific for triple coincidences was also developed. The effect of including triple coincidences using their method was compared against the cases of equally weighting the triples among their possible LORs and discarding all the triple events. As figures of merit for this comparison, the authors used sensitivity, noise-equivalent count (NEC) rates, and image quality calculated as described in the NEMA NU-4 protocol for the assessment of preclinical PET scanners. The addition of triple-coincidence events with the authors' method increased peak NEC rates of the scanner by 26.6% and 32% for mouse- and rat-sized objects, respectively. This increase in NEC-rate performance was also reflected in the image-quality metrics. Images reconstructed using double and triple coincidences recovered using their method had better signal-to-noise ratio than those obtained using only double coincidences, while preserving spatial resolution and contrast. Distribution of triple coincidences using an equal-weighting scheme increased apparent system sensitivity but degraded image quality. The performance boost provided by the inclusion of triple coincidences using their method allowed the acquisition time of standard imaging procedures to be reduced by up to ∼25%. Recovering triple coincidences with the proposed method can effectively increase the sensitivity of current clinical and preclinical PET systems without compromising other parameters like spatial resolution or contrast.
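
    The assignment rule itself is simple; below is a minimal sketch, assuming only a table of double-coincidence counts per LOR (the names and the equal-weighting fallback are illustrative, not the authors' implementation):

        def distribute_triple(candidate_lors, doubles_counts):
            """Weight one triple coincidence across its candidate LORs in
            proportion to the doubles already recorded on those LORs."""
            total = sum(doubles_counts.get(lor, 0) for lor in candidate_lors)
            if total == 0:
                # no prior doubles: fall back to equal weighting
                return {lor: 1.0 / len(candidate_lors) for lor in candidate_lors}
            return {lor: doubles_counts.get(lor, 0) / total
                    for lor in candidate_lors}

        print(distribute_triple(['lor_a', 'lor_b'], {'lor_a': 80, 'lor_b': 20}))
        # {'lor_a': 0.8, 'lor_b': 0.2}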

  3. Evaluation of methods for the assay of radium-228 in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noyce, J.R.

    1981-02-01

    The technical literature from 1967 to May 1980 was searched for methods for assaying radium-228 in water. These methods were evaluated for their suitability as potential EPA reference methods for drinking water assays. The authors suggest the present EPA reference method (Krieger, 1976) be retained but improved, and a second method (McCurdy and Mellor, 1979), which employs beta-gamma coincidence counting, be added. Included in this report is a table that lists the principal features of 17 methods for radium-228 assays.

  4. Cosmic ray neutron background reduction using localized coincidence veto neutron counting

    DOEpatents

    Menlove, Howard O.; Bourret, Steven C.; Krick, Merlyn S.

    2002-01-01

    This invention relates to both the apparatus and method for increasing the sensitivity of measuring the amount of radioactive material in waste by reducing the interference caused by cosmic-ray-generated neutrons. The apparatus includes: (a) a plurality of neutron detectors, each of the detectors including means for generating a pulse in response to the detection of a neutron; and (b) means, coupled to each of the neutron detectors, for counting only some of the pulses from each of the detectors, whether cosmic-ray or fission generated. The means for counting includes a means that, after counting one of the pulses, vetoes the counting of additional pulses for a prescribed period of time. The prescribed period of time is between 50 and 200 μs. In the preferred embodiment the prescribed period of time is 128 μs. The veto means can be an electronic circuit which includes a leading-edge pulse generator which passes a pulse but blocks any subsequent pulse for a period of between 50 and 200 μs. Alternately, the veto means is a software program which includes means for tagging each of the pulses from each of the detectors for both time and position, means for counting one of the pulses from a particular position, and means for rejecting those of the pulses which originate from the particular position and in a time interval on the order of the neutron die-away time in polyethylene or other shield material. The neutron detectors are grouped in pods, preferably at least 10. The apparatus also includes means for vetoing the counting of coincidence pulses from all of the detectors included in each of the pods which are adjacent to the pod which includes the detector which produced the pulse which was counted.
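
    A software version of the leading-edge veto described in this patent is straightforward; the sketch below assumes time-sorted pulses from a single position (pod) and uses the 128 μs value of the preferred embodiment:

        def count_with_veto(pulse_times, veto=128e-6):
            """Count pulses, rejecting any pulse arriving within `veto`
            seconds of the last counted pulse (leading-edge veto)."""
            counted, last = 0, None
            for t in sorted(pulse_times):
                if last is None or t - last >= veto:
                    counted += 1
                    last = t
            return counted

        print(count_with_veto([0.0, 50e-6, 200e-6, 500e-6]))  # 3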

  5. Characterization of 176Lu background in LSO-based PET scanners

    NASA Astrophysics Data System (ADS)

    Conti, Maurizio; Eriksson, Lars; Rothfuss, Harold; Sjoeholm, Therese; Townsend, David; Rosenqvist, Göran; Carlier, Thomas

    2017-05-01

    LSO and LYSO are today the most common scintillators used in positron emission tomography. Lutetium contains traces of 176Lu, a radioactive isotope that decays by β− emission with a cascade of γ photons in coincidence. Therefore, lutetium-based scintillators are characterized by a small natural radiation background. In this paper, we investigate and characterize the 176Lu radiation background via experiments performed on LSO-based PET scanners. The LSO background was measured at different energy windows and different coincidence time windows, and by using shields to alter the original spectrum. The effect of radiation background in particularly count-starved applications, such as 90Y imaging, is analysed and discussed. Depending on the size of the PET scanner, between 500 and 1000 total random counts per second and between 3 and 5 total true coincidences per second were measured in standard coincidence mode. The LSO background counts in a Siemens mCT in the standard PET energy and time windows are in general negligible in terms of trues, and are comparable to those measured in a BGO scanner of similar size.
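
    For the randoms component of such a background, the usual accidental-coincidence estimate applies; a quick sketch (the singles rates and window below are purely illustrative, not the measured mCT values):

        def accidental_rate(r1, r2, tau):
            """Expected random-coincidence rate between two independent
            singles streams: R = 2 * tau * r1 * r2."""
            return 2.0 * tau * r1 * r2

        # two detector halves each seeing 50 kcps of intrinsic 176Lu
        # singles, with a 4.1 ns coincidence window (illustrative):
        print(accidental_rate(5e4, 5e4, 4.1e-9))   # ~20 cps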

  6. Recovery and normalization of triple coincidences in PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lage, Eduardo, E-mail: elage@mit.edu; Parot, Vicente; Dave, Shivang R.

    2015-03-15

    Purpose: Triple coincidences in positron emission tomography (PET) are events in which three γ-rays are detected simultaneously. These events, though potentially useful for enhancing the sensitivity of PET scanners, are discarded or processed without special consideration in current systems, because there is no clear criterion for assigning them to a unique line-of-response (LOR). Methods proposed for recovering such events usually rely on the use of highly specialized detection systems, hampering general adoption, and/or are based on Compton-scatter kinematics and, consequently, are limited in accuracy by the energy resolution of standard PET detectors. In this work, the authors propose a simple and general solution for recovering triple coincidences, which does not require specialized detectors or additional energy resolution requirements. Methods: To recover triple coincidences, the authors’ method distributes such events among their possible LORs using the relative proportions of double coincidences in these LORs. The authors show analytically that this assignment scheme represents the maximum-likelihood solution for the triple-coincidence distribution problem. The PET component of a preclinical PET/CT scanner was adapted to enable the acquisition and processing of triple coincidences. Since the efficiencies for detecting double and triple events were found to be different throughout the scanner field-of-view, a normalization procedure specific for triple coincidences was also developed. The effect of including triple coincidences using their method was compared against the cases of equally weighting the triples among their possible LORs and discarding all the triple events. As figures of merit for this comparison, the authors used sensitivity, noise-equivalent count (NEC) rates, and image quality calculated as described in the NEMA NU-4 protocol for the assessment of preclinical PET scanners. Results: The addition of triple-coincidence events with the authors’ method increased peak NEC rates of the scanner by 26.6% and 32% for mouse- and rat-sized objects, respectively. This increase in NEC-rate performance was also reflected in the image-quality metrics. Images reconstructed using double and triple coincidences recovered using their method had better signal-to-noise ratio than those obtained using only double coincidences, while preserving spatial resolution and contrast. Distribution of triple coincidences using an equal-weighting scheme increased apparent system sensitivity but degraded image quality. The performance boost provided by the inclusion of triple coincidences using their method allowed the acquisition time of standard imaging procedures to be reduced by up to ∼25%. Conclusions: Recovering triple coincidences with the proposed method can effectively increase the sensitivity of current clinical and preclinical PET systems without compromising other parameters like spatial resolution or contrast.

  7. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting method(s), conventional 228Ra counting methods have been compared with some promising techniques which are currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~$30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~$12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting of the 224Ra great-granddaughter, 228Ra could be back-calculated, thereby yielding a method with high efficiency, where no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
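
    The comparison metrics at the end of the abstract can be reproduced from efficiency and background alone; a sketch using one common convention (the efficiency-squared-over-background figure of merit and the Currie detection limit are assumptions about which formulas were used):

        import math

        def figure_of_merit(efficiency, bkg_cpm):
            """A common counter figure of merit: efficiency^2 / background."""
            return efficiency**2 / bkg_cpm

        def detection_limit_counts(bkg_cpm, t_min):
            """Currie detection limit in counts for a paired blank count."""
            return 2.71 + 4.65 * math.sqrt(bkg_cpm * t_min)

        # alpha spectrometry vs high-resolution gamma spectrometry, using
        # the efficiencies and backgrounds quoted above:
        print(figure_of_merit(0.06, 0.0015), figure_of_merit(0.048, 0.16))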

  8. Quantitative non-destructive assay of PuBe neutron sources

    NASA Astrophysics Data System (ADS)

    Lakosi, László; Bagi, János; Nguyen, Cong Tam

    2006-02-01

    PuBe neutron sources were assayed using a combination of high-resolution γ-spectrometry (HRGS) and a neutron correlation technique. In a previous publication [J. Bagi, C. Tam Nguyen, L. Lakosi, Nucl. Instr. and Meth. B 222 (2004) 242] a passive neutron well-counter was reported with 3He tubes embedded in a polyamide (TERRAMID) moderator (lined inside with Cd) surrounding the sources to be measured. Gross and coincidence neutron counting was performed, and the Pu content of the sources was determined from isotope analysis and by adopting specific (α, n) reaction yields of the Pu isotopes and 241Am in Be, based on the supplier's information and literature data. The method was further developed and refined. The evaluation algorithm was worked out more precisely. The contribution of secondary (correlated) neutrons to the total neutron output was derived from the coincidence (doubles) count rate and taken into account in assessing the Pu content. A new evaluation of the former results was performed. The assay was extended to other PuBe sources, and new results were added. In order to attain higher detection efficiency, a more efficient moderator was also applied, with and without Cd shielding around the assay chamber. Calibration seems possible using neutron measurements only (without γ-spectrometry), based on a correlation between the Pu amount and the coincidence-to-total ratio. It is expected that the method could be used for Pu accountancy and safeguards verification as well as identification and assay of seized, found, or undocumented PuBe neutron sources.
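
    The proposed neutron-only calibration reduces to interpolating a measured coincidence-to-totals ratio on a curve built from reference sources; a minimal sketch with made-up calibration points (none of these numbers come from the paper):

        import numpy as np

        # hypothetical calibration pairs from reference PuBe sources:
        ratio_cal = np.array([0.010, 0.020, 0.035, 0.050])  # doubles/totals
        pu_grams  = np.array([ 5.0,  12.0,  25.0,  40.0])

        def pu_from_ratio(ratio):
            """Interpolate Pu mass from the coincidence-to-totals ratio."""
            return float(np.interp(ratio, ratio_cal, pu_grams))

        print(pu_from_ratio(0.028))   # ~19 g for these made-up points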

  9. A two-fold reduction in measurement time for neutron assay: Initial tests of a prototype dual-gated shift register

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J.E.; Bourret, S.C.; Krick, M.S.

    1996-09-01

    Neutron coincidence counting (NCC) is used routinely around the world for nondestructive mass assay of uranium and plutonium in many forms, including waste. Compared with other methods, NCC is generally the most flexible, economic, and rapid. Many applications of NCC would benefit from a reduction in counting time required for a fixed random error. We have developed and tested the first prototype of a dual-gated, shift-register-based electronics unit that offers the potential of decreased measurement time for all passive and active NCC applications.

  10. A two-fold reduction in measurement time for neutron assay: Initial tests of a prototype dual-gated shift register

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J.E.; Bourret, S.C.; Krick, M.S.

    1996-12-31

    Neutron coincidence counting (NCC) is used routinely around the world for nondestructive mass assay of uranium and plutonium in many forms, including waste. Compared with other methods, NCC is generally the most flexible, economic, and rapid. Many applications of NCC would benefit from a reduction in counting time required for a fixed random error. The authors have developed and tested the first prototype of a dual-gated, shift-register-based electronics unit that offers the potential of decreased measurement time for all passive and active NCC applications.

  11. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurz, Christopher, E-mail: Christopher.Kurz@physik.uni-muenchen.de; Bauer, Julia; Conti, Maurizio

    Purpose: External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β+-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the used PET/CT device and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. Methods: The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patientlike activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated listmode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and to additionally investigate the impact of scatter coincidences. Eventually, the recommended reconstruction scheme has been applied to exemplary postirradiation patient data-sets. Results: Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80 000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified in the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient data-sets results in 25%-50% reduced image noise at a comparable activity quantification accuracy and an improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Conclusions: Under the poor statistical conditions in PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying fewer iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. The retrieved results might also impact other emerging PET applications at low counting statistics.

  12. Towards component-based validation of GATE: aspects of the coincidence processor

    PubMed Central

    Moraes, Eder R.; Poon, Jonathan K.; Balakrishnan, Karthikayan; Wang, Wenli; Badawi, Ramsey D.

    2014-01-01

    GATE is public domain software widely used for Monte Carlo simulation in emission tomography. Validations of GATE have primarily been performed on a whole-system basis, leaving the possibility that errors in one sub-system may be offset by errors in others. We assess the accuracy of the GATE PET coincidence generation sub-system in isolation, focusing on the options most closely modeling the majority of commercially available scanners. Independent coincidence generators were coded by teams at Toshiba Medical Research Unit (TMRU) and UC Davis. A model similar to the Siemens mCT scanner was created in GATE. Annihilation photons interacting with the detectors were recorded. Coincidences were generated using GATE, TMRU and UC Davis code and results compared to “ground truth” obtained from the history of the photon interactions. GATE was tested twice, once with every qualified single event opening a time window and initiating a coincidence check (the “multiple window method”), and once where a time window is opened and a coincidence check initiated only by the first single event to occur after the end of the prior time window (the “single window method”). True, scattered and random coincidences were compared. Noise equivalent count rates were also computed and compared. The TMRU and UC Davis coincidence generators agree well with ground truth. With GATE, reasonable accuracy can be obtained if the single window method option is chosen and random coincidences are estimated without use of the delayed coincidence option. However in this GATE version, other parameter combinations can result in significant errors. PMID:25240897

  13. A PET reconstruction formulation that enforces non-negativity in projection space for bias reduction in Y-90 imaging

    NASA Astrophysics Data System (ADS)

    Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.

    2018-02-01

    Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low positron production probability and the high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung-to-liver slices of the XCAT phantom. We simulated low true coincidence count rates with high random fractions, corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new method with standard reconstruction algorithms, NEG-ML, and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
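
    To make the formulation concrete, here is a toy ADMM in Python for a least-squares stand-in (the paper retains the Poisson log-likelihood; the quadratic data term, penalty parameter, and iteration count here are simplifying assumptions). Voxel values may go negative while the projections Ax are constrained to be nonnegative:

        import numpy as np

        def admm_nonneg_projections(A, y, rho=1.0, iters=200):
            """Toy ADMM for: minimize ||A x - y||^2  subject to  A x >= 0."""
            m, n = A.shape
            x, z, u = np.zeros(n), np.zeros(m), np.zeros(m)
            AtA = A.T @ A
            H = np.linalg.pinv((2.0 + rho) * AtA)   # x-update normal matrix
            for _ in range(iters):
                x = H @ (A.T @ (2.0 * y + rho * (z - u)))
                z = np.maximum(0.0, A @ x + u)      # project z = Ax onto >= 0
                u += A @ x - z                      # scaled dual update
            return x

        A = np.array([[1.0, 1.0], [1.0, -1.0], [0.5, 2.0]])
        y = np.array([1.0, -0.2, 2.0])
        print(admm_nonneg_projections(A, y))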

  14. Active Well Counting Using New PSD Plastic Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis

    This report presents results and analysis from a series of proof-of-concept measurements to assess the suitability of segmented detectors constructed from Eljen EJ-299-34 PSD-plastic scintillator with pulse-shape discrimination capability for the purposes of quantifying uranium via active neutron coincidence counting. Present quantification of bulk uranium materials for international safeguards and domestic materials control and accounting relies on active neutron coincidence counting systems, such as the Active Well Coincidence Counter (AWCC) and the Uranium Neutron Coincidence Collar (UNCL), that use moderated He-3 proportional counters along with necessarily low-intensity 241Am(Li) neutron sources. Scintillation-based fast-neutron detectors are a potentially superior technology to the existing AWCC and UNCL designs due to their spectroscopic capability and their inherently short neutron coincidence times, which largely eliminate random coincidences and enable interrogation by stronger sources. One of the past impediments to the investigation and adoption of scintillation counters for the purpose of quantifying bulk uranium was that scintillators with the necessary neutron-gamma pulse-shape discrimination properties were commercially available only as flammable liquids. Recently, Eljen EJ-299-34 PSD-plastic scintillator became commercially available. The present work is the first assessment of an array of PSD-plastic detectors for the purposes of quantifying bulk uranium. The detector panel used in the present work was originally built as the focal plane for a fast-neutron imager, but it was repurposed for the present investigation by construction of a stand to support the inner well of an AWCC immediately in front of the detector panel. The detector panel and data acquisition of this system are particularly well suited for performing active-well fast-neutron counting of LEU and HEU samples because the active detector volume is solid, the 241Am(Li) interrogating neutrons are largely below the detector threshold, and the segmented construction of the detector modules allows for separation of true neutron-neutron coincidences from inter-detector scattering using the kinematics of neutron scattering. The results from a series of measurements of a suite of uranium standards are presented and compared to measurements of the same standards and source configurations using the AWCC. Using these results, the performance of the segmented detectors reconfigured as a well counter is predicted and shown to outperform the AWCC.

  15. Roles for Coincidence Detection in Coding Amplitude-Modulated Sounds

    PubMed Central

    Ashida, Go; Kretzberg, Jutta; Tollin, Daniel J.

    2016-01-01

    Many sensory neurons encode temporal information by detecting coincident arrivals of synaptic inputs. In the mammalian auditory brainstem, binaural neurons of the medial superior olive (MSO) are known to act as coincidence detectors, whereas in the lateral superior olive (LSO) roles of coincidence detection have remained unclear. LSO neurons receive excitatory and inhibitory inputs driven by ipsilateral and contralateral acoustic stimuli, respectively, and vary their output spike rates according to interaural level differences. In addition, LSO neurons are also sensitive to binaural phase differences of low-frequency tones and envelopes of amplitude-modulated (AM) sounds. Previous physiological recordings in vivo found considerable variations in monaural AM-tuning across neurons. To investigate the underlying mechanisms of the observed temporal tuning properties of LSO and their sources of variability, we used a simple coincidence counting model and examined how specific parameters of coincidence detection affect monaural and binaural AM coding. Spike rates and phase-locking of evoked excitatory and spontaneous inhibitory inputs had only minor effects on LSO output to monaural AM inputs. In contrast, the coincidence threshold of the model neuron affected both the overall spike rates and the half-peak positions of the AM-tuning curve, whereas the width of the coincidence window merely influenced the output spike rates. The duration of the refractory period affected only the low-frequency portion of the monaural AM-tuning curve. Unlike monaural AM coding, temporal factors, such as the coincidence window and the effective duration of inhibition, played a major role in determining the trough positions of simulated binaural phase-response curves. In addition, empirically-observed level-dependence of binaural phase-coding was reproduced in the framework of our minimalistic coincidence counting model. These modeling results suggest that coincidence detection of excitatory and inhibitory synaptic inputs is essential for LSO neurons to encode both monaural and binaural AM sounds. PMID:27322612

  16. Coincidence detection in the medial superior olive: mechanistic implications of an analysis of input spiking patterns

    PubMed Central

    Franken, Tom P.; Bremen, Peter; Joris, Philip X.

    2014-01-01

    Coincidence detection by binaural neurons in the medial superior olive underlies sensitivity to interaural time difference (ITD) and interaural correlation (ρ). It is unclear whether this process is akin to a counting of individual coinciding spikes, or rather to a correlation of membrane potential waveforms resulting from converging inputs from each side. We analyzed spike trains of axons of the cat trapezoid body (TB) and auditory nerve (AN) in a binaural coincidence scheme. ITD was studied by delaying “ipsi-” vs. “contralateral” inputs; ρ was studied by using responses to different noises. We varied the number of inputs, the monaural and binaural thresholds, and the coincidence window duration. We examined the physiological plausibility of the output “spike trains” by comparing their rate and tuning to ITD and ρ with those of binaural cells. We found that multiple inputs are required to obtain a plausible output spike rate. In contrast to previous suggestions, the monaural threshold almost invariably needed to exceed the binaural threshold. Elevation of the binaural threshold to values larger than 2 spikes caused a drastic decrease in rate for a short coincidence window. Longer coincidence windows allowed a lower number of inputs and higher binaural thresholds, but decreased the depth of modulation. Compared to AN fibers, TB fibers allowed higher output spike rates for a low number of inputs, but also generated more monaural coincidences. We conclude that, within the parameter space explored, the temporal patterns of monaural fibers require convergence of multiple inputs to achieve physiological binaural spike rates; that monaural coincidences have to be suppressed relative to binaural ones; and that the neuron has to be sensitive to single binaural coincidences of spikes, for a number of excitatory inputs per side of 10 or less. These findings suggest that the fundamental operation in the mammalian binaural circuit is coincidence counting of single binaural input spikes. PMID:24822037
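
    Both this study and the LSO model above rest on the same primitive: counting merged input spikes inside a sliding window against a threshold. A stripped-down sketch (no inhibition, a single threshold, illustrative parameters; not the authors' model code):

        import numpy as np

        def coincidence_output(input_times, window, threshold, refractory):
            """Output spike times of a counting coincidence detector: fire
            when >= `threshold` merged input spikes fall within `window`
            seconds, then stay silent for `refractory` seconds."""
            times = np.sort(np.asarray(input_times))
            out, last = [], -np.inf
            for t in times:
                n = np.count_nonzero((times > t - window) & (times <= t))
                if n >= threshold and t - last > refractory:
                    out.append(t)
                    last = t
            return np.array(out)

        spikes = [0.0010, 0.0012, 0.0013, 0.0100, 0.0200, 0.0201]
        print(coincidence_output(spikes, 0.0005, 2, 0.001))  # [0.0012 0.0201]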

  17. ASNC upgrade for nuclear material accountancy of ACPF

    NASA Astrophysics Data System (ADS)

    Seo, Hee; Ahn, Seong-Kyu; Lee, Chaehun; Oh, Jong-Myeong; Yoon, Seonkwang

    2018-02-01

    A safeguards neutron coincidence counter for nuclear material accountancy of the Advanced spent-fuel Conditioning Process Facility (ACPF), known as the ACP Safeguards Neutron Counter (ASNC), was upgraded to improve its remote-handling and maintenance capabilities. Based on the results of the previous design study, the neutron counter was completely rebuilt, and various detector parameters for neutron coincidence counting (i.e., high-voltage plateau, efficiency profile, dead time, die-away time, gate length, doubles gate fraction, and stability) were experimentally determined. The measurement data showed good agreement with the MCNP simulation results. To the best of the authors' knowledge, the ASNC is the only safeguards neutron coincidence counter in the world that is installed and operated in a hot-cell. The final goals to be achieved were (1) to evaluate the uncertainty level of the ASNC in nuclear material accountancy of the process materials of the oxide-reduction process for spent fuels and (2) to evaluate the applicability of the neutron coincidence counting technique within a strong radiation field (e.g., in a hot-cell environment).
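
    Of the listed parameters, the doubles gate fraction has a simple closed form under the usual single-exponential die-away assumption; a sketch (the numerical values are typical well-counter figures, not the ASNC's):

        import math

        def doubles_gate_fraction(predelay, gate, die_away):
            """f_d = exp(-P/tau) * (1 - exp(-G/tau)) for predelay P,
            gate length G and die-away time tau (same units)."""
            return math.exp(-predelay / die_away) * \
                   (1.0 - math.exp(-gate / die_away))

        # e.g. P = 4.5 us, G = 64 us, tau = 50 us:
        print(doubles_gate_fraction(4.5, 64.0, 50.0))   # ~0.66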

  18. Analysis of counting errors in the phase/Doppler particle analyzer

    NASA Technical Reports Server (NTRS)

    Oldenburg, John R.

    1987-01-01

    NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of the counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occurring during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
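
    The coincidence-loss estimate described here follows directly from Poisson statistics; a sketch, with the droplet signal time as the only assumed parameter:

        import math

        def coincidence_loss(rate_per_s, signal_time_s):
            """Probability that another Poisson-distributed event arrives
            during one droplet's signal time (i.e. a coincidence)."""
            return 1.0 - math.exp(-rate_per_s * signal_time_s)

        # e.g. 2000 samples/sec and a 20 us signal time (illustrative):
        print(f"{100 * coincidence_loss(2000.0, 20e-6):.1f}% loss")  # ~3.9%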

  19. Scatter Fraction, Count Rates, and Noise Equivalent Count Rate of a Single-Bed Position RPC TOF-PET System Assessed by Simulations Following the NEMA NU2-2001 Standards

    NASA Astrophysics Data System (ADS)

    Couceiro, Miguel; Crespo, Paulo; Marques, Rui F.; Fonte, Paulo

    2014-06-01

    Scatter Fraction (SF) and Noise Equivalent Count Rate (NECR) of a 2400 mm wide axial field-of-view Positron Emission Tomography (PET) system based on Resistive Plate Chamber (RPC) detectors with 300 ps Time Of Flight (TOF) resolution were studied by simulation using Geant4. The study followed the NEMA NU2-2001 standards, using the standard 700 mm long phantom and an axially extended one with 1800 mm, modeling the foreseeable use of this PET system. Data was processed based on the actual RPC readout, which requires a 0.2 μs non-paralyzable dead time for timing signals and a paralyzable dead time (τps) for position signals. For NECR, the best coincidence trigger consisted of a multiple time window coincidence sorter retaining single coincidence pairs (involving only two photons) and all possible coincidence pairs obtained from multiple coincidences, keeping only those for which the direct TOF-reconstructed point falls inside a tight region surrounding the phantom. For the 700 mm phantom, the SF was 51.8% and, with τps = 3.0 μs, the peak NECR was 167 kcps at 7.6 kBq/cm3. Using τps = 1.0 μs, the NECR was 349 kcps at 7.6 kBq/cm3, and no peak was found. For the 1800 mm phantom, the SF was slightly higher, and the NECR curves were identical to those obtained with the standard phantom, but shifted to lower activity concentrations. Despite the higher SF, the NECR values obtained allow the conclusion that the proposed scanner is expected to outperform current commercial PET systems.
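
    The TOF-based acceptance test in that trigger can be written down compactly; the sketch below reconstructs the direct TOF point on a LOR and keeps the pair only if the point falls inside a cylinder around the scanner axis (the geometry conventions and radius are assumptions, not the paper's exact cut):

        import numpy as np

        C = 299792458.0   # speed of light, m/s

        def tof_point(p1, p2, dt):
            """Annihilation point on the LOR from dt = t1 - t2 (s);
            dt < 0 (earlier arrival at p1) shifts the point towards p1."""
            p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
            direction = (p2 - p1) / np.linalg.norm(p2 - p1)
            return 0.5 * (p1 + p2) + 0.5 * C * dt * direction

        def accept(p1, p2, dt, radius):
            """Keep the pair if the TOF point lies within `radius` of the
            scanner axis (taken here as the z axis)."""
            x, y, _ = tof_point(p1, p2, dt)
            return bool(np.hypot(x, y) <= radius)

        print(accept([-0.4, 0, 0], [0.4, 0, 0], 0.3e-9, 0.2))  # True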

  20. Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H.

    2015-12-01

    This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: the effective 240Pu mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and the leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); an introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
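
    The step from measured multiplicity distributions to singles, doubles, and triples uses the reduced factorial moments of the (R+A) and A gate histograms; a sketch of the commonly quoted point-model relations (dead-time and gate-fraction corrections omitted, histograms made up):

        def factorial_moments(p):
            """First and second reduced factorial moments of a normalized
            multiplicity distribution p[n]."""
            m1 = sum(n * pn for n, pn in enumerate(p))
            m2 = sum(n * (n - 1) / 2 * pn for n, pn in enumerate(p))
            return m1, m2

        def singles_doubles_triples(trigger_rate, p_ra, p_a):
            """Point-model S, D, T from the (R+A) and A gate multiplicity
            distributions."""
            m1_ra, m2_ra = factorial_moments(p_ra)
            m1_a,  m2_a  = factorial_moments(p_a)
            S = trigger_rate
            D = S * (m1_ra - m1_a)
            T = S * ((m2_ra - m2_a) - m1_a * (m1_ra - m1_a))
            return S, D, T

        p_ra = [0.60, 0.25, 0.10, 0.05]   # illustrative (R+A) gate histogram
        p_a  = [0.70, 0.20, 0.08, 0.02]   # illustrative A gate histogram
        print(singles_doubles_triples(1000.0, p_ra, p_a))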

  1. Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.

    PubMed

    Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S

    2001-04-01

    Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design coupled with conventional detector electronics and the traditional Anger-positioning algorithm hinders higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, and dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominant occurrence, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to increase the achievable counting rates several times beyond those demonstrated here. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging, and the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.
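
    The pileup-recovery idea can be caricatured in a few lines: treat each raw gated integral as the new event plus the decayed remnant of earlier events, and subtract that remnant. The exponential-tail model and time constant below are stand-ins; the actual HYPER electronics performs remnant estimation in hardware and its details differ:

        import math

        def recover_energies(times, raw_integrals, tau=0.25e-6):
            """Subtract from each raw pulse integral the estimated remnant
            of all preceding pulses, assuming exponential light decay."""
            energies = []
            for t, q in zip(times, raw_integrals):
                remnant = sum(e * math.exp(-(t - tp) / tau)
                              for tp, e in zip(times[:len(energies)], energies))
                energies.append(q - remnant)
            return energies

        print(recover_energies([0.0, 0.3e-6], [100.0, 80.0]))  # ~[100, 49.9]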

  2. Analysis of historical delta values for IAEA/LANL NDA training courses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William; Santi, Peter; Swinhoe, Martyn

    2009-01-01

    The Los Alamos National Laboratory (LANL) supports the International Atomic Energy Agency (IAEA) by providing training for IAEA inspectors in neutron and gamma-ray Nondestructive Assay (NDA) of nuclear material. Since 1980, all new IAEA inspectors attend this two-week course at LANL, gaining hands-on experience in the application of NDA techniques, procedures, and analysis to measure plutonium and uranium nuclear material standards with well known pedigrees. As part of the course the inspectors conduct an inventory verification exercise. This exercise provides inspectors the opportunity to test their abilities in performing verification measurements using the various NDA techniques. For an inspector, the verification of an item is nominally based on whether the measured assay value agrees with the declared value to within three times the historical delta value. The historical delta value represents the average difference between measured and declared values from previous measurements taken on similar material with the same measurement technology. If the measurement falls outside a limit of three times the historical delta value, the declaration is not verified. This paper uses measurement data from five years of IAEA courses to calculate a historical delta for five non-destructive assay methods: Gamma-ray Enrichment, Gamma-ray Plutonium Isotopics, Passive Neutron Coincidence Counting, Active Neutron Coincidence Counting and the Neutron Coincidence Collar. These historical deltas provide information as to the precision and accuracy of these measurement techniques under realistic conditions.
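
    The verification rule described here reduces to a three-delta comparison; a one-function sketch with made-up numbers:

        def verifies(measured, declared, hist_delta):
            """Pass if |measured - declared| is within 3x the historical
            delta for the technique and material type."""
            return abs(measured - declared) <= 3.0 * hist_delta

        print(verifies(measured=102.4, declared=100.0, hist_delta=1.1))  # True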

  3. Extension of the TDCR model to compute counting efficiencies for radionuclides with complex decay schemes.

    PubMed

    Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch

    2014-05-01

    The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
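
    For the simplest case (a symmetric three-PMT counter, no quenching, equal single-phototube probabilities), the TDCR relations can be sketched as below; real TDCR/MICELLE2 analyses compute the photoelectron statistics per decay branch, so this is only the skeleton of the method:

        import math

        def tdcr_efficiencies(m):
            """Triple and logical-sum double efficiencies for a symmetric
            3-PMT counter, for a mean of m photoelectrons per decay."""
            p = 1.0 - math.exp(-m / 3.0)      # single-PMT detection prob.
            eps_t = p**3
            eps_d = 3 * p**2 - 2 * p**3       # at least two of three fire
            return eps_t, eps_d

        def solve_m(tdcr_measured, lo=1e-3, hi=100.0):
            """Invert the measured T/D ratio for the free parameter m
            by bisection (T/D is monotonic in m)."""
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                t, d = tdcr_efficiencies(mid)
                if t / d < tdcr_measured:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        m = solve_m(0.977)
        _, eps_d = tdcr_efficiencies(m)
        print(m, eps_d)   # activity = double-coincidence rate / eps_d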

  4. Fourth-Order Spatial Correlation of Thermal Light

    NASA Astrophysics Data System (ADS)

    Wen, Feng; Zhang, Xun; Xue, Xin-Xin; Sun, Jia; Song, Jian-Ping; Zhang, Yan-Peng

    2014-11-01

    We investigate the fourth-order spatial correlation properties of pseudo-thermal light in the photon counting regime, and apply the Klyshko advanced-wave picture to describe the process of four-photon coincidence counting measurement. We derive the theory of a proof-of-principle four-photon coincidence counting configuration, and find that if the four randomly radiated photons come from the same radiation area and are indistinguishable in principle, their fourth-order correlation is 24 times larger than when the four photons come from different radiation areas. In addition, we also show that the higher-order spatial correlation function can be decomposed into multiple lower-order correlation functions, and that the contrast and visibility of low-order correlation peaks are less than those of higher orders, while the resolutions are all identical. This study may be useful for better understanding four-photon interference and multi-channel correlation imaging.

  5. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study.

    PubMed

    Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D

    2012-07-07

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.
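
    The variable window mentioned above follows from geometry: the true flight-time difference on a LOR of length L cannot exceed L/c, so the window need only cover that plus a timing-resolution margin. A sketch (the margin value is an assumption):

        import numpy as np

        C = 299792458.0   # speed of light, m/s

        def lor_window(p1, p2, margin=0.5e-9):
            """Per-LOR coincidence window: maximum flight-time difference
            L/c plus a timing-resolution margin (seconds)."""
            L = np.linalg.norm(np.asarray(p2, float) - np.asarray(p1, float))
            return L / C + margin

        # an 80 cm chord gives ~2.7 ns of geometric spread:
        print(lor_window([-0.4, 0, 0], [0.4, 0, 0]))   # ~3.2e-09 s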

  6. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study

    PubMed Central

    Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D

    2013-01-01

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners range from 15–22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 90 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25–31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration. PMID:22678106

  7. Optimal whole-body PET scanner configurations for different volumes of LSO scintillator: a simulation study

    NASA Astrophysics Data System (ADS)

    Poon, Jonathan K.; Dahlbom, Magnus L.; Moses, William W.; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R.; Badawi, Ramsey D.

    2012-07-01

    The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.

  8. Standardization of ¹³¹I: implementation of CIEMAT/NIST method at BARC, India.

    PubMed

    Kulkarni, D B; Anuradha, R; Reddy, P J; Joseph, Leena

    2011-10-01

    The CIEMAT/NIST efficiency tracing method using a ³H standard was implemented at the Radiation Safety Systems Division, Bhabha Atomic Research Centre (BARC) for the standardization of ¹³¹I radioactive solution. Measurements were also carried out using the 4π β-γ coincidence counting system maintained as a primary standard at the laboratory. The implementation of the CIEMAT/NIST method was verified by comparing the activity concentration obtained in the laboratory with the average value of the APMP intercomparison (Yunoki et al., in progress, APMP.RI(II)-K2.I-131). The results obtained by the laboratory are linked to the CIPM Key Comparison Reference Value (KCRV) through the equivalent activity value of the National Metrology Institute of Japan (NMIJ) (Yunoki et al., in progress, APMP.RI(II)-K2.I-131), which was the pilot laboratory for the intercomparison. The procedure employed to standardize ¹³¹I by the CIEMAT/NIST efficiency tracing technique is presented. The activity concentrations obtained have been normalized to the activity concentration measured by NMIJ to maintain confidentiality of results until the Draft-A report is accepted by all participants. The normalized activity concentration obtained with the CIEMAT/NIST method was 0.9985 ± 0.0035 kBq/g and that obtained with the 4π β-γ coincidence counting method was 0.9909 ± 0.0046 kBq/g, as of 20 March 2009, 0 h UTC. The normalized activity concentration measured by the NMIJ was 1 ± 0.0024 kBq/g. The normalized average of the activity concentrations of all the participating laboratories was 1.004 ± 0.028 kBq/g. The results obtained in the laboratory are comparable with the other international standards within the uncertainty limits. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Simple performance evaluation of pulsed spontaneous parametric down-conversion sources for quantum communications.

    PubMed

    Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle

    2011-01-17

    Fast characterization of pulsed spontaneous parametric down-conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in setting up SPDC sources and in the continuous verification of the quality of quantum communication links.
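
    The essence of the singles/coincidence analysis can be illustrated with the textbook relations for a heralded pair source (a sketch under our simplifying assumptions: low gain, no background, and all losses folded into per-channel efficiencies; not the authors' full model, which also includes the spectral filter):

      def characterize(n1, n2, c, rep_rate):
          # With pair probability p per pulse and channel efficiencies e1, e2:
          #   N1 = p*R*e1, N2 = p*R*e2, C = p*R*e1*e2   (R = pulse rate)
          e1 = c / n2                      # heralding efficiency, channel 1
          e2 = c / n1                      # heralding efficiency, channel 2
          p = (n1 * n2) / (c * rep_rate)   # pair emission probability per pulse
          return e1, e2, p

      # hypothetical count rates (Hz) at a 76 MHz pulse rate
      print(characterize(n1=120e3, n2=110e3, c=9e3, rep_rate=76e6))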

  10. Analysis of calibration data for the uranium active neutron coincidence counting collar with attention to errors in the measured neutron coincidence rate

    DOE PAGES

    Croft, Stephen; Burr, Thomas Lee; Favalli, Andrea; ...

    2015-12-10

    We report that the declared linear density of 238U and 235U in fresh low enriched uranium light water reactor fuel assemblies can be verified for nuclear safeguards purposes using a neutron coincidence counter collar in passive and active mode, respectively. The active mode calibration of the Uranium Neutron Collar – Light water reactor fuel (UNCL) instrument is normally performed using a non-linear fitting technique. The fitting technique relates the measured neutron coincidence rate (the predictor) to the linear density of 235U (the response) in order to estimate model parameters of the nonlinear Padé equation, which traditionally is used to model the calibration data. Alternatively, following a simple data transformation, the fitting can also be performed using standard linear fitting methods. This paper compares performance of the nonlinear technique to the linear technique, using a range of possible error variance magnitudes in the measured neutron coincidence rate. We develop the required formalism and then apply the traditional (nonlinear) and alternative approaches (linear) to the same experimental and corresponding simulated representative datasets. Lastly, we find that, in this context, because of the magnitude of the errors in the predictor, it is preferable not to transform to a linear model, and it is preferable not to adjust for the errors in the predictor when inferring the model parameters.
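
    The two fitting routes compared in the paper can be sketched as follows (our toy implementation with made-up data; for simplicity we fit the rate as a function of linear density, whereas the paper's error analysis treats the measured rate as the predictor). The Padé-style calibration curve R(m) = a·m/(1 + b·m) is either fit directly or linearized via 1/R = 1/(a·m) + b/a:

      import numpy as np
      from scipy.optimize import curve_fit

      def pade(m, a, b):
          return a * m / (1.0 + b * m)

      m = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # hypothetical 235U linear densities
      rate = pade(m, 40.0, 0.3) * (1 + 0.01 * np.random.randn(m.size))

      (a_nl, b_nl), _ = curve_fit(pade, m, rate, p0=(30.0, 0.1))  # nonlinear fit

      slope, intercept = np.polyfit(1.0 / m, 1.0 / rate, 1)       # linearized fit
      a_lin = 1.0 / slope
      b_lin = intercept * a_lin

    As the abstract notes, the transformation is not innocuous: measurement errors propagate nonlinearly through 1/R, which is one reason the authors prefer the untransformed fit.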

  11. Investigating the limits of PET/CT imaging at very low true count rates and high random fractions in ion-beam therapy monitoring.

    PubMed

    Kurz, Christopher; Bauer, Julia; Conti, Maurizio; Guérin, Laura; Eriksson, Lars; Parodi, Katia

    2015-07-01

    External beam radiotherapy with protons and heavier ions enables a tighter conformation of the applied dose to arbitrarily shaped tumor volumes with respect to photons, but is more sensitive to uncertainties in the radiotherapeutic treatment chain. Consequently, an independent verification of the applied treatment is highly desirable. For this purpose, the irradiation-induced β(+)-emitter distribution within the patient is detected shortly after irradiation by a commercial full-ring positron emission tomography/x-ray computed tomography (PET/CT) scanner installed next to the treatment rooms at the Heidelberg Ion-Beam Therapy Center (HIT). A major challenge to this approach is posed by the small number of detected coincidences. This contribution aims at characterizing the performance of the PET/CT device used and identifying the best-performing reconstruction algorithm under the particular statistical conditions of PET-based treatment monitoring. Moreover, this study addresses the impact of radiation background from the intrinsically radioactive lutetium-oxyorthosilicate (LSO)-based detectors at low counts. The authors have acquired 30 subsequent PET scans of a cylindrical phantom emulating a patient-like activity pattern and spanning the entire patient counting regime in terms of true coincidences and random fractions (RFs). Accuracy and precision of activity quantification, image noise, and geometrical fidelity of the scanner have been investigated for various reconstruction algorithms and settings in order to identify a practical, well-suited reconstruction scheme for PET-based treatment verification. Truncated list-mode data have been utilized for separating the effects of small true count numbers and high RFs on the reconstructed images. A corresponding simulation study enabled extending the results to an even wider range of counting statistics and additionally investigating the impact of scatter coincidences. Eventually, the recommended reconstruction scheme has been applied to exemplary post-irradiation patient datasets. Among the investigated reconstruction options, the overall best results in terms of image noise, activity quantification, and accurate geometrical recovery were achieved using the ordered subset expectation maximization reconstruction algorithm with time-of-flight (TOF) and point-spread function (PSF) information. For this algorithm, reasonably accurate (better than 5%) and precise (uncertainty of the mean activity below 10%) imaging can be provided down to 80,000 true coincidences at 96% RF. Image noise and geometrical fidelity are generally improved for fewer iterations. The main limitation for PET-based treatment monitoring has been identified as the small number of true coincidences, rather than the high intrinsic random background. Application of the optimized reconstruction scheme to patient datasets results in 25%-50% lower image noise at comparable activity quantification accuracy and improved geometrical performance with respect to the formerly used reconstruction scheme at HIT, adopted from nuclear medicine applications. Under the poor statistical conditions of PET-based treatment monitoring, improved results can be achieved by considering PSF and TOF information during image reconstruction and by applying fewer iterations than in conventional nuclear medicine imaging. Geometrical fidelity and image noise are mainly limited by the low number of true coincidences, not the high LSO-related random background. The retrieved results might also impact other emerging PET applications at low counting statistics.

  12. Fast coincidence counting with active inspection systems

    NASA Astrophysics Data System (ADS)

    Mullens, J. A.; Neal, J. S.; Hausladen, P. A.; Pozzi, S. A.; Mihalczo, J. T.

    2005-12-01

    This paper describes 2nd- and 3rd-order time coincidence distribution measurements with a GHz processor that synchronously samples 5 or 10 channels of data from radiation detectors near fissile material. On-line, time coincidence distributions are measured between detectors or between detectors and an external stimulating source. Detector-to-detector correlations are useful for passive measurements as well. The processor also measures the number of times n pulses occur in a selectable time window and compares this multiplet distribution to a Poisson distribution as a method of determining the occurrence of fission. The detectors respond to radiation emitted in the fission process, induced internally by inherent sources or by external sources such as LINACs or DT generators (either pulsed or steady state with alpha detectors). Data can be acquired from prompt emission during the source pulse, prompt emissions immediately after the source pulse, or delayed emissions between source pulses. These types of time coincidence measurements (occurring on the time scale of the fission chain multiplication processes for nuclear-weapons-grade U and Pu) are useful for determining the presence of these fissile materials and quantifying the amount, and are useful for counter-terrorism and nuclear material control and accountability. This paper presents the results for a variety of measurements.
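
    The multiplet test described above amounts to histogramming the number of pulses per time window and comparing with a Poisson distribution of the same mean; a minimal sketch (ours; the window width and rates are illustrative):

      import numpy as np
      from scipy.stats import poisson

      def multiplet_distribution(pulse_times, window):
          bins = np.floor(pulse_times / window).astype(int)
          per_window = np.bincount(bins)        # pulses in each window
          hist = np.bincount(per_window)        # number of windows with n pulses
          return hist / hist.sum()

      times = np.sort(np.random.uniform(0.0, 1.0, 5000))   # Poisson-like train (s)
      p_obs = multiplet_distribution(times, 1e-4)
      p_ref = poisson.pmf(np.arange(p_obs.size), 5000 * 1e-4)
      # excess probability at high n relative to p_ref signals correlated
      # (fission-chain) emission rather than a random source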

  13. Determination of the Effective Detector Area of an Energy-Dispersive X-Ray Spectrometer at the Scanning Electron Microscope Using Experimental and Theoretical X-Ray Emission Yields.

    PubMed

    Procop, Mathias; Hodoroaba, Vasile-Dan; Terborg, Ralf; Berger, Dirk

    2016-12-01

    A method is proposed to determine the effective detector area of energy-dispersive X-ray spectrometers (EDS). Nowadays, detectors are available with nominal areas ranging from 10 up to 150 mm². However, in most cases it remains unknown whether this nominal area coincides with the "net active sensor area" that should be given according to the related standard ISO 15632, or with some other area of the detector device. Moreover, the specific geometry of the EDS installation may further reduce a given detector area. The proposed method can be applied to most scanning electron microscope/EDS configurations. The basic idea is to compare the measured count rate with the count rate expected from known X-ray emission yields of copper, titanium, or silicon. The method was successfully tested on three detectors with known effective area and applied further to seven spectrometers from different manufacturers. In most cases the method gave an effective area smaller than the area given in the detector description.

  14. MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.E.; Baker, M.C.

    1999-07-25

    The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
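
    The shift-register analysis that such simulated pulse trains feed into can be sketched in a few lines (our illustration of the standard predelay/gate/long-delay scheme, not code from MCNP-REN or TAP; the microsecond-scale parameters are illustrative):

      import numpy as np

      def shift_register_doubles(pulse_times, predelay=4.5, gate=64.0,
                                 long_delay=1024.0):
          t = np.sort(np.asarray(pulse_times))
          r_plus_a = 0   # reals-plus-accidentals gate counts
          a = 0          # accidentals-only gate counts
          for trig in t:
              r_plus_a += np.count_nonzero(
                  (t > trig + predelay) & (t <= trig + predelay + gate))
              a += np.count_nonzero(
                  (t > trig + long_delay) & (t <= trig + long_delay + gate))
          return r_plus_a, a, r_plus_a - a   # net doubles ~ correlated pairs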

  15. Novel Photon-Counting Detectors for Free-Space Communication

    NASA Technical Reports Server (NTRS)

    Krainak, M. A.; Yang, G.; Sun, X.; Lu, W.; Merritt, S.; Beck, J.

    2016-01-01

    We present performance data for novel photon-counting detectors for free-space optical communication. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We present and compare dark count, photon-detection efficiency, wavelength response and communication performance data for these detectors. We successfully measured real-time communication performance using both the 2-detected-photon threshold and AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects. The HgCdTe APD array routinely demonstrated photon detection efficiencies of greater than 50% across 5 arrays, with one array reaching a maximum PDE of 70%. We performed high-resolution pixel-surface spot scans and measured the junction diameters of its diodes. We found that decreasing the junction diameter from 31 micrometers to 25 micrometers more than doubled the e-APD gain, from 470 for an array produced in 2010 to 1100 for an array recently delivered to NASA GSFC. The mean single-photon SNR was over 12 and the excess noise factor measurements were 1.2-1.3. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output.

  16. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
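
    The Monte Carlo estimation of the coincidence-count distribution can be sketched for two independent trains with gamma-distributed inter-spike intervals (our illustration; shape k = 1 recovers the Poisson case, k > 1 mimics refractoriness; the rates, window, and trial count are arbitrary):

      import numpy as np

      def spike_train(rate_hz, duration_s, shape):
          isi = np.random.gamma(shape, 1.0 / (shape * rate_hz),
                                size=int(2 * rate_hz * duration_s))
          t = np.cumsum(isi)
          return t[t < duration_s]

      def coincidences(t1, t2, window_s=0.005):
          # count spikes in t2 with a partner in t1 closer than the window
          return np.count_nonzero(
              np.min(np.abs(t2[:, None] - t1[None, :]), axis=1) < window_s)

      counts = [coincidences(spike_train(20, 10, shape=4),
                             spike_train(20, 10, shape=4))
                for _ in range(500)]
      print(np.mean(counts), np.std(counts))  # compare the width with shape=1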

  17. Neutron coincidence measurements when nuclear parameters vary during the multiplication process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ming-Shih; Teichmann, T.

    1995-07-01

    In a recent paper, a physical/mathematical model was developed for neutron coincidence counting, taking explicit account of neutron absorption and leakage, and using dual probability generating functions to derive explicit formulae for the single and multiple count-rates in terms of the physical parameters of the system. The results of this modeling proved very successful in a number of cases in which the system parameters (neutron reaction cross-sections, detection probabilities, etc.) remained the same at the various stages of the process (i.e. from collision to collision). However, there are practical circumstances in which such system parameters change from collision to collision, and it is necessary to accommodate these, too, in a general theory applicable to such situations. For instance, in the case of the neutron coincidence collar (NCC), the parameters for the initial, spontaneous fission neutrons are not the same as those for the succeeding induced fission neutrons, and similar situations can be envisaged for certain other experimental configurations. The present document shows how the previous considerations can be elaborated to embrace these more general requirements.
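
    In generating-function language, the per-stage difference can be captured by assigning different probability generating functions (PGFs) to the first (spontaneous) and subsequent (induced) generations; a toy numerical sketch (ours, with hypothetical multiplicity distributions, not the paper's formalism):

      import numpy as np

      def pgf(p):
          # PGF z -> sum_n p_n z^n for a neutron multiplicity distribution p
          p = np.asarray(p, dtype=float)
          return lambda z: np.polyval(p[::-1], z)

      h_spont = pgf([0.2, 0.3, 0.3, 0.2])     # spontaneous-fission multiplicity
      h_induced = pgf([0.1, 0.4, 0.3, 0.2])   # induced-fission multiplicity

      # mean multiplicities from the derivative at z = 1 (central difference)
      eps = 1e-6
      nu_spont = (h_spont(1 + eps) - h_spont(1 - eps)) / (2 * eps)
      nu_induced = (h_induced(1 + eps) - h_induced(1 - eps)) / (2 * eps)
      print(nu_spont, nu_induced)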

  18. Real-Time, Fast Neutron Coincidence Assay of Plutonium With a 4-Channel Multiplexed Analyzer and Organic Scintillators

    NASA Astrophysics Data System (ADS)

    Joyce, Malcolm J.; Gamage, Kelum A. A.; Aspinall, M. D.; Cave, F. D.; Lavietes, A.

    2014-06-01

    The design, principle of operation and the results of measurements made with a four-channel organic scintillator system are described. The system comprises four detectors and a multiplexed analyzer for the real-time parallel processing of fast neutron events. The function of the real-time, digital multiple-channel pulse-shape discrimination analyzer is described together with the results of laboratory-based measurements with 252Cf, 241Am-Li and plutonium. The analyzer is based on a single-board solution with integrated high-voltage supplies and graphical user interface. It has been developed to meet the requirements of nuclear materials assay of relevance to safeguards and security. Data are presented for the real-time coincidence assay of plutonium in terms of doubles count rate versus mass. This includes an assessment of the limiting mass uncertainty for coincidence assay based on a 100 s measurement period and samples in the range 0-50 g. Measurements of count rate versus order of multiplicity for 252Cf and 241Am-Li and combinations of both are also presented.

  19. Evidence for Coincident Fusion Products Using Silicon Surface-barrier Detectors

    NASA Astrophysics Data System (ADS)

    Jones, Steven; Scott, Mark; Keeney, Frank

    2002-10-01

    We report experimental results showing coincident proton and triton production from the reaction d + d → t (1.01 MeV) + p (3.02 MeV). Partially deuterided thin titanium foils were positioned between two silicon surface-barrier detectors mounted in a small cylindrical vacuum chamber that also served as a Faraday cage. We performed Monte Carlo studies using the SRIM code to determine the expected energies of arriving particles after they exit the host foil. The dual-coincidence requirement reduces background to very low levels so that low yields from very thin TiD foils can be readily detected. In one sequence of experiments, we observed 74 foreground coincidences in the regions of interest compared with 24 background counts; the statistical significance is approximately ten standard deviations. A striking advance is that the repeatability of the dual-coincidence experiments is currently greater than 70%.
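
    As a quick check of the quoted significance (our arithmetic, treating the 24 background counts as the Poisson expectation for the foreground region):

      S = (N_fg - N_bg) / sqrt(N_bg) = (74 - 24) / sqrt(24) ≈ 10.2,

    consistent with the "approximately ten standard deviations" stated above.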

  1. Permutational symmetries for coincidence rates in multimode multiphotonic interferometry

    NASA Astrophysics Data System (ADS)

    Khalid, Abdullah; Spivak, Dylan; Sanders, Barry C.; de Guise, Hubert

    2018-06-01

    We obtain coincidence rates for passive optical interferometry by exploiting the permutational symmetries of partially distinguishable input photons, and our approach elucidates qualitative features of multiphoton coincidence landscapes. We treat the interferometer input as a product state of any number of photons in each input mode with photons distinguished by their arrival time. Detectors at the output of the interferometer count photons from each output mode over a long integration time. We generalize and prove the claim of Tillmann et al. [Phys. Rev. X 5, 041015 (2015), 10.1103/PhysRevX.5.041015] that coincidence rates can be elegantly expressed in terms of immanants. Immanants are functions of matrices that exhibit permutational symmetries and the immanants appearing in our coincidence-rate expressions share permutational symmetries with the input state. Our results are obtained by employing representation theory of the symmetric group to analyze systems of an arbitrary number of photons in arbitrarily sized interferometers.
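
    For fully indistinguishable photons the coincidence rate reduces to a matrix permanent, the simplest of the immanants; as a concrete anchor (our illustration, not the paper's general construction), a compact Ryser-type permanent for small interferometer submatrices:

      import itertools
      import numpy as np

      def permanent(a):
          # Ryser formula: perm(A) = sum over column subsets S of
          # (-1)^(n-|S|) * prod_i sum_{j in S} a_ij   (O(2^n), fine for small n)
          n = a.shape[0]
          total = 0.0
          for subset in itertools.product([0, 1], repeat=n):
              cols = [j for j, s in enumerate(subset) if s]
              if not cols:
                  continue
              total += (-1) ** (n - len(cols)) * np.prod(a[:, cols].sum(axis=1))
          return total

      # sanity check on a 2x2 matrix: perm([[a, b], [c, d]]) = a*d + b*c
      print(permanent(np.array([[1.0, 2.0], [3.0, 4.0]])))   # -> 10.0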

  2. Development of a Body Shield for Small Animal PET System to Reduce Random and Scatter Coincidences

    NASA Astrophysics Data System (ADS)

    Wada, Yasuhiro; Yamamoto, Seiichi; Watanabe, Yasuyoshi

    2015-02-01

    For small animal positron emission tomography (PET) research using high radioactivity, such as dynamic studies, the resulting high random coincidence rate of the system degrades image quality. The random coincidence rate is increased not only by the gamma photons from inside the axial field of view (axial-FOV) of the PET system but also by those from outside the axial-FOV. For brain imaging in small animal studies, significant interference is observed from gamma photons emitted from the body. Single gamma photons from the body enter the axial-FOV and increase the random and scatter coincidences. Shielding against the gamma photons from outside the axial-FOV would improve the image quality. For this purpose, we developed a body shield for a small animal PET system, the microPET Primate 4-ring system, and evaluated its performance. The body shield is made of 9-mm-thick lead and surrounds most of a rat's body. We evaluated the effectiveness of the body shield using a head phantom and a body phantom with a radioactivity concentration ratio of 1:2 and a maximum total activity of approximately 250 MBq. The random coincidence rate was dramatically decreased to 1/10, and the noise equivalent count rate (NECR) was increased 6-fold with an activity of 7 MBq in the head phantom. The true count rate increased by 35% owing to the decrease in system dead time. The average scatter fraction was decreased to 1/2.5 with the body shield. Count rate measurements of rats were also conducted with an injection activity of approximately 25 MBq of [C-11]N,N-dimethyl-2-(2-amino-4-cyanophenylthio) benzylamine ([C-11]DASB) and approximately 70 and 310 MBq of 2-deoxy-2-(F-18)fluoro-D-glucose ([F-18]FDG). With the body shield, [F-18]FDG images of rats were improved by increasing the amount of radioactivity injected. The body shield designed for small animal PET systems is a promising tool for improving image quality and quantitation accuracy in small animal molecular imaging research.

  3. Overview of a FPGA-based nuclear instrumentation dedicated to primary activity measurements.

    PubMed

    Bobin, C; Bouchard, J; Pierre, S; Thiam, C

    2012-09-01

    In National Metrology Institutes like LNE-LNHB, renewal and improvement of the instrumentation is an important task. Nowadays, the current trend is to adopt digital boards, which present numerous advantages over the standard electronics. The feasibility of an on-line fulfillment of nuclear-instrumentation functionalities using a commercial FPGA-based (Field-Programmable Gate Array) board has been validated in the case of TDCR primary measurements (Triple to Double Coincidence Ratio method based on liquid scintillation). The new applications presented in this paper have been included to allow either an on-line processing of the information or a raw-data acquisition for an off-line treatment. Developed as a complementary tool for TDCR counting, a time-to-digital converter specifically designed for this technique has been added. In addition, the description is given of a spectrometry channel based on the connection between conventional shaping amplifiers and the analog-to-digital converter (ADC) input available on the same digital board. First results are presented in the case of α- and γ-counting related to, respectively, the defined solid angle and well-type NaI(Tl) primary activity techniques. The combination of two different channels (liquid scintillation and γ-spectrometry) implementing the live-time anticoincidence processing is also described for the application of the 4πβ-γ coincidence method. The need for an optimized coupling between the analog chain and the ADC stage is emphasized. The straight processing of the signals delivered by the preamplifier connected to a HPGe detector is also presented along with the first development of digital filtering. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Estimates for Pu-239 loadings in burial ground culverts based on fast/slow neutron measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winn, W.G.; Hochel, R.C.; Hofstetter, K.J.

    1989-08-15

    This report provides guideline estimates for Pu-239 mass loadings in selected burial ground culverts. The relatively high recorded Pu-239 contents of these culverts have been appraised as suspect with respect to criticality concerns, because they were assayed only with the solid waste monitor (SWM) by gamma-ray counting. After 1985, subsequent waste was also assayed with the neutron coincidence counter (NCC), and a comparison of the assay methods showed that the NCC generally yielded higher assays than the SWM. These higher NCC readings signaled a need to conduct non-destructive/non-intrusive nuclear interrogations of these culverts, and a technical team conducted scoping measurements to illustrate potential assay methods based on neutron and/or gamma counting. A fast/slow neutron method has been developed to estimate the Pu-239 in the culverts. In addition, loading records include the SWM assays of all Pu-239 cuts of some of the culvert drums, and these data are useful in estimating the corresponding NCC drum assays from NCC vs SWM data. Together, these methods yield predictions based on direct measurements and statistical inference.

  5. Low photon-count tip-tilt sensor

    NASA Astrophysics Data System (ADS)

    Saathof, Rudolf; Schitter, Georg

    2016-07-01

    Due to the low photon count from dark areas of the universe, the signal strength of a tip-tilt sensor is low, limiting the sky coverage of reliable tip-tilt measurements. This paper presents the low photon-count tip-tilt (LPC-TT) sensor, which potentially achieves improved signal strength. Its optical design spatially samples and integrates the scene. This increases the probability that several individual sources coincide on a detector segment. Laboratory experiments show the feasibility of spatial sampling and integration and the ability to measure tilt angles. Simulations show an SNR improvement of 10 dB compared to conventional tip-tilt sensors.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deyglun, C.; Simony, B.; Perot, B.

    The quantification of radioactive material is essential in the fields of safeguards, criticality control of nuclear processes, dismantling of nuclear facilities and components, and radioactive waste characterization. The Nuclear Measurement Laboratory (LMN) of CEA is involved in the development of time-correlated neutron detection techniques using plastic scintillators. Usually, ³He proportional counters are used for passive neutron coincidence counting owing to their high thermal-neutron capture efficiency and gamma insensitivity. However, the global ³He shortage in the past few years has made these detectors extremely expensive. In addition, contrary to ³He counters, for which a few tens of microseconds are needed to thermalize fast neutrons in order to maximize the ³He(n,p)³H capture cross-section, plastic scintillators rely on elastic scattering, so the light signal is formed within a few nanoseconds and correlated pulses are detected within a few tens or hundreds of nanoseconds. This time span reflects the time of flight of fission particles, which allows the duration of the coincidence gate, and thus the rate of random coincidences, to be reduced accordingly; random coincidences may totally blind fission coincidences when using ³He counters in the case of a high (α,n) reaction rate. However, plastic scintillators are very sensitive to gamma rays, requiring a thick metallic shield to reduce the corresponding background. Cross-talk between detectors is also a major issue: the detection of one particle by several detectors due to elastic or inelastic scattering, leading to true but undesired coincidences. Data analysis algorithms are tested to minimize cross-talk in simultaneously activated detectors. The distinction between useful fission coincidences and the correlated background due to cross-talk, (α,n) and induced (n,2n) or (n,n'γ) reactions is achieved by measuring 3-fold coincidences. The performance of a passive neutron coincidence counting system for radioactive waste drums using plastic scintillators has been studied using the Monte Carlo radiation transport code MCNPX-PoliMi v2.0 coupled to data processing algorithms developed with the ROOT data analysis software. In addition to the correlated background, accidental coincidences are taken into account in the simulation by randomly merging pulses from different calculations with fission and (α,n) sources.

  7. Portable multiplicity counter

    DOEpatents

    Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM

    2009-09-01

    A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pulses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, a coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be, for example, an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors, so that the counter can be configured as a hand-held device for counting neutron events.

  8. Revision of the NIST Standard for (223)Ra: New Measurements and Review of 2008 Data.

    PubMed

    Zimmerman, B E; Bergeron, D E; Cessna, J T; Fitzgerald, R; Pibida, L

    2015-01-01

    After discovering a discrepancy in the transfer standard currently being disseminated by the National Institute of Standards and Technology (NIST), we have performed a new primary standardization of the alpha-emitter (223)Ra using Live-timed Anticoincidence Counting (LTAC) and the Triple-to-Double Coincidence Ratio Method (TDCR). Additional confirmatory measurements were made with the CIEMAT-NIST efficiency tracing method (CNET) of liquid scintillation counting, integral γ-ray counting using a NaI(Tl) well counter, and several High Purity Germanium (HPGe) detectors in an attempt to understand the origin of the discrepancy and to provide a correction. The results indicate that a -9.5 % difference exists between activity values obtained using the former transfer standard relative to the new primary standardization. During one of the experiments, a 2 % difference in activity was observed between dilutions of the (223)Ra master solution prepared using the composition used in the original standardization and those prepared using 1 mol·L(-1) HCl. This effect appeared to be dependent on the number of dilutions or the total dilution factor to the master solution, but the magnitude was not reproducible. A new calibration factor ("K-value") has been determined for the NIST Secondary Standard Ionization Chamber (IC "A"), thereby correcting the discrepancy between the primary and secondary standards.

  9. Low level radioactivity measurements with phoswich detectors using coincident techniques and digital pulse processing analysis.

    PubMed

    de la Fuente, R; de Celis, B; del Canto, V; Lumbreras, J M; de Celis Alonso, B; Martín-Martín, A; Gutierrez-Villanueva, J L

    2008-10-01

    A new system has been developed for the detection of low radioactivity levels of fission products and actinides using coincidence techniques. The device combines a phoswich detector for alpha/beta/gamma-ray recognition with a fast digital card for electronic pulse analysis. The phoswich can be used in a coincident mode by identifying the composite signal produced by the simultaneous detection of alpha/beta particles and X-rays/gamma particles. The technique of coincidences with phoswich detectors was proposed recently to verify the Nuclear Test Ban Treaty (NTBT), which established the necessity of monitoring low levels of gaseous fission products produced by underground nuclear explosions. With the device proposed here it is possible to identify the coincidence events and determine the energy and type of coincident particles. The sensitivity of the system has been improved by employing liquid scintillators and a high-resolution low-energy germanium detector. In this case it is possible to identify transuranic nuclides present in environmental samples simultaneously by alpha/gamma coincidence, without the need for radiochemical separation. The minimum detectable activity was estimated to be 0.01 Bq kg(-1) for 0.1 kg of soil and 1000 min counting.
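
    The quoted sensitivity is consistent with a Currie-style minimum detectable activity estimate (our back-of-the-envelope; the efficiency and background values below are hypothetical, chosen only to reproduce the order of magnitude):

      import math

      def mda_bq_per_kg(background_counts, efficiency, time_s, mass_kg):
          # Currie detection limit L_D = 2.71 + 4.65 * sqrt(B), in counts
          ld = 2.71 + 4.65 * math.sqrt(background_counts)
          return ld / (efficiency * time_s * mass_kg)

      # 1000 min = 60000 s, 0.1 kg of soil, assumed 20% efficiency, B = 5 counts
      print(mda_bq_per_kg(5, 0.2, 60000, 0.1))   # ~0.01 Bq/kg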

  10. Transverse correlations in triphoton entanglement: Geometrical and physical optics

    NASA Astrophysics Data System (ADS)

    Wen, Jianming; Xu, P.; Rubin, Morton H.; Shih, Yanhua

    2007-08-01

    The transverse correlation of triphoton entanglement generated within a single crystal is analyzed. The many interesting features of the transverse correlation arise from the spectral function F of the triphoton state produced in the parametric processes. One consequence of the transverse effects of entangled states is quantum imaging, which is studied theoretically in photon counting measurements. Klyshko's two-photon advanced-wave picture is found to be applicable to multiphoton entanglement with some modifications. We found that in the two-photon coincidence counting measurement using triphoton entanglement, although the Gaussian thin lens equation (GTLE) holds, the image formed in coincidences is blurred and of poor quality. This is a consequence of tracing over the remaining transverse modes in the untouched beam. In the triphoton imaging experiments, two kinds of cases have been examined. For the case where only one object with one thin lens is placed in the system, we found that the GTLE holds as expected in the triphoton coincidences, and the effective distance between the lens and the imaging plane is the parallel combination of the two distances between the lens and the two detectors, weighted by wavelengths, analogous to the parallel combination of resistors in circuit theory. Only in this case is a point-point correspondence for forming an image well accomplished. However, when two objects or two lenses are inserted in the system, though the GTLEs are well satisfied, in general a point-point correspondence for imaging cannot be established. Under certain conditions, two blurred images may be observed in the coincidence counts. We have also studied ghost interference-diffraction experiments using double slits as apertures in triphoton entanglement. It was found that when two double slits are used in two optical beams, the interference-diffraction patterns show unusual features compared with the two-photon case. This unusual behavior is a destructive interference between two amplitudes for two photons crossing the two double slits.

  11. Low-SWaP coincidence processing for Geiger-mode LIDAR video

    NASA Astrophysics Data System (ADS)

    Schultz, Steven E.; Cervino, Noel P.; Kurtz, Zachary D.; Brown, Myron Z.

    2015-05-01

    Photon-counting Geiger-mode lidar detector arrays provide a promising approach for producing three-dimensional (3D) video at full motion video (FMV) data rates, resolution, and image size from long ranges. However, coincidence processing required to filter raw photon counts is computationally expensive, generally requiring significant size, weight, and power (SWaP) and also time. In this paper, we describe a laboratory test-bed developed to assess the feasibility of low-SWaP, real-time processing for 3D FMV based on Geiger-mode lidar. First, we examine a design based on field programmable gate arrays (FPGA) and demonstrate proof-of-concept results. Then we examine a design based on a first-of-its-kind embedded graphical processing unit (GPU) and compare performance with the FPGA. Results indicate feasibility of real-time Geiger-mode lidar processing for 3D FMV and also suggest utility for real-time onboard processing for mapping lidar systems.

  12. Spatial and Time Coincidence Detection of the Decay Chain of Short-Lived Radioactive Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granja, Carlos; Jakubek, Jan; Platkevic, Michal

    The quantum counting position-sensitive pixel detector Timepix, with per-pixel energy and time resolution, makes it possible to detect radioactive ions and register the consecutive decay chain by simultaneous position and time correlation. This spatial and timing coincidence technique in the same sensor is demonstrated by the registration of the decay chain ⁸He → ⁸Li (β⁻) followed by ⁸Li → ⁸Be (β⁻) → α + α, and by the measurement of the β-decay half-lives. Radioactive ions, selectively obtained from the Lohengrin fission fragment spectrometer installed at the High Flux Reactor of the ILL Grenoble, are delivered to the Timepix silicon sensor, where decays of the implanted ions and daughter nuclei are registered and visualized. We measure decay lifetimes in the microsecond range and above, with precision limited only by counting statistics.

  13. 3D Silicon Coincidence Avalanche Detector (3D-SiCAD) for charged particle detection

    NASA Astrophysics Data System (ADS)

    Vignetti, M. M.; Calmon, F.; Pittet, P.; Pares, G.; Cellier, R.; Quiquerez, L.; Chaves de Albuquerque, T.; Bechetoille, E.; Testa, E.; Lopez, J.-P.; Dauvergne, D.; Savoy-Navarro, A.

    2018-02-01

    Single-Photon Avalanche Diodes (SPADs) are p-n junctions operated in Geiger mode by applying a reverse bias above the breakdown voltage. SPADs have the advantage of featuring single-photon sensitivity with timing resolution in the picosecond range. Nevertheless, their relatively high Dark Count Rate (DCR) is a major issue for charged particle detection, especially when it is much higher than the incoming particle rate. To tackle this issue, we have developed a 3D Silicon Coincidence Avalanche Detector (3D-SiCAD). This novel device implements two vertically aligned SPADs featuring on-chip electronics for the detection of coincident avalanche events occurring on both SPADs. Such a coincidence detection mode allows an efficient discrimination of events related to an incoming charged particle (producing a quasi-simultaneous activation of both SPADs) from dark counts occurring independently on each SPAD. A 3D-SiCAD detector prototype has been fabricated in CMOS technology adopting a 3D flip-chip integration technique, and the main results of its characterization are reported in this work. The particle detection efficiency and noise rejection capability of this novel device have been evaluated by means of a strontium-90 β⁻ radioactive source. Moreover, the impact of the main operating parameters (i.e. the hold-off time, the coincidence window duration, and the SPAD excess bias voltage) on the particle detection efficiency has been studied. Measurements performed at different β⁻ particle rates show that the 3D-SiCAD device outperforms single-SPAD detectors: it is capable of detecting particle rates much lower than the individual DCR observed in a single-SPAD detector (2 to 3 orders of magnitude lower).
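
    The dark count suppression follows from the accidental coincidence rate of two independent detectors (a standard estimate, not specific to this paper; the numbers are illustrative):

      def accidental_rate_hz(r1_hz, r2_hz, window_s):
          # two independent Poisson streams firing within the same window
          return 2.0 * r1_hz * r2_hz * window_s

      # e.g. two SPADs at 10 kHz DCR each, 1 ns coincidence window:
      print(accidental_rate_hz(1e4, 1e4, 1e-9))   # ~0.2 Hz of false coincidences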

  14. Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE

    NASA Astrophysics Data System (ADS)

    Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.

    2006-02-01

    A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols, including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count-rate-related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise-equivalent count rate performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions), comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Furthermore, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with differences of between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise-equivalent count rate performance. Finally, the image quality validation study revealed a good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of different volume spheres and two different (clinical level based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.

  15. The Feynman-Y Statistic in Relation to Shift-Register Neutron Coincidence Counting: Precision and Dead Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Santi, Peter A.; Henzlova, Daniela

    The Feynman-Y statistic is a type of autocorrelation analysis. It is defined as the excess variance-to-mean ratio, Y = VMR - 1, of the number count distribution formed by sampling a pulse train using a series of non-overlapping gates. It is a measure of the degree of correlation present on the pulse train, with Y = 0 for Poisson data. In the context of neutron coincidence counting we show that the same information can be obtained from the accidentals histogram acquired using the multiplicity shift-register method, which is currently the common autocorrelation technique applied in nuclear safeguards. In the case of multiplicity shift-register analysis, however, overlapping gates, either triggered by the incoming pulse stream or by a periodic clock, are used. The overlap introduces additional covariance but does not alter the expectation values. In this paper we discuss, for a particular data set, the relative merit of the Feynman and shift-register methods in terms of both precision and dead time correction. Traditionally the Feynman approach is applied with a relatively long gate width compared to the dieaway time. The main reason for this is so that the gate utilization factor can be taken as unity rather than being treated as a system parameter to be determined at characterization/calibration. But because the random trigger interval gate utilization factor is slow to saturate, this procedure requires a gate width many times the effective 1/e dieaway time. In the traditional approach this limits the number of gates that can be fitted into a given assay duration. We empirically show that much shorter gates, similar in width to those used in traditional shift-register analysis, can be used. Because the way in which the correlated information present on the pulse train is extracted is different for the moments-based method of Feynman and the various shift-register based approaches, the dead time losses are manifested differently for these two approaches. The resulting estimates for the dead-time-corrected first and second order reduced factorial moments should nevertheless be independent of the method, and this allows the respective dead time formalisms to be checked. We discuss how to make dead time corrections in both the shift-register and the Feynman approaches.
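
    The statistic itself is inexpensive to compute; a minimal sketch (ours) of the Feynman-Y from a pulse train sampled with consecutive non-overlapping gates:

      import numpy as np

      def feynman_y(pulse_times, gate_width):
          n_gates = int(np.max(pulse_times) // gate_width)
          counts, _ = np.histogram(pulse_times, bins=n_gates,
                                   range=(0.0, n_gates * gate_width))
          return counts.var() / counts.mean() - 1.0

      # an uncorrelated (Poisson) train gives Y ~ 0, as stated above
      train = np.sort(np.random.uniform(0.0, 100.0, 50000))
      print(feynman_y(train, 0.01))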

  16. A gamma-gamma coincidence/anticoincidence spectrometer for low-level cosmogenic (22)Na/(7)Be activity ratio measurement.

    PubMed

    Zhang, Weihua; Ungar, Kurt; Stukel, Matthew; Mekarski, Pawel

    2014-04-01

    In this study, a digital gamma-gamma coincidence/anticoincidence spectrometer was developed and examined for monitoring low-level cosmogenic (22)Na and (7)Be in air-filter samples. The spectrometer consists of two bismuth germanate (BGO) scintillators and an XIA LLC Digital Gamma Finder (DGF)/Pixie-4 software and card package. The spectrometer design allows a more selective measurement of (22)Na, with a significant background reduction achieved by processing gamma-gamma coincidence events. Hence, the system provides a more sensitive way to quantify trace amounts of (22)Na than normal high-resolution gamma spectrometry, providing a critical limit of 3 mBq within a 20 h count. The use of a list-mode data acquisition technique enabled simultaneous determination of (22)Na and (7)Be activity concentrations in a single measurement, using the coincidence and anticoincidence modes respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Health diagnosis of arch bridge suspender by acoustic emission technique

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Ou, Jinping

    2007-01-01

    Conventional non-destructive methods cannot dynamically monitor the damage levels and damage types of suspenders, so the acoustic emission (AE) technique is proposed to monitor their activity. Valid signals are identified by the relationship between risetime and duration. Ambient noise is eliminated using a floating threshold value and a guard sensor. The damage level of the cement mortar and steel strand is analyzed by the AE parameter method, and damage types are judged by waveform analysis. Based on these methods, all the suspenders of the Sichuan Ebian Dadu River arch bridge have been monitored using AE techniques. The monitoring results show that AE signal amplitude, energy, and counts can visually display the suspenders' damage levels, while differences in waveform and frequency range indicate different damage types. The testing results coincide well with the practical situation.

  18. Determination of mammalian cell counts, cell size and cell health using the Moxi Z mini automated cell counter.

    PubMed

    Dittami, Gregory M; Sethi, Manju; Rabbitt, Richard D; Ayliffe, H Edward

    2012-06-21

    Particle and cell counting is used for a variety of applications including routine cell culture, hematological analysis, and industrial controls(1-5). A critical breakthrough in cell/particle counting technologies was the development of the Coulter technique by Wallace Coulter over 50 years ago. The technique involves the application of an electric field across a micron-sized aperture and hydrodynamically focusing single particles through the aperture. The resulting occlusion of the aperture by the particles yields a measurable change in electric impedance that can be directly and precisely correlated to cell size/volume. The recognition of the approach as the benchmark in cell/particle counting stems from the extraordinary precision and accuracy of its particle sizing and counts, particularly as compared to manual and imaging-based technologies (accuracies on the order of 98% for Coulter counters versus 75-80% for manual and vision-based systems). This can be attributed to the fact that, unlike imaging-based approaches to cell counting, the Coulter technique makes a true three-dimensional (3D) measurement of cells/particles, which dramatically reduces count interference from debris and clustering by calculating precise volumetric information about the cells/particles. Overall this provides a means for enumerating and sizing cells in a more accurate, less tedious, less time-consuming, and less subjective manner than other counting techniques(6). Despite the prominence of the Coulter technique in cell counting, its widespread use in routine biological studies has been prohibitive due to the cost and size of traditional instruments. Although a less expensive Coulter-based instrument has been produced, it has limitations as compared to its more expensive counterparts in the correction for "coincidence events", in which two or more cells pass through the aperture and are measured simultaneously. Another limitation of existing Coulter technologies is the lack of metrics on the overall health of cell samples. Consequently, additional techniques must often be used in conjunction with Coulter counting to assess cell viability. This extends experimental setup time and cost, since the traditional methods of viability assessment require cell staining and/or use of expensive and cumbersome equipment such as a flow cytometer. The Moxi Z mini automated cell counter, described here, is an ultra-small benchtop instrument that combines the accuracy of the Coulter Principle with a thin-film sensor technology to enable precise sizing and counting of particles ranging from 3-25 microns, depending on the cell counting cassette used. The Type M cassette can be used to count particles with average diameters of 4-25 microns (dynamic range 2-34 microns), and the Type S cassette can be used to count particles with average diameters of 3-20 microns (dynamic range 2-26 microns). Since the system uses a volumetric measurement method, the 4-25 micron range corresponds to cell volumes of 34-8,180 fL and the 3-20 micron range to cell volumes of 14-4,200 fL, which is relevant when non-spherical particles are being measured. To perform mammalian cell counts using the Moxi Z, the cells to be counted are first diluted with ORFLO or similar diluent. A cell counting cassette is inserted into the instrument, and the sample is loaded into the port of the cassette. Thousands of cells are pulled, single-file, through a "Cell Sensing Zone" (CSZ) in the thin-film membrane over 8-15 seconds. Following the run, the instrument uses proprietary curve-fitting in conjunction with a proprietary software algorithm to provide coincidence-event correction along with an assessment of overall culture health, by determining the ratio of the number of cells in the population of interest to the total number of particles. The total particle counts include shrunken and broken-down dead cells, as well as other debris and contaminants. The results are presented in histogram format with an automatic curve fit, with gates that can be adjusted manually as needed. Ultimately, the Moxi Z enables counting with a precision and accuracy comparable to a Coulter Z2, the current gold standard, while providing additional culture health information. Furthermore, it achieves these results in less time, with a smaller footprint, with significantly easier operation and maintenance, and at a fraction of the cost of comparable technologies.
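
    As a quick consistency check of the quoted specifications (our arithmetic), the diameter and volume ranges are related by the sphere volume V = (π/6)·d³, with 1 µm³ = 1 fL:

      import math

      def sphere_volume_fl(diameter_um):
          return math.pi / 6.0 * diameter_um ** 3

      print(sphere_volume_fl(4))    # ~33.5 fL  (quoted 34 fL)
      print(sphere_volume_fl(25))   # ~8181 fL  (quoted 8,180 fL)
      print(sphere_volume_fl(3))    # ~14.1 fL  (quoted 14 fL)
      print(sphere_volume_fl(20))   # ~4189 fL  (quoted 4,200 fL)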

  19. Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata

    2014-09-01

    Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or by a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved with a maximum likelihood estimator that uses coincident, collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of PSF parameters is derived by an iterative routine based on the 4-dimensional Powell conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel-level counts to restore the original quality of the entire MTSAT-1R record.
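
    The fitting strategy can be miniaturized as follows (our toy version with a two-parameter Gaussian-blur model and synthetic images; the actual work fits a 4-parameter PSF against navigated, collocated MTSAT-2 imagery):

      import numpy as np
      from scipy.ndimage import gaussian_filter
      from scipy.optimize import minimize

      def blur(img, params):
          # a fraction 'amp' of the signal is dispersed into a Gaussian halo
          amp, sigma = params
          return (1.0 - amp) * img + amp * gaussian_filter(img, sigma)

      def mismatch(params, blurred, reference):
          return np.mean((blur(reference, params) - blurred) ** 2)

      reference = np.random.rand(64, 64)        # stand-in for the MTSAT-2 truth
      blurred = blur(reference, (0.15, 3.0))    # synthetic MTSAT-1R-like image

      fit = minimize(mismatch, x0=(0.1, 2.0), args=(blurred, reference),
                     method="Powell")
      print(fit.x)   # recovered (amplitude, sigma) of the PSF model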

  20. Xe isotope detection and discrimination using beta spectroscopy with coincident gamma spectroscopy

    NASA Astrophysics Data System (ADS)

    Reeder, P. L.; Bowyer, T. W.

    1998-02-01

    Beta spectroscopic techniques show promise of significant improvements for a beta-gamma coincidence counter that is part of a system for analyzing Xe automatically separated from air. The previously developed counting system for 131mXe, 133mXe, 133gXe, and 135gXe can be enhanced to give additional discrimination between these Xe isotopes by using the plastic scintillation sample cell as a beta spectrometer to resolve the conversion electron peaks. The automated system will be a key factor in monitoring the Comprehensive Test Ban Treaty.

  1. Breaking through the false coincidence barrier in electron–ion coincidence experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, David L.; Hayden, Carl C.; Hemberger, Patrick

    Photoelectron Photoion Coincidence (PEPICO) spectroscopy holds the promise of a universal, isomer-selective, and sensitive analytical technique for time-resolved quantitative analysis of bimolecular chemical reactions. Unfortunately, its low dynamic range of ~10³ has largely precluded its use for this purpose, where a dynamic range of at least 10⁵ is generally required. This limitation is due to the false coincidence background common to all coincidence experiments, especially at high count rates. Electron/ion pairs emanating from separate ionization events but arriving within the ion time-of-flight (TOF) range of interest constitute the false coincidence background. Although this background has uniform intensity at every m/z value, the Poisson scatter in the false coincidence background obscures small signals. In this paper, temporal ion deflection coupled with a position-sensitive ion detector enables suppression of the false coincidence background, increasing the dynamic range in the PEPICO TOF mass spectrum by 2–3 orders of magnitude. The ions experience a time-dependent electric deflection field at a well-defined fraction of their time of flight. This deflection defines an m/z- and ionization-time-dependent ion impact position for true coincidences, whereas false coincidences appear randomly outside this region and can be efficiently suppressed. When cold argon clusters are ionized, false coincidence suppression allows us to observe species up to Ar₉⁺, whereas Ar₄⁺ is the largest observable cluster under traditional operation. As a result, this advance provides mass-selected photoelectron spectra for fast, high-sensitivity quantitative analysis of reacting systems.
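
    As a rough illustration of the position-gating idea (not the instrument's actual geometry or software), the sketch below keeps only events whose ion impact position lies near the locus expected for a true coincidence at that time of flight; all names, rates, and numbers are invented.

    ```python
    import numpy as np

    def predicted_y(tof_us, sweep_rate=0.8, y0=-2.0):
        # Hypothetical linear deflection locus: impact position vs. time of flight.
        return y0 + sweep_rate * tof_us

    def keep_true_coincidences(tof_us, y_mm, tol_mm=1.5):
        # Boolean mask selecting events consistent with the true-coincidence locus.
        return np.abs(y_mm - predicted_y(tof_us)) < tol_mm

    rng = np.random.default_rng(1)
    tof = rng.uniform(2.0, 20.0, 10_000)                           # ion TOFs (us)
    y_true = predicted_y(tof[:2_000]) + rng.normal(0, 0.4, 2_000)  # true pairs
    y_false = rng.uniform(-5, 20, 8_000)        # false pairs land anywhere
    y = np.concatenate([y_true, y_false])

    mask = keep_true_coincidences(tof, y)
    print(f"kept {mask.mean():.1%} of events")  # true pairs survive; most false pairs are cut
    ```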

  3. Rejection of randomly coinciding events in ZnMoO₄ scintillating bolometers

    NASA Astrophysics Data System (ADS)

    Chernyak, D. M.; Danevich, F. A.; Giuliani, A.; Mancuso, M.; Nones, C.; Olivieri, E.; Tenconi, M.; Tretyak, V. I.

    2014-06-01

    Random coincidences of events (particularly from two-neutrino double beta decay) could be one of the main sources of background in the search for neutrinoless double beta decay with cryogenic bolometers due to their poor time resolution. Pulse-shape discrimination using front-edge analysis and mean-time methods was applied to discriminate randomly coinciding events in ZnMoO₄ cryogenic scintillating bolometers. These events can be effectively rejected at the level of 99% by analysis of the heat signals with a rise-time of about 14 ms and a signal-to-noise ratio of 900, and at the level of 92% by analysis of the light signals with a rise-time of about 3 ms and a signal-to-noise ratio of 30, under the requirement to detect 95% of single events. These rejection efficiencies are compatible with extremely low background levels in the region of interest of neutrinoless double beta decay of ¹⁰⁰Mo for enriched ZnMoO₄ detectors, of the order of counts/(y keV kg). Pulse-shape parameters were chosen on the basis of the performance of a real massive ZnMoO₄ scintillating bolometer. The importance of the signal-to-noise ratio, correct identification of the signal start, and the choice of an appropriate sampling frequency are discussed.
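
    As a rough illustration of the mean-time pulse-shape parameter mentioned above (the waveform model, time constants, and amplitudes are all invented stand-ins, not the detector's real templates), the sketch below shows how a second pulse arriving a few tens of milliseconds after the first shifts the amplitude-weighted mean time of the summed waveform:

    ```python
    import numpy as np

    def pulse(t, t0, rise=14.0, decay=80.0):
        # Toy bolometer template: exponential rise/decay, times in milliseconds.
        s = np.where(t > t0,
                     np.exp(-(t - t0) / decay) - np.exp(-(t - t0) / rise), 0.0)
        return s / s.max()

    def mean_time(w, t):
        # Amplitude-weighted mean arrival time <t> of a waveform.
        w = np.clip(w, 0, None)
        return np.sum(w * t) / np.sum(w)

    t = np.linspace(0, 500, 2_000)                  # ms
    single = pulse(t, 50.0)
    piled = pulse(t, 50.0) + 0.7 * pulse(t, 90.0)   # second event 40 ms later
    print(f"<t> single = {mean_time(single, t):.1f} ms, "
          f"<t> pile-up = {mean_time(piled, t):.1f} ms")  # pile-up shifts <t> later
    ```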

  4. Performance of coincidence-based PSD on LiF/ZnS Detectors for Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sean M.; Stave, Sean C.; Lintereur, Azaree

    Mass accountancy measurement is a nuclear nonproliferation application which utilizes coincidence and multiplicity counters to verify special nuclear material declarations. With a well-designed and efficient detector system, several relevant parameters of the material can be verified simultaneously. 6LiF/ZnS scintillating sheets may be used for this purpose due to the combination of high efficiency and short die-away times in systems designed with this material, but they involve choices of detector geometry and exact material composition (e.g., the addition of Ni-quenching in the material) that must be optimized for the application. Multiplicity counting for verification of declared nuclear fuel mass involves neutron detection in conditions where several neutrons arrive in a short time window, with confounding gamma rays. This paper considers coincidence-based Pulse-Shape Discrimination (PSD) techniques developed to work under conditions of high pileup, and the performance of these algorithms with different detection materials. Simulated and real data from modern LiF/ZnS scintillator systems are evaluated with these techniques, and the relationship between the performance under pileup and the material characteristics (e.g., neutron peak width and total light collection efficiency) is determined, to allow for an optimal choice of detector and material.

  5. Validation of the FFM PD count technique for screening personality pathology in later middle-aged and older adults.

    PubMed

    Van den Broeck, Joke; Rossi, Gina; De Clercq, Barbara; Dierckx, Eva; Bastiaansen, Leen

    2013-01-01

    Research on the applicability of the five factor model (FFM) to capture personality pathology coincided with the development of an FFM personality disorder (PD) count technique, which has been validated in adolescent, young, and middle-aged samples. This study extends the literature by validating this technique in an older sample. Five alternative FFM PD counts based upon the Revised NEO Personality Inventory (NEO PI-R) are computed and evaluated in terms of both convergent and divergent validity with the Assessment of DSM-IV Personality Disorders Questionnaire (ADP-IV; DSM-IV, Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition). For the best working count for each PD, normative data are presented, from which cut-off scores are derived. The validity of these cut-offs and their usefulness as a screening tool is tested against both a categorical (i.e., the DSM-IV Text Revision) and a dimensional (i.e., the Dimensional Assessment of Personality Pathology; DAPP) measure of personality pathology. All but the Antisocial and Obsessive-Compulsive counts exhibited adequate convergent and divergent validity, supporting the use of this method in older adults. Using the ADP-IV and the DAPP Short Form as validation criteria, the results corroborate the use of the FFM PD count technique to screen for PDs in older adults, in particular for the Paranoid, Borderline, Histrionic, Avoidant, and Dependent PDs. Given the age-neutrality of the NEO PI-R and the considerable lack of valid personality assessment tools, the current findings appear promising for the assessment of personality pathology in older adults.

  6. Multianode cylindrical proportional counter for high count rates

    DOEpatents

    Hanson, J.A.; Kopp, M.K.

    1980-05-23

    A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10⁵ counts/sec. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.

  8. Positron Scanner for Locating Brain Tumors

    DOE R&D Accomplishments Database

    Rankowitz, S.; Robertson, J. S.; Higinbotham, W. A.; Rosenblum, M. J.

    1962-03-01

    A system is described that makes use of positron emitting isotopes for locating brain tumors. This system inherently provides more information about the distribution of radioactivity in the head in less time than existing scanners which use one or two detectors. A stationary circular array of 32 scintillation detectors scans a horizontal layer of the head from many directions simultaneously. The data, consisting of the number of counts in all possible coincidence pairs, are coded and stored in the memory of a Two-Dimensional Pulse-Height Analyzer. A unique method of displaying and interpreting the data is described that enables rapid approximate analysis of complex source distribution patterns. (auth)

  9. An instrument for continuous measurement of 220Rn (and 222Rn) using delayed coincidences between 220Rn and 216Po

    NASA Astrophysics Data System (ADS)

    Bigu, J.; Elliott, J.

    1994-05-01

    An instrument has been developed for continuous monitoring of 220Rn. The method of data analysis is based on delayed coincidences between 220Rn and 216Po. The instrument basically consists of a scaler equipped with a photomultiplier tube (PMT) to which a scintillation cell (SC) of the flow-through type is optically coupled. The scaler is equipped with a pulse output (P/O) port which provides a TTL pulse, +5 V in amplitude and 5 to 10 μs in duration, for each nuclear event recorded by the SC and its associated electronic circuitry. The P/O port is connected to a 32-bit counter/timer unit operating at 1 MHz which records and stores the time of arrival of pulses. For laboratory use, the counter/timer is connected to the serial port of a laptop PC. However, for field applications, where space and weight pose severe practical limitations, the PC is replaced by an expanded counter/timer unit which incorporates a microprocessor for data analysis, an LCD for data display, and a keypad to key in function instructions. Furthermore, some additional hardware permits the measurement of the 220Rn flux density, J(220Rn), from soils and other materials. Because the total α-particle count, as well as the delayed (α-α) coincidence rate, is recorded in two separate channels, the method permits the measurement of 222Rn in addition to 220Rn. The method is particularly useful at low concentration levels. The sensitivity of the method primarily depends on the volume of the SC. For a low-volume SC (~0.16 l), sensitivities of 0.2 counts h⁻¹ per Bq m⁻³ for 220Rn and 1.4 counts h⁻¹ per Bq m⁻³ for 222Rn are readily attainable. For a large-volume (1.5 l) SC (external PMT used), the sensitivity for 220Rn is ≥1.5 counts h⁻¹ per Bq m⁻³, depending on the SC design and the operating sampling flowrate. The above instrument has been used extensively at the National Radon/Thoron Test Facility (NRTTF) of the Elliot Lake Laboratory for routine monitoring of 220Rn levels since 1992. It has also been used for outdoor and indoor 220Rn measurements, as well as for the determination of J(220Rn) from earthen materials and the like.
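
    A minimal sketch of the delayed-coincidence search described above, assuming only a sorted list of α-event timestamps. The gate bounds and rates are illustrative, chosen around the ~0.145 s half-life of 216Po; this is not the instrument's firmware.

    ```python
    import numpy as np

    def delayed_coincidences(times_s, t_min=0.01, t_max=0.6):
        # Count events followed by another alpha within [t_min, t_max] seconds.
        times_s = np.sort(times_s)
        n = 0
        for i, t0 in enumerate(times_s[:-1]):
            dt = times_s[i + 1] - t0          # nearest successor only
            if t_min <= dt <= t_max:
                n += 1
        return n

    rng = np.random.default_rng(2)
    bg = np.cumsum(rng.exponential(30.0, 200))   # uncorrelated alphas, ~2/min
    po = bg[::10] + rng.exponential(0.145 / np.log(2), 20)  # 216Po daughters
    print(delayed_coincidences(np.concatenate([bg, po])),
          "delayed-coincidence pairs")
    ```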

  10. The influence of atmospheric turbulence on partially coherent two-photon entangled field

    NASA Astrophysics Data System (ADS)

    Qiu, Y.; She, W.

    2012-09-01

    The propagation of a two-photon field from down-conversion of a partially coherent Gaussian Schell-model (GSM) pump beam in free space has been reported. However, the propagation of this two-photon field through a turbulent atmosphere has not been investigated yet. In this paper, an analytical expression for the coincidence count rate of the two-photon entangled field is derived. Unlike what has been reported, the field is produced by parametric down-conversion of a partially coherent dark hollow pump beam and propagates through a turbulent atmosphere. The effects of the propagation parameters on the coincidence count rate are evaluated and illustrated. The results show that the pump beam parameters and atmospheric turbulence can evidently affect the detection probability of the photon pair at two different positions. It is found that the detection probability of the two-photon field is higher, and thus less susceptible to turbulence, if the field is produced by a lower-order mode of the partially coherent pump beam.

  11. The coincidence counting technique for orders of magnitude background reduction in data obtained with the magnetic recoil spectrometer at OMEGA and the NIF.

    PubMed

    Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S

    2011-07-01

    A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which the areal density in both warm-capsule and cryogenic-DT implosions has been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the required level for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.
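
    The essence of the coincidence counting technique can be sketched as coordinate matching between two readouts of the same CR-39 piece (e.g., the two surfaces or successive etches): genuine signal tracks reappear at nearly the same location, while background tracks do not. The example below is an illustrative toy with invented coordinates and tolerance, not the MRS analysis code.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def coincident_tracks(xy_scan1, xy_scan2, tol_um=10.0):
        # Indices of scan-1 tracks that have a scan-2 partner within tol_um.
        tree = cKDTree(xy_scan2)
        d, _ = tree.query(xy_scan1, distance_upper_bound=tol_um)
        return np.where(np.isfinite(d))[0]

    rng = np.random.default_rng(3)
    signal = rng.uniform(0, 10_000, (50, 2))    # um; true proton tracks
    scan1 = np.vstack([signal, rng.uniform(0, 10_000, (500, 2))])  # + background
    scan2 = np.vstack([signal + rng.normal(0, 2, signal.shape),
                       rng.uniform(0, 10_000, (500, 2))])
    print(len(coincident_tracks(scan1, scan2)), "coincident (signal) tracks")
    ```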

  12. Application of the coincidence counting technique to DD neutron spectrometry data at the NIF, OMEGA, and Z.

    PubMed

    Lahmann, B; Milanese, L M; Han, W; Gatu Johnson, M; Séguin, F H; Frenje, J A; Petrasso, R D; Hahn, K D; Jones, B

    2016-11-01

    A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement in the DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. These results are in excellent agreement with previous work applied to DT neutrons.

  15. Channel-capacity gain in entanglement-assisted communication protocols based exclusively on linear optics, single-photon inputs, and coincidence photon counting

    DOE PAGES

    Lougovski, P.; Uskov, D. B.

    2015-08-04

    Entanglement can effectively increase communication channel capacity as evidenced by dense coding, which predicts a capacity gain of 1 bit when compared to entanglement-free protocols. However, dense coding relies on Bell states, and when implemented using photons the capacity gain is bounded by 0.585 bits due to one's inability to discriminate between the four optically encoded Bell states. In this research we study the following question: Are there alternative entanglement-assisted protocols that rely only on linear optics, coincidence photon counting, and separable single-photon input states and at the same time provide a greater capacity gain than 0.585 bits? In this study, we show that besides the Bell states there is a class of bipartite four-mode two-photon entangled states that facilitate an increase in channel capacity. We also discuss how the proposed scheme can be generalized to the case of two-photon N-mode entangled states for N = 6, 8.
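
    A quick check of the 0.585-bit figure quoted above: linear optics can discriminate only three classes among the four Bell states, so the per-pair capacity is log₂3 ≈ 1.585 bits, i.e. 0.585 bits above the 1-bit entanglement-free baseline.

    ```python
    # Worked arithmetic for the linear-optics dense-coding bound:
    # 3 distinguishable Bell-state classes -> log2(3) bits per photon pair,
    # versus 1 bit without entanglement.
    import math
    gain = math.log2(3) - 1.0
    print(f"capacity gain = {gain:.3f} bits")   # 0.585
    ```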

  17. Incorporating delayed neutrons into the point-model equations routinely used for neutron coincidence counting in nuclear safeguards

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2016-09-21

    Here, we extend the familiar Böhnel point-model equations, which are routinely used to interpret neutron coincidence counting rates, by including the contribution of delayed neutrons. After developing the necessary equations we use them to show, by providing some numerical results, what the quantitative impact of neglecting delayed neutrons is across the full range of practical nuclear safeguards applications. The influence of delayed neutrons is predicted to be small for the types of deeply sub-critical assay problems which concern the nuclear safeguards community, smaller than uncertainties arising from other factors. This is most clearly demonstrated by considering the change in the effective (α,n)-to-spontaneous-fission prompt-neutron ratio that the inclusion of delayed neutrons gives rise to. That the influence of delayed neutrons is small is fortunate, and our results justify the long-standing practice of simply neglecting them in the analysis of field measurements.
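
    For context, the sketch below writes out the familiar prompt-neutron point-model singles and doubles equations in their textbook (Böhnel/Ensslin) form; the delayed-neutron extension derived in the paper is not reproduced here, and all numerical inputs, including the factorial moments, are illustrative values rather than evaluated nuclear data.

    ```python
    def point_model_rates(F, eps, M, alpha, fd, nu_s1, nu_s2, nu_i1, nu_i2):
        # Singles and doubles rates for spontaneous-fission rate F (fissions/s),
        # detection efficiency eps, leakage multiplication M, (alpha,n)-to-SF
        # neutron ratio alpha, and doubles gate fraction fd. nu_s*/nu_i* are the
        # first and second factorial moments of the spontaneous and induced
        # fission neutron multiplicity distributions.
        S = F * eps * M * nu_s1 * (1.0 + alpha)
        D = (F * eps**2 * fd * M**2 / 2.0) * (
            nu_s2 + ((M - 1.0) / (nu_i1 - 1.0)) * nu_s1 * (1.0 + alpha) * nu_i2)
        return S, D

    # Illustrative inputs only (roughly 240Pu/239Pu-like moments):
    S, D = point_model_rates(F=1.0e3, eps=0.4, M=1.05, alpha=0.7, fd=0.62,
                             nu_s1=2.154, nu_s2=3.789, nu_i1=3.163, nu_i2=8.24)
    print(f"Singles = {S:.0f} /s, Doubles = {D:.0f} /s")
    ```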

  18. Cosmic-muon intensity measurement and overburden estimation in a building at surface level and in an underground facility using two BC408 scintillation detectors coincidence counting system.

    PubMed

    Zhang, Weihua; Ungar, Kurt; Liu, Chuanlei; Mailhot, Maverick

    2016-10-01

    A series of measurements have been recently conducted to determine the cosmic-muon intensities and attenuation factors at various indoor and underground locations for a gamma spectrometer. For this purpose, a digital coincidence spectrometer was developed by using two BC408 plastic scintillation detectors and an XIA LLC Digital Gamma Finder (DGF)/Pixie-4 software and card package. The results indicate that the overburden in the building at surface level absorbs a large part of cosmic ray protons while attenuating the cosmic-muon intensity by 20-50%. The underground facility has the largest overburden of 39 m water equivalent, where the cosmic-muon intensity is reduced by a factor of 6. The study provides a cosmic-muon intensity measurement and overburden assessment, which are important parameters for analysing the background of an HPGe counting system, or for comparing the background of similar systems. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Search for optical bursts from the gamma ray burst source GBS 0526-66

    NASA Astrophysics Data System (ADS)

    Seetha, S.; Sreenivasaiah, K. V.; Marar, T. M. K.; Kasturirangan, K.; Rao, U. R.; Bhattacharyya, J. C.

    1985-08-01

    Attempts were made to detect optical bursts from the gamma-ray burst source GBS 0526-66 during Dec. 31, 1984 to Jan. 2, 1985 and Feb. 23 to Feb. 24, 1985, using the one meter reflector of the Kavalur Observatory. Jan. 1, 1985 coincided with the zero phase of the predicted 164 day period of burst activity from the source (Rothschild and Lingenfelter, 1984). A new optical burst photon counting system with adjustable trigger threshold was used in parallel with a high speed photometer for the observations. The best time resolution was 1 ms and the maximum count rate capability was 255,000 counts s⁻¹. Details of the instrumentation and observational results are presented.

  20. Analysis of the width-w non-adjacent form in conjunction with hyperelliptic curve cryptography and with lattices

    PubMed Central

    Krenn, Daniel

    2013-01-01

    In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. Applications of the main result are to numeral systems with an algebraic integer as base. These come from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula whose main term coincides with the full block length analysis. In its second-order term a periodic fluctuation is exhibited. The proof follows Delange's method. PMID:23805020
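
    For the special case of a rational-integer base 2 (the paper treats much more general lattice bases), the width-w NAF can be computed by the standard centred-residue recursion, sketched below: digits are odd and bounded by 2^(w-1) in absolute value, and any w consecutive positions hold at most one non-zero digit.

    ```python
    def width_w_naf(n, w=3):
        # Standard width-w NAF recursion for a positive integer n, base 2.
        digits = []
        mod = 1 << w
        while n != 0:
            if n % 2:
                d = n % mod
                if d >= mod // 2:
                    d -= mod          # centred residue in (-2^(w-1), 2^(w-1))
                n -= d
            else:
                d = 0
            digits.append(d)
            n //= 2
        return digits                 # least-significant digit first

    naf = width_w_naf(2_718_281, w=3)
    print(naf)
    print(sum(d << i for i, d in enumerate(naf)))   # reconstructs 2718281
    ```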

  2. Characterizing ICF Neutron Diagnostics on the nTOF line at SUNY Geneseo

    NASA Astrophysics Data System (ADS)

    Simone, Angela; Padalino, Stephen; Turner, Ethan; Ginnane, Mary Kate; Dubois, Natalie; Fletcher, Kurtis; Giordano, Michael; Lawson-Keister, Patrick; Harrison, Hannah; Visca, Hannah; Sangster, Craig; Regan, Sean

    2014-10-01

    Charged particle beams from the Geneseo 1.7 MV tandem Pelletron accelerator produce nuclear reactions that emit neutrons in the range of 0.5 to 17.9 MeV via the d(d,n)3He and 11B(d,n)12C reactions. The neutron energy and flux can be adjusted by controlling the accelerator beam current and potential. This adjustable neutron source makes it possible to calibrate ICF and HEDP neutron scintillator diagnostics. However, gamma rays, which are often present during an accelerator-based calibration, are difficult to differentiate from neutron signals in scintillators. To distinguish neutrons from gamma rays and to determine their energies, a permanent neutron time-of-flight (nTOF) line is being constructed. By detecting the scintillator signal in coincidence with an associated charged particle (ACP) produced in the reaction, the identity of the neutron can be known and its energy determined by time of flight. Using a 100% efficient surface barrier detector (SBD) to count the ACPs, the absolute efficiency of the scintillator as a function of neutron energy can be determined. This is done by taking the ratio of the ACP counts in the singles spectrum to the coincidence counts for matched solid angles of the SBD and scintillator. Funded in part by an LLE contract through the DOE.

  3. Sea-Ice Freeboard Retrieval Using Digital Photon-Counting Laser Altimetry

    NASA Technical Reports Server (NTRS)

    Farrell, Sinead L.; Brunt, Kelly M.; Ruth, Julia M.; Kuhn, John M.; Connor, Laurence N.; Walsh, Kaitlin M.

    2015-01-01

    Airborne and spaceborne altimeters provide measurements of sea-ice elevation, from which sea-ice freeboard and thickness may be derived. Observations of the Arctic ice pack by satellite altimeters indicate a significant decline in ice thickness, and volume, over the last decade. NASA's Ice, Cloud and land Elevation Satellite-2 (ICESat-2) is a next-generation laser altimeter designed to continue key sea-ice observations through the end of this decade. An airborne simulator for ICESat-2, the Multiple Altimeter Beam Experimental Lidar (MABEL), has been deployed to gather pre-launch data for mission development. We present an analysis of MABEL data gathered over sea ice in the Greenland Sea and assess the capabilities of photon-counting techniques for sea-ice freeboard retrieval. We compare freeboard estimates in the marginal ice zone derived from MABEL photon-counting data with coincident data collected by a conventional airborne laser altimeter. We find that freeboard estimates agree to within 0.03m in the areas where sea-ice floes were interspersed with wide leads, and to within 0.07m elsewhere. MABEL data may also be used to infer sea-ice thickness, and when compared with coincident but independent ice thickness estimates, MABEL ice thicknesses agreed to within 0.65m or better.

  4. 65Zn and 133Ba standardizing by photon-photon coincidence counting

    NASA Astrophysics Data System (ADS)

    Loureiro, Jamir S.; da Cruz, Paulo A. L.; Iwahara, Akira; Delgado, José U.; Lopes, Ricardo T.

    2018-03-01

    The LNMRI/Brazil has deployed a system using the X-gamma coincidence technique for standardizing radionuclides which present simple and complex decay schemes with X-rays of energy below 100 keV. The work was carried out at the radionuclide metrology laboratory using a sodium iodide detector for gamma photons in combination with a high-purity germanium detector for X-rays. Samples of 65Zn and 133Ba were standardized and the results for both radionuclides showed good precision and accuracy when compared with reference values. The standardization differences were 0.72% for the 65Zn and 0.48% for the 133Ba samples.
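
    The coincidence method underlying such standardizations rests on a simple identity for an idealized two-channel scheme: with channel rates Rx = A·εx and Rγ = A·εγ and coincidence rate Rc = A·εx·εγ, the activity A = Rx·Rγ/Rc drops out without knowledge of either efficiency. A worked example with invented, background-corrected rates:

    ```python
    # Worked example of the coincidence-counting identity A = Rx * Rg / Rc
    # (rates must be background- and dead-time-corrected; decay-scheme
    # corrections are ignored here). The numbers below are invented.
    R_x, R_gamma, R_c = 5_400.0, 2_100.0, 310.0   # counts/s
    A = R_x * R_gamma / R_c
    print(f"activity = {A:.0f} Bq = {A / 1e3:.2f} kBq")   # ~36.58 kBq
    ```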

  5. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    DOEpatents

    Laurence, Ted A [Livermore, CA; Weiss, Shimon [Los Angels, CA

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
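
    A minimal sketch of the bookkeeping the patent describes, run on synthetic data rather than real detector output: for photon pairs separated by a chosen number of detections, record the time interval and the count of intervening photons.

    ```python
    import numpy as np

    def pair_intervals(t, lag=1):
        # Intervals between photon i and photon i+lag; by construction the
        # number of intervening photon detections is lag - 1.
        t = np.sort(t)
        return t[lag:] - t[:-lag], lag - 1

    rng = np.random.default_rng(4)
    arrivals = np.cumsum(rng.exponential(1e-4, 5_000))   # ~10 kHz photon stream
    for lag in (1, 2, 5):
        dt, k = pair_intervals(arrivals, lag)
        print(f"lag {lag}: {k} intervening photons, "
              f"mean interval {dt.mean() * 1e3:.2f} ms")
    ```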

  6. Thoron detection with an active Radon exposure meter—First results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irlinger, J., E-mail: josef.irlinger@helmholtz-muenchen.de; Wielunski, M.; Rühm, W.

    For state-of-the-art discrimination of Radon and Thoron several measurement techniques can be used, such as active sampling, electrostatic collection, the delayed coincidence method, and alpha-particle spectroscopy. However, most of the devices available are bulky and show high power consumption, rendering them unfeasible for personal exposure monitoring. Based on a Radon exposure meter previously realized at the Helmholtz Center Munich (HMGU), a new electronic prototype for Radon/Thoron monitoring is currently being developed, which features small size and weight. Operating with pin-diode detectors, the low-power passive-sampling device can be used for continuous concentration measurements, employing alpha-particle spectroscopy and coincidence event registration to distinguish decays originating either from Radon or Thoron isotopes and their decay products. In open geometry, preliminary calibration measurements suggest that one count per hour is produced by an 11 Bq m⁻³ Radon atmosphere or by a 15 Bq m⁻³ Thoron atmosphere. Future efforts will concentrate on measurements in mixed Radon/Thoron atmospheres.

  7. Implementation of the TDCR liquid scintillation method at CNEA-LMR, Argentina.

    PubMed

    Arenillas, Pablo; Cassette, Philippe

    2006-01-01

    During the last two years, a triple-to-double coincidence ratio (TDCR) system was assembled and adjusted at the CNEA-LMR, Argentina. The new counting system will add complementary capabilities to the absolute measurements section of the CNEA-LMR. This work describes its implementation and validation. Several checks and a set of beta-emitting standard solutions were used in order to perform the validation experiments. In preliminary measurements, a 3H LNHB solution with a reference activity concentration of (119.7 ± 0.9) kBq/g on 11 November 2003 was used. The CNEA-LMR TDCR counter gave, at the same reference date, an activity concentration of (120 ± 1) kBq/g. Results and improvements are presented in detail. Concerning the asymmetry of the system, the quantum efficiency of the three photomultiplier tubes was studied for different operating conditions of the focusing voltage. The counter also includes an automatic system to change the efficiency by defocusing the photomultipliers; in addition, it was coupled to an HPGe detector to also measure beta-gamma coincidences.

  8. Detection of Neutrons with Scintillation Counters

    DOE R&D Accomplishments Database

    Hofstadter, R.

    1948-11-01

    Detection of slow neutrons by: detection of single gamma rays following capture by cadmium or mercury; detection of more than one gamma ray by observing coincidences after capture; detection of heavy charged particles after capture in lithium or boron nuclei; possible use of anthracene for counting fast neutrons investigated briefly.

  9. Quantification of fluorescent samples by photon-antibunching

    NASA Astrophysics Data System (ADS)

    Kurz, Anton; Schwering, Michael; Herten, Dirk-Peter

    2012-02-01

    Typical problems in molecular biology, like oligomerization of proteins, appear on non-resolvable length scales. A method which allows counting the number of fluorescent emitters beyond this barrier can therefore help to answer these questions. One approach to this task makes use of the photon antibunching (PAB) effect. Most fluorophores are single-photon emitters: upon a narrow excitation pulse they run through at most one excitation cycle and emit at most one photon at a time. This behavior is known as PAB. By analyzing coincident photon detection events (coincidence analysis, CCA) over many excitation cycles, the number of fluorophores residing in the confocal volume can be estimated. Simulations have shown that up to 40 fluorophores can be distinguished with a reasonable error. In follow-up experiments five fluorophores could be distinguished by CCA. In this work the method is applied to a whole sample set and its statistical variance and robustness are determined. CCA is sensitive to several parameters such as photostability, background noise, labeling efficiency, and the photophysical properties of the dye, like brightness and blinking. Therefore a reasonable scheme for analysis is introduced and setup parameters are optimized. To demonstrate the merits of CCA, it has been applied to estimate the number of dyes on a well-defined probe and the results have been compared with bleach-step analysis (BS analysis), a method based on the ability to observe single bleach steps.
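
    The counting principle can be stated in one line: for N identical, independent single-photon emitters the ratio of the central to the lateral coincidence peaks approaches 1 - 1/N, so N = 1/(1 - ratio). A worked example with invented ratios, ignoring background and imperfect labeling:

    ```python
    # Emitter counting from antibunching: central-to-lateral coincidence
    # peak ratio r -> number of emitters N = 1 / (1 - r). Idealized model.
    def emitters_from_ratio(r):
        return 1.0 / (1.0 - r)

    for r in (0.0, 0.5, 0.8):
        print(f"peak ratio {r:.1f} -> N = {emitters_from_ratio(r):.1f}")
    # 0.0 -> 1 emitter, 0.5 -> 2 emitters, 0.8 -> 5 emitters
    ```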

  10. Fast counting electronics for neutron coincidence counting

    DOEpatents

    Swansen, James E.

    1987-01-01

    An amplifier-discriminator is tailored to output a very short pulse upon an above-threshold input from a detector which may be a .sup.3 He detector. The short pulse output is stretched and energizes a light emitting diode (LED) to provide a visual output of operation and pulse detection. The short pulse is further fed to a digital section for processing and possible ORing with other like generated pulses. Finally, the output (or ORed output ) is fed to a derandomizing buffer which converts the rapidly and randomly occurring pulses into synchronized and periodically spaced-apart pulses for the accurate counting thereof. Provision is also made for the internal and external disabling of each individual channel of amplifier-discriminators in an ORed plurality of same.

  12. Image Accumulation in Pixel Detector Gated by Late External Trigger Signal and its Application in Imaging Activation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakubek, J.; Cejnarova, A.; Platkevic, M.

    Single quantum counting pixel detectors of the Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, and absolute linearity. In this article we describe usage of the pixel device TimePix for image accumulation gated by a late trigger signal. A demonstration of the technique is given on imaging coincidence instrumental neutron activation analysis (imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.

  13. AN EMPIRICAL METHOD FOR IMPROVING THE QUALITY OF RXTE HEXTE SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Javier A.; Steiner, James F.; McClintock, Jeffrey E.

    2016-03-01

    We have developed a correction tool to improve the quality of Rossi X-ray Timing Explorer (RXTE) High Energy X-ray Timing Experiment (HEXTE) spectra by employing the same method we used earlier to improve the quality of RXTE Proportional Counter Array (PCA) spectra. We fit all of the hundreds of HEXTE spectra of the Crab individually to a simple power-law model, some 37 million counts in total for Cluster A and 39 million counts for Cluster B, and we create for each cluster a combined spectrum of residuals. We find that the residual spectrum of Cluster A is free of instrumental artifacts while that of Cluster B contains significant features with amplitudes ∼1%; the most prominent is in the energy range 30–50 keV, which coincides with the iodine K edge. Starting with the residual spectrum for Cluster B, via an iterative procedure we created the calibration tool hexBcorr for correcting any Cluster B spectrum of interest. We demonstrate the efficacy of the tool by applying it to Cluster B spectra of two bright black holes, which contain several million counts apiece. For these spectra, application of the tool significantly improves the goodness of fit, while affecting only slightly the broadband fit parameters. The tool may be important for the study of spectral features, such as cyclotron lines, a topic that is beyond the scope of this paper.

  14. RALPH: An online computer program for acquisition and reduction of pulse height data

    NASA Technical Reports Server (NTRS)

    Davies, R. C.; Clark, R. S.; Keith, J. E.

    1973-01-01

    A background/foreground data acquisition and analysis system incorporating a high level control language was developed for acquiring both singles and dual parameter coincidence data from scintillation detectors at the Radiation Counting Laboratory at the NASA Manned Spacecraft Center in Houston, Texas. The system supports acquisition of gamma ray spectra in a 256 x 256 coincidence matrix (utilizing disk storage) and simultaneous operation of any of several background support and data analysis functions. In addition to special instruments and interfaces, the hardware consists of a PDP-9 with 24K core memory, 256K words of disk storage, and Dectape and Magtape bulk storage.

  15. Highly efficient entanglement swapping and teleportation at telecom wavelength

    PubMed Central

    Jin, Rui-Bo; Takeoka, Masahiro; Takagi, Utako; Shimizu, Ryosuke; Sasaki, Masahide

    2015-01-01

    Entanglement swapping at telecom wavelengths is at the heart of quantum networking in optical fiber infrastructures. Although entanglement swapping has been demonstrated experimentally so far using various types of entangled photon sources both in near-infrared and telecom wavelength regions, the rate of swapping operation has been too low to be applied to practical quantum protocols, due to limited efficiency of entangled photon sources and photon detectors. Here we demonstrate drastic improvement of the efficiency at telecom wavelength by using two ultra-bright entangled photon sources and four highly efficient superconducting nanowire single photon detectors. We have attained a four-fold coincidence count rate of 108 counts per second, which is three orders of magnitude higher than the previous experiments at telecom wavelengths. A raw (net) visibility in a Hong-Ou-Mandel interference between the two independent entangled sources was 73.3 ± 1.0% (85.1 ± 0.8%). We performed the teleportation and entanglement swapping, and obtained a fidelity of 76.3% in the swapping test. Our results on the coincidence count rates are comparable with the ones ever recorded in teleportation/swapping and multi-photon entanglement generation experiments at around 800 nm wavelengths. Our setup opens the way to practical implementation of device-independent quantum key distribution and its distance extension by the entanglement swapping as well as multi-photon entangled state generation in telecom band infrastructures with both space and fiber links. PMID:25791212

  17. Concentration Independent Calibration of β-γ Coincidence Detector Using 131mXe and 133Xe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntyre, Justin I.; Cooper, Matthew W.; Carman, April J.

    Absolute efficiency calibration of radiometric detectors is frequently difficult and requires careful detector modeling and accurate knowledge of the radioactive source used. In the past we have calibrated the β-γ coincidence detector of the Automated Radioxenon Sampler/Analyzer (ARSA) using a variety of sources and techniques which have proven to be less than desirable.[1] A superior technique has been developed that uses the conversion-electron (CE) and x-ray coincidence of 131mXe to provide a more accurate absolute gamma efficiency of the detector. The 131mXe is injected directly into the beta cell of the coincident counting system and no knowledge of the absolute source strength is required. In addition, 133Xe is used to provide a second independent means to obtain the absolute efficiency calibration. These two data points provide the necessary information for calculating the detector efficiency and can be used in conjunction with other noble gas isotopes to completely characterize and calibrate the ARSA nuclear detector. In this paper we discuss the techniques and results that we have obtained.

  18. Optical Design Considerations for Efficient Light Collection from Liquid Scintillation Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Douglas, Matthew; Erchinger, Jennifer L.

    2015-01-01

    Liquid scintillation counters measure charged particle-emitting radioactive isotopes and are used for environmental studies, nuclear chemistry, and life science. Alpha and beta emissions arising from the material under study interact with the scintillation cocktail to produce light. The prototypical liquid scintillation counter employs low-level photon-counting detectors to measure the arrival of the scintillation light produced as a result of the dissolved material under study interacting with the scintillation cocktail. For reliable operation the counting instrument must convey the scintillation light to the detectors efficiently and predictably. Current best practices employ two or more detectors for coincidence processing to discriminate true scintillation events from background events due to instrumental effects such as photomultiplier tube dark rates, tube flashing, or other light emission not generated in the scintillation cocktail vial. In low-background liquid scintillation counters additional attention is paid to shielding the scintillation cocktail from naturally occurring radioactive material (NORM) present in the laboratory and within the instrument's construction materials. Low-background design is generally at odds with optimal light collection. This study presents the evolution of a light collection design for liquid scintillation counting in a low-background shield. The basic approach to achieving both good light collection and a low-background measurement is described. The baseline signals arising from the scintillation vial are modeled and methods to efficiently collect scintillation light are presented as part of the development of a customized low-background, high-sensitivity liquid scintillation counting system.

  19. Modern Measurements of Uranium Decay Rates

    NASA Astrophysics Data System (ADS)

    Parsons-Moss, T.; Faye, S. A.; Williams, R. W.; Wang, T. F.; Renne, P. R.; Mundil, R.; Harrison, M.; Bandong, B. B.; Moody, K.; Knight, K. B.

    2015-12-01

    It has been widely recognized that accurate and precise decay constants (λ) are critical to geochronology, as highlighted by the EARTHTIME initiative, particularly the calibration benchmarks λ235U and λ238U.[1] Alpha counting experiments in 1971[2] measured λ235U and λ238U with ~0.1% precision, but have never been independently validated. We are embarking on new direct measurements of λ235U, λ238U, λ234Th, and λ234U using independent approaches for each nuclide. For the measurement of λ235U, highly enriched 235U samples will be chemically purified and analyzed for U concentration and isotopic composition by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS). Thin films will be electrodeposited from these solutions and the α activity will be measured in an α-γ coincidence counting apparatus, which allows reduced uncertainty in counting efficiency while achieving adequate counting statistics. For the λ238U measurement we will measure ingrowth of 234Th in chemically purified, isotopically enriched 238U solutions, by quantitatively separating the Th and allowing complete decay to 234U. All of the measurements will be done using MC-ICP-MS, aiming at 0.05% precision. This approach is expected to result in values of λ238U with less than 0.1% uncertainty, if combined with improved λ234Th measurements. These will be achieved using direct decay measurements with an E-ΔE charged-particle telescope in coincidence with a gamma detector. This system allows measurement of 234Th β-decay and simultaneous detection and identification of α particles emitted by the 234U daughter, thus observing λ234U at the same time. The high-precision λ234U obtained by the direct activity measurements can independently verify the commonly used values obtained by indirect methods.[3] An overarching goal of the project is to ensure the quality of results, including metrological traceability, in order to facilitate implementation across diverse disciplines. [1] T.M. Harrison et al., (2015) It's About Time: Opportunities and Challenges for U.S. Geological Survey. Institute of Geophysics and Planetary Physics Publication 6539, University of California, Los Angeles [2] A. H. Jaffey et al., Physical Review C, 4, 5, (1971), 1889-1906 [3] H. Cheng et al., Chemical Geology, 169, (2000), 17-33

  20. Rejection of randomly coinciding events in Li₂¹⁰⁰MoO₄ scintillating bolometers using light detectors based on the Neganov-Luke effect

    NASA Astrophysics Data System (ADS)

    Chernyak, D. M.; Danevich, F. A.; Dumoulin, L.; Giuliani, A.; Mancuso, M.; Marcillac, P. de; Marnieros, S.; Nones, C.; Olivieri, E.; Poda, D. V.; Tretyak, V. I.

    2017-01-01

    Random coincidences of nuclear events can be one of the main background sources in low-temperature calorimetric experiments looking for neutrinoless double-beta decay, especially in those searches based on scintillating bolometers embedding the promising double-beta candidate ¹⁰⁰Mo, because of the relatively short half-life of the two-neutrino double-beta decay of this nucleus. We show in this work that randomly coinciding events of the two-neutrino double-beta decay of ¹⁰⁰Mo in enriched Li₂¹⁰⁰MoO₄ detectors can be effectively discriminated by pulse-shape analysis in the light channel if the scintillating bolometer is provided with a Neganov-Luke light detector, which can improve the signal-to-noise ratio by a large factor, assumed here at the level of ~750 on the basis of preliminary experimental results obtained with these devices. The achieved pile-up rejection efficiency results in a very low contribution, of the order of ~6×10⁻⁵ counts/(keV·kg·y), to the background counting rate in the region of interest for a large-volume (~90 cm³) Li₂¹⁰⁰MoO₄ detector. This background level is very encouraging in view of a possible use of the Li₂¹⁰⁰MoO₄ solution for a bolometric tonne-scale next-generation experiment as that proposed in the CUPID project.

  1. ⁹⁰Y-PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: ⁹⁰Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with ⁹⁰Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging ⁹⁰Y, and to compare experimental results with clinical data from two types of commercially available ⁹⁰Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph mCT (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to the random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of ⁹⁰Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a ⁹⁰Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one-week period. Finally, 32 patient data sets (8 treated with TheraSphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and the activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase the recovered concentration. Results for patient data indicated a good correlation between the expected and PET-reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of ⁹⁰Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was limited as long as ordinary-Poisson ordered-subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view was in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.

  2. Dual R3R5 tropism characterizes cerebrospinal fluid HIV-1 isolates from individuals with high cerebrospinal fluid viral load.

    PubMed

    Karlsson, Ulf; Antonsson, Liselotte; Ljungberg, Bengt; Medstrand, Patrik; Esbjörnsson, Joakim; Jansson, Marianne; Gisslen, Magnus

    2012-09-10

    To study the use of major and alternative coreceptors by HIV-1 isolates obtained from paired plasma and cerebrospinal fluid (CSF) samples. Paired plasma and CSF isolates from HIV-1-infected individuals with varying clinical, virologic, and immunologic parameters were assessed for the ability to infect indicator cells expressing a panel of coreceptors with documented expression in the central nervous system (CNS). HIV-1 isolates obtained from plasma and CSF in 28 individuals with varying viral load, CD4 T-cell counts, and with or without AIDS-defining disease were analyzed for the ability to infect NP2.CD4 cells stably expressing a panel of HIV coreceptors (CCR5, CXCR4, CCR3, CXCR6, GPR1, APJ, ChemR23, RDC-1 or BLT1). All isolates from both plasma and CSF utilized CCR5 and/or CXCR4. However, the ability to use both CCR3 and CCR5 (R3R5) was more pronounced in CSF isolates and correlated with high CSF viral load and low CD4 T-cell count. Notably, four out of five CSF isolates of subtype C origin exhibited CXCR6 use, which coincided with high CSF viral load despite preserved CD4 T-cell counts. The use of other alternative coreceptors was less pronounced. Dual-tropic R3R5 HIV-1 isolates in CSF coincide with high CSF viral load and low CD4 T-cell counts. Frequent CXCR6 use by CSF-derived subtype C isolates indicates that subtype-specific differences in coreceptor use may exist that will not be acknowledged when assessing plasma virus isolates. The findings may also bear relevance for HIV-1 replication within the CNS, and consequently, for the neuropathogenesis of AIDS.

  3. Theoretical and Experimental Investigations of Coincidences in Poisson Distributed Pulse Trains and Spectral Distortion Caused by Pulse Pileup.

    NASA Astrophysics Data System (ADS)

    Bristow, Quentin

    1990-01-01

    Part one of this two-part study is concerned with the multiple coincidences in pulse trains from X-ray and gamma radiation detectors which are the cause of pulse pileup. A sequence of pulses with inter-arrival times less than tau, the resolving time of the pulse-height analysis system used to acquire spectra, is called a multiple pulse string. Such strings can be classified on the basis of the number of pulses they contain, or the number of resolving times they cover. The occurrence rates of such strings are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a NaI(Tl) scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Part two of the study is concerned with a theoretical analysis of pulse pileup and the development of a discrete correction algorithm, based on the use of a function to simulate the coincidence spectrum produced by partial sums of pulses. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern ADCs, were used to generate pileup spectra due to coincidences between two pulses (1st-order pileup) and three pulses (2nd-order pileup), for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum which can be regarded as an impulse response for a particular pulse shape. The use of a flat spectrum (identical count rates in all channels) in the simulations, and in a parallel theoretical analysis, showed that 1st-order pileup distorts the spectrum into a linear ramp with a pileup tail. The correction algorithm was successfully applied to correct entire spectra for 1st- and 2nd-order pileup, both spectra generated by Monte Carlo simulations and some real spectra acquired with a laboratory multichannel analysis system.
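
    The string-rate argument above has a compact numerical check: for a Poisson train of rate r, the probability that a given pulse is followed by another within the resolving time tau is 1 - exp(-r*tau), so multiple pulse strings persist at any finite rate. A minimal Monte Carlo sketch of that check (illustrative parameters; not the author's logic circuits or analysis code):

    ```python
    import numpy as np

    # Sketch: generate a Poisson pulse train and compare the simulated
    # fraction of inter-arrival times shorter than the resolving time
    # with the analytic expectation 1 - exp(-rate * tau).
    rng = np.random.default_rng(seed=1)
    rate = 5e4          # mean count rate in counts/s (assumed)
    tau = 2e-6          # resolving time in s (assumed)
    n_pulses = 1_000_000

    # Inter-arrival times of a Poisson process are exponential.
    dt = rng.exponential(1.0 / rate, size=n_pulses)
    frac_sim = np.mean(dt < tau)
    frac_theory = 1.0 - np.exp(-rate * tau)
    print(f"simulated {frac_sim:.5f} vs analytic {frac_theory:.5f}")
    ```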

  4. On the Waring problem for polynomial rings

    PubMed Central

    Fröberg, Ralf; Ottaviani, Giorgio; Shapiro, Boris

    2012-01-01

    In this note we discuss an analog of the classical Waring problem for the polynomial ring C[x_0,…,x_n]. Namely, we show that a general homogeneous polynomial of degree divisible by k≥2 can be represented as a sum of at most k^n k-th powers of homogeneous polynomials in C[x_0,…,x_n]. Noticeably, k^n coincides with the number obtained by a naive dimension count. PMID:22460787

  5. Progress in development of neutron energy spectrometer for deuterium plasma operation in KSTAR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomita, H., E-mail: tomita@nagoya-u.jp; Yamashita, F.; Nakayama, Y.

    2014-11-15

    Two types of DD neutron energy spectrometer (NES) are under development for deuterium plasma operation in KSTAR to understand behavior of beam ions in the plasma. One is based on the state-of-the-art nuclear emulsion technique. The other is based on a coincidence detection of a recoiled proton and a scattered neutron caused by an elastic scattering of an incident DD neutron, which is called an associated particle coincidence counting-NES. The prototype NES systems were installed at J-port in KSTAR in 2012. During the 2012 and 2013 experimental campaigns, multiple shots-integrated neutron spectra were preliminarily obtained by the nuclear emulsion-based NES system.

  6. Progress in development of neutron energy spectrometer for deuterium plasma operation in KSTAR

    NASA Astrophysics Data System (ADS)

    Tomita, H.; Yamashita, F.; Nakayama, Y.; Morishima, K.; Yamamoto, Y.; Sakai, Y.; Cheon, M. S.; Isobe, M.; Ogawa, K.; Hayashi, S.; Kawarabayashi, J.; Iguchi, T.

    2014-11-01

    Two types of DD neutron energy spectrometer (NES) are under development for deuterium plasma operation in KSTAR to understand behavior of beam ions in the plasma. One is based on the state-of-the-art nuclear emulsion technique. The other is based on a coincidence detection of a recoiled proton and a scattered neutron caused by an elastic scattering of an incident DD neutron, which is called an associated particle coincidence counting-NES. The prototype NES systems were installed at J-port in KSTAR in 2012. During the 2012 and 2013 experimental campaigns, multiple shots-integrated neutron spectra were preliminarily obtained by the nuclear emulsion-based NES system.

  7. Post-precipitation bias in band-tailed pigeon surveys conducted at mineral sites

    USGS Publications Warehouse

    Overton, C.T.; Schmitz, R.A.; Casazza, Michael L.

    2005-01-01

    Many animal surveys to estimate populations or index trends include protocol prohibiting counts during rain but fail to address effects of rainfall preceding the count. Prior research on Pacific Coast band-tailed pigeons (Patagioenas fasciata monilis) documented declines in use of mineral sites during rainfall. We hypothesized that prior precipitation was associated with a short-term increase in use of mineral sites following rain. We conducted weekly counts of band-tailed pigeons at 19 Pacific Northwest mineral sites in 2001 and 20 sites in 2002. Results from regression analysis indicated higher counts ≤2 days after rain (11.31 ± 5.00% [x̄ ± SE]) compared to ≥3 days. Individual index counts conducted ≤2 days after rain were biased high, resulting in reduced ability to accurately estimate population trends. Models of band-tailed pigeon visitation rates throughout the summer showed increased mineral-site counts during both June and August migration periods, relative to the July breeding period. Our research supported previous studies recommending that mineral-site counts used to index the band-tailed pigeon population be conducted during July. We further recommend conducting counts >3 days after rain to avoid weather-related bias in index estimation. The design of other population sampling strategies that rely on annual counts should consider the influence of aberrant weather not only coincident with but also preceding surveys if weather patterns are thought to influence behavior or detection probability of target species.

  8. On the accuracy of gamma spectrometric isotope ratio measurements of uranium

    NASA Astrophysics Data System (ADS)

    Ramebäck, H.; Lagerkvist, P.; Holmgren, S.; Jonsson, S.; Sandström, B.; Tovedal, A.; Vesterlund, A.; Vidmar, T.; Kastlander, J.

    2016-04-01

    The isotopic composition of uranium was measured using high-resolution gamma spectrometry. Two acid solutions and two samples in the form of UO2 pellets were measured. The measurements were done in close geometries, i.e. directly on the endcap of the high-purity germanium (HPGe) detector. Applying no corrections for count losses due to true coincidence summing (TCS) resulted in up to about 40% deviation in the abundance of 235U from the results obtained with mass spectrometry. However, after correction for TCS, excellent agreement was achieved between the results obtained using the two different measurement methods, or with a certified value. Moreover, after corrections, the fitted relative response curves for the different geometries correlated excellently with simulated responses of the HPGe detector.

  9. Background Conditions for the October 29, 2003 Solar Flare by the AVS-F Apparatus Data

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Lyapin, A. R.; Troitskaya, E. V.

    The background model for the AVS-F apparatus onboard the CORONAS-F satellite for the October 29, 2003 X10-class solar flare is discussed in the present work. The model was developed for the AVS-F count rates in the low- and high-energy spectral ranges, both in individual channels and summed. Count rates were approximated by polynomials of high order, taking into account the mean count rate in the geomagnetic equatorial region on the different orbit parts and the Kp-index averaged over 5 bins in the time interval from -24 to -12 hours before the time of geomagnetic equator passing. The observed averaged count rates on the equator in the region of geomagnetic latitude ±5° and the estimated minimum count rate values agree within statistical errors for all selected orbit parts used for background modeling. This model will be used to refine the estimated energies of the spectral features registered during the solar flare and for detailed analysis of the behavior of their temporal profiles, both in the corresponding energy bands and in the summarized energy range.

  10. Dark information of black hole radiation raised by dark energy

    NASA Astrophysics Data System (ADS)

    Ma, Yu-Han; Chen, Jin-Fu; Sun, Chang-Pu

    2018-06-01

    The "lost" information of black hole through the Hawking radiation was discovered being stored in the correlation among the non-thermally radiated particles (Parikh and Wilczek, 2000 [31], Zhang et al., 2009 [16]). This correlation information, which has not yet been proved locally observable in principle, is named by dark information. In this paper, we systematically study the influences of dark energy on black hole radiation, especially on the dark information. Calculating the radiation spectrum in the existence of dark energy by the approach of canonical typicality, which is reconfirmed by the quantum tunneling method, we find that the dark energy will effectively lower the Hawking temperature, and thus makes the black hole has longer life time. It is also discovered that the non-thermal effect of the black hole radiation is enhanced by dark energy so that the dark information of the radiation is increased. Our observation shows that, besides the mechanical effect (e.g., gravitational lensing effect), the dark energy rises the stored dark information, which could be probed by a non-local coincidence measurement similar to the coincidence counting of the Hanbury-Brown-Twiss experiment in quantum optics.

  11. Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument

    NASA Astrophysics Data System (ADS)

    Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; Menlove, Howard O.; Flaska, Marek; Pozzi, Sara A.

    2017-07-01

    The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. The NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.

  12. Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.

    The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. Thus the NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.

  13. Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument

    DOE PAGES

    Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; ...

    2017-01-05

    The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. Thus the NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.

  14. Standardisation and half-life of 89Zr.

    PubMed

    García-Toraño, E; Peyrés, V; Roteta, M; Mejuto, M; Sánchez-Cabezudo, A; Romero, E

    2018-04-01

    The nuclide 89Zr is being tested for the labelling of compounds with long blood circulation times. It decays by beta plus emission (22.8%) and by electron capture (77.2%) to 89Y. Its half-life has been determined by following the decay rate with two measurement systems: an ionisation chamber and an HPGe detector. The combination of six results gives a value of T1/2 = 78.333 (38) h, slightly lower than the DDEP recommended value of 78.42 (13) h. This radionuclide has also been standardised by liquid scintillation counting, 4πγ counting and coincidence techniques. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. PULSE SORTER

    DOEpatents

    Wade, E.J.

    1958-07-29

    An apparatus is described for counting and recording the number of electrical pulses occurring in each of a timed sequence of groups of pulses. The particular feature of the invention resides in a novel timing circuit of the univibrator type which provides very accurately timed pulses for opening each of a series of coincidence channels in sequence. The univibrator is shown incorporated in a pulse analyzing system wherein a series of pulse counting channels are periodically opened in order, one at a time, for a predetermined open time interval, so that only one channel will be open at the time of occurrence of any of the electrical pulses to be sorted.

  16. A method for screening of plant species for space use

    NASA Technical Reports Server (NTRS)

    Goeschl, J. D.; Sauer, R. L.; Scheld, H. W.

    1986-01-01

    A cost-effective methodology which monitors numerous dynamic aspects of carbon assimilation and allocation kinetics in live, intact plants is discussed. Analogous methods can apply to nitrogen uptake and allocation. This methodology capitalizes on the special properties of the short-lived, positron-gamma emitting isotope C-11, especially when applied as CO2-11 in a special extended square wave (ESW) pattern. The 20.4 minute half-life allows for repeated or continuous experiments on the same plant over periods of minutes, hours, days, or weeks. The steady-state isotope equilibrium approached during the ESW experiments and the parameters which can be analyzed by this technique are also direct results of that short half-life. Additionally, the paired 0.511 MeV gamma rays penetrate any amount of tissue, and their 180 deg opposite orientation provides good collimation and allows coincidence counting, which nearly eliminates background.

  17. Quantitative Electron-Excited X-Ray Microanalysis of Borides, Carbides, Nitrides, Oxides, and Fluorides with Scanning Electron Microscopy/Silicon Drift Detector Energy-Dispersive Spectrometry (SEM/SDD-EDS) and NIST DTSA-II.

    PubMed

    Newbury, Dale E; Ritchie, Nicholas W M

    2015-10-01

    A scanning electron microscope with a silicon drift detector energy-dispersive X-ray spectrometer (SEM/SDD-EDS) was used to analyze materials containing the low atomic number elements B, C, N, O, and F achieving a high degree of accuracy. Nearly all results fell well within an uncertainty envelope of ±5% relative (where relative uncertainty (%)=[(measured-ideal)/ideal]×100%). Quantification was performed with the standards-based "k-ratio" method with matrix corrections calculated based on the Pouchou and Pichoir expression for the ionization depth distribution function, as implemented in the NIST DTSA-II EDS software platform. The analytical strategy that was followed involved collection of high count (>2.5 million counts from 100 eV to the incident beam energy) spectra measured with a conservative input count rate that restricted the deadtime to ~10% to minimize coincidence effects. Standards employed included pure elements and simple compounds. A 10 keV beam was employed to excite the K- and L-shell X-rays of intermediate and high atomic number elements with excitation energies above 3 keV, e.g., the Fe K-family, while a 5 keV beam was used for analyses of elements with excitation energies below 3 keV, e.g., the Mo L-family.
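
    As a rough illustration of the standards-based k-ratio arithmetic described above (not the DTSA-II implementation), the sketch below forms k from background-subtracted peak intensities measured under identical conditions and scales by the standard's composition. The matrix-correction factor, which DTSA-II would compute from the Pouchou and Pichoir ionization depth distribution, is a hypothetical placeholder value here, as are all the numbers:

    ```python
    # Hedged sketch of k-ratio quantification arithmetic. All values are
    # assumed for illustration; a real analysis takes the matrix correction
    # from a phi(rho-z) model such as Pouchou & Pichoir.
    I_unknown = 12500.0       # peak counts from the unknown (assumed)
    I_standard = 30000.0      # same peak from the standard, same dose (assumed)
    C_standard = 0.50         # element mass fraction in the standard (assumed)
    matrix_correction = 1.08  # placeholder ZAF / phi(rho-z) ratio (assumed)

    k_ratio = I_unknown / I_standard
    C_unknown = k_ratio * C_standard * matrix_correction
    print(f"k = {k_ratio:.3f}, estimated mass fraction = {C_unknown:.3f}")
    ```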

  18. A SYSTEM FOR CONTINUOUS MEASUREMENT OF RADIOACTIVITY IN FLOWING STREAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rapkin, E.; Gibbs, J.A.

    1962-10-31

    An apparatus for the determination of alpha or beta radioactivity in either circulating liquid or gas streams was developed. Solid anthracene crystals are used. The detector consists of a Lucite light pipe coated with titanium dioxide and coupled to two photomultipliers which are in turn fed to appropriate coincidence-type circuitry. The detection cell, which consists of a 9-mm OD glass tube with appropriate fittings on either end, was packed with anthracene crystals. A glass frit, or glass wool, was incorporated in the cell on the downstream side to contain the anthracene, and a pledget of glass wool was placed above the anthracene on the upstream side. Carbon-14 counting efficiency was found to be of the order of 50%, with a coincident background from 100 divisions to infinity of less than 40 cpm at 900 V. Tritium counting efficiency was in the range of 7%, and the integral background from 100 divisions to infinity was about 90 cpm at 1130 V. Discussion is also given on the electronics of the detector and the performance in closed flowing systems and gas analysis. (P.C.H.)

  19. True-coincidence correction when using an LEPD for the determination of the lanthanides in the environment via k0-based INAA.

    PubMed

    Freitas, M C; De Corte, F

    1994-01-01

    As part of a recent study on the environmental effects caused by the operation of a coal-fired power station at Sines, Portugal, k0-based instrumental neutron activation analysis (INAA) was used for the determination of the lanthanides (and also of tantalum and uranium) in plant leaves and lichens. In view of the accuracy and sensitivity of the determinations, it was advantageous to make use of a low-energy photon detector (LEPD). To begin with, in the present article, a survey is given of the former developments leading to user-friendly procedures for detection efficiency calibration of the LEPD and for correction for true-coincidence (cascade summing) effects. As a continuation of this, computed coincidence-correction factors are now tabulated for the relevant low-energy gamma rays of the analytically interesting lanthanide, tantalum, and uranium radionuclides. Also the 140.5-keV line of 99Mo/99mTc is included, molybdenum being the comparator chosen when counting using an LEPD.

  20. Distributing entanglement and single photons through an intra-city, free-space quantum channel.

    PubMed

    Resch, K; Lindenthal, M; Blauensteiner, B; Böhm, H; Fedrizzi, A; Kurtsiefer, C; Poppe, A; Schmitt-Manderbach, T; Taraba, M; Ursin, R; Walther, P; Weier, H; Weinfurter, H; Zeilinger, A

    2005-01-10

    We have distributed entangled photons directly through the atmosphere to a receiver station 7.8 km away over the city of Vienna, Austria at night. Detection of one photon from our entangled pairs constitutes a triggered single-photon source from the sender. With no direct time-stable connection, the two stations found coincidence counts in the detection events by calculating the cross-correlation of locally recorded time stamps shared over a public internet channel. For this experiment, our quantum channel was maintained for a total of 40 minutes, during which time a coincidence lock found approximately 60000 coincident detection events. The polarization correlations in those events yielded a Bell parameter S = 2.27 ± 0.019, which violates the CHSH-Bell inequality by 14 standard deviations. This result is promising for entanglement-based free-space quantum communication in high-density urban areas. It is also encouraging for optical quantum communication between ground stations and satellites, since the length of our free-space link exceeds the atmospheric equivalent.
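
    A minimal sketch of the timestamp-alignment idea (assumed bin width, window, and test data; not the experiment's software): cross-correlate the two locally recorded, binned event streams to find the relative clock offset, then count events with a partner inside a coincidence window:

    ```python
    import numpy as np

    def find_offset(t_a, t_b, bin_w=1e-9):
        """Estimate the clock offset (t_a - t_b) from binned streams."""
        n = int(max(t_a.max(), t_b.max()) / bin_w) + 1
        h_a = np.bincount((t_a / bin_w).astype(int), minlength=n)
        h_b = np.bincount((t_b / bin_w).astype(int), minlength=n)
        # Zero-padded FFT cross-correlation; the peak gives the lag in bins.
        corr = np.fft.irfft(np.fft.rfft(h_a, 2 * n) * np.conj(np.fft.rfft(h_b, 2 * n)))
        lag = int(np.argmax(corr))
        if lag > n:                 # wrap-around encodes a negative lag
            lag -= 2 * n
        return lag * bin_w

    def count_coincidences(t_a, t_b, offset, window=1e-9):
        tb = np.sort(t_b + offset)  # shift B onto A's clock
        lo = np.searchsorted(tb, t_a - window)
        hi = np.searchsorted(tb, t_a + window)
        return int(np.sum(hi > lo))  # A-events with >= 1 partner in B

    rng = np.random.default_rng(3)
    pairs = np.sort(rng.uniform(0, 1e-3, 2000))            # coincident pairs
    t_a = pairs
    t_b = np.sort(np.concatenate([pairs + 3.7e-6,          # unknown offset
                                  rng.uniform(0, 1e-3, 5000)]))  # background
    off = find_offset(t_a, t_b)
    print(f"offset ~ {off * 1e6:.2f} us, "
          f"coincidences: {count_coincidences(t_a, t_b, off)}")
    ```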

  1. Method for positron emission mammography image reconstruction

    DOEpatents

    Smith, Mark Frederick

    2004-10-12

    An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method and then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
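
    A minimal sketch of the backprojection mode under an assumed 2-D geometry (hypothetical function and grid; not the patented implementation): each LOR's counts are spread along the ray between its endpoints by nearest-pixel allocation, with geometric and attenuation corrections left out for brevity:

    ```python
    import numpy as np

    def backproject(lors, counts, nx=128, ny=128, n_steps=200):
        """lors: rows of (x1, y1, x2, y2) endpoint coordinates in pixel units."""
        image = np.zeros((ny, nx))
        t = np.linspace(0.0, 1.0, n_steps)
        for (x1, y1, x2, y2), c in zip(lors, counts):
            # Step along the ray and round to the nearest image pixel.
            xs = np.rint(x1 + t * (x2 - x1)).astype(int)
            ys = np.rint(y1 + t * (y2 - y1)).astype(int)
            ok = (xs >= 0) & (xs < nx) & (ys >= 0) & (ys < ny)
            # Nearest-pixel allocation; a fuller implementation would apply
            # per-LOR geometric and attenuation corrections here.
            np.add.at(image, (ys[ok], xs[ok]), c / n_steps)
        return image

    lors = np.array([[0.0, 64.0, 127.0, 64.0]])   # one horizontal LOR
    img = backproject(lors, counts=[10.0])
    print(img.sum())   # ~10 counts spread along the line
    ```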

  2. Highly charged ion secondary ion mass spectroscopy

    DOEpatents

    Hamza, Alex V.; Schenkel, Thomas; Barnes, Alan V.; Schneider, Dieter H.

    2001-01-01

    A secondary ion mass spectrometer using slow, highly charged ions produced in an electron beam ion trap permits ultra-sensitive surface analysis and high spatial resolution simultaneously. The spectrometer comprises an ion source producing a primary ion beam of highly charged ions that are directed at a target surface, a mass analyzer, and a microchannel plate detector of secondary ions that are sputtered from the target surface after interaction with the primary beam. The unusually high secondary ion yield permits the use of coincidence counting, in which the secondary ion stops are detected in coincidence with a particular secondary ion. The association of specific molecular species can be correlated. The unique multiple secondary nature of the highly charged ion interaction enables this new analytical technique.

  3. Studies of a Next-Generation Silicon-Photomultiplier-Based Time-of-Flight PET/CT System.

    PubMed

    Hsu, David F C; Ilan, Ezgi; Peterson, William T; Uribe, Jorge; Lubberink, Mark; Levin, Craig S

    2017-09-01

    This article presents system performance studies for the Discovery MI PET/CT system, a new time-of-flight system based on silicon photomultipliers. System performance and clinical imaging were compared between this next-generation system and other commercially available PET/CT and PET/MR systems, as well as between different reconstruction algorithms. Methods: Spatial resolution, sensitivity, noise-equivalent counting rate, scatter fraction, counting rate accuracy, and image quality were characterized with the National Electrical Manufacturers Association NU-2 2012 standards. Energy resolution and coincidence time resolution were measured. Tests were conducted independently on two Discovery MI scanners installed at Stanford University and Uppsala University, and the results were averaged. Back-to-back patient scans were also performed between the Discovery MI, Discovery 690 PET/CT, and SIGNA PET/MR systems. Clinical images were reconstructed using both ordered-subset expectation maximization and Q.Clear (block-sequential regularized expectation maximization with point-spread function modeling) and were examined qualitatively. Results: The averaged full widths at half maximum (FWHMs) of the radial/tangential/axial spatial resolution reconstructed with filtered backprojection at 1, 10, and 20 cm from the system center were, respectively, 4.10/4.19/4.48 mm, 5.47/4.49/6.01 mm, and 7.53/4.90/6.10 mm. The averaged sensitivity was 13.7 cps/kBq at the center of the field of view. The averaged peak noise-equivalent counting rate was 193.4 kcps at 21.9 kBq/mL, with a scatter fraction of 40.6%. The averaged contrast recovery coefficients for the image-quality phantom were 53.7, 64.0, 73.1, 82.7, 86.8, and 90.7 for the 10-, 13-, 17-, 22-, 28-, and 37-mm-diameter spheres, respectively. The average photopeak energy resolution was 9.40% FWHM, and the average coincidence time resolution was 375.4 ps FWHM. Clinical image comparisons between the PET/CT systems demonstrated the high quality of the Discovery MI. Comparisons between the Discovery MI and SIGNA showed a similar spatial resolution and overall imaging performance. Lastly, the results indicated significantly enhanced image quality and contrast-to-noise performance for Q.Clear, compared with ordered-subset expectation maximization. Conclusion: Excellent performance was achieved with the Discovery MI, including 375 ps FWHM coincidence time resolution and sensitivity of 14 cps/kBq. Comparisons between reconstruction algorithms and other multimodal silicon photomultiplier and non-silicon photomultiplier PET detector system designs indicated that performance can be substantially enhanced with this next-generation system. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  4. Simulations of a PSD Plastic Neutron Collar for Assaying Fresh Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hausladen, Paul; Newby, Jason; McElroy, Robert Dennis

    The potential performance of a notional active coincidence collar for assaying uranium fuel based on segmented detectors constructed from the new PSD plastic fast organic scintillator with pulse shape discrimination capability was investigated in simulation. Like the International Atomic Energy Agency's present Uranium Neutron Collar for LEU (UNCL), the PSD plastic collar would also function by stimulating fission in the 235U content of the fuel with a moderated 241Am/Li neutron source and detecting instances of induced fission via neutron coincidence counting. In contrast to the moderated detectors of the UNCL, the fast time scale of detection in the scintillator eliminates statistical errors due to accidental coincidences that limit the performance of the UNCL. However, the potential to detect a single neutron multiple times historically has been one of the properties of organic scintillator detectors that has prevented their adoption for international safeguards applications. Consequently, as part of the analysis of simulated data, a method was developed by which true neutron-neutron coincidences can be distinguished from inter-detector scatter that takes advantage of the position and timing resolution of segmented detectors. Then, the performance of the notional simulated coincidence collar was evaluated for assaying a variety of fresh fuels, including some containing burnable poisons and partial defects. In these simulations, particular attention was paid to the analysis of fast mode measurements. In fast mode, a Cd liner is placed inside the collar to shield the fuel from the interrogating source and detector moderators, thereby eliminating the thermal neutron flux that is most sensitive to the presence of burnable poisons that are ubiquitous in modern nuclear fuels. The simulations indicate that the predicted precision of fast mode measurements is similar to what can be achieved by the present UNCL in thermal mode. For example, the statistical accuracy of a ten-minute measurement of fission coincidences collected in fast mode will be approximately 1% for most fuels of interest, yielding a ~1.4% error after subtraction of a five-minute measurement of the spontaneous fissions from 238U in the fuel, a ~2% error in analyzed linear density after accounting for the slope of the calibration curve, and a ~2.9% total error after addition of an assumed systematic error of 2%.

  5. Validation of daily increments in otoliths of northern squawfish larvae

    USGS Publications Warehouse

    Wertheimer, R.H.; Barfoot, C.A.

    1998-01-01

    Otoliths from laboratory-reared northern squawfish, Ptychocheilus oregonensis, larvae were examined to determine the periodicity of increment deposition. Increment deposition began in both sagittae and lapilli after hatching. Reader counts indicated that increment formation was daily in sagittae of 1-29-day-old larvae. However, increment counts from lapilli were significantly less than the known ages of northern squawfish larvae, possibly because some increments were not detectable. Otolith readability and age agreement among readers were greatest for young (<11 days) northern squawfish larvae. This was primarily because a transitional zone of low-contrast material began forming in otoliths of 8-11-day-old larvae and persisted until approximately 20 days after hatching. Formation of the transition zone appeared to coincide with the onset of exogenous feeding and continued through yolk sac absorption. Our results indicate that aging wild-caught northern squawfish larvae using daily otolith increment counts is possible.

  6. Two-neutrino double-β decay of 150Nd to excited final states in 150Sm

    NASA Astrophysics Data System (ADS)

    Kidd, M. F.; Esterline, J. H.; Finch, S. W.; Tornow, W.

    2014-11-01

    Background: Double-β decay is a rare nuclear process in which two neutrons in the nucleus are converted to two protons with the emission of two electrons and two electron antineutrinos. Purpose: We measured the half-life of the two-neutrino double-β decay of 150Nd to excited final states of 150Sm by detecting the deexcitation γ rays of the daughter nucleus. Method: This study yields the first detection of the coincidence γ rays from the 0_1^+ excited state of 150Sm. These γ rays have energies of 333.97 and 406.52 keV and are emitted in coincidence through a 0_1^+ → 2_1^+ → 0_gs^+ transition. Results: The enriched Nd2O3 sample consisted of 40.13 g of 150Nd and was observed for 642.8 days at the Kimballton Underground Research Facility, producing 21.6 net events in the region of interest. This count rate gives a half-life of T_1/2 = [1.07 +0.45/-0.25 (stat) ± 0.07 (syst)] × 10^20 yr. The effective nuclear matrix element was found to be 0.0465 +0.0098/-0.0054. Finally, lower limits were obtained for decays to higher excited final states. Conclusions: Our half-life measurement agrees within uncertainties with another recent measurement in which no coincidence technique was employed. Our nuclear matrix element calculation may have an impact on a recent neutrinoless double-β decay nuclear matrix element calculation, which implies that the decay to the first excited state in 150Sm is favored over that to the ground state.
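
    The half-life arithmetic behind such a counting measurement is compact: T_1/2 = ln(2) · N_atoms · ε · t / S_net. The sketch below reproduces the order of magnitude from the numbers quoted in the abstract; the coincidence detection efficiency is an assumed illustrative value, not one quoted by the authors:

    ```python
    import numpy as np

    # Worked sketch of the decay-rate arithmetic (illustrative, not the
    # authors' analysis). The efficiency below is a hypothetical value.
    m_sample = 40.13          # g of 150Nd (from the abstract)
    molar_mass = 149.92       # g/mol for 150Nd
    t_live = 642.8 / 365.25   # live time in years (from the abstract)
    s_net = 21.6              # net events in the region of interest
    eff = 0.0118              # assumed gamma-gamma coincidence efficiency

    n_atoms = m_sample / molar_mass * 6.02214e23
    half_life = np.log(2) * n_atoms * eff * t_live / s_net
    print(f"T1/2 ~ {half_life:.2e} yr")   # ~1.1e20 yr, same order as reported
    ```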

  7. Absorption and backscatter of internal conversion electrons in the measurements of surface contamination of ¹³⁷Cs.

    PubMed

    Yunoki, A; Kawada, Y; Yamada, T; Unno, Y; Sato, Y; Hino, Y

    2013-11-01

    We measured 4π and 2π counting efficiencies for internal conversion electrons (ICEs), gross β-particles and also β-rays alone, with various source conditions regarding absorber and backing foil thickness, using the e-X coincidence technique. Dominant differences regarding the penetration, attenuation and backscattering properties among ICEs and β-rays were revealed. Although the abundance of internal conversion electrons of (137)Cs-(137)Ba is only 9.35%, 60% of gross counts may be attributed to ICEs under the worst source conditions. This information will be useful for radionuclide metrology and for surface contamination monitoring. © 2013 Elsevier Ltd. All rights reserved.

  8. Performance evaluation and optimization of the MiniPET-II scanner

    NASA Astrophysics Data System (ADS)

    Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo

    2013-04-01

    This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full ring camera that includes 12 detector modules in a single ring comprised of 1.27×1.27×12 mm3 LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold settings of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low threshold energy window are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering the τ and Elt resulted in substantial changes in the NEC peak and the sensitivity while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of the τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm. Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.
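
    For reference, the noise-equivalent count rate quoted in such count-rate tests is commonly computed as NEC = T²/(T + S + kR), where T, S, and R are the true, scatter, and random coincidence rates and k is 1 or 2 depending on the randoms-estimation convention. A minimal sketch with hypothetical rates:

    ```python
    # Sketch of the NEC figure of merit; the randoms factor k and the
    # example rates are assumptions for illustration only.
    def nec(trues, scatters, randoms, k=2.0):
        total = trues + scatters + k * randoms
        return trues ** 2 / total if total > 0 else 0.0

    print(f"NEC = {nec(20.0, 7.6, 5.0):.1f} kcps")  # hypothetical kcps rates
    ```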

  9. The D-D Neutron Generator as an Alternative to Am(Li) Isotopic Neutron Source in the Active Well Coincidence Counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McElroy, Robert Dennis; Cleveland, Steven L.

    The 235U mass assay of bulk uranium items, such as oxide canisters, fuel pellets, and fuel assemblies, is not achievable by traditional gamma-ray assay techniques due to the limited penetration of the item by the characteristic 235U gamma rays. Instead, fast neutron interrogation methods such as active neutron coincidence counting must be used. For international safeguards applications, the most commonly used active neutron systems, the Active Well Coincidence Counter (AWCC), Uranium Neutron Collar (UNCL) and 252Cf Shuffler, rely on fast neutron interrogation using an isotopic neutron source [i.e., 252Cf or Am(Li)] to achieve better measurement accuracies than are possible using gamma-ray techniques for high-mass, high-density items. However, the Am(Li) sources required for the AWCC and UNCL systems are no longer manufactured, and newly produced systems rely on limited supplies of sources salvaged from disused instruments. The 252Cf shuffler systems rely on the use of high-output 252Cf sources, which while still available have become extremely costly for use in routine operations and require replacement every five to seven years. Lack of a suitable alternative neutron interrogation source would leave a potentially significant gap in the safeguarding of uranium processing facilities. In this work, we made use of Oak Ridge National Laboratory’s (ORNL’s) Large Volume Active Well Coincidence Counter (LV-AWCC) and a commercially available deuterium-deuterium (D-D) neutron generator to examine the potential of the D-D neutron generator as an alternative to the isotopic sources. We present the performance of the LV-AWCC with D-D generator for the assay of 235U based on the results of Monte Carlo N-Particle (MCNP) simulations and measurements of depleted uranium (DU), low enriched uranium (LEU), and highly enriched uranium (HEU) items.

  10. Centennial and Extreme Climate Variability in the Last 1500 Year from the Belize Central Shelf Lagoon (Central America): Successive Droughts and Floods Linked to the Demise of the Mayan Civilization

    NASA Astrophysics Data System (ADS)

    Droxler, A. W.; Agar Cetin, A.; Bentley, S. J.

    2014-12-01

    This study focuses on the last 1500 yr precipitation record archived in the mixed carbonate/siliciclastic sediments accumulated in the Belize Central Shelf Lagoon, part of the Yucatan Peninsula eastern continental margin, proximal to the land areas where the Mayan Civilization thrived and then abruptly collapsed. This study is mainly based upon the detailed analyses of cores BZE-RH-SVC-58 and 68, retrieved in 30 and 19 m of water depth from Elbow Caye Lagoon and English Caye Channel, respectively. The core timeframe is well-constrained by AMS radiocarbon dating of benthic foraminifera, Quinqueloculina. Carbonate content was determined by carbonate bomb, particle size fractions with a Malvern Master Sizer 2000 particle size analyzer, and element (Ti, Si, K, Fe, Al, Ca, and Sr) counts via X-Ray Fluorescence (XRF). The variations of elements such as Ti and K counts, and Ti/Al, in these two cores have recorded, over the past 1500 years, the weathering rate variations of the adjacent Maya Mountains, defining alternating periods of high precipitation and droughts, linked to large climate fluctuations and extreme events, highly influenced by the ITCZ latitudinal migration. The century CE 800-900, just preceding the Medieval Climate Anomaly (MCA) and characterized by unusually low Ti counts and Ti/Al, is interpreted to represent a time of low precipitation and resulting severe droughts in the Yucatan Peninsula, contemporaneous with the Mayan Terminal Classic Collapse. High Ti counts and Ti/Al, although highly variable, during the MCA (CE 900-1350) are interpreted as reflecting an unusually warm period characterized by two 100-to-250-year-long intervals of higher precipitation when the number of tropical cyclones peaked. These two intervals of high precipitation during the MCA are separated by a century (CE 1000-1100) of severe droughts and low tropical storm frequency coinciding with the collapse of Chichen Itza (CE 1040-1100). The Little Ice Age (CE 1350-1850), several centuries during which Ti counts and Ti/Al reach minimum values, is characterized by systematically drier and colder climate conditions with a low frequency of tropical cyclones. Two extreme Ti and K count minima might coincide with historical drought times and related Caribbean-wide famines in the year CE 1535 and the last third of the 18th century (CE 1765-1800).

  11. A Conway-Maxwell-Poisson (CMP) model to address data dispersion on positron emission tomography.

    PubMed

    Santarelli, Maria Filomena; Della Latta, Daniele; Scipioni, Michele; Positano, Vincenzo; Landini, Luigi

    2016-10-01

    Positron emission tomography (PET) in medicine exploits the properties of positron-emitting unstable nuclei. The pairs of γ-rays emitted after annihilation are revealed by coincidence detectors and stored as projections in a sinogram. It is well known that radioactive decay follows a Poisson distribution; however, deviation from Poisson statistics occurs on PET projection data prior to reconstruction due to physical effects, measurement errors, correction of deadtime, scatter, and random coincidences. A model that describes the statistical behavior of measured and corrected PET data can aid in understanding the statistical nature of the data: it is a prerequisite to develop efficient reconstruction and processing methods and to reduce noise. The deviation from Poisson statistics in PET data could be described by the Conway-Maxwell-Poisson (CMP) distribution model, which is characterized by the centring parameter λ and the dispersion parameter ν, the latter quantifying the deviation from a Poisson distribution model. In particular, the parameter ν allows quantifying over-dispersion (ν<1) or under-dispersion (ν>1) of data. A simple and efficient method for λ and ν parameter estimation is introduced and assessed using Monte Carlo simulation for a wide range of activity values. The application of the method to simulated and experimental PET phantom data demonstrated that the CMP distribution parameters could detect deviation from the Poisson distribution both in raw and corrected PET data. It may be usefully implemented in image reconstruction algorithms and quantitative PET data analysis, especially in low-count emission data, as in dynamic PET data, where the method demonstrated the best accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
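
    A minimal sketch of the CMP model (a generic maximum-likelihood fit, not the paper's specific estimator): the pmf is P(k) ∝ λ^k/(k!)^ν with normalization Z(λ, ν), and the fitted ν is read off as the dispersion diagnostic. All settings below are assumptions for illustration:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def cmp_logpmf(k, lam, nu, kmax=200):
        # log P(k) = k*log(lam) - nu*log(k!) - log Z(lam, nu),
        # with Z approximated by a truncated sum (kmax assumed large enough).
        ks = np.arange(kmax)
        logz = np.logaddexp.reduce(ks * np.log(lam) - nu * gammaln(ks + 1))
        return k * np.log(lam) - nu * gammaln(k + 1) - logz

    def fit_cmp(data):
        # Generic Nelder-Mead MLE in log space to keep lam, nu positive.
        def nll(p):
            lam, nu = np.exp(p)
            return -np.sum(cmp_logpmf(data, lam, nu))
        res = minimize(nll, x0=np.log([data.mean() + 1e-6, 1.0]),
                       method="Nelder-Mead")
        return np.exp(res.x)

    rng = np.random.default_rng(0)
    counts = rng.poisson(4.0, size=5000)   # Poisson test data: expect nu ~ 1
    lam_hat, nu_hat = fit_cmp(counts)
    print(f"lambda ~ {lam_hat:.2f}, nu ~ {nu_hat:.2f}")
    ```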

  12. Beam-on imaging of short-lived positron emitters during proton therapy

    NASA Astrophysics Data System (ADS)

    Buitenhuis, H. J. T.; Diblen, F.; Brzezinski, K. W.; Brandenburg, S.; Dendooven, P.

    2017-06-01

    In vivo dose delivery verification in proton therapy can be performed by positron emission tomography (PET) of the positron-emitting nuclei produced by the proton beam in the patient. A PET scanner installed in the treatment position of a proton therapy facility that takes data with the beam on will see very short-lived nuclides as well as longer-lived nuclides. The most important short-lived nuclide for proton therapy is 12N (Dendooven et al 2015 Phys. Med. Biol. 60 8923-47), which has a half-life of 11 ms. The results of a proof-of-principle experiment of beam-on PET imaging of short-lived 12N nuclei are presented. The Philips Digital Photon Counting Module TEK PET system was used, which is based on LYSO scintillators mounted on digital SiPM photosensors. A 90 MeV proton beam from the cyclotron at KVI-CART was used to investigate the energy and time spectra of PET coincidences during beam-on. Events coinciding with proton bunches, such as prompt gamma rays, were removed from the data via an anti-coincidence filter with the cyclotron RF. The resulting energy spectrum allowed good identification of the 511 keV PET counts during beam-on. A method was developed to subtract the long-lived background from the 12N image by introducing a beam-off period into the cyclotron beam time structure. We measured 2D images and 1D profiles of the 12N distribution. A range shift of 5 mm was measured as 6 ± 3 mm using the 12N profile. A larger, more efficient PET system with a higher data throughput capability will allow beam-on 12N PET imaging of single spots in the distal layer of an irradiation with an increased signal-to-background ratio and thus better accuracy. A simulation shows that a large dual-panel scanner, which images a single spot directly after it is delivered, can measure a 5 mm range shift with millimeter accuracy: 5.5 ± 1.1 mm for 1 × 10^8 protons and 5.2 ± 0.5 mm for 5 × 10^8 protons. This makes fast and accurate feedback on the dose delivery during treatment possible.
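
    A minimal sketch of the anti-coincidence filter idea (hypothetical RF period, veto width, and function names; not the acquisition software): events whose phase relative to the cyclotron RF falls inside a veto window around each proton bunch are discarded, keeping the in-between periods where the PET signal dominates:

    ```python
    import numpy as np

    rf_period = 1.0 / 44.5e6   # hypothetical cyclotron RF period in s
    veto = 4e-9                # hypothetical veto half-width around each bunch

    def rf_filter(timestamps, rf_phase=0.0):
        # Phase of each event within the RF period, centered on the bunch.
        phase = np.mod(timestamps - rf_phase + 0.5 * rf_period,
                       rf_period) - 0.5 * rf_period
        return timestamps[np.abs(phase) > veto]

    ts = np.sort(np.random.default_rng(4).uniform(0, 1e-3, 10000))
    print(len(rf_filter(ts)), "of", len(ts), "events kept")
    ```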

  13. Application of TDCR-Geant4 modeling to standardization of 63Ni.

    PubMed

    Thiam, C; Bobin, C; Chauvenet, B; Bouchard, J

    2012-09-01

    As an alternative to the classical TDCR model applied to liquid scintillation (LS) counting, a stochastic approach based on the Geant4 toolkit is presented for the simulation of light emission inside the dedicated three-photomultiplier detection system. To this end, the Geant4 modeling includes a comprehensive description of optical properties associated with each material constituting the optical chamber. The objective is to simulate the propagation of optical photons from their creation in the LS cocktail to the production of photoelectrons in the photomultipliers. First validated for the case of radionuclide standardization based on Cerenkov emission, the scintillation process has been added to a TDCR-Geant4 modeling using the Birks expression in order to account for the light-emission nonlinearity owing to ionization quenching. The scintillation yield of the commercial Ultima Gold LS cocktail has been determined from double-coincidence detection efficiencies obtained for (60)Co and (54)Mn with the 4π(LS)β-γ coincidence method. In this paper, the stochastic TDCR modeling is applied for the case of the standardization of (63)Ni (pure β(-)-emitter; E(max)=66.98 keV) and the activity concentration is compared with the result given by the classical model. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. A physics investigation of deadtime losses in neutron counting at low rates with Cf252

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Louise G; Croft, Stephen

    2009-01-01

    {sup 252}Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally correlated neutron counting using {sup 252}Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g., AmLi to a close approximation), DT losses do not vanish in the low-rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts' and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, {tau}). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency {sup 3}He neutron counter with short die-away time, representing an ideal {sup 3}He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
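
    A minimal Monte Carlo sketch of the within-burst loss mechanism (assumed rates, efficiency, and die-away time, and a Poisson stand-in for the true fission multiplicity distribution; not the authors' pulse train code): even when fissions rarely overlap, neutrons from the same fission pile into an updating deadtime:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fission_rate = 1000.0   # fissions/s (assumed)
    eff = 0.5               # detection efficiency (assumed)
    die_away = 50e-6        # detector die-away time in s (assumed)
    tau_d = 2e-6            # updating deadtime in s (assumed)
    t_total = 100.0         # simulated counting time in s

    n_fissions = rng.poisson(fission_rate * t_total)
    t_fis = rng.uniform(0.0, t_total, n_fissions)
    # Poisson multiplicity with Cf-like mean (the real distribution is
    # narrower; this is a simplification for illustration).
    mult = rng.poisson(3.76, n_fissions)
    detected = rng.binomial(mult, eff)
    pulses = np.repeat(t_fis, detected) + rng.exponential(die_away, detected.sum())
    pulses.sort()

    # Updating (paralyzable) deadtime: a pulse is recorded only if it falls
    # at least tau_d after the previous pulse; every pulse, recorded or not,
    # extends the dead period.
    kept, last = 0, -np.inf
    for t in pulses:
        if t - last >= tau_d:
            kept += 1
        last = t

    loss = 1.0 - kept / len(pulses)
    print(f"incoming {len(pulses)/t_total:.0f} cps, fractional loss {loss:.3%}")
    ```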

  15. Stimulated Raman Spectroscopy with Entangled Light: Enhanced Resolution and Pathway Selection

    PubMed Central

    2015-01-01

    We propose a novel femtosecond stimulated Raman spectroscopy (FSRS) technique that combines entangled photons with interference detection to select matter pathways and enhance the resolution. Following photoexcitation by an actinic pump, the measurement uses a pair of broad-band entangled photons; one (signal) interacts with the molecule and together with a third narrow-band pulse induces the Raman process. The other (idler) photon provides a reference for the coincidence measurement. This interferometric photon coincidence counting detection allows one to separately measure the Raman gain and loss signals, which is not possible with conventional probe transmission detection. Entangled photons further provide a unique temporal and spectral detection window that can better resolve fast excited-state dynamics compared to classical and correlated disentangled states of light. PMID:25177427

  16. Coincidence detection of spatially correlated photon pairs with a monolithic time-resolving detector array.

    PubMed

    Unternährer, Manuel; Bessire, Bänz; Gasparini, Leonardo; Stoppa, David; Stefanov, André

    2016-12-12

    We demonstrate coincidence measurements of spatially entangled photons by means of a multi-pixel based detection array. The sensor, originally developed for positron emission tomography applications, is a fully digital 8×16 silicon photomultiplier array allowing not only photon counting but also per-pixel time stamping of the arrived photons with an effective resolution of 265 ps. Together with a frame rate of 500 kfps, this property exceeds the capabilities of conventional charge-coupled device cameras which have become of growing interest for the detection of transversely correlated photon pairs. The sensor is used to measure a second-order correlation function for various non-collinear configurations of entangled photons generated by spontaneous parametric down-conversion. The experimental results are compared to theory.

  17. Short-term responses of birds to prescribed fire in fire-suppressed forests of California

    Treesearch

    Bagne Karen; Kathryn Purcell

    2011-01-01

    Prescribed fire is one tool for restoring fire-suppressed forests, but application of fire during spring coincides with breeding and arrival of migrant birds. We examined effects of low-severity prescribed fires on counts of birds in a managed forest in the Sierra Nevada of California immediately, 1 year, and 3–6 years after fire was applied in spring. Of 26 species...

  18. Development of a Modern Cosmic Ray Telescope based on Silicon Photomultipliers for use in High Schools

    NASA Astrophysics Data System (ADS)

    Ruiz Castruita, Daniel; Niduaza, Rommel; Hernandez, Victor; Knox, Adrian; Ramos, Daniel; Fan, Sewan; Fatuzzo, Laura

    2015-04-01

    Lately, a new light sensor technology based on the breakdown phenomenon in the reverse-biased silicon diode has found many applications that span from particle physics to medical imaging science. The silicon photomultiplier (SiPM) has several notable advantages compared to conventional photomultiplier tubes, which include lower cost, lower operating voltage and the ability to measure very weak light signals at the single photon level. At this conference meeting, we describe our efforts to implement SiPMs as readout light detectors for plastic scintillators in a cosmic ray telescope for use in high schools. In particular, we describe our work in designing, testing and assembling the cosmic ray telescope. We include a high gain preamplifier, a custom coincidence circuit using fast comparators to discriminate the SiPM signal amplitudes and a monovibrator IC for lengthening the singles and coincidence logic pulses. An Arduino micro-controller and program sketches are used for processing and storing the singles and coincidence counts data. Results from our measurements will be illustrated and presented. US Department of Education Title V Grant Award PO31S090007.

  19. Measuring the number of independent emitters in single-molecule fluorescence images and trajectories using coincident photons.

    PubMed

    Weston, Kenneth D; Dyck, Martina; Tinnefeld, Philip; Müller, Christian; Herten, Dirk P; Sauer, Markus

    2002-10-15

    A simple new approach is described and demonstrated for measuring the number of independent emitters along with the fluorescence intensity, lifetime, and emission wavelength for trajectories and images of single molecules and multichromophoric systems using a single PC plug-in card for time-correlated single-photon counting. The number of independent emitters present in the detection volume can be determined using the interphoton times in a manner similar to classical antibunching experiments. In contrast to traditional coincidence analysis based on pulsed laser excitation and direct measurement of coincident photon pairs using a time-to-amplitude converter, the interphoton distances are retrieved afterward by recording the absolute arrival time of each photon with nanosecond time resolution on two spectrally separated detectors. Intensity changes that result from fluctuations of a photophysical parameter can be distinguished from fluctuations due to changes in the number of emitters (e.g., photobleaching) in single chromophore and multichromophore intensity trajectories. This is the first report to demonstrate imaging with contrast based on the number of independently emitting species within the detection volume.
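
    The emitter-number readout in such coincidence analyses rests on a simple relation: for N independent, identical single-photon emitters the zero-delay coincidence rate relative to the uncorrelated level approaches 1 - 1/N. A toy inversion with hypothetical peak areas (not the authors' analysis software):

    ```python
    # Sketch: estimate the number of independent emitters from the ratio of
    # zero-delay to uncorrelated coincidences. Numbers are hypothetical.
    n_center = 180.0    # coincident pairs in the zero-delay peak (assumed)
    n_lateral = 360.0   # average pairs in the uncorrelated side peaks (assumed)

    ratio = n_center / n_lateral
    n_emitters = 1.0 / (1.0 - ratio) if ratio < 1 else float("inf")
    print(f"coincidence ratio {ratio:.2f} -> ~{n_emitters:.1f} emitters")
    ```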

  20. Dose Calibration of the ISS-RAD Fast Neutron Detector

    NASA Technical Reports Server (NTRS)

    Zeitlin, C.

    2015-01-01

    The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ±10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
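
    At analysis time, the "pSv per count" approach reduces to weighting the measured pulse-height histogram by the response-derived conversion curves and summing. A minimal sketch with invented numbers; the real FND curves come from the QMN calibration and are not reproduced here.

    ```python
    import numpy as np

    counts = np.array([1200, 950, 610, 340, 150, 40])   # events per pulse-height bin
    psv_per_count = np.array([8.0, 12.0, 18.0, 25.0, 33.0, 42.0])  # invented curve
    pgy_per_count = np.array([1.0, 1.4, 2.0, 2.6, 3.2, 3.9])       # invented curve

    dose_equivalent_pSv = float(np.dot(counts, psv_per_count))
    dose_pGy = float(np.dot(counts, pgy_per_count))
    print(f"H = {dose_equivalent_pSv:.0f} pSv, D = {dose_pGy:.0f} pGy")
    ```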

  1. A New Pulse Pileup Rejection Method Based on Position Shift Identification

    NASA Astrophysics Data System (ADS)

    Gu, Z.; Prout, D. L.; Taschereau, R.; Bai, B.; Chatziioannou, A. F.

    2016-02-01

    Pulse pileup events degrade the signal-to-noise ratio (SNR) of nuclear medicine data. When such events occur in multiplexed detectors, they cause spatial misposition, energy spectrum distortion and degraded timing resolution, which leads to image artifacts. Pulse pileup is pronounced in PETbox4, a benchtop PET scanner dedicated to high sensitivity and high resolution imaging of mice. In that system, the combination of high absolute sensitivity, long scintillator decay time (BGO) and highly multiplexed electronics leads to a significant fraction of pulse pileup, reached at a lower total activity than in comparable instruments. In this manuscript, a new pulse pileup rejection method named position shift rejection (PSR) is introduced. The performance of PSR is compared with a conventional leading edge rejection (LER) method and with no pileup rejection implemented (NoPR). A comprehensive digital pulse library was developed for objective evaluation and optimization of the PSR and LER, in which pulse waveforms directly recorded from real measurements exactly represent the signals to be processed. Physical measurements including singles event acquisition, peak system sensitivity and the NEMA NU-4 image quality phantom were also performed on the PETbox4 system to validate and compare the different pulse pileup rejection methods. The evaluation of both physical measurements and model pulse trains demonstrated that the new PSR performs more accurate pileup event identification and avoids erroneous rejection of valid events. For the PETbox4 system, this improvement leads to a significant recovery of sensitivity at low count rates, amounting to about one quarter of the expected true coincidence events, compared to the LER method. Furthermore, with the implementation of PSR, optimal image quality can be achieved near the peak noise equivalent count rate (NECR).
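
    The idea behind position shift rejection can be illustrated with a toy model: in a multiplexed (Anger-style) readout, a second pulse piling up mid-integration pulls the centroid computed from the late portion of the waveform away from the early-portion centroid. The sketch below is only a schematic of that idea, with invented channel weights, charges and threshold; it is not the published PSR algorithm.

    ```python
    import numpy as np

    weights = np.array([-1.5, -0.5, 0.5, 1.5])   # channel position weights (a.u.)

    def centroid(q):
        """Anger-style centroid of four multiplexed channel charges."""
        return np.dot(q, weights) / q.sum()

    def psr_reject(q_early, q_late, threshold=0.1):
        """Flag pileup when the early- and late-window centroids disagree."""
        return abs(centroid(q_early) - centroid(q_late)) > threshold

    clean = (np.array([5.0, 20.0, 60.0, 15.0]), np.array([2.0, 8.0, 24.0, 6.0]))
    piled = (np.array([5.0, 20.0, 60.0, 15.0]), np.array([30.0, 10.0, 12.0, 3.0]))
    print(psr_reject(*clean), psr_reject(*piled))   # False, True
    ```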

  2. Development and Operation of a High Resolution Positron Emission Tomography System to Perform Metabolic Studies on Small Animals.

    NASA Astrophysics Data System (ADS)

    Hogan, Matthew John

    A positron emission tomography system designed to perform high resolution imaging of small volumes has been characterized. Two large area planar detectors, used to detect the annihilation gamma rays, formed a large aperture stationary positron camera. The detectors were multiwire proportional chambers coupled to high density lead stack converters. Detector efficiency was 8%. The coincidence resolving time was 500 nsec. The maximum system sensitivity was 60 cps/μCi for a solid angle of acceptance of 0.74π sr. The maximum useful coincidence count rate was 1500 cps and was limited by electronic dead time. Image reconstruction was done by performing a 3-dimensional deconvolution using Fourier transform methods. Noise propagation during reconstruction was minimized by choosing a 'minimum norm' reconstructed image. In the stationary detector system (with a limited angle of acceptance for coincident events) statistical uncertainty in the data limited reconstruction in the direction normal to the detector surfaces. Data from a rotated phantom showed that detector rotation will correct this problem. Resolution was 4 mm in planes parallel to the detectors and ~15 mm in the normal direction. Compton scattering of gamma rays within a source distribution was investigated using both simulated and measured data. Attenuation due to scatter was as high as 60%. For small volume imaging the Compton background was identified and an approximate correction was performed. A semiquantitative blood flow measurement to bone in the leg of a cat using the ¹⁸F⁻ ion was performed. The results were comparable to investigations using more conventional techniques. Qualitative scans using ¹⁸F-labelled deoxy-D-glucose to assess brain glucose metabolism in a rhesus monkey were also performed.

  3. NEMA NU 2-2007 performance measurements of the Siemens Inveon™ preclinical small animal PET system

    PubMed Central

    Kemp, Brad J; Hruska, Carrie B; McFarland, Aaron R; Lenox, Mark W; Lowe, Val J

    2010-01-01

    National Electrical Manufacturers Association (NEMA) NU 2-2007 performance measurements were conducted on the Inveon™ preclinical small animal PET system developed by Siemens Medical Solutions. The scanner uses 1.51 × 1.51 × 10 mm LSO crystals grouped in 20 × 20 blocks; a tapered light guide couples the LSO crystals of a block to a position-sensitive photomultiplier tube. There are 80 rings with 320 crystals per ring and the ring diameter is 161 mm. The transaxial and axial fields of view (FOVs) are 100 and 127 mm, respectively. The scanner can be docked to a CT scanner; the performance characteristics of the CT component are not included herein. Performance measurements of spatial resolution, sensitivity, scatter fraction and count rate performance were obtained for different energy windows and coincidence timing window widths. For brevity, the results described here are for an energy window of 350–650 keV and a coincidence timing window of 3.43 ns. The spatial resolution at the center of the transaxial and axial FOVs was 1.56, 1.62 and 2.12 mm in the tangential, radial and axial directions, respectively, and the system sensitivity was 36.2 cps kBq−1 for a line source (7.2% for a point source). For mouse- and rat-sized phantoms, the scatter fraction was 5.7% and 14.6%, respectively. The peak noise equivalent count rate with a noisy randoms estimate was 1475 kcps at 130 MBq for the mouse-sized phantom and 583 kcps at 74 MBq for the rat-sized phantom. The performance measurements indicate that the Inveon™ PET scanner is a high-resolution tomograph with excellent sensitivity that is capable of imaging at a high count rate. PMID:19321924
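
    For reference, the NEMA noise equivalent count rate quoted here is computed as NECR = T²/(T + S + kR), with k = 2 for a noisy (delayed-window) randoms estimate and k = 1 for a noise-free one. A minimal sketch with invented rates, not the Inveon measurement data:

    ```python
    def necr(trues, scatter, randoms, noisy_randoms=True):
        """NEMA noise equivalent count rate: T^2 / (T + S + k*R)."""
        k = 2.0 if noisy_randoms else 1.0
        return trues ** 2 / (trues + scatter + k * randoms)

    # Invented rates in kcps:
    print(necr(trues=900.0, scatter=120.0, randoms=400.0))                       # ~445
    print(necr(trues=900.0, scatter=120.0, randoms=400.0, noisy_randoms=False))  # ~570
    ```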

  4. NEMA NU 2-2007 performance measurements of the Siemens Inveon™ preclinical small animal PET system

    NASA Astrophysics Data System (ADS)

    Kemp, Brad J.; Hruska, Carrie B.; McFarland, Aaron R.; Lenox, Mark W.; Lowe, Val J.

    2009-04-01

    National Electrical Manufacturers Association (NEMA) NU 2-2007 performance measurements were conducted on the Inveon™ preclinical small animal PET system developed by Siemens Medical Solutions. The scanner uses 1.51 × 1.51 × 10 mm LSO crystals grouped in 20 × 20 blocks; a tapered light guide couples the LSO crystals of a block to a position-sensitive photomultiplier tube. There are 80 rings with 320 crystals per ring and the ring diameter is 161 mm. The transaxial and axial fields of view (FOVs) are 100 and 127 mm, respectively. The scanner can be docked to a CT scanner; the performance characteristics of the CT component are not included herein. Performance measurements of spatial resolution, sensitivity, scatter fraction and count rate performance were obtained for different energy windows and coincidence timing window widths. For brevity, the results described here are for an energy window of 350-650 keV and a coincidence timing window of 3.43 ns. The spatial resolution at the center of the transaxial and axial FOVs was 1.56, 1.62 and 2.12 mm in the tangential, radial and axial directions, respectively, and the system sensitivity was 36.2 cps kBq-1 for a line source (7.2% for a point source). For mouse- and rat-sized phantoms, the scatter fraction was 5.7% and 14.6%, respectively. The peak noise equivalent count rate with a noisy randoms estimate was 1475 kcps at 130 MBq for the mouse-sized phantom and 583 kcps at 74 MBq for the rat-sized phantom. The performance measurements indicate that the Inveon™ PET scanner is a high-resolution tomograph with excellent sensitivity that is capable of imaging at a high count rate.

  5. Image charge multi-role and function detectors

    NASA Astrophysics Data System (ADS)

    Milnes, James; Lapington, Jon S.; Jagutzki, Ottmar; Howorth, Jon

    2009-06-01

    The image charge technique used with microchannel plate imaging tubes provides several operational and practical benefits by serving to isolate the electronic image readout from the detector. The simple dielectric interface between detector and readout provides vacuum isolation and no vacuum electrical feed-throughs are required. Since the readout is mechanically separate from the detector, an image tube of generic design can be simply optimised for various applications by attaching it to different readout devices and electronics. We present imaging performance results using a single image tube with a variety of readout devices suited to differing applications: (a) a four electrode charge division tetra wedge anode, optimised for best spatial resolution in photon counting mode; (b) a cross delay line anode, enabling a higher count rate, the possibility of discriminating nearly coincident events, and an event timing resolution of better than 1 ns; (c) a multi-anode readout connected either to a multi-channel oscilloscope for analogue measurements of fast optical pulses or, alternatively, to a multi-channel time correlated single photon counting (TCSPC) card.

  6. Rate and timing cues associated with the cochlear amplifier: level discrimination based on monaural cross-frequency coincidence detection.

    PubMed

    Heinz, M G; Colburn, H S; Carney, L H

    2001-10-01

    The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.

  7. Influence of photon energy cuts on PET Monte Carlo simulation results.

    PubMed

    Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan

    2012-07-01

    The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences as well as on the simulation time and noise-equivalent count rate is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi X(K) rays must be simulated in order to obtain unbiased estimates for the number of singles, true, scattered, and random coincidences as well as for an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation of the transport of characteristic x rays therefore plays an important role if an accurate model of a PET scanner system is to be achieved, and the simulation time decreases only slowly as the photon cut increases, which, combined with the accuracy loss at high cuts, means that high photon energy cuts are not recommended for accelerating MC simulations.

  8. Bi-photon spectral correlation measurements from a silicon nanowire in the quantum and classical regimes

    PubMed Central

    Jizan, Iman; Helt, L. G.; Xiong, Chunle; Collins, Matthew J.; Choi, Duk-Yong; Joon Chae, Chang; Liscidini, Marco; Steel, M. J.; Eggleton, Benjamin J.; Clark, Alex S.

    2015-01-01

    The growing requirement for photon pairs with specific spectral correlations in quantum optics experiments has created a demand for fast, high resolution and accurate source characterisation. A promising tool for such characterisation uses classical stimulated processes, in which an additional seed laser stimulates photon generation yielding much higher count rates, as recently demonstrated for a χ(2) integrated source in A. Eckstein et al., Laser Photon. Rev. 8, L76 (2014). In this work we extend these results to χ(3) integrated sources, directly measuring for the first time the relation between spectral correlation measurements via stimulated and spontaneous four-wave mixing in an integrated optical waveguide, a silicon nanowire. We directly confirm the speed-up due to higher count rates and demonstrate that this allows additional resolution to be gained when compared to traditional coincidence measurements without any increase in measurement time. As the pump pulse duration can influence the degree of spectral correlation, all of our measurements are taken for two different pump pulse widths. This allows us to confirm that the classical stimulated process correctly captures the degree of spectral correlation regardless of pump pulse duration, and cements its place as an essential characterisation method for the development of future quantum integrated devices. PMID:26218609

  9. Conversion factors from counts to chemical ratios for the EURITRACK tagged neutron inspection system

    NASA Astrophysics Data System (ADS)

    El Kanawati, W.; Perot, B.; Carasco, C.; Eleon, C.; Valkovic, V.; Sudac, D.; Obhodas, J.

    2011-10-01

    The EURopean Illicit TRAfficking Countermeasures Kit (EURITRACK) uses 14 MeV neutrons produced by the 3H(d,n)4He fusion reaction to detect explosives and narcotics in cargo containers. Reactions induced by fast neutrons produce gamma rays, which are detected in coincidence with the associated alpha particle to determine the neutron direction. In addition, the neutron path length is obtained from a time-of-flight measurement, thus allowing the origin of the gamma rays inside the container to be determined. Information concerning the chemical composition of the target material is obtained from the analysis of the energy spectrum. The carbon, oxygen, and nitrogen relative count contributions must be converted to chemical proportions to distinguish illicit and benign organic materials. An extensive set of conversion factors based on Monte Carlo numerical simulations has been calculated, taking into account neutron slowing down and photon attenuation in the cargo materials. An experimental validation of the method is presented by comparing the measured chemical fractions of known materials, in the form of bare samples or hidden in a cargo container, to their real chemical composition. Examples of application to real cargo containers are also reported, as well as simulated data with explosives and illicit drugs.
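
    Schematically, applying such conversion factors amounts to dividing each element's net count contribution by its simulated counts-per-atom-fraction factor and renormalizing. The sketch below uses invented counts and factors purely to show the arithmetic; the real factors fold in cross sections, neutron slowing down and photon attenuation as described above.

    ```python
    # Invented net C/O/N count contributions and conversion factors.
    counts = {"C": 5.2e4, "O": 3.1e4, "N": 0.9e4}
    factor = {"C": 1.00, "O": 0.55, "N": 0.30}   # counts per atom fraction

    atoms = {el: counts[el] / factor[el] for el in counts}   # relative atom numbers
    total = sum(atoms.values())
    fractions = {el: atoms[el] / total for el in atoms}      # chemical proportions
    print({el: round(f, 3) for el, f in fractions.items()})
    ```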

  10. Intracellular Signaling Defects Contribute to TH17 Dysregulation during HIV Infection

    DTIC Science & Technology

    2014-05-16

    review of biochemistry 62:543-85 353. Xu H, Wang X, Liu DX, Moroney-Rasmussen T, Lackner AA, Veazey RS. 2012. IL-17-producing innate lymphoid cells ...maximum, CD4+ cell counts (blue) decline sharply at first because of trapping in lymphoid tissues but then rise again to a moderately subnormal level...then disseminates to draining lymph nodes and other lymphoid tissues, where it infects CD4+ target cells (42; 206; 241). Dissemination coincides

  11. Vertical high-precision Michelson wavemeter

    NASA Astrophysics Data System (ADS)

    Morales, A.; de Urquijo, J.; Mendoza, A.

    1993-01-01

    We have designed and tested a traveling Michelson-type vertical wavemeter for the wavelength measurement of tunable continuous-wave lasers in the visible part of the spectrum. The interferometer has two movable corner cubes, suspended vertically from a driving setup resembling Atwood's machine. To reduce the fraction-of-fringe error, a vernier-type coincidence circuit was used. Although simple, this wavemeter has a relative precision of 3.2 parts in 10⁹ for an overall fringe count of about 7×10⁶.

  12. Search for neutrinoless double-electron capture of 156Dy

    NASA Astrophysics Data System (ADS)

    Finch, S. W.; Tornow, W.

    2015-12-01

    Background: Multiple large collaborations are currently searching for neutrinoless double-β decay, with the ultimate goal of determining the Majorana or Dirac nature of the neutrino. Purpose: Investigate the feasibility of resonant neutrinoless double-electron capture, an experimental alternative to neutrinoless double-β decay. Method: Two clover germanium detectors were operated underground in coincidence to search for the de-excitation γ rays of 156Gd following the neutrinoless double-electron capture of 156Dy. 231.95 d of data were collected at the Kimballton underground research facility with a 231.57 mg enriched 156Dy sample. Results: No counts were seen above background and half-life limits are set at O(10¹⁶-10¹⁸) yr for the various decay modes of 156Dy. Conclusion: Low background spectra were efficiently collected in the search for neutrinoless double-electron capture of 156Dy, although the low natural abundance and associated lack of large quantities of enriched samples hinders the experimental reach.

  13. Theoretical and experimental investigations of coincidences in Poisson distributed pulse trains and spectral distortion caused by pulse pileup

    NASA Astrophysics Data System (ADS)

    Bristow, Quentin

    1990-03-01

    The occurrence rates of pulse strings, or sequences of pulses with interarrival times less than the resolving time of the pulse-height analysis system used to acquire spectra, are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern analog-to-digital converters, were used to generate pileup spectra due to coincidences between two pulses (first-order pileup) and three pulses (second-order pileup) for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum. Applying the method to a flat spectrum showed that first-order pileup distorts it into a linear ramp with a pileup tail. A correction algorithm was successfully applied to correct entire spectra (simulated and real) for first- and second-order pileup.
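
    The first-order statistics are easy to reproduce: for a Poisson pulse train of rate R, interarrival times are exponential, so the fraction shorter than the resolving time τ is 1 − exp(−Rτ). A short Monte Carlo check of that relation (rate and resolving time are illustrative values):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    rate, tau, n = 5.0e4, 2.0e-6, 200_000           # cps, resolving time (s), pulses

    t = np.cumsum(rng.exponential(1.0 / rate, n))   # Poisson arrival times
    gaps = np.diff(t)
    pileup_fraction = np.count_nonzero(gaps < tau) / (n - 1)

    print(pileup_fraction, 1.0 - np.exp(-rate * tau))   # simulated vs. theoretical
    ```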

  14. Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects

    PubMed Central

    Taguchi, Katsuyuki; Zhang, Mengxi; Frey, Eric C.; Wang, Xiaolan; Iwanczyk, Jan S.; Nygard, Einar; Hartsough, Neal E.; Tsui, Benjamin M. W.; Barber, William C.

    2011-01-01

    Purpose: Recently, photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed for potential use in clinical computed tomography (CT) scanners. These PCXDs have great potential to improve the quality of CT images due to the absence of electronic noise and weights applied to the counts and the additional spectral information. With high count rates encountered in clinical CT, however, coincident photons are recorded as one event with a higher or lower energy due to the finite speed of the PCXD. This phenomenon is called a “pulse pileup event” and results in both a loss of counts (called “deadtime losses”) and distortion of the recorded energy spectrum. Even though the performance of PCXDs is being improved, it is essential to develop algorithmic methods based on accurate models of the properties of detectors to compensate for these effects. To date, only one PCXD (model DXMCT-1, DxRay, Inc., Northridge, CA) has been used for clinical CT studies. The aim of this study was to evaluate the agreement between data measured by DXMCT-1 and those predicted by analytical models for the energy response, the deadtime losses, and the distorted recorded spectrum caused by pulse pileup effects. Methods: An energy calibration was performed using 99mTc (140 keV), 57Co (122 keV), and an x-ray beam obtained with four x-ray tube voltages (35, 50, 65, and 80 kVp). The DXMCT-1 was placed 150 mm from the x-ray focal spot; the count rates and the spectra were recorded at various tube current values from 10 to 500 μA for a tube voltage of 80 kVp. Using these measurements, for each pulse height comparator we estimated three parameters describing the photon energy-pulse height curve, the detector deadtime τ, a coefficient k that relates the x-ray tube current I to an incident count rate a by a=k×I, and the incident spectrum. The mean pulse shape of all comparators was acquired in a separate study and was used in the model to estimate the distorted recorded spectrum. The agreement between data measured by the DXMCT-1 and those predicted by the models was quantified by the coefficient of variation (COV), i.e., the root mean square difference divided by the mean of the measurement. Results: Photon energy versus pulse height curves calculated with an analytical model and those measured using the DXMCT-1 were in agreement within 0.2% in terms of the COV. The COV between the output count rates measured and those predicted by analytical models was 2.5% for deadtime losses of up to 60%. The COVs between spectra measured and those predicted by the detector model were within 3.7%–7.2% with deadtime losses of 19%–46%. Conclusions: It has been demonstrated that the performance of the DXMCT-1 agreed exceptionally well with the analytical models regarding the energy response, the count rate, and the recorded spectrum with pulse pileup effects. These models will be useful in developing methods to compensate for these effects in PCXD-based clinical CT systems. PMID:21452746
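
    The count-rate piece of such a model can be sketched with the two textbook deadtime responses (the full model above also covers the energy response and spectral distortion, which are omitted here). The coefficient k linking tube current to incident rate through a = k × I and the deadtime τ below are invented placeholders, not the fitted DXMCT-1 values.

    ```python
    import numpy as np

    k = 2.0e6      # incident counts/s per mA (invented)
    tau = 50e-9    # detector deadtime in seconds (invented)

    current_mA = np.linspace(0.01, 0.50, 6)
    a = k * current_mA                        # incident rate, a = k * I
    m_par = a * np.exp(-a * tau)              # paralyzable deadtime model
    m_nonpar = a / (1.0 + a * tau)            # non-paralyzable deadtime model
    for i_mA, m1, m2 in zip(current_mA, m_par, m_nonpar):
        print(f"I = {i_mA:.2f} mA: paralyzable {m1:.3e}, non-paralyzable {m2:.3e} cps")
    ```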

  15. Global analysis of gully composition using manual and automated exploration of CRISM imagery

    NASA Astrophysics Data System (ADS)

    Allender, Elyse; Stepinski, Tomasz F.

    2018-03-01

    Gully formations on Mars have been the focus of many morphological and mineralogical studies aimed at inferring the mechanisms of their formation and evolution. In this paper we have analyzed 354 globally distributed gully-bearing Full Resolution Targeted (FRT) Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) images. The primary goal of the analysis was to identify all spectrally distinct deposits in these images (if any) and to classify them into hydrated and non-hydrated categories using only CRISM summary parameters (Viviano-Beck et al., 2014). Such an approach makes it possible to analyze the very large set of all distinct deposits in the 354 images. We found that 68% of these images lack any distinct deposits, 8% of images contain non-hydrated deposits which coincide with the gullies and 24% of images contain hydrated deposits which coincide with the gullies. These results are compared with the recent analysis of 110 CRISM images by Nuñez et al. (2016), who also found that most gullies coincide with indistinct deposits but, contrary to our findings, found a predominance of non-hydrated minerals among distinct deposits. We attribute this discrepancy in part to their smaller and geographically biased sample of images, and in part to differing protocols for categorizing images. The discrepancy between the two surveys is further increased if we count all deposits in FRT gully-bearing images, not just deposits directly coinciding with the gullies, obtaining 44% indistinct, 15% non-hydrated, and 41% hydrated images. The secondary goal of this study was to perform the same image survey using a recently developed automated method in order to assess its accuracy and thus its feasibility for performing future surveys. We found the overall accuracy of the auto-mapper to be 76.2%, but its accuracy for discovering distinct deposits, and in particular distinct hydrated deposits, was lower. We attributed the deficiencies of the auto-mapper primarily to its sensitivity to the presence of noise in images, especially speckle noise. It is, however, worth noting that qualitatively both manual and automated surveys arrived at the same overall conclusion.

  16. Characterizations of double pulsing in neutron multiplicity and coincidence counting systems

    DOE PAGES

    Koehler, Katrina E.; Henzl, Vladimir; Croft, Stephen; ...

    2016-06-29

    Passive neutron coincidence/multiplicity counters are subject to non-ideal behavior, such as double pulsing and dead time. It has been shown in the past that double pulsing exhibits a distinct signature in a Rossi-alpha distribution, which is not readily noticed using traditional multiplicity shift register analysis, but it has been assumed that the use of a pre-delay in shift register analysis removes any effects of double pulsing. Here, we use high-fidelity simulations accompanied by experimental measurements to study the effects of double pulsing on multiplicity rates. By exploiting the information from the double-pulsing signature peak observable in the Rossi-alpha distribution, the double-pulsing fraction can be determined. Algebraic correction factors for the multiplicity rates in terms of the double-pulsing fraction have been developed. We also discuss the role of these corrections across a range of scenarios.
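
    A Rossi-alpha distribution is simply a histogram of the delays from each pulse to every subsequent pulse inside a fixed window; double pulsing then shows up as a narrow excess at the short, characteristic double-pulse delay. A self-contained sketch with an invented 2% double-pulsing fraction at a 2 μs delay:

    ```python
    import numpy as np

    def rossi_alpha(times, window, nbins=100):
        """Histogram time differences from each pulse to all later pulses
        arriving within `window` seconds (the Rossi-alpha distribution)."""
        times = np.sort(times)
        dts = []
        for i, t0 in enumerate(times):
            j = i + 1
            while j < len(times) and times[j] - t0 < window:
                dts.append(times[j] - t0)
                j += 1
        return np.histogram(dts, bins=nbins, range=(0.0, window))

    rng = np.random.default_rng(2)
    t = np.cumsum(rng.exponential(1e-4, 20_000))       # Poisson pulse train (s)
    doubles = t[rng.random(t.size) < 0.02] + 2e-6      # 2% doubles, 2 us later
    hist, edges = rossi_alpha(np.concatenate([t, doubles]), window=5e-5)
    print(hist[:8])   # the spike in the bin near 2 us is the double-pulsing signature
    ```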

  17. Experimental demonstration of Klyshko's advanced-wave picture using a coincidence-count based, camera-enabled imaging system

    NASA Astrophysics Data System (ADS)

    Aspden, Reuben S.; Tasca, Daniel S.; Forbes, Andrew; Boyd, Robert W.; Padgett, Miles J.

    2014-04-01

    The Klyshko advanced-wave picture is a well-known tool for conceptualising spontaneous parametric down-conversion (SPDC) experiments. Although the picture is well known and understood, there have been few experimental demonstrations illustrating its validity. Here, we present an experimental demonstration of this picture using a time-gated camera in an image-based coincidence measurement. We show an excellent agreement between the spatial distributions as predicted by the Klyshko picture and those obtained using the SPDC photon pairs. An interesting speckle feature is present in the Klyshko predictive images due to the spatial coherence of the back-propagated beam in the multi-mode fibre. This effect can be removed by mechanically twisting the fibre, thus degrading the spatial coherence of the beam and time-averaging the speckle pattern, giving an accurate correspondence between the predictive and SPDC images.

  18. Long-term performance evaluation of positron emission tomography: analysis and proposal of a maintenance protocol for long-term utilization.

    PubMed

    Watanuki, Shoichi; Tashiro, Manabu; Miyake, Masayasu; Ishikawa, Yoichi; Itoh, Masatoshi; Yanai, Kazuhiko; Sakemi, Yasuhiro; Fukuda, Hiroshi; Ishii, Keizo

    2010-07-01

    Positron emission tomography (PET) scanners require periodic monitoring in order to maintain scanner performance. The aim of the present study was to examine the deterioration of PET scanner performance caused by aging. We retrospectively examined PET scanner performance alterations in terms of sensitivity, spatial resolution, and false coincidences due to scatter and random coincidences, based on 13 years of follow-up data, including data from when the PET scanner underwent an overhaul in the 10th year after installation. Sensitivity and scatter fraction were calculated by using cross calibration factor (CCF) measurement data, which are collected routinely. The efficacy of examining sensitivity and scatter in this way was confirmed by NEMA measurements. Trans-axial resolution was measured as full width at half-maximum (FWHM) and full width at tenth-maximum (FWTM) at 0-20 cm offset from the field of view (FOV) center at the time of installation, 8 years after installation, and immediately after the overhaul. The random coincidence rate fraction was measured over a wide range of count rates before and after the overhaul. The results indicated that the total reduction of sensitivity during the first 10 years was 41% of the initial value in terms of NEMA measurement, and that the annual reduction of sensitivity progressed at a rate of 4.7% per year in terms of CCF measurement data. The changes in sensitivity can be calculated using CCF measurement data. Regarding the spatial resolution, mean FWHM and FWTM values increased by 1.7 and 3.6%, respectively, in the 8 years after installation. The relative scatter fraction was significantly increased compared with that before the overhaul. The random fraction decreased by 10-15% after the overhaul within a certain range of random count rates (1-120 kcps). In the case of our scanner, the parameter that displayed the largest change was the sensitivity, and this change was thought to be caused by the reduction of photomultiplier tube (PMT) gain, although changes in PMT gain can cause various types of performance deterioration, as investigated in this study. We observed that the sensitivity of our PET scanner generally deteriorated due to aging. Sensitivity monitoring using CCF measurements can be an easy and useful method for monitoring and maintaining the performance of PET scanners against aging. Since the data were obtained from a single scanner, the authors would encourage the initiation of a follow-up study involving various scanners.

  19. Performance evaluation of a high-resolution brain PET scanner using four-layer MPPC DOI detectors.

    PubMed

    Watanabe, Mitsuo; Saito, Akinori; Isobe, Takashi; Ote, Kibo; Yamada, Ryoko; Moriya, Takahiro; Omura, Tomohide

    2017-08-18

    A high-resolution positron emission tomography (PET) scanner, dedicated to brain studies, was developed and its performance was evaluated. A four-layer depth of interaction detector was designed containing five detector units axially lined up per layer board. Each of the detector units consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655 360. The transaxial and axial field of views (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed to create coincidence event data in on-the-fly software in the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used a 1.2 mm² crystal segment size and the high-speed mode (HS-mode) used a 4.8 mm² size by collecting 16 crystal segments of 1.2 mm each to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of the detector module pair, which faced each other at various angles, were measured by scanning a 0.25 mm diameter ²²Na point source. The intrinsic resolutions were obtained with 1.08 mm full width at half-maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA). The system sensitivity was 21.4 cps kBq⁻¹ as measured using an ¹⁸F line source aligned with the center of the transaxial FOV. High count rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), resulting in 249 kcps in true and 27.9 kcps at 11.9 kBq ml⁻¹ at the peak count in a noise equivalent count rate (NECR_2R). Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for single and coincidence count rates, respectively. Using phantom studies, we also demonstrated its imaging capabilities by means of a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom. The images obtained were of acceptable quality for high-resolution determination. As clinical and pre-clinical studies, we imaged brains of a human and of small animals.

  20. Performance evaluation of a high-resolution brain PET scanner using four-layer MPPC DOI detectors

    NASA Astrophysics Data System (ADS)

    Watanabe, Mitsuo; Saito, Akinori; Isobe, Takashi; Ote, Kibo; Yamada, Ryoko; Moriya, Takahiro; Omura, Tomohide

    2017-09-01

    A high-resolution positron emission tomography (PET) scanner, dedicated to brain studies, was developed and its performance was evaluated. A four-layer depth of interaction detector was designed containing five detector units axially lined up per layer board. Each of the detector units consists of a finely segmented (1.2 mm) LYSO scintillator array and an 8 × 8 array of multi-pixel photon counters. Each detector layer has independent front-end and signal processing circuits, and the four detector layers are assembled as a detector module. The new scanner was designed to form a detector ring of 430 mm diameter with 32 detector modules and 168 detector rings with a 1.2 mm pitch. The total crystal number is 655 360. The transaxial and axial field of views (FOVs) are 330 mm in diameter and 201.6 mm, respectively, which are sufficient to measure a whole human brain. The single-event data generated at each detector module were transferred to the data acquisition servers through optical fiber cables. The single-event data from all detector modules were merged and processed to create coincidence event data in on-the-fly software in the data acquisition servers. For image reconstruction, the high-resolution mode (HR-mode) used a 1.2 mm² crystal segment size and the high-speed mode (HS-mode) used a 4.8 mm² size by collecting 16 crystal segments of 1.2 mm each to reduce the computational cost. The performance of the brain PET scanner was evaluated. For the intrinsic spatial resolution of the detector module, coincidence response functions of the detector module pair, which faced each other at various angles, were measured by scanning a 0.25 mm diameter ²²Na point source. The intrinsic resolutions were obtained with 1.08 mm full width at half-maximum (FWHM) and 1.25 mm FWHM on average at 0 and 22.5 degrees in the first layer pair, respectively. The system spatial resolutions were less than 1.0 mm FWHM throughout the whole FOV, using a list-mode dynamic RAMLA (LM-DRAMA). The system sensitivity was 21.4 cps kBq⁻¹ as measured using an ¹⁸F line source aligned with the center of the transaxial FOV. High count rate capability was evaluated using a cylindrical phantom (20 cm diameter × 70 cm length), resulting in 249 kcps in true and 27.9 kcps at 11.9 kBq ml⁻¹ at the peak count in a noise equivalent count rate (NECR_2R). Single-event data acquisition and on-the-fly software coincidence detection performed well, exceeding 25 Mcps and 2.3 Mcps for single and coincidence count rates, respectively. Using phantom studies, we also demonstrated its imaging capabilities by means of a 3D Hoffman brain phantom and an ultra-micro hot-spot phantom. The images obtained were of acceptable quality for high-resolution determination. As clinical and pre-clinical studies, we imaged brains of a human and of small animals.

  1. The Coincident Coherence of Extreme Doppler Velocity Events with p-mode Patches in the Solar Photosphere.

    NASA Astrophysics Data System (ADS)

    McClure, Rachel Lee

    2018-06-01

    Observations of the solar photosphere show many spatially compact Doppler velocity events with short life spans and extreme values. In the IMaX spectropolarimetric inversion data from the first flight of the SUNRISE balloon in 2009, these striking flashes in the intergranule lanes, and complementary outstanding values in the centers of granules, have line-of-sight Doppler velocity values in excess of 4 sigma from the mean. We conclude that values outside 4 sigma result from the superposition of the granulation flows and the p-modes. To determine how granulation and p-modes contribute to these outstanding Doppler events, I separate the two components using the Fast Fourier Transform. I produce the power spectrum of the spatial wave frequencies and their corresponding frequencies in time for each image, and create a k-omega filter to separate the two components. Using the filtered data, I test the hypothesis that extreme events occur because of strict superposition between the p-mode Doppler velocities and the granular velocities. I compare event counts from the observational data to those produced by random superposition of the two flow components and find that the observational event counts are consistent with the model event counts in the limit of small number statistics. Poisson count probabilities of the event numbers observed are consistent with the expected model count probability distributions.
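
    The k-omega separation described here can be sketched with numpy's FFT: build the temporal and spatial frequency grids, attribute Fourier power whose phase speed ω/|k| exceeds a cutoff to the p-modes, and assign the remainder to granulation. The cube, pixel scale, cadence and the 7 km/s cutoff below are placeholders, not the IMaX values.

    ```python
    import numpy as np

    def k_omega_filter(cube, dx, dt, v_phase=7.0):
        """Split a Doppler cube v(t, y, x) into granulation and p-mode parts
        with a phase-velocity cut (dx in km/pixel, dt in s, v_phase in km/s)."""
        nt, ny, nx = cube.shape
        V = np.fft.fftn(cube)
        om = 2 * np.pi * np.fft.fftfreq(nt, dt)       # temporal frequency (rad/s)
        ky = 2 * np.pi * np.fft.fftfreq(ny, dx)       # spatial frequency (rad/km)
        kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
        K = np.sqrt(ky[None, :, None] ** 2 + kx[None, None, :] ** 2)
        pmode_mask = np.abs(om)[:, None, None] > v_phase * K
        pmodes = np.real(np.fft.ifftn(np.where(pmode_mask, V, 0)))
        return cube - pmodes, pmodes

    cube = np.random.standard_normal((64, 32, 32))    # placeholder data cube
    granulation, pmodes = k_omega_filter(cube, dx=40.0, dt=33.0)
    print(granulation.shape, pmodes.shape)
    ```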

  2. Measuring the radon concentration in air [Meting van de radonconcentratie in lucht]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aten, J.B.T.; Bierhuizen, H.W.J.; Vanhoek, L.P.

    1975-01-01

    A simple transportable apparatus for measurement of the radon concentration in the air of a workshop was developed. An air sample is sucked through a filter and the decay curve of the alpha activity is measured. The count rate 40 min after sampling gives an indication of the radon activity. The apparatus was calibrated by analyzing an analogous decay curve obtained with a large filter and a large air sample, the activity being measured with an anti-coincidence counter. (GRA)

  3. Equivalence between the Arquès-Walsh sequence formula and the number of connected Feynman diagrams for every perturbation order in the fermionic many-body problem

    NASA Astrophysics Data System (ADS)

    Castro, E.

    2018-02-01

    From the perturbative expansion of the exact Green function, an exact counting formula is derived to determine the number of different types of connected Feynman diagrams. This formula coincides with the Arquès-Walsh sequence formula in rooted map theory, supporting the topological connection between Feynman diagrams and rooted maps. A classificatory summing-terms approach is used, in connection with discrete mathematical theory.

  4. Primary Epstein-Barr virus infection and probable parvovirus B19 reactivation resulting in fulminant hepatitis and fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis.

    PubMed

    Karrasch, Matthias; Felber, Jörg; Keller, Peter M; Kletta, Christine; Egerer, Renate; Bohnert, Jürgen; Hermann, Beate; Pfister, Wolfgang; Theis, Bernhard; Petersen, Iver; Stallmach, Andreas; Baier, Michael

    2014-11-01

    A case of primary Epstein-Barr virus (EBV) infection/parvovirus B19 reactivation fulfilling five of eight criteria for hemophagocytic lymphohistiocytosis (HLH) is presented. Despite two coinciding viral infections, massive splenomegaly, and fulminant hepatitis, the patient had a good clinical outcome, probably due to an early onset form of HLH with normal leukocyte count, normal natural killer (NK) cell function, and a lack of hemophagocytosis.

  5. Proceedings of the 1997 Battlespace Atmospherics Conference 2-4 December 1997

    DTIC Science & Technology

    1998-03-01

    sensor capabilities are highlighted in our SSM/I section, where coincident passive microwave and Visible/Infrared products are created automatically ...field of view 60°. Figure 1. Effect of changing sensor field of view on received signal. The signal at 0° field of view is the directly transmitted...not used here because the sensor is a photon counting device and that irradiance does not add up spectrally in the same way as photon flux. In the UVS

  6. Time stamping of single optical photons with 10 ns resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin

    High spatial and temporal resolution are key features for many modern applications, e.g. mass spectrometry, probing the structure of materials via neutron scattering, studying molecular structure, etc. Fast imaging also provides the capability of coincidence detection, and the further addition of sensitivity to single optical photons with the capability of timestamping them further broadens the field of potential applications. Here, photon counting is already widely used in X-ray imaging, where the high energy of the photons makes their detection easier.

  7. Time stamping of single optical photons with 10 ns resolution

    DOE PAGES

    Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin; ...

    2017-05-08

    High spatial and temporal resolution are key features for many modern applications, e.g. mass spectrometry, probing the structure of materials via neutron scattering, studying molecular structure, etc. Fast imaging also provides the capability of coincidence detection, and the further addition of sensitivity to single optical photons with the capability of timestamping them further broadens the field of potential applications. Here, photon counting is already widely used in X-ray imaging, where the high energy of the photons makes their detection easier.

  8. FASEA: A FPGA Acquisition System and Software Event Analysis for liquid scintillation counting

    NASA Astrophysics Data System (ADS)

    Steele, T.; Mo, L.; Bignell, L.; Smith, M.; Alexiev, D.

    2009-10-01

    The FASEA (FPGA based Acquisition and Software Event Analysis) system has been developed to replace the MAC3 for coincidence pulse processing. The system uses a National Instruments Virtex 5 FPGA card (PXI-7842R) for data acquisition and purpose-developed software for event analysis. Initial comparisons to the MAC3 unit, based on measurements of 89Sr and 3H, confirm that the system is able to accurately emulate the behaviour of the MAC3 unit.

  9. Rapid Antiretroviral Therapy Initiation for Women in an HIV-1 Prevention Clinical Trial Experiencing Primary HIV-1 Infection during Pregnancy or Breastfeeding.

    PubMed

    Morrison, Susan; John-Stewart, Grace; Egessa, John J; Mubezi, Sezi; Kusemererwa, Sylvia; Bii, Dennis K; Bulya, Nulu; Mugume, Francis; Campbell, James D; Wangisi, Jonathan; Bukusi, Elizabeth A; Celum, Connie; Baeten, Jared M

    2015-01-01

    During an HIV-1 prevention clinical trial in East Africa, we observed 16 cases of primary HIV-1 infection in women coincident with pregnancy or breastfeeding. Nine of eleven pregnant women initiated rapid combination antiretroviral therapy (ART), despite having CD4 counts exceeding national criteria for ART initiation; breastfeeding women initiated ART or replacement feeding. Rapid ART initiation during primary HIV-1 infection during pregnancy and breastfeeding is feasible in this setting.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, Alexander M.; Deller, Timothy W.; Maramraju, Sri Harsha

    Purpose: The GE SIGNA PET/MR is a new whole body integrated time-of-flight (ToF)-PET/MR scanner from GE Healthcare. The system is capable of simultaneous PET and MR image acquisition with sub-400 ps coincidence time resolution. Simultaneous PET/MR holds great potential as a method of interrogating molecular, functional, and anatomical parameters in clinical disease in one study. Despite the complementary imaging capabilities of PET and MRI, their respective hardware tends to be incompatible due to mutual interference. In this work, the GE SIGNA PET/MR is evaluated in terms of PET performance and the potential effects of interference from MRI operation. Methods: The NEMA NU 2-2012 protocol was followed to measure PET performance parameters including spatial resolution, noise equivalent count rate, sensitivity, accuracy, and image quality. Each of these tests was performed both with the MR subsystem idle and with continuous MR pulsing for the duration of the PET data acquisition. Most measurements were repeated at three separate test sites where the system is installed. Results: The scanner has achieved an average of 4.4, 4.1, and 5.3 mm full width at half maximum radial, tangential, and axial spatial resolutions, respectively, at 1 cm from the transaxial FOV center. The peak noise equivalent count rate (NECR) of 218 kcps and a scatter fraction of 43.6% are reached at an activity concentration of 17.8 kBq/ml. Sensitivity at the center position is 23.3 cps/kBq. The maximum relative slice count rate error below peak NECR was 3.3%, and the residual error from attenuation and scatter corrections was 3.6%. Continuous MR pulsing had either no effect or a minor effect on each measurement. Conclusions: Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.

  11. A new approach for the estimation of phytoplankton cell counts associated with algal blooms.

    PubMed

    Nazeer, Majid; Wong, Man Sing; Nichol, Janet Elizabeth

    2017-07-15

    This study proposes a method for estimating phytoplankton cell counts associated with an algal bloom, using satellite images coincident with in situ and meteorological parameters. Satellite images from Landsat Thematic Mapper (TM), Enhanced Thematic Mapper Plus (ETM+), Operational Land Imager (OLI) and HJ-1 A/B Charge Couple Device (CCD) sensors were integrated with the meteorological observations to provide an estimate of phytoplankton cell counts. All images were atmospherically corrected using the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) atmospheric correction method with a possible error of 1.2%, 2.6%, 1.4% and 2.3% for blue (450-520 nm), green (520-600 nm), red (630-690 nm) and near infrared (NIR, 760-900 nm) wavelengths, respectively. Results showed that the developed Artificial Neural Network (ANN) model yields a correlation coefficient (R) of 0.95 with the in situ validation data with Sum of Squared Error (SSE) of 0.34 cells/ml, Mean Relative Error (MRE) of 0.154 cells/ml and a bias of -504.87. The integration of the meteorological parameters with remote sensing observations provided a promising estimation of the algal scum as compared to previous studies. The applicability of the ANN model was tested over Hong Kong as well as over Lake Kasumigaura, Japan and Lake Okeechobee, Florida USA, where algal blooms were also reported. Further, a 40-year (1975-2014) red tide occurrence map was developed and revealed that the eastern and southern waters of Hong Kong are more vulnerable to red tides. Over the 40 years, 66% of red tide incidents were associated with the Dinoflagellates group, while the remainder were associated with the Diatom group (14%) and several other minor groups (20%). The developed technology can be applied to other similar environments in an efficient and cost-saving manner. Copyright © 2017 Elsevier B.V. All rights reserved.
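
    The modelling step, regressing cell counts on atmospherically corrected band reflectances plus meteorological covariates with a small feed-forward network, can be sketched with scikit-learn. Everything below (features, architecture, synthetic data) is a placeholder rather than the authors' configuration.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    # Placeholder design matrix: blue/green/red/NIR reflectances + 2 met. covariates
    X = rng.random((200, 6))
    y = 1e3 * X[:, 3] / (X[:, 0] + 0.1) + 50 * rng.standard_normal(200)  # "cells/ml"

    Xs = StandardScaler().fit_transform(X)
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(Xs, y)
    print("training R = %.2f" % np.corrcoef(model.predict(Xs), y)[0, 1])
    ```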

  12. Blytheville AFB, Arkansas. Water quality management survey. Final report 11-14 Apr 83

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    New, G.R.; Gibson, D.P. Jr.

    1983-05-01

    The USAF OEHL conducted an on site water quality management survey at Blytheville AFB. Main areas of interest were (1) the wastewater treatment plant effluent fecal coliform count, and residual chlorine content, and (2) the stream sampling protocol. The drinking water plant, landfill and industrial shops were also included in the survey. Results of the survey indicated that the low residual chlorine content caused high fecal coliform counts in the wastewater effluent. The chemical parameters sampled in the stream monitoring program did not coincide with the requirements of the State of Arkansas and required modification. Recommendations were made to increase the residual chlorine content of the wastewater effluent and to increase the mixing of the chlorine contact chamber. A list of the chemical parameters was included in the report for stream monitoring.

  13. Design and performance of A 3He-free coincidence counter based on parallel plate boron-lined proportional technology

    DOE PAGES

    Henzlova, D.; Menlove, H. O.; Marlow, J. B.

    2015-07-01

    Thermal neutron counters utilized and developed for deployment as non-destructive assay (NDA) instruments in the field of nuclear safeguards traditionally rely on 3He-based proportional counting systems. 3He-based proportional counters have provided core NDA detection capabilities for several decades and have proven to be extremely reliable, with a range of features highly desirable for nuclear facility deployment. Facing the current depletion of the 3He gas supply and the continuing uncertainty of options for future resupply, a search for detection technologies that could provide a feasible short-term alternative to 3He gas was initiated worldwide. As part of this effort, Los Alamos National Laboratory (LANL) designed and built a 3He-free full-scale thermal neutron coincidence counter based on boron-lined proportional technology. The boron-lined technology was selected in a comprehensive inter-comparison exercise based on its favorable performance against safeguards-specific parameters. This paper provides an overview of the design and initial performance evaluation of the prototype High Level Neutron counter – Boron (HLNB). The initial results suggest that the current HLNB design is capable of providing ~80% of the performance of a selected reference 3He-based coincidence counter (High Level Neutron Coincidence Counter, HLNCC). Similar samples are expected to be measurable in both systems; however, slightly longer measurement times may be anticipated for large samples in HLNB. The initial evaluation helped to identify potential for further performance improvements via additional tailoring of the boron-layer thickness.

  14. Modeling the performance of a photon counting x-ray detector for CT: energy response and pulse pileup effects.

    PubMed

    Taguchi, Katsuyuki; Zhang, Mengxi; Frey, Eric C; Wang, Xiaolan; Iwanczyk, Jan S; Nygard, Einar; Hartsough, Neal E; Tsui, Benjamin M W; Barber, William C

    2011-02-01

    Recently, photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed for potential use in clinical computed tomography (CT) scanners. These PCXDs have great potential to improve the quality of CT images due to the absence of electronic noise and weights applied to the counts and the additional spectral information. With high count rates encountered in clinical CT, however, coincident photons are recorded as one event with a higher or lower energy due to the finite speed of the PCXD. This phenomenon is called a "pulse pileup event" and results in both a loss of counts (called "deadtime losses") and distortion of the recorded energy spectrum. Even though the performance of PCXDs is being improved, it is essential to develop algorithmic methods based on accurate models of the properties of detectors to compensate for these effects. To date, only one PCXD (model DXMCT-1, DxRay, Inc., Northridge, CA) has been used for clinical CT studies. The aim of this study was to evaluate the agreement between data measured by DXMCT-1 and those predicted by analytical models for the energy response, the deadtime losses, and the distorted recorded spectrum caused by pulse pileup effects. An energy calibration was performed using 99mTc (140 keV), 57Co (122 keV), and an x-ray beam obtained with four x-ray tube voltages (35, 50, 65, and 80 kVp). The DXMCT-1 was placed 150 mm from the x-ray focal spot; the count rates and the spectra were recorded at various tube current values from 10 to 500 μA for a tube voltage of 80 kVp. Using these measurements, for each pulse height comparator we estimated three parameters describing the photon energy-pulse height curve, the detector deadtime τ, a coefficient k that relates the x-ray tube current I to an incident count rate a by a = k × I, and the incident spectrum. The mean pulse shape of all comparators was acquired in a separate study and was used in the model to estimate the distorted recorded spectrum. The agreement between data measured by the DXMCT-1 and those predicted by the models was quantified by the coefficient of variation (COV), i.e., the root mean square difference divided by the mean of the measurement. Photon energy versus pulse height curves calculated with an analytical model and those measured using the DXMCT-1 were in agreement within 0.2% in terms of the COV. The COV between the output count rates measured and those predicted by analytical models was 2.5% for deadtime losses of up to 60%. The COVs between spectra measured and those predicted by the detector model were within 3.7%-7.2% with deadtime losses of 19%-46%. It has been demonstrated that the performance of the DXMCT-1 agreed exceptionally well with the analytical models regarding the energy response, the count rate, and the recorded spectrum with pulse pileup effects. These models will be useful in developing methods to compensate for these effects in PCXD-based clinical CT systems.

  15. MATHEMATICS PANEL QUARTERLY PROGRESS REPORT FOR PERIOD ENDING JULY 31, 1952

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, C.L. ed.

    1952-10-27

    The background and status of the following projects of the Mathematics Panel are reported: test problems for the ORAC arithmetic unit; errors in matrix operations; basic studies in the Monte Carlo method; a Sturm-Liouville problem; approximate steady-state solution of the equation of continuity; estimation of volume of lymph space; x-radiation effects on respiration rates in grasshopper embryos; temperature effects in irradiation experiments with yeast; LD50 estimation for burros and swine exposed to gamma radiation; thermal-neutron penetration in tissues; kinetics of the HBr-HBrO3 reaction; isotope effect in reaction rate constants; experimental determination of diffusivity coefficients; Dirac wave equations; fitting a calibration curve; beta decay (field factors); neutron decay theory; calculation of internal conversion coefficients with screening; estimation of alignment ratios; optimum allocation of counting times; calculation of coincidence probabilities for a double-crystal detector; reactor inequalities; heat flow in long rectangular tubes; solving an equation by numerical methods; numerical integration; evaluation of a function; depigmentation of a biological dosimeter. (L.M.T.)

  16. Performance Evaluation of the microPET®—FOCUS-F120

    NASA Astrophysics Data System (ADS)

    Laforest, Richard; Longford, Desmond; Siegel, Stefan; Newport, Danny F.; Yap, Jeffrey

    2007-02-01

    microPET®-Focus-F120 is the latest model of dedicated small-animal PET scanners from CTI-Concorde Microsystems LLC (Knoxville, TN). This scanner, based on the geometry of the microPET-R4, takes advantage of several modifications to the detectors and coincidence processing electronics that improve the image resolution, sensitivity, and counting rate performance as compared to the predecessor models. This work evaluates the performance of the Focus-F120 system and shows its improvement over the earlier models. In particular, the spatial resolution is shown to improve from 2.32 to 1.69 mm at 5 mm radial distance and the peak absolute sensitivity increases from 4.1% to 7.1% compared to the microPET-R4. The counting rate capability, expressed as the noise-equivalent count rate (NEC-1R), was shown to peak at over 800 kcps at 88 MBq for both systems using a mouse phantom. For this small phantom, the NEC-1R counting rate is limited by the data-transmission bandwidth between the scanner and the acquisition console. The rat-like phantom showed a peak NEC-1R value of 300 kcps at 140 MBq. Evaluation of image quality and quantitation accuracy was also performed using specially designed phantoms and animal experiments.
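
    For orientation, the noise-equivalent count rate combines trues (T), scatter (S) and randoms (R) as NEC = T²/(T + S + kR); the NEC-1R figure quoted above corresponds to k = 1. A minimal sketch with hypothetical rates:

        def necr(trues, scatter, randoms, k=1.0):
            """NEC = T**2 / (T + S + k*R); k = 1 for NEC-1R, k = 2 for NEC-2R."""
            return trues**2 / (trues + scatter + k * randoms)

        T, S, R = 900e3, 150e3, 250e3          # hypothetical rates [cps]
        print(f"NEC-1R = {necr(T, S, R, k=1) / 1e3:.0f} kcps")
        print(f"NEC-2R = {necr(T, S, R, k=2) / 1e3:.0f} kcps")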

  17. Avian leucocyte counting using the hemocytometer

    USGS Publications Warehouse

    Dein, F.J.; Wilson, A.; Fischer, D.; Langenberg, P.

    1994-01-01

    Automated methods for counting leucocytes in avian blood are not available because of the presence of nucleated erythrocytes and thrombocytes. Therefore, total white blood cell counts are performed by hand using a hemocytometer. The Natt and Herrick and the Unopette methods are the most common stain and diluent preparations for this procedure. Replicate hemocytometer counts using these two methods were performed on blood from four birds of different species. Cells present in each square of the hemocytometer were counted. Counting cells in the corner, side, or center hemocytometer squares produced statistically equivalent results; counting four squares per chamber provided a result similar to that obtained by counting nine squares; and the Unopette method was more precise for hemocytometer counting than was the Natt and Herrick method. The Unopette method is easier to learn and perform but is an indirect process, utilizing the differential count from a stained smear. The Natt and Herrick method is a direct total count, but cell identification is more difficult.
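
    The arithmetic behind a hemocytometer count is compact enough to show directly. A minimal sketch, assuming an improved Neubauer chamber (each large square holding 0.1 μL) and hypothetical count and dilution values:

        def wbc_per_microliter(cells_counted, squares_counted, dilution_factor):
            """Improved Neubauer chamber: each large square holds 0.1 uL."""
            mean_per_square = cells_counted / squares_counted
            return mean_per_square / 0.1 * dilution_factor

        # e.g. 180 cells over 9 large squares at a hypothetical 1:100 dilution
        print(wbc_per_microliter(180, 9, 100))   # -> 20000.0 cells/uL

    Counting four squares instead of nine simply changes squares_counted; as the study found, the resulting estimate is statistically equivalent.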

  18. Cascaded systems analysis of photon counting detectors

    PubMed Central

    Xu, J.; Zbijewski, W.; Gang, G.; Stayman, J. W.; Taguchi, K.; Lundqvist, M.; Fredenberg, E.; Carrino, J. A.; Siewerdsen, J. H.

    2014-01-01

    Purpose: Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). Methods: A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1-7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. Results: The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold roughly equivalent to 0.5 and 6 keV energy threshold, respectively) increased the f50 (spatial frequency at which the MTF falls to a value of 0.50) by ∼30% with corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low dose, high additive noise, and high-fidelity rejection of coincident photons. Conclusions: The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs as well as an important guide to task-based optimization in developing new PCD imaging systems. PMID:25281959
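
    The Fourier metrics named above combine in the standard way, DQE(f) = d̄²·MTF²(f)/(q̄·NPS(f)), where d̄ is the mean signal and q̄ the incident fluence. A toy sketch with synthetic curves (not the bench data of the paper):

        import numpy as np

        f = np.linspace(0, 5, 101)          # spatial frequency [cycles/mm]
        mtf = np.sinc(f / 5.0)              # toy MTF
        q = 1.0e4                           # incident quanta per mm^2 (assumed)
        gain = 0.8                          # mean counts per incident quantum
        mean_signal = gain * q              # counts per mm^2
        nps = mean_signal * (0.7 + 0.3 * mtf**2)   # toy NPS shape

        dqe = mean_signal**2 * mtf**2 / (q * nps)
        print(f"DQE(0) = {dqe[0]:.2f}")     # -> the quantum efficiency, 0.80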

  19. A new method to reduce the statistical and systematic uncertainty of chance coincidence backgrounds measured with waveform digitizers

    DOE PAGES

    O'Donnell, John M.

    2015-06-30

    We present a new method for measuring chance-coincidence backgrounds during the collection of coincidence data. The method relies on acquiring data with near-zero dead time, which is now realistic due to the increasing deployment of flash electronic-digitizer (waveform digitizer) techniques. An experiment designed to use this new method is capable of acquiring more coincidence data, with a much-reduced statistical fluctuation of the measured background. A statistical analysis is presented and used to derive a figure of merit for the new method. Factors of four improvement over other analyses are realistic. The technique is illustrated with preliminary data taken as part of a program to make new measurements of the prompt fission neutron spectra at the Los Alamos Neutron Science Center. In conclusion, it is expected that these measurements will occur in a regime where the maximum figure of merit can be exploited.
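
    The core of the method can be illustrated on synthetic dead-time-free list-mode data: the prompt window is compared against the average of many displaced (off-prompt) windows, whose mean matches the chance background while the statistical fluctuation of the estimate shrinks with the number of windows. A sketch, with arbitrary rates and window widths:

        import numpy as np

        rng = np.random.default_rng(0)
        t_a = np.cumsum(rng.exponential(1 / 5e3, 200_000))   # detector A, ~5 kcps
        t_b = np.cumsum(rng.exponential(1 / 4e3, 160_000))   # detector B, ~4 kcps

        def pairs_in_window(t_a, t_b, offset, width=100e-9):
            """Count A-B pairs with t_b - t_a - offset inside [0, width)."""
            lo = np.searchsorted(t_b, t_a + offset)
            hi = np.searchsorted(t_b, t_a + offset + width)
            return int((hi - lo).sum())

        prompt = pairs_in_window(t_a, t_b, offset=0.0)
        # average many displaced windows: same mean as the chance background,
        # but the fluctuation of the estimate shrinks with the number of windows
        offsets = 1e-3 * np.arange(1, 51)
        background = np.mean([pairs_in_window(t_a, t_b, o) for o in offsets])
        print(prompt, background)   # comparable here: these trains are uncorrelated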

  20. Threats to security and ischaemic heart disease deaths: the case of homicides in Mexico.

    PubMed

    Lee, Eileen H; Bruckner, Tim A

    2017-02-01

    Ischaemic heart disease (IHD) ranks as the leading cause of death worldwide. Whereas much attention focuses on behavioural and lifestyle factors, less research examines the role of acute, ambient stressors. An unprecedented rise in homicides in Mexico over the past decade and the attendant media coverage and publicity have raised international concern regarding its potential health sequelae. We hypothesize that the rise in homicides in Mexico acts as an ecological threat to security and elevates the risk of both transient ischaemic events and myocardial infarctions, thereby increasing IHD deaths. We applied time-series methods to monthly counts of IHD deaths and homicides in Mexico for 156 months spanning January 2000 to December 2012. Methods controlled for strong temporal patterns in IHD deaths, the unemployment rate and changes in the population size at risk. After controlling for trend and seasonality in IHD deaths, a 1-unit increase in the logged count of homicides coincides with a 7% increase in the odds of IHD death in that same month (95% confidence interval: 0.04-0.10). Inference remains robust to additional sensitivity checks, including a state-level fixed effects analysis. Our findings indicate that the elevated level of homicides in Mexico serves as a population-level stressor that acutely increases the risk of IHD death. This research adds to the growing literature documenting the role of ambient threats, or perceived threats, to security on cardiovascular health. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  1. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method (it relies only on evaluated nuclear data and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
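
    A minimal sketch of the list-mode step described above, building a symmetric γ-γ coincidence matrix from time-stamped events; the array layout, 1 μs window and binning are assumptions, not the system's actual format:

        import numpy as np

        def coincidence_matrix(t, e, det, window=1e-6, nbins=1024, emax=3000.0):
            """Build a symmetric gamma-gamma coincidence matrix from list-mode
            data. t: timestamps [s]; e: energies [keV]; det: detector ids."""
            order = np.argsort(t)
            t, e, det = t[order], e[order], det[order]
            # pair consecutive events that are close in time and on different detectors
            close = (np.diff(t) < window) & (det[:-1] != det[1:])
            i = np.nonzero(close)[0]
            b1 = np.clip((e[i] / emax * nbins).astype(int), 0, nbins - 1)
            b2 = np.clip((e[i + 1] / emax * nbins).astype(int), 0, nbins - 1)
            mat = np.zeros((nbins, nbins), dtype=np.int64)
            np.add.at(mat, (b1, b2), 1)
            np.add.at(mat, (b2, b1), 1)   # symmetrize for searching by energy pair
            return mat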

  2. A novel cross-satellite based assessment of the spatio-temporal development of a cyanobacterial harmful algal bloom

    NASA Astrophysics Data System (ADS)

    Page, Benjamin P.; Kumar, Abhishek; Mishra, Deepak R.

    2018-04-01

    As cyanobacterial harmful algal blooms (CyanoHABs) become more common in recreational lakes and water supply reservoirs, demand for rapid detection and temporal monitoring will be essential for effective management. The goal of this study was to demonstrate a novel and potentially operational cross-satellite based protocol for synoptic monitoring of rapidly evolving and increasingly common CyanoHABs in inland waters. The analysis involved a novel way to cross-calibrate a chlorophyll-a (Chl-a) detection model for the Landsat-8 OLI sensor from the relationship between the normalized difference chlorophyll index and the floating algal index derived from Sentinel-2A on a coinciding overpass date during the summer CyanoHAB bloom in Utah Lake. This aided in the construction of a time-series phenology of the Utah Lake CyanoHAB event. Spatio-temporal cyanobacterial density maps from both Sentinel-2A and Landsat-8 sensors revealed that the bloom started in the first week of July 2016 (July 3rd, mean cell count: 9163 cells/mL), reached its peak in mid-July (July 15th, mean cell count: 108176 cells/mL), and declined in August (August 24th, mean cell count: 9145 cells/mL). Analysis of physical and meteorological factors suggested a complex interaction between landscape processes (high surface runoff), climatic conditions (high temperature, high rainfall followed by negligible rainfall, stable wind), and water quality (low water level, high Chl-a) which created a supportive environment for triggering these blooms in Utah Lake. This cross-satellite monitoring method can be a valuable tool for regular monitoring and will reduce the cost of monitoring and predicting CyanoHABs in large lakes.

  3. Moments of the Particle Phase-Space Density at Freeze-out and Coincidence Probabilities

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyż, W.; Zalewski, K.

    2005-10-01

    It is pointed out that the moments of phase-space particle density at freeze-out can be determined from the coincidence probabilities of the events observed in multiparticle production. A method to measure the coincidence probabilities is described and its validity examined.

  4. A Trial of the Effect of Micronutrient Supplementation on Treatment Outcome, T Cell Counts, Morbidity, and Mortality in Adults with Pulmonary Tuberculosis

    PubMed Central

    Villamor, Eduardo; Mugusi, Ferdinand; Urassa, Willy; Bosch, Ronald J.; Saathoff, Elmar; Matsumoto, Kenji; Meydani, Simin N.; Fawzi, Wafaie W.

    2008-01-01

    Background Tuberculosis (TB) often coincides with nutritional deficiencies. The effects of micronutrient supplementation on TB treatment outcomes, clinical complications, and mortality are uncertain. Methods We conducted a randomized, double-blind, placebo-controlled trial of micronutrients (vitamins A, B complex, C, and E, as well as selenium) in Dar es Salaam, Tanzania. We enrolled 471 human immunodeficiency virus (HIV)–infected and 416 HIV-negative adults with pulmonary TB at the time of initiating chemotherapy and monitored them for a median of 43 months. Results Micronutrients decreased the risk of TB recurrence by 45% overall (95% confidence interval [CI], 7% to 67%; P = .02) and by 63% in HIV-infected patients (95% CI, 8% to 85%; P = .02). There were no significant effects on mortality overall; however, we noted a marginally significant 64% reduction of deaths in HIV-negative subjects (95% CI, −14% to 88%; P = .08). Supplementation increased CD3+ and CD4+ cell counts and decreased the incidence of extrapulmonary TB and genital ulcers in HIV-negative patients. Micronutrients reduced the incidence of peripheral neuropathy by 57% (95% CI, 41% to 69%; P < .001), irrespective of HIV status. There were no significant effects on weight gain, body composition, anemia, or HIV load. Conclusions Micronutrient supplementation could improve the outcome in patients undergoing TB chemotherapy in Tanzania. PMID:18471061

  5. The Impact of Gate Width Setting and Gate Utilization Factors on Plutonium Assay in Passive Correlated Neutron Counting

    DOE PAGES

    Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...

    2015-06-15

    In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher-order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time), as well as the optimum gate width setting are among the key parameters. These parameters, along with the underlying model assumptions, directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single-exponential die-away time assumption and their impact on the measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width settings as opposed to using a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis, and the application of this approach to Pu mass assay is demonstrated.
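
    The Feynman-type analysis mentioned at the end bins the pulse train into non-overlapping gates of width T and forms the excess variance Y(T) = Var(c)/mean(c) - 1, which vanishes for a purely Poisson train and grows with correlated multiplets. A sketch on a synthetic correlated train (all rates and multiplicities invented):

        import numpy as np

        rng = np.random.default_rng(1)
        # synthetic correlated train: "fission" bursts at ~1 kHz, each contributing
        # a Poisson(2) number of detected pulses spread over ~50 us
        bursts = np.cumsum(rng.exponential(1e-3, 20_000))
        pulses = np.concatenate(
            [t0 + rng.exponential(50e-6, rng.poisson(2.0)) for t0 in bursts])
        pulses.sort()

        for gate in (64e-6, 256e-6, 1024e-6):
            nbins = int(pulses[-1] / gate)
            counts, _ = np.histogram(pulses, bins=nbins, range=(0, nbins * gate))
            y = counts.var() / counts.mean() - 1.0   # Feynman-Y (excess variance)
            print(f"T = {gate * 1e6:6.0f} us   Y = {y:.3f}")

    As the gate width grows past the correlation time, Y(T) saturates, which is the behavior a multiplicity analysis exploits.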

  6. First full-beam PET acquisitions in proton therapy with a modular dual-head dedicated system

    NASA Astrophysics Data System (ADS)

    Sportelli, G.; Belcari, N.; Camarlinghi, N.; Cirrone, G. A. P.; Cuttone, G.; Ferretti, S.; Kraan, A.; Ortuño, J. E.; Romano, F.; Santos, A.; Straub, K.; Tramontana, A.; Del Guerra, A.; Rosso, V.

    2014-01-01

    During particle therapy irradiation, positron emitters with half-lives ranging from 2 to 20 min are generated from nuclear processes. The half-lives are such that it is possible either to detect the positron signal in the treatment room using an in-beam positron emission tomography (PET) system, right after the irradiation, or to quickly transfer the patient to a nearby PET/CT scanner. Since the activity distribution is spatially correlated with the dose, it is possible to use PET imaging as an indirect method to assure the quality of the dose delivery. In this work, we present a new dedicated PET system able to operate in-beam. The PET apparatus consists of two 10 cm × 10 cm detector heads. Each detector is composed of four scintillating matrices of 23 × 23 LYSO crystals. The crystal size is 1.9 mm × 1.9 mm × 16 mm. Each scintillation matrix is read out independently with a modularized acquisition system. The distance between the two opposing detector heads was set to 20 cm. The system has very low dead time per detector area and a 3 ns coincidence window, which makes it capable of sustaining high singles count rates while keeping the random counts relatively low. This allows a new full-beam monitoring modality that includes data acquisition also while the beam is on. The PET system was tested during irradiation at the CATANA (INFN, Catania, Italy) cyclotron-based proton therapy facility. Four acquisitions with different doses and dose rates were analysed. In all cases the random-to-total coincidences ratio was equal to or less than 25%. For each measurement we estimated the accuracy and precision of the activity range on a set of voxel lines within an irradiated PMMA phantom. Results show that the inclusion of data acquired during the irradiation, referred to as beam-on data, improves both the precision and accuracy of the range measurement with respect to data acquired only after irradiation. Beam-on data alone are enough to give precisions better than 1 mm when at least 5 Gy are delivered.
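
    A common back-of-envelope check on the randoms level for a coincidence window τ is R = 2τ·S1·S2, with S1 and S2 the singles rates of the two heads. A sketch using the 3 ns window from the abstract and otherwise hypothetical rates:

        tau = 3e-9                 # coincidence window [s], as quoted above
        s1 = s2 = 500e3            # singles rate per head [cps] (assumed)

        randoms = 2 * tau * s1 * s2
        trues_plus_scatter = 10e3  # assumed non-random coincidence rate [cps]
        ratio = randoms / (randoms + trues_plus_scatter)
        print(f"randoms = {randoms:.0f} cps, randoms/total = {ratio:.1%}")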

  7. Design and Optimization of a Dual-HPGe Gamma Spectrometer and Its Cosmic Veto System

    NASA Astrophysics Data System (ADS)

    Zhang, Weihua; Ro, Hyunje; Liu, Chuanlei; Hoffman, Ian; Ungar, Kurt

    2017-03-01

    In this paper, a dual high-purity germanium (HPGe) gamma spectrometer detection system with an increased solid angle was developed. The detection system consists of a pair of Broad Energy Germanium (BE-5030p) detectors and an XIA LLC digital gamma finder/Pixie-4 data-acquisition system. A data file processor was developed containing five modules that parses Pixie-4 list-mode data output files and classifies detections into anticoincident/coincident events and their specific coincidence types (double/triple/quadruple) for further analysis. A novel cosmic veto system was installed in the detection system. It was designed to be easy to install around an existing system while still providing cosmic veto shielding comparable to other designs. This paper describes the coverage and efficiency of this cosmic veto and the data processing system. It has been demonstrated that the cosmic veto system can provide a mean background reduction of 66.1%, which results in a mean MDA improvement of 58.3%. The counting time to meet the required MDA for a specific radionuclide can be reduced by a factor of 2-3 compared to a conventional HPGe system. This paper also provides an initial overview of coincidence timing distributions between an incoming event from a cosmic veto plate and the HPGe detector.
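
    A minimal sketch of the kind of timestamp clustering such a list-mode processor performs, labelling events anticoincident or coincident (double/triple/quadruple) by cluster size; the 1 μs window and the input layout are assumptions, not the Pixie-4 format:

        import numpy as np

        def classify(times, window=1e-6):
            """Group time-ordered events into clusters and label by multiplicity."""
            labels = {1: "anticoincident", 2: "double", 3: "triple", 4: "quadruple"}
            counts = {name: 0 for name in labels.values()}
            times = np.sort(times)
            start = 0
            for i in range(1, len(times) + 1):
                # close a cluster at the end of the data or at a gap > window
                if i == len(times) or times[i] - times[i - 1] > window:
                    counts[labels[min(i - start, 4)]] += 1
                    start = i
            return counts

        events = np.array([0.0, 0.2e-6, 5.0, 9.0, 9.0000005, 9.0000009])
        print(classify(events))   # one double, one anticoincident, one triple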

  8. Intercomparison of methods for coincidence summing corrections in gamma-ray spectrometry--part II (volume sources).

    PubMed

    Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J

    2012-09-01

    The second part of an intercomparison of the coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing corrective factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Penguin head movement detected using small accelerometers: a proxy of prey encounter rate.

    PubMed

    Kokubun, Nobuo; Kim, Jeong-Hoon; Shin, Hyoung-Chul; Naito, Yasuhiko; Takahashi, Akinori

    2011-11-15

    Determining temporal and spatial variation in feeding rates is essential for understanding the relationship between habitat features and the foraging behavior of top predators. In this study we examined the utility of head movement as a proxy of prey encounter rates in medium-sized Antarctic penguins, under the presumption that the birds should move their heads actively when they encounter and peck prey. A field study of free-ranging chinstrap and gentoo penguins was conducted at King George Island, Antarctica. Head movement was recorded using small accelerometers attached to the head, with simultaneous monitoring for prey encounter or body angle. The main prey was Antarctic krill (>99% in wet mass) for both species. Penguin head movement coincided with a slow change in body angle during dives. Active head movements were extracted using a high-pass filter (5 Hz acceleration signals) and the remaining acceleration peaks (higher than a threshold acceleration of 1.0 g) were counted. The timing of head movements coincided well with images of prey taken from the back-mounted cameras: head movement was recorded within ±2.5 s of a prey image on 89.1±16.1% (N=7 trips) of images. The number of head movements varied largely among dive bouts, suggesting large temporal variations in prey encounter rates. Our results show that head movement is an effective proxy of prey encounter, and we suggest that the method will be widely applicable for a variety of predators.
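
    The extraction step described above is straightforward to sketch: high-pass filter the head acceleration at 5 Hz and count peaks above the 1.0 g threshold. The signal below is synthetic and the sampling rate is an assumption:

        import numpy as np
        from scipy.signal import butter, filtfilt, find_peaks

        fs = 32.0                                   # sampling rate [Hz] (assumed)
        t = np.arange(0, 60, 1 / fs)
        accel = 0.3 * np.sin(2 * np.pi * 0.2 * t)   # slow body-angle component
        accel[100::200] += 3.0                      # sparse "pecking" spikes

        b, a = butter(4, 5.0 / (fs / 2), btype="highpass")
        fast = filtfilt(b, a, accel)                # remove the slow component
        peaks, _ = find_peaks(fast, height=1.0)     # threshold: 1.0 g
        print(f"{len(peaks)} head movements detected")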

  10. A cascaded model of spectral distortions due to spectral response effects and pulse pileup effects in a photon-counting x-ray detector for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cammin, Jochen, E-mail: jcammin1@jhmi.edu; Taguchi, Katsuyuki, E-mail: ktaguchi@jhmi.edu; Xu, Jennifer

    Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors' previous work [K. Taguchi et al., "Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects," Med. Phys. 38(2), 1089–1102 (2011)]. The agreement between the x-ray spectra calculated by the cascaded SRE+PPE model and the measured spectra was evaluated for various levels of deadtime loss ratios (DLR) and incident spectral shapes, realized using different attenuators, in terms of the weighted coefficient of variation (COVW), i.e., the root mean square difference weighted by the statistical errors of the data and divided by the mean. Results: At low count rates, when DLR < 10%, the distorted spectra measured by the DXMCT-1 were in agreement with those calculated by SRE only, with COVW's less than 4%. At higher count rates, the measured spectra were also in agreement with the ones calculated by the cascaded SRE+PPE model; with PMMA as attenuator, COVW was 5.6% at a DLR of 22% and as small as 6.7% for a DLR as high as 55%. Conclusions: The x-ray spectra calculated by the proposed model agreed with the measured spectra over a wide range of count rates and spectral shapes. The SRE model predicted the distorted, recorded spectra with low count rates over various types and thicknesses of attenuators. The study also validated the hypothesis that the complex spectral distortions in a PCD can be adequately modeled by cascading the count-rate independent SRE and the count-rate dependent PPE.
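
    A toy version of the count-rate-independent SRE stage is easy to write down: each incident energy maps to a Gaussian photopeak plus a low-energy continuum tail, and the recorded spectrum is the response matrix applied to the incident spectrum. All parameters below are illustrative, not the DXMCT-1 calibration:

        import numpy as np

        energies = np.arange(20, 121)             # keV bins

        def response_row(e0, sigma=5.0, tail_fraction=0.3):
            """Recorded-energy distribution for incident energy e0 (toy SRE)."""
            peak = np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
            peak /= peak.sum()
            tail = np.where(energies < e0, 1.0, 0.0)
            if tail.sum() > 0:
                tail /= tail.sum()
            return (1 - tail_fraction) * peak + tail_fraction * tail

        R = np.array([response_row(e0) for e0 in energies])  # rows: incident energy
        incident = np.exp(-(energies - 60.0) ** 2 / 800.0)   # toy incident spectrum
        recorded = incident @ R                              # distorted spectrum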

  11. A cascaded model of spectral distortions due to spectral response effects and pulse pileup effects in a photon-counting x-ray detector for CT

    PubMed Central

    Cammin, Jochen; Xu, Jennifer; Barber, William C.; Iwanczyk, Jan S.; Hartsough, Neal E.; Taguchi, Katsuyuki

    2014-01-01

    Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors' previous work [K. Taguchi et al., "Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects," Med. Phys. 38(2), 1089–1102 (2011)]. The agreement between the x-ray spectra calculated by the cascaded SRE+PPE model and the measured spectra was evaluated for various levels of deadtime loss ratios (DLR) and incident spectral shapes, realized using different attenuators, in terms of the weighted coefficient of variation (COVW), i.e., the root mean square difference weighted by the statistical errors of the data and divided by the mean. Results: At low count rates, when DLR < 10%, the distorted spectra measured by the DXMCT-1 were in agreement with those calculated by SRE only, with COVW's less than 4%. At higher count rates, the measured spectra were also in agreement with the ones calculated by the cascaded SRE+PPE model; with PMMA as attenuator, COVW was 5.6% at a DLR of 22% and as small as 6.7% for a DLR as high as 55%. Conclusions: The x-ray spectra calculated by the proposed model agreed with the measured spectra over a wide range of count rates and spectral shapes. The SRE model predicted the distorted, recorded spectra with low count rates over various types and thicknesses of attenuators. The study also validated the hypothesis that the complex spectral distortions in a PCD can be adequately modeled by cascading the count-rate independent SRE and the count-rate dependent PPE. PMID:24694136

  12. Fractal universe and quantum gravity.

    PubMed

    Calcagni, Gianluca

    2010-06-25

    We propose a field theory which lives in fractal spacetime and is argued to be Lorentz invariant, power-counting renormalizable, ultraviolet finite, and causal. The system flows from an ultraviolet fixed point, where spacetime has Hausdorff dimension 2, to an infrared limit coinciding with a standard four-dimensional field theory. Classically, the fractal world where fields live exchanges energy momentum with the bulk with integer topological dimension. However, the total energy momentum is conserved. We consider the dynamics and the propagator of a scalar field. Implications for quantum gravity, cosmology, and the cosmological constant are discussed.

  13. A prototype of a portable TDCR system at ENEA.

    PubMed

    Capogni, Marco; De Felice, Pierino

    2014-11-01

    A prototype of a portable liquid scintillation counting system based on the Triple-to-Double Coincidence Ratio (TDCR) technique was developed at ENEA-INMRI in the framework of the European Metrofission project. The new device, equipped with CAEN digitizers, was tested for activity measurements of pure β-emitters ((99)Tc and (63)Ni). The list-mode data recorded by the digitizers were analyzed by software implemented in the CERN ROOT environment, which allows the application of pulse-shape discrimination with the new device. Copyright © 2014 Elsevier Ltd. All rights reserved.
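
    For orientation, the free-parameter model underlying TDCR analysis can be sketched in a few lines: with three matched photomultipliers and Poisson statistics, a mean of L detected photons shared equally gives the triple- and logical-double-coincidence efficiencies below, and the measured triple-to-double ratio is inverted for L. This is a minimal illustration, not the ENEA implementation:

        import numpy as np
        from scipy.optimize import brentq

        def efficiencies(L):
            p = 1.0 - np.exp(-L / 3.0)           # single-PMT firing probability
            eps_t = p**3                         # all three PMTs fire
            eps_d = 3 * p**2 * (1 - p) + p**3    # at least two PMTs fire
            return eps_t, eps_d

        def counting_efficiency(tdcr_measured):
            f = lambda L: np.divide(*efficiencies(L)) - tdcr_measured
            L = brentq(f, 1e-3, 100.0)           # invert the measured ratio for L
            return efficiencies(L)[1]            # logical-double efficiency

        print(f"eps_D = {counting_efficiency(0.85):.3f}")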

  14. Modeling panel detection frequencies by queuing system theory: an application in gas chromatography olfactometry.

    PubMed

    Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A

    2004-10-01

    In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.

  15. Characterisation of a major phytoplankton bloom in the River Thames (UK) using flow cytometry and high performance liquid chromatography.

    PubMed

    Moorhouse, H L; Read, D S; McGowan, S; Wagner, M; Roberts, C; Armstrong, L K; Nicholls, D J E; Wickham, H D; Hutchins, M G; Bowes, M J

    2018-05-15

    Recent river studies have observed rapid phytoplankton dynamics, driven by diurnal cycling and short-term responses to storm events, highlighting the need to adopt new high-frequency characterisation methods to understand these complex ecological systems. This study utilised two such analytical methods, pigment analysis by high performance liquid chromatography (HPLC) and cell counting by flow cytometry (FCM), alongside traditional chlorophyll spectrophotometry and light microscopy screening, to characterise the major phytoplankton bloom of 2015 in the River Thames, UK. All analytical techniques observed a rapid increase in chlorophyll a concentration and cell abundances from March to early June, caused primarily by a diatom bloom. Light microscopy identified a shift from pennate to centric diatoms during this period. The initial diatom bloom coincided with increased HPLC peridinin concentrations, indicating the presence of dinoflagellates which were likely to be consuming the diatom population. The diatom bloom declined rapidly in early June, coinciding with a storm event. There were low chlorophyll a concentrations (by both HPLC and spectrophotometric methods) throughout July and August, implying low biomass and phytoplankton activity. However, FCM revealed high abundances of pico-chlorophytes and cyanobacteria through July and August, showing that phytoplankton communities remain active and abundant throughout the summer period. In combination, these techniques are able to simultaneously characterise a wider range of phytoplankton groups, with greater certainty, and provide improved understanding of phytoplankton functioning (e.g. production of UV-inhibiting pigments by cyanobacteria in response to high light levels) and ecological status (through examination of pigment degradation products). Combined HPLC and FCM analyses offer rapid and cost-effective characterisation of phytoplankton communities at appropriate timescales. This will allow a more targeted use of light microscopy to capture phytoplankton peaks or to investigate periods of rapid community succession. This will lead to greater system understanding of phytoplankton succession in response to biogeochemical drivers. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  16. QCD constituent counting rules for neutral vector mesons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodsky, Stanley J.; Lebed, Richard F.; Lyubovitskij, Valery E.

    QCD constituent counting rules define the scaling behavior of exclusive hadronic scattering and electromagnetic scattering amplitudes at high momentum transfer in terms of the total number of fundamental constituents in the initial and final states participating in the hard subprocess. The scaling laws reflect the twist of the leading Fock state for each hadron and hence the leading operator that creates the composite state from the vacuum. Thus, the constituent counting scaling laws can be used to identify the twist of exotic hadronic candidates such as tetraquarks and pentaquarks. Effective field theories must consistently implement the scaling rules in order to be consistent with the fundamental theory. Here in this paper, we examine how one can apply constituent counting rules for the exclusive production of one or two neutral vector mesons V0 in e+e- annihilation, processes in which the V0 can couple via intermediate photons. In the case of a (narrow) real V0, the photon virtuality is fixed to a precise value s1 = mV0², thus treating the V0 as a single fundamental particle. Each real V0 thus contributes to the constituent counting rules with NV0 = 1. In effect, the leading operator underlying the V0 has twist 1. Thus, in the specific physical case of single or double on-shell V0 production via intermediate photons, the predicted scaling from counting rules coincides with vector-meson dominance (VMD), an effective theory that treats V0 as an elementary field. However, the VMD prediction fails in the general case where the V0 is not coupled through an elementary photon field, and then the leading-twist interpolating operator has twist NV0 = 2. Analogous effects appear in pp scattering processes.

  17. QCD constituent counting rules for neutral vector mesons

    NASA Astrophysics Data System (ADS)

    Brodsky, Stanley J.; Lebed, Richard F.; Lyubovitskij, Valery E.

    2018-02-01

    QCD constituent counting rules define the scaling behavior of exclusive hadronic scattering and electromagnetic scattering amplitudes at high momentum transfer in terms of the total number of fundamental constituents in the initial and final states participating in the hard subprocess. The scaling laws reflect the twist of the leading Fock state for each hadron and hence the leading operator that creates the composite state from the vacuum. Thus, the constituent counting scaling laws can be used to identify the twist of exotic hadronic candidates such as tetraquarks and pentaquarks. Effective field theories must consistently implement the scaling rules in order to be consistent with the fundamental theory. Here, we examine how one can apply constituent counting rules for the exclusive production of one or two neutral vector mesons V0 in e+e- annihilation, processes in which the V0 can couple via intermediate photons. In the case of a (narrow) real V0, the photon virtuality is fixed to a precise value s1 = mV0², thus treating the V0 as a single fundamental particle. Each real V0 thus contributes to the constituent counting rules with NV0 = 1. In effect, the leading operator underlying the V0 has twist 1. Thus, in the specific physical case of single or double on-shell V0 production via intermediate photons, the predicted scaling from counting rules coincides with vector-meson dominance (VMD), an effective theory that treats V0 as an elementary field. However, the VMD prediction fails in the general case where the V0 is not coupled through an elementary photon field, and then the leading-twist interpolating operator has twist NV0 = 2. Analogous effects appear in pp scattering processes.
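
    For orientation (a standard statement of the rule, not a quotation from the paper), the fixed-angle constituent counting law for an exclusive two-body process A + B → C + D reads

        \frac{d\sigma}{dt}(AB \to CD) \;\sim\; \frac{f(t/s)}{s^{\,n-2}},
        \qquad n = n_A + n_B + n_C + n_D,

    where n is the total number of fundamental constituents participating in the hard subprocess; assigning an on-shell V0 produced through an intermediate photon NV0 = 1 instead of the twist-2 value NV0 = 2 therefore changes the predicted power of s by one unit per V0.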

  18. QCD constituent counting rules for neutral vector mesons

    DOE PAGES

    Brodsky, Stanley J.; Lebed, Richard F.; Lyubovitskij, Valery E.

    2018-02-08

    QCD constituent counting rules define the scaling behavior of exclusive hadronic scattering and electromagnetic scattering amplitudes at high momentum transfer in terms of the total number of fundamental constituents in the initial and final states participating in the hard subprocess. The scaling laws reflect the twist of the leading Fock state for each hadron and hence the leading operator that creates the composite state from the vacuum. Thus, the constituent counting scaling laws can be used to identify the twist of exotic hadronic candidates such as tetraquarks and pentaquarks. Effective field theories must consistently implement the scaling rules in order to be consistent with the fundamental theory. Here in this paper, we examine how one can apply constituent counting rules for the exclusive production of one or two neutral vector mesons V0 in e+e- annihilation, processes in which the V0 can couple via intermediate photons. In the case of a (narrow) real V0, the photon virtuality is fixed to a precise value s1 = mV0², thus treating the V0 as a single fundamental particle. Each real V0 thus contributes to the constituent counting rules with NV0 = 1. In effect, the leading operator underlying the V0 has twist 1. Thus, in the specific physical case of single or double on-shell V0 production via intermediate photons, the predicted scaling from counting rules coincides with vector-meson dominance (VMD), an effective theory that treats V0 as an elementary field. However, the VMD prediction fails in the general case where the V0 is not coupled through an elementary photon field, and then the leading-twist interpolating operator has twist NV0 = 2. Analogous effects appear in pp scattering processes.

  19. Coherence parameter measurements for neon and hydrogen

    NASA Astrophysics Data System (ADS)

    Wright, Robert; Hargreaves, Leigh; Khakoo, Murtadha; Zatsarinny, Oleg; Bartschat, Klaus; Stauffer, Al

    2015-09-01

    We present recent coherence-parameter measurements for excitation of neon and hydrogen by 50 eV electrons. The measurements were made using a crossed electron/gas-beam spectrometer, featuring a hemispherical electron energy analyzer for detecting scattered electrons and a double-reflection VUV polarization analyzer to register fluorescence photons. Time-coincidence counting methods on the electron and photon signals were employed to determine Stokes parameters at each scattering angle, with data measured at angles between 20 and 115 degrees. The data are compared with calculated results using the B-Spline R-Matrix (BSR) and Relativistic Distorted Wave (RDW) approaches. Measurements were made of both the linear (Plin and γ) and circular (Lperp) parameters for the lowest-lying excited states in these two targets. We particularly focus on results for the Lperp parameter, which shows unusual behavior in these particular targets, including strong sign changes implying reversal of the angular momentum transfer. In the case of neon, the unusual behavior is well captured by the BSR, but not by other models.
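
    The conversion from measured Stokes parameters to the coherence parameters named above is standard; a minimal sketch (sign conventions differ between groups, and the values here are invented):

        import numpy as np

        P1, P2, P3 = 0.30, -0.10, 0.25     # hypothetical Stokes parameters

        P_lin = np.hypot(P1, P2)                        # linear polarization degree
        gamma = 0.5 * np.degrees(np.arctan2(P2, P1))    # alignment angle [deg]
        L_perp = -P3                                    # one common sign convention

        print(f"P_lin = {P_lin:.3f}, gamma = {gamma:.1f} deg, L_perp = {L_perp:+.2f}")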

  20. Detection efficiency calculation for photons, electrons and positrons in a well detector. Part I: Analytical model

    NASA Astrophysics Data System (ADS)

    Pommé, S.

    2009-06-01

    An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without a lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β−- and β+-particles.
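
    The geometric kernel of such a model is the solid angle subtended by the detector surfaces; for the simplest element, a coaxial disk of radius R viewed from an on-axis point at distance d, Ω = 2π(1 - d/√(d² + R²)). A minimal sketch of the resulting geometric efficiency (the full model additionally handles off-axis positions, the well walls and attenuation):

        import math

        def geometric_efficiency(d_cm, radius_cm):
            """Omega/(4 pi) for a coaxial disk of given radius at distance d."""
            omega = 2 * math.pi * (1 - d_cm / math.hypot(d_cm, radius_cm))
            return omega / (4 * math.pi)

        print(geometric_efficiency(0.5, 2.0))   # ~0.38 for a point 5 mm from the disk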

  1. Comparison of UWCC MOX fuel measurements to MCNP-REN calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abhold, M.; Baker, M.; Jie, R.

    1998-12-31

    The development of neutron coincidence counting has greatly improved the accuracy and versatility of neutron-based techniques to assay fissile materials. Today, the shift register analyzer connected to either a passive or active neutron detector is widely used by both domestic and international safeguards organizations. The continued development of these techniques and detectors makes extensive use of predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model, as it is currently used, fails to accurately predict detector response in highly multiplying media such as mixed-oxide (MOX) light water reactor fuel assemblies. For this reason, efforts have been made to modify the currently used Monte Carlo codes and to develop new analytical methods so that this model is not required to predict detector response. The authors describe their efforts to modify a widely used Monte Carlo code for this purpose and also compare calculational results with experimental measurements.

  2. Study of vegetation cover distribution using DVI, PVI, WDVI indices with 2D-space plot

    NASA Astrophysics Data System (ADS)

    Naji, Taghreed A. H.

    2018-05-01

    The present work aims to study the effect of using vegetation-index techniques on image segmentation for subdividing an image into homogeneous regions. Three vegetation indices were adopted (the Difference Vegetation Index (DVI), the Perpendicular Vegetation Index (PVI) and the Weighted Difference Vegetation Index (WDVI)) for detecting and monitoring vegetation distribution and healthiness. An image binarization method followed the implementation of the indices to isolate the vegetation areas from the image background. The agriculture regions separated from other land-use regions, and their percentages, are presented for two years (2001 and 2002) of the ETM+ scenes. The areas counted using the 2D-space plot technique and the vegetated areas separated using the vegetation indices are also presented. The agriculture regions separated using the DVI index proved better than those from the other indices, because they coincided more closely with the 2D-space plot segmentation.
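
    The three indices are one-liners on the usual soil-line parameterisation NIRsoil = a·REDsoil + b. A minimal sketch with placeholder band values and soil-line coefficients (in practice these come from the ETM+ scenes themselves):

        import numpy as np

        red = np.array([[0.10, 0.25], [0.08, 0.30]])   # red reflectance (placeholder)
        nir = np.array([[0.45, 0.30], [0.50, 0.32]])   # NIR reflectance (placeholder)
        a, b = 1.2, 0.04                               # assumed soil-line slope/offset

        dvi = nir - red
        pvi = (nir - a * red - b) / np.sqrt(1 + a**2)
        wdvi = nir - a * red

        veg_mask = dvi > 0.2      # simple binarization separating vegetation
        print(veg_mask)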

  3. Aerobiological study in east-central Iberian Peninsula: pollen diversity and dynamics for major taxa.

    PubMed

    Pérez-Badia, Rosa; Rapp, Ana; Vaquero, Consolación; Fernández-González, Federico

    2011-01-01

    A study was made of airborne pollen counts in Cuenca (east-central Iberian Peninsula, Spain), using data obtained over a 3-year period (2008-2010). This is the first such study carried out in the World Heritage city of Cuenca, situated in the large region of Castilla-La Mancha. Air monitoring was performed using the sampling and analysis procedures recommended by the Spanish Aerobiology Network. Sampling commenced in mid-2007, and provided the first recorded pollen spectrum for the area. The greatest pollen-type diversity was recorded in spring, whilst the highest pollen counts (over 80 percent of the annual total) were observed between February and June. The lowest counts were found in September, November and December. The 10 leading taxa, in order of abundance, were: Cupressaceae, Quercus, Urticaceae, Pinus, Olea, Poaceae, Populus, Platanus, Chenopodiaceae-Amaranthaceae and Plantago. The pollen calendar was thus typically Mediterranean, and comprised the 27 pollen types reaching 10-day mean counts of over 1 grain/m³ of air. Maximum concentration values during the day were recorded between 12:00 and 20:00, coinciding with the highest temperatures and lowest humidity levels. The pollen types responsible for most allergies in the city of Cuenca, ordered by the number of days on which risk levels were reached, were: Poaceae, Urticaceae, Cupressaceae, Olea, Platanus and Chenopodiaceae-Amaranthaceae.

  4. Predicted performance of a PG-SPECT system using CZT primary detectors and secondary Compton-suppression anti-coincidence detectors under near-clinical settings for boron neutron capture therapy

    NASA Astrophysics Data System (ADS)

    Hales, Brian; Katabuchi, Tatsuya; Igashira, Masayuki; Terada, Kazushi; Hayashizaki, Noriyosu; Kobayashi, Tooru

    2017-12-01

    A test version of a prompt-gamma single photon emission computed tomography (PG-SPECT) system for boron neutron capture therapy (BNCT) using a CdZnTe (CZT) semiconductor detector with a secondary BGO anti-Compton suppression detector has been designed. A phantom with a healthy-tissue region of pure water and two tumor regions of 5 wt% borated polyethylene was irradiated to a fluence of 1.3 × 10⁹ n/cm². The number of 478 keV foreground, background, and net counts was measured for each detector position and angle. Using only experimentally measured net counts, an image of the 478 keV production from the 10B(n,α)7Li* reaction was reconstructed. Using Monte Carlo simulation and the experimentally measured background counts, the reliability of the system under clinically accurate parameters was extrapolated. After extrapolation, it was found that the value of the maximum-value pixel in the reconstructed 478 keV γ-ray production image overestimates the simulated production by an average of 9.2%, and that the standard deviation associated with the same value is 11.4%.

  5. Mapping QTL influencing gastrointestinal nematode burden in Dutch Holstein-Friesian dairy cattle

    PubMed Central

    Coppieters, Wouter; Mes, Ted HM; Druet, Tom; Farnir, Frédéric; Tamma, Nico; Schrooten, Chris; Cornelissen, Albert WCA; Georges, Michel; Ploeger, Harm W

    2009-01-01

    Background Parasitic gastroenteritis caused by nematodes is only second to mastitis in terms of health costs to dairy farmers in developed countries. Sustainable control strategies complementing anthelmintics are desired, including selective breeding for enhanced resistance. Results and Conclusion To quantify and characterize the genetic contribution to variation in resistance to gastro-intestinal parasites, we measured the heritability of faecal egg and larval counts in the Dutch Holstein-Friesian dairy cattle population. The heritability of faecal egg counts ranged from 7 to 21% and was generally higher than for larval counts. We performed a whole genome scan in 12 paternal half-daughter groups for a total of 768 cows, corresponding to the ~10% most and least infected daughters within each family (selective genotyping). Two genome-wide significant QTL were identified in an across-family analysis, respectively on chromosomes 9 and 19, coinciding with previous findings in orthologous chromosomal regions in sheep. We identified six more suggestive QTL by within-family analysis. An additional 73 informative SNPs were genotyped on chromosome 19 and the ensuing high density map used in a variance component approach to simultaneously exploit linkage and linkage disequilibrium in an initial inconclusive attempt to refine the QTL map position. PMID:19254385

  6. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section and an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the effective detection solid angle, in addition to both the full-energy peak and total efficiencies of well-type detectors, was calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu, where an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. These phenomena arise jointly from the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods typically agree within a discrepancy of 10%. The discrepancy between the simulation, ANGLE4 and the measured full-energy peak efficiencies after correction for the coincidence summing effect did not exceed 14% on average. Therefore, this technique can easily be applied in establishing the efficiency calibration curves of well-type detectors.
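
    The size of the effect in a 4π geometry is easy to illustrate for a two-γ cascade: the γ1 photopeak is lost whenever the coincident γ2 deposits any energy, so the apparent peak efficiency is εp1·(1 - εt2) and the summing-out correction factor is 1/(1 - εt2). A sketch with illustrative efficiencies (not the paper's 152Eu values):

        eps_p1 = 0.60   # full-energy peak efficiency for gamma-1 (assumed)
        eps_t2 = 0.80   # total efficiency for the coincident gamma-2 (assumed)

        apparent = eps_p1 * (1 - eps_t2)       # gamma-1 peak survives only if
        correction = 1.0 / (1 - eps_t2)        # gamma-2 escapes undetected
        print(f"apparent peak efficiency {apparent:.2f}, correction {correction:.1f}x")

    With a total efficiency of 80% for the coincident photon, the correction factor is 5, which is why neglecting summing in well geometries is not an option.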

  7. Absolute Radiometric Calibration of Narrow-Swath Imaging Sensors with Reference to Non-Coincident Wide-Swath Sensors

    NASA Technical Reports Server (NTRS)

    McCorkel, Joel; Thome, Kurtis; Lockwood, Ronald

    2012-01-01

    An inter-calibration method is developed to provide absolute radiometric calibration of narrow-swath imaging sensors with reference to non-coincident wide-swath sensors. The method predicts at-sensor radiance using non-coincident imagery from the reference sensor and knowledge of the spectral reflectance of the test site. The imagery of the reference sensor is restricted to acquisitions that provide similar view and solar illumination geometry to reduce uncertainties due to directional reflectance effects. Spectral reflectance of the test site is found with a simple iterative radiative transfer method using radiance values of a well-understood wide-swath sensor and spectral shape information based on historical ground-based measurements. At-sensor radiance is calculated for the narrow-swath sensor using this spectral reflectance and atmospheric parameters that are also based on historical in situ measurements. Results of the inter-calibration method show agreement at the 2-5 percent level in most spectral regions with the vicarious calibration technique relying on coincident ground-based measurements, referred to as the reflectance-based approach. While the variability of the inter-calibration method based on non-coincident image pairs is significantly larger, results are consistent with techniques relying on in situ measurements. The method is also insensitive to spectral differences between the sensors because it transfers to surface spectral reflectance prior to prediction of at-sensor radiance. The utility of this inter-calibration method is made clear by its flexibility to utilize image pairings with acquisition dates differing in excess of 30 days, allowing frequent absolute calibration comparisons between wide- and narrow-swath sensors.

  8. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and demonstrated the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
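
    The Poisson resampling idea is binomial thinning: selecting each recorded count with probability 1/2 turns an exactly Poisson full-count image into an exactly Poisson half-count image. A minimal sketch on synthetic data, with Gaussian redrawing shown for contrast:

        import numpy as np

        rng = np.random.default_rng(42)
        full = rng.poisson(lam=50.0, size=(256, 256))      # full-count image

        half_resampled = rng.binomial(full, 0.5)           # Poisson resampling
        half_redrawn = rng.normal(full / 2, np.sqrt(full / 2))   # Gaussian redrawing

        for name, img in (("resampled", half_resampled), ("redrawn", half_redrawn)):
            # a correct half-count image keeps var ~= mean (Poisson); the Gaussian
            # redraw inflates the variance because it adds noise on top of full/2
            print(f"{name}: mean {img.mean():.2f}, var {img.var():.2f}")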

  9. A new method to measure the U-235 content in fresh LWR fuel assemblies via fast-neutron passive self-interrogation

    DOE PAGES

    Menlove, Howard Olsen; Belian, Anthony P.; Geist, William H.; ...

    2017-10-07

    The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g. Gd2O3) added to the fuel rods decreases the active neutron assay signal for the fuel assemblies. This paper presents a new method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time correlated with the SF source neutrons. Thus, the coincidence gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and that of the 235U IF (2.44) for a fraction of the IF reactions. The 238U SF background, by contrast, has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are converted to signal doubles. This time correlation in the IF signal increases the signal-to-background ratio and provides good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading when a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. Here, we have named the system the fast-neutron passive collar (FNPC).

  10. Combining new tools to assess renal function and morphology: a holistic approach to study the effects of aging and a congenital nephron deficit.

    PubMed

    Geraci, Stefania; Chacon-Caldera, Jorge; Cullen-McEwen, Luise; Schad, Lothar R; Sticht, Carsten; Puelles, Victor G; Bertram, John F; Gretz, Norbert

    2017-09-01

    Recently, new methods for assessing renal function in conscious mice (transcutaneous assessment) and for counting and sizing all glomeruli in whole kidneys (MRI) have been described. In the present study, these methods were used to assess renal structure and function in aging mice, and in mice born with a congenital low-nephron endowment. Age-related nephron loss was analyzed in adult C57BL/6 mice (10-50 wk of age), and congenital nephron deficit was assessed in glial cell line-derived neurotrophic factor heterozygous (GDNF HET)-null mutant mice. Renal function was measured through the transcutaneous quantitation of fluorescein isothiocyanate-sinistrin half-life (t1/2) in conscious mice. MRI was used to image, count, and size cationic-ferritin labeled glomeruli in whole kidneys ex vivo. Design-based stereology was used to validate the MRI measurements of glomerular number and mean volume. In adult C57BL/6 mice, older age was associated with fewer and larger glomeruli, and a rightward shift in the glomerular size distribution. These changes coincided with a decrease in renal function. GDNF HET mice had a congenital nephron deficit that was associated with glomerular hypertrophy and exacerbated by aging. These findings suggest that glomerular hypertrophy and hyperfiltration are compensatory processes that can occur in conjunction with both age-related nephron loss and congenital nephron deficiency. The combination of measurement of renal function in conscious animals and quantitation of glomerular number, volume, and volume distribution provides a powerful new tool for investigating aspects of renal aging and functional changes. Copyright © 2017 the American Physiological Society.

  11. A new method to measure the U-235 content in fresh LWR fuel assemblies via fast-neutron passive self-interrogation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menlove, Howard Olsen; Belian, Anthony P.; Geist, William H.

    The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g. Gd2O3) added to the fuel rods decreases the active neutron assay signal for the fuel assemblies. This paper presents a new method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time correlated with the SF source neutrons. Thus, the coincidence gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and that of the 235U IF (2.44) for a fraction of the IF reactions. The 238U SF background, by contrast, has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are converted to signal doubles. This time correlation in the IF signal increases the signal-to-background ratio and provides good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading when a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. Here, we have named the system the fast-neutron passive collar (FNPC).

  12. A new method to measure the U-235 content in fresh LWR fuel assemblies via fast-neutron passive self-interrogation

    NASA Astrophysics Data System (ADS)

    Menlove, Howard; Belian, Anthony; Geist, William; Rael, Carlos

    2018-01-01

    The purpose of this paper is to provide a solution to a decades-old safeguards problem in the verification of the fissile concentration in fresh light water reactor (LWR) fuel assemblies. The problem is that the burnable poison (e.g. Gd2O3) added to the fuel rods decreases the active neutron assay signal for the fuel assemblies. This paper presents a new method for the verification of the 235U linear mass density in fresh LEU fuel assemblies that is insensitive to the burnable poison content. The technique makes use of the 238U atoms in the fuel rods to self-interrogate the 235U mass. The innovation of the new approach is that the 238U spontaneous fission (SF) neutrons from the rods induce fission reactions (IF) in the 235U that are time correlated with the SF source neutrons. Thus, the coincidence gate counting rate benefits from both the nu-bar of the 238U SF (2.07) and that of the 235U IF (2.44) for a fraction of the IF reactions. The 238U SF background, by contrast, has no time-correlation boost. The higher the detection efficiency, the higher the correlated boost, because background neutron counts from the SF are converted to signal doubles. This time correlation in the IF signal increases the signal-to-background ratio and provides good precision for the net signal from the 235U mass. The hard neutron energy spectrum makes the technique insensitive to the burnable poison loading when a Cd or Gd liner on the detector walls is used to prevent thermal-neutron reflection back into the fuel assembly from the detector. We have named the system the fast-neutron passive collar (FNPC).
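
    A toy point-model may help illustrate why time-correlated induced fission boosts the doubles rate while the SF background does not. This is a sketch only: it assumes Poisson multiplicities around the quoted nu-bars (real fission multiplicity distributions differ), a single induced fission per triggering SF with probability p_if, and a flat detection efficiency eps within the coincidence gate, with no die-away modelling:

      import numpy as np

      rng = np.random.default_rng(2)

      NU_SF, NU_IF = 2.07, 2.44   # quoted nu-bars for 238U SF and 235U IF

      def doubles_per_event(eps=0.3, p_if=0.1, n_events=200_000):
          n_sf = rng.poisson(NU_SF, n_events)              # SF neutrons
          induced = rng.random(n_events) < p_if            # correlated IF?
          n_if = np.where(induced, rng.poisson(NU_IF, n_events), 0)
          n_det = rng.binomial(n_sf + n_if, eps)           # detected in gate
          return (n_det * (n_det - 1) / 2).mean()          # mean pair count

      print(doubles_per_event(p_if=0.0))   # background: SF correlations only
      print(doubles_per_event(p_if=0.1))   # boosted by time-correlated IF

    Raising eps in this toy model increases the boosted-to-background ratio, mirroring the abstract's point that higher detection efficiency converts SF background counts into signal doubles.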

  13. Spatial Extent of Relativistic Electron Precipitation from the Radiation Belts

    NASA Astrophysics Data System (ADS)

    Shekhar, Sapna

    Relativistic Electron Precipitation (REP) in the atmosphere can contribute significantly to electron loss from the outer radiation belts. In order to estimate the contribution to this loss, it is important to estimate the spatial extent of the precipitation region. We observed REP with the 0° Medium Energy Proton Electron Detector (MEPED) on board Polar Orbiting Environmental Satellites (POES) for 15 years (2000-2014) and used both single- and multi-satellite measurements to estimate an average extent of the region of precipitation in L shell and magnetic local time. Over these 15 years, 31035 REP events were found in this study. Events were found to split into two classes: one class coincided with proton precipitation in the P1 channel (30-80 keV), was located in the dusk and early-morning sectors, and was more localized in L shell and magnetic local time (dMLT 0-3 hrs, dL 0.25-0.5), whereas the other class did not include proton precipitation, was located mostly in the midnight sector, and was wider in L shell (dL 1-2.5) but localized in MLT (dMLT 0-3 hrs); both classes occurred mostly during the declining phase of the solar cycle and geomagnetically active times. The events located in the midnight sector for both classes were found to be associated with tail magnetic field stretching, which could be due to the fact that they tend to occur mostly during geomagnetically active times, or could imply that the precipitation is caused by current sheet scattering. Use of POES to infer information about the precipitation energy spectrum was also investigated, despite the coarse energy channels and contamination issues. In order to study the energy specificity of the REP events, a method to fit exponential spectra to the REP events, wherever possible, was formulated and validated through comparisons with SAMPEX-observed spectra. 18 events on POES were found to be in conjunction with SAMPEX in the years 2000-04. The exponentially fitted spectra for these events obtained by Comess et al. [2013] were folded through NOAA POES geometric factors obtained by Yando et al. [2011], and the predicted count rates in E3 (>300 keV) were found to be in agreement with the actual data in the MEPED 0° particle telescopes on board NOAA POES. After comparison and validation with SAMPEX, an inversion method was developed and applied to the same POES events. Assuming exponential spectra, E3 (>300 keV)/P6 (>700 keV) electron count rate ratios, along with P3, P4 and P5 proton count rates of the POES MEPED 0° telescope, were used to determine an e-folding energy for the electron spectra and compared with SAMPEX. The e-folding energies obtained from POES were found to be systematically lower but followed a similar trend to SAMPEX, and it was concluded that the E3/P6 ratio could be used as a parameter to define the spectral hardness of POES REP events irrespective of spectral shape. Using this parameter, the spatial variation of the spectral hardness of REP events was investigated. It was found that very soft events were mostly found in the dusk, midnight and early-morning MLT sectors at L ~ 5-7, whereas the hardest events were located in the post-noon sector, peaking at L ~ 4-5. The hardest events peaked at lower L shells, and less than 10% were coincident with low-energy (30-80 keV) proton precipitation, which has been previously used as a proxy for EMIC wave particle scattering (e.g. Carson et al. [2012], Sandanger et al. [2007]). The softer midnight events coinciding with proton precipitation were found to be associated with magnetic field stretching.
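
    Under the exponential-spectrum assumption, the e-folding energy follows directly from the ratio of two integral channels: for a differential spectrum j(E) ~ exp(-E/E0), the integral flux above E scales as exp(-E/E0), so the E3/P6 ratio equals exp((700-300)/E0). A minimal sketch (idealized step-like channel responses, ignoring the real geometric factors folded in above):

      import math

      def efolding_energy(ratio_e3_p6, e3=300.0, p6=700.0):
          # E-folding energy E0 (keV) from the E3/P6 integral count-rate
          # ratio, assuming j(E) ~ exp(-E/E0) and ideal channel thresholds.
          if ratio_e3_p6 <= 1.0:
              raise ValueError("ratio must exceed 1 for an exponential spectrum")
          return (p6 - e3) / math.log(ratio_e3_p6)

      print(efolding_energy(5.0))    # harder spectrum -> E0 ~ 249 keV
      print(efolding_energy(50.0))   # softer spectrum -> E0 ~ 102 keV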

  14. Search for Sub-TeV Gamma Rays Coincident with BATSE Gamma Ray Bursts

    NASA Astrophysics Data System (ADS)

    D'Andrea, C. P.; D'Andrea, Christopher; Gress, Joseph; Race, Doran

    2003-07-01

    Project GRAND is a 100 m × 100 m air shower array of proportional wire chambers (PWCs). There are 64 stations, each with eight 1.29 m2 PWC planes arranged in four orthogonal pairs placed vertically above one another to geometrically measure the angles of charged secondaries. A steel plate above the bottom pair of PWCs differentiates muons (which pass undeflected through the steel) from non-penetrating particles. FLUKA Monte Carlo studies show that a TeV gamma ray striking the atmosphere at normal incidence produces, on average, 0.23 muons which reach ground level, where their angles and identities are measured. Thus, paradoxically, secondary muons are used as a signature for gamma-ray primaries. The data are examined for possible angular and time coincidences with eight gamma ray bursts (GRBs) detected by BATSE. Seven of the GRBs were selected because of their good acceptance by GRAND and high BATSE fluence. The eighth GRB was added due to its possible coincident detection by Milagrito. For each of the eight candidate GRBs, the number of excess counts during the BATSE T90 time interval and within ±5° of BATSE's direction was obtained. The highest statistical significance reported in this paper (2.7σ) is for the event that was predicted to be the most likely to be observed (GRB 971110).

  15. Detection of special nuclear materials with the associate particle technique

    NASA Astrophysics Data System (ADS)

    Carasco, Cédric; Deyglun, Clément; Pérot, Bertrand; Eléon, Cyrille; Normand, Stéphane; Sannié, Guillaume; Boudergui, Karim; Corre, Gwenolé; Konzdrasovs, Vladimir; Pras, Philippe

    2013-04-01

    In the frame of the French trans-governmental R&D program against chemical, biological, radiological, nuclear and explosives (CBRN-E) threats, CEA is studying the detection of Special Nuclear Materials (SNM) by neutron interrogation with fast neutrons produced by an associated-particle sealed-tube neutron generator. The deuterium-tritium fusion reaction produces an alpha particle and a 14 MeV neutron almost back to back, allowing the neutron emission to be tagged both in time and direction with an alpha-particle position-sensitive sensor embedded in the generator. Prompt fission neutrons and gamma rays induced by the tagged neutrons are detected in coincidence with plastic scintillators. This paper presents numerical simulations performed with the MCNP-PoliMi Monte Carlo computer code and with post-processing software developed with the ROOT data analysis package. False coincidences due to neutron and photon scattering between adjacent detectors (cross talk) are filtered out to increase the selectivity between nuclear and benign materials. Accidental coincidences, which are not correlated to an alpha particle, are also taken into account in the numerical model, as well as counting statistics and the time-energy resolution of the data acquisition system. Such realistic calculations show that relevant quantities of SNM (a few kg) can be distinguished from cargo and shielding materials in 10 min acquisitions. First laboratory tests of the system under development in CEA laboratories are also presented.

  16. Non-proportionality study of CaMoO4 and GAGG:Ce scintillation crystals using Compton coincidence technique.

    PubMed

    Kaewkhao, J; Limkitjaroenporn, P; Chaiphaksa, W; Kim, H J

    2016-09-01

    In this study, the Compton coincidence technique (CCT) with a nuclear instrument module (NIM) setup was used to measure the coincidence electron energy spectra of calcium molybdate (CaMoO4) and cerium-doped gadolinium aluminium gallium garnet (Gd3Al2Ga3O12:Ce or GAGG:Ce) scintillation crystals. Gamma rays with an energy (Eγ) of 662 keV from a (137)Cs source were used. The coincidence electron energy spectra were recorded at seven scattering angles of 30°-120°, for which the corresponding electron energies were in the range 100.5-435.4 keV. The results show that, at all electron energies, the electron energy peaks of the CaMoO4 crystal yielded a higher number of counts than those of the GAGG:Ce crystal. The electron energy resolution, the light yield and the non-proportionality were also determined. It was found that the energy resolutions are inversely proportional to the square root of the electron energy for both crystals. Furthermore, the results show that the light yield of the GAGG:Ce crystal is much higher than that of the CaMoO4 crystal. It was also found that both CaMoO4 and GAGG:Ce crystals demonstrated good proportionality in the electron energy range of 260-435.4 keV. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events both locally and globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity, and the reported FDL method belongs to this category. The basis of the FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a triggering planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all earthquakes of magnitude 8R or greater recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) for events >6.5R. The successes were counted for each of the 86 earthquake seeds, and the MFDL method was compared with the FDL method. In every case we find improvement when the starting seed date is the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction scheme capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method with respect to both its prediction and location parts. We show example calendar-style predictions for global events as well as for the Greek region using planetary alignment seeds.
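
    As a rough illustration of how FDL candidate dates might be generated from a seed, the sketch below treats Fibonacci and Lucas numbers as day offsets from the seed date. This is an assumption: the abstract does not state the unit of the FDL numbers or how the "Dual" sequence enters, and the function name is invented for illustration:

      from datetime import date, timedelta

      FIBONACCI = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
      LUCAS     = [1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199]

      def fdl_dates(seed, numbers=FIBONACCI + LUCAS, unit_days=1):
          # Candidate dates generated from a seed date (e.g. a planetary
          # trigger aspect in the MFDL variant). Treating the numbers as
          # day offsets is an illustrative assumption only.
          return sorted(seed + timedelta(days=n * unit_days) for n in set(numbers))

      for d in fdl_dates(date(1900, 1, 1))[:5]:
          print(d)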

  18. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased in a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.
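
    The counts-in-cells moment method that this analysis builds on can be sketched in a few lines: bin the particle positions into cells of side 1/n, form the cell probabilities p_i, and evaluate the partition function Z(q, l) = sum_i p_i^q, whose scaling with cell size yields the multifractal exponents. A minimal version for points in the unit cube (without the discreteness and homogeneity corrections the paper adds):

      import numpy as np

      def partition_function(points, q, n_cells):
          # Counts-in-cells estimate of Z(q, l) for points in [0, 1)^d.
          idx = np.minimum((points * n_cells).astype(int), n_cells - 1)
          flat = np.ravel_multi_index(idx.T, (n_cells,) * points.shape[1])
          counts = np.bincount(flat)
          p = counts[counts > 0] / points.shape[0]   # occupied-cell masses
          return np.sum(p ** q)

      rng = np.random.default_rng(0)
      pts = rng.random((100_000, 3))                 # homogeneous test set
      for n in (4, 8, 16):
          print(n, partition_function(pts, q=2.0, n_cells=n))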

  19. Simultaneous determination of radionuclides separable into natural decay series by use of time-interval analysis.

    PubMed

    Hashimoto, Tetsuo; Sanada, Yukihisa; Uezu, Yasuhiro

    2004-05-01

    A delayed coincidence method, time-interval analysis (TIA), has been applied to successive alpha-alpha decay events on the millisecond time-scale. Such decay events are part of the (220)Rn → (216)Po (T1/2 = 145 ms) decay (Th-series) and the (219)Rn → (215)Po (T1/2 = 1.78 ms) decay (Ac-series). By using TIA in addition to measurement of (226)Ra (U-series) from alpha-spectrometry by liquid scintillation counting (LSC), two natural decay series could be identified and separated. The TIA detection efficiency was improved by using the pulse-shape discrimination technique (PSD) to reject beta-pulses, by solvent extraction of Ra combined with simple chemical separation, and by purging the scintillation solution with dry N(2) gas. The U- and Th-series, together with the Ac-series, were determined, respectively, from alpha spectra and TIA carried out immediately after Ra-extraction. Using the (221)Fr → (217)At (T1/2 = 32.3 ms) decay process as a tracer, overall yields were estimated from application of TIA to (225)Ra (Np-decay series) at the time of maximum growth. The present method has proven useful for simultaneous determination of three radioactive decay series in environmental samples.
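
    The core of TIA is scanning a list of event timestamps for successive pairs whose spacing is consistent with the daughter half-life. A minimal sketch, assuming a sorted array of alpha-event times in seconds (after beta-pulse rejection) and a window of roughly four half-lives of (216)Po; the chain-handling and dead-time corrections of a real analysis are omitted:

      import numpy as np

      def time_interval_pairs(t, window_s=0.6):
          # Count successive events closer together than the window
          # (0.6 s ~ 4 half-lives of 216Po, T1/2 = 145 ms).
          dt = np.diff(t)
          return np.count_nonzero(dt < window_s), dt

      # Toy stream: uniform background plus correlated pairs 0.1 s apart.
      rng = np.random.default_rng(3)
      bg = np.sort(rng.uniform(0, 3600, 200))
      t = np.sort(np.concatenate([bg, bg[:20] + 0.1]))
      n_pairs, _ = time_interval_pairs(t)
      print(n_pairs)   # dominated by the 20 injected correlated pairs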

  20. Characteristic Performance Evaluation of a new SAGe Well Detector for Small and Large Sample Geometries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adekola, A.S.; Colaresi, J.; Douwen, J.

    2015-07-01

    Environmental scientific research requires a detector that has sensitivity low enough to reveal the presence of any contaminant in the sample at a reasonable counting time. Canberra developed the germanium detector geometry called the Small Anode Germanium (SAGe) Well detector, which is now available commercially. The SAGe Well detector is a new type of low-capacitance germanium well detector manufactured using small anode technology, capable of advancing many environmental scientific research applications. The performance of this detector has been evaluated for a range of sample sizes and geometries counted inside the well, and on the end cap of the detector. The detector has energy resolution performance similar to semi-planar detectors, and offers significant improvement over the existing coaxial and well detectors. Energy resolution performance of 750 eV Full Width at Half Maximum (FWHM) at 122 keV γ-ray energy and resolution of 2.0 - 2.3 keV FWHM at 1332 keV γ-ray energy are guaranteed for detector volumes up to 425 cm{sup 3}. The SAGe Well detector offers an optional 28 mm well diameter with the same energy resolution as the standard 16 mm well. Such outstanding resolution performance will benefit environmental applications in revealing the detailed radionuclide content of samples, particularly at low energy, and will enhance the detection sensitivity resulting in reduced counting time. The detector is compatible with electric coolers without any sacrifice in performance and supports the Canberra mathematical efficiency calibration method (In situ Object Calibration Software or ISOCS, and Laboratory Source-less Calibration Software or LABSOCS). In addition, the SAGe Well detector supports true coincidence summing available in the ISOCS/LABSOCS framework. The improved resolution performance greatly enhances the detection sensitivity of this new detector for a range of sample sizes and geometries counted inside the well. This results in lower minimum detectable concentrations compared to traditional well detectors. The SAGe Well detectors are compatible with Marinelli beakers and compete very well with semi-planar and coaxial detectors for large samples in many applications.

  1. A study on evaluation of the dependences of the function and the shape in a 99mTc-DMSA renal scan on the difference in acquisition count

    NASA Astrophysics Data System (ADS)

    Dong, Kyung-Rae; Shim, Dong-Oh; Kim, Ho-Sung; Park, Yong-Soon; Chung, Woon-Kwan; Cho, Jae-Hwan

    2013-02-01

    In a nuclear medicine examination, methods to acquire a static image include the preset count method and the preset time method. The preset count method is used mainly in a static renal scan that utilizes 99mTc-DMSA (dimercaptosuccinic acid), whereas the preset time method is used occasionally. When the preset count method is used, the same number of acquisition counts is acquired each time, but the scan time varies. When the preset time method is used, the scan time is constant, but the number of counts acquired is not the same. Therefore, this study examined how the information on the function and the shape of both kidneys depends on the counts acquired during a renal scan that utilizes 99mTc-DMSA. The study involved patients who had 40-60% relative function of one kidney among patients who underwent a 99mTc-DMSA renal scan in the Nuclear Medicine Department during the period from January 11 to March 31, 2012. A gamma camera was used to acquire images at 100,000 counts, at 300,000 counts, and for an acquisition time of 7 minutes (exceeding 300,000 counts). The function and the shape of the kidney were evaluated by measuring the relative function of both kidneys, the geometric mean, and the size of the kidney before comparative analysis. According to the study results, neither the relative function nor the geometric mean of the two kidneys varied significantly with the acquisition count. On the other hand, the size of the kidney tended to be larger with increasing acquisition count.

  2. On the Wigner law in dilute random matrices

    NASA Astrophysics Data System (ADS)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.

  3. Violations of a new inequality for classical fields

    NASA Technical Reports Server (NTRS)

    Franson, J. D.

    1992-01-01

    Two entangled photons incident upon two distant interferometers can give a coincidence counting rate that depends nonlocally on the sum of the phases of the two interferometers. It has recently been shown that experiments of this kind may violate a simple inequality that must be satisfied by any classical or semi-classical field theory. The inequality provides a graphic illustration of the lack of objective realism of the electric field. The results of a recent experiment which violates this inequality and in which the optical path length between the two interferometers was greater than 100 m are briefly described.

  4. Evaluation of retrieval methods of daytime convective boundary layer height based on lidar data

    NASA Astrophysics Data System (ADS)

    Li, Hong; Yang, Yi; Hu, Xiao-Ming; Huang, Zhongwei; Wang, Guoyin; Zhang, Beidou; Zhang, Tiejun

    2017-04-01

    The atmospheric boundary layer height is a basic parameter in describing the structure of the lower atmosphere. Because of their high temporal resolution, ground-based lidar data are widely used to determine the daytime convective boundary layer height (CBLH), but the currently available retrieval methods have their advantages and drawbacks. In this paper, four methods of retrieving the CBLH (i.e., the gradient method, the idealized backscatter method, and two forms of the wavelet covariance transform method) from lidar normalized relative backscatter are evaluated, using two artificial cases (an idealized profile and a case similar to a real profile), to test their stability and accuracy. The results show that the gradient method is suitable for high signal-to-noise ratio conditions. The idealized backscatter method is less sensitive to the first estimate of the CBLH; however, it is computationally expensive. The results obtained from the two forms of the wavelet covariance transform method are influenced by the selection of the initial input value of the wavelet amplitude. Further sensitivity analysis using real profiles under different orders of magnitude of background counts shows that when different initial input values are set, the idealized backscatter method always obtains a consistent CBLH. For the two wavelet methods, different CBLHs are obtained as the wavelet amplitude increases when noise is significant. Finally, the CBLHs measured by the three lidar-based methods are evaluated against those measured from L-band soundings. The boundary layer heights from the two instruments coincide within ±200 m in most situations.
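
    The wavelet covariance transform evaluated above convolves the backscatter profile with a Haar wavelet of dilation a and takes the CBLH as the height where the transform peaks. A minimal sketch on an idealized step profile (the variable names and the fixed dilation are illustrative; the abstract's point is precisely that the choice of dilation, the "wavelet amplitude", matters in noisy data):

      import numpy as np

      def haar_wct(profile, z, a):
          # Haar wavelet covariance transform of a backscatter profile on a
          # uniform height grid z; the CBLH estimate is argmax of the result.
          dz = z[1] - z[0]
          half = int(a / (2 * dz))
          w = np.zeros_like(profile, dtype=float)
          for i in range(half, len(z) - half):
              upper = profile[i - half:i].sum()   # below centre: +1 lobe
              lower = profile[i:i + half].sum()   # above centre: -1 lobe
              w[i] = (upper - lower) * dz / a
          return w

      z = np.arange(0.0, 3000.0, 15.0)                   # height grid (m)
      prof = 1.0 / (1.0 + np.exp((z - 1200.0) / 60.0))   # step near 1.2 km
      w = haar_wct(prof, z, a=300.0)
      print(z[np.argmax(w)])                             # ~1200 m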

  5. Simulation of neutron production using MCNPX+MCUNED.

    PubMed

    Erhard, M; Sauvan, P; Nolte, R

    2014-10-01

    In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 allows the production of neutrons by light ions to be modelled down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB. In this experiment, 14.7-MeV neutrons will be produced by the reaction T(d,n)(4)He. The neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Evaluation of mouse red blood cell and platelet counting with an automated hematology analyzer.

    PubMed

    Fukuda, Teruko; Asou, Eri; Nogi, Kimiko; Goto, Kazuo

    2017-10-07

    An evaluation of mouse red blood cell (RBC) and platelet (PLT) counting with an automated hematology analyzer was performed with three strains of mice, C57BL/6 (B6), BALB/c (BALB) and DBA/2 (D2). There were no significant differences in RBC and PLT counts between manual and automated optical methods in any of the samples, except for D2 mice. For D2, RBC counts obtained using the manual method were significantly lower than those obtained using the automated optical method (P<0.05), and PLT counts obtained using the manual method were higher than those obtained using the automated optical method (P<0.05). An automated hematology analyzer can be used for RBC and PLT counting; however, an appropriate method should be selected when D2 mice samples are used.

  7. Cosmic veto gamma-spectrometry for Comprehensive Nuclear-Test-Ban Treaty samples

    NASA Astrophysics Data System (ADS)

    Burnett, J. L.; Davies, A. V.

    2014-05-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is supported by a global network of monitoring stations that perform high-resolution gamma-spectrometry on air filter samples for the identification of 85 radionuclides. At the UK CTBT Radionuclide Laboratory (GBL15), a novel cosmic veto gamma-spectrometer has been developed to improve the sensitivity of station measurements, providing a mean background reduction of 80.8% with mean MDA improvements of 45.6%. The CTBT laboratory requirement for a 140Ba MDA is achievable after 1.5 days counting compared to 5-7 days using conventional systems. The system consists of plastic scintillation plates that detect coincident cosmic-ray interactions within an HPGe gamma-spectrometer using the Canberra Lynx™ multi-channel analyser. The detector is remotely configurable using a TCP/IP interface and requires no dedicated coincidence electronics. It would be especially useful in preventing false-positives at remote station locations (e.g. Halley, Antarctica) where sample transfer to certified laboratories is logistically difficult. The improved sensitivity has been demonstrated for a CTBT air filter sample collected after the Fukushima incident.

  8. Investigation of Self Triggered Cosmic Ray Detectors using Silicon Photomultiplier

    NASA Astrophysics Data System (ADS)

    Knox, Adrian; Niduaza, Rommel; Hernandez, Victor; Ruiz, Daniel; Ramos, Daniel; Fan, Sewan; Fatuzzo, Laura; Ritt, Stefan

    2015-04-01

    The silicon photomultiplier (SiPM) is a highly sensitive light detector capable of measuring single photons. It costs a fraction of the photomultiplier tube and operates slightly above the breakdown voltage. At this conference we describe our investigation of SiPMs, the multipixel photon counters (MPPC) from Hamamatsu, as readout detectors for plastic scintillators used to detect cosmic-ray particles. Our setup consists of scintillator sheets embedded with blue-to-green wavelength-shifting fibers optically coupled to MPPCs to detect scintillation light. Four detector assemblies will be constructed and arranged to work in self-triggered mode. Using custom matching tee boxes, the amplified MPPC signals are fed to discriminators with thresholds set to give a reasonable coincidence count rate. Moreover, the detector waveforms are digitized using a 5 Giga-sample-per-second waveform digitizer, the DRS4, and triggered with the coincidence logic to capture the MPPC waveforms. Offline analysis of the digitized waveforms is accomplished using the CERN package PAW; results of our experiments and the data analysis will also be discussed. US Department of Education Title V Grant Number PO31S090007.

  9. Suicide risk in relation to air pollen counts: a study based on data from Danish registers

    PubMed Central

    Qin, Ping; Waltoft, Berit L; Mortensen, Preben B; Postolache, Teodor T

    2013-01-01

    Objectives Since the well-observed spring peak of suicide incidents coincides with the peak of seasonal aeroallergens such as tree pollen, we want to document an association between suicide and pollen exposure with empirical data from Denmark. Design Ecological time series study. Setting Data on suicide incidents, air pollen counts and meteorological status were retrieved from Danish registries. Participants 13 700 suicide incidents over 1304 consecutive weeks were obtained from two large areas covering 2.86 million residents. Primary and secondary outcome measures Risk of suicide associated with pollen concentration was assessed using a time series Poisson-generalised additive model. Results We noted a significant association between suicide risk and air pollen counts. A change of pollen count levels from 0 to '10–<30' grains/m3 air was associated with a relative risk of 1.064, that is, a 6.4% increase in the weekly number of suicides in the population, and a change from 0 to '30–100' grains with a relative risk of 1.132. The observed association remained significant after controlling for effects of region, calendar time, temperature, cloud cover and humidity. Meanwhile, we observed a significant sex difference: suicide risk in men started to rise with even a small increase in air pollen, while the risk in women did not rise until pollen counts reached a certain level. High levels of pollen had a slightly stronger effect on the risk of suicide in individuals with mood disorder than in those without the disorder. Conclusions The observed association between suicide risk and air pollen counts supports the hypothesis that aeroallergens, acting as immune triggers, may precipitate suicide. PMID:23793651
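
    The study fits a time-series Poisson generalised additive model; as a rough stand-in, the sketch below fits a plain Poisson GLM with statsmodels on synthetic weekly data. The column names, coefficients and data are invented for illustration, and the smooth seasonal terms of a real GAM are reduced to a linear trend:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 1304   # weeks, matching the study period length
      df = pd.DataFrame({
          "pollen_band": rng.integers(0, 4, n),   # banded pollen level
          "temperature": rng.normal(8, 6, n),
          "week": np.arange(n),
      })
      lam = np.exp(2.0 + 0.06 * df.pollen_band + 0.01 * df.temperature)
      df["suicides"] = rng.poisson(lam)           # synthetic outcome

      X = sm.add_constant(df[["pollen_band", "temperature", "week"]])
      fit = sm.GLM(df["suicides"], X, family=sm.families.Poisson()).fit()
      print(np.exp(fit.params["pollen_band"]))    # rate ratio per pollen band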

  10. Paradoxical drop in circulating neutrophil count following granulocyte-colony stimulating factor and stem cell factor administration in rhesus macaques.

    PubMed

    Gordon, Brent C; Revenis, Amy M; Bonifacino, Aylin C; Sander, William E; Metzger, Mark E; Krouse, Allen E; Usherson, Tatiana N; Donahue, Robert E

    2007-06-01

    Granulocyte colony-stimulating factor (G-CSF) is frequently used therapeutically to treat chronic or transient neutropenia and to mobilize hematopoietic stem cells. Shortly following G-CSF administration, we observed a dramatic transient drop in circulating neutrophil number. This article characterizes this effect in a rhesus macaque animal model. Hematologic changes were monitored following subcutaneous (SQ) administration of G-CSF. G-CSF was administered as a single SQ dose at 10 microg/kg or 50 microg/kg. It was also administered (10 microg/kg) in combination with stem cell factor (SCF; 200 microg/kg) over 5 days. Flow cytometry was performed on serial blood samples to detect changes in cell surface adhesion protein expression. Neutrophil count dramatically declined 30 minutes after G-CSF administration. This decline was observed whether 10 microg/kg G-CSF was administered in combination with SCF over 5 days, or given as a single 10 microg/kg dose. At a single 50 microg/kg dose, the decline accelerated to 15 minutes. Neutrophil count returned to baseline after 120 minutes and rapidly increased thereafter. An increase in CD11a and CD49d expression coincided with the drop in neutrophil count. A transient paradoxical decline in neutrophil count was observed following administration of G-CSF either alone or in combination with SCF. This decline accelerated with the administration of a higher dose of G-CSF and was associated with an increase in CD11a and CD49d expression. It remains to be determined whether this decline in circulating neutrophils is associated with an increase in endothelial margination and/or entrance into extravascular compartments.

  11. Simultaneous Forbush decreases and associated geomagnetic storms during the last three solar cycles

    NASA Astrophysics Data System (ADS)

    Okpala, K. C.

    2013-12-01

    Forbush decreases (FDs) are observed reductions in galactic cosmic ray (GCR) intensity as measured by ground neutron monitors. FDs are associated with increased activity of the sun, as reflected in the size of the interplanetary coronal mass ejections passing around the Earth and of the corotating regions in the heliosphere. Since the interplanetary anisotropy evolves during a geomagnetic storm, in addition to the reconfiguration of external magnetospheric currents, changes in the transmissivity of cosmic rays of galactic origin are expected during geomagnetic storms. In this study we examine over one hundred and fifty (150) FD events and associated geomagnetic storms over the last three solar cycles, from 1970 to 2003. The negative peaks of the FDs and of the Dst index coincided for most of the events (~70%). There was good correlation (>0.65) between the FDs and Dst. Fresh evidence of the influence of external magnetospheric currents on the count rates of neutron monitor stations during periods of Forbush decreases is provided. This evidence is observed as sudden increases in the count rates during the main phase of simultaneous FDs. The magnitude of the sudden rise in the count rates of neutron monitors and the peak Dst correlated well (>0.50) for both high-latitude and mid-latitude stations.

  12. Evaluation of Am–Li neutron spectra data for active well type neutron multiplicity measurements of uranium

    DOE PAGES

    Goddard, Braden; Croft, Stephen; Lousteau, Angela; ...

    2016-05-25

    Safeguarding nuclear material is an important and challenging task for the international community. One particular safeguards technique commonly used for uranium assay is active neutron correlation counting. This technique involves irradiating unused uranium with (α,n) neutrons from an Am-Li source and recording the resultant neutron pulse signal, which includes induced fission neutrons. Although this non-destructive technique is widely employed in safeguards applications, the neutron energy spectrum from an Am-Li source is not well known. Several measurements over the past few decades have been made to characterize this spectrum; however, little work has been done comparing the measured spectra of various Am-Li sources to each other. This paper examines fourteen different Am-Li spectra, focusing on how these spectra affect simulated neutron multiplicity results using the code Monte Carlo N-Particle eXtended (MCNPX). Two measurement and simulation campaigns were completed using Active Well Coincidence Counter (AWCC) detectors and uranium standards of varying enrichment. The results of this work indicate that for standard AWCC measurements, the fourteen Am-Li spectra produce similar doubles and triples count rates. The singles count rates varied by as much as 20% between the different spectra, although they are usually not used in quantitative analysis.

  13. Prediction of gastrointestinal disease with over-the-counter diarrheal remedy sales records in the San Francisco Bay Area.

    PubMed

    Kirian, Michelle L; Weintraub, June M

    2010-07-20

    Water utilities continue to be interested in implementing syndromic surveillance for the enhanced detection of waterborne disease outbreaks. The authors evaluated the ability of sales of over-the-counter diarrheal remedies, available from the National Retail Data Monitor, to predict endemic and epidemic gastrointestinal disease in the San Francisco Bay Area. Time series models were fit to weekly diarrheal remedy sales and diarrheal illness case counts. Cross-correlations between the pre-whitened residual series were calculated. Diarrheal remedy sales model residuals were regressed on the number of weekly outbreaks and outbreak-associated cases. Diarrheal remedy sales models were used to auto-forecast one-week-ahead sales. The sensitivity and specificity of signals, generated by observed diarrheal remedy sales exceeding the upper 95% forecast confidence interval, in predicting weekly outbreaks were calculated. No significant correlations were identified between weekly diarrheal remedy sales and diarrheal illness case counts, outbreak counts, or the number of outbreak-associated cases. Signals generated by forecasting with the diarrheal remedy sales model did not coincide with outbreak weeks more reliably than signals chosen randomly. This work does not support the implementation of syndromic surveillance for gastrointestinal disease with data available through the National Retail Data Monitor.

  14. Retrievals of Thick Cloud Optical Depth from the Geoscience Laser Altimeter System (GLAS) by Calibration of Solar Background Signal

    NASA Technical Reports Server (NTRS)

    Yang, Yuekui; Marshak, Alexander; Chiu, J. Christine; Wiscombe, Warren J.; Palm, Stephen P.; Davis, Anthony B.; Spangenberg, Douglas A.; Nguyen, Louis; Spinhirne, James D.; Minnis, Patrick

    2008-01-01

    Laser beams emitted from the Geoscience Laser Altimeter System (GLAS), as well as other space-borne laser instruments, can only penetrate clouds to a limit of a few optical depths. As a result, only optical depths of thinner clouds (< about 3 for GLAS) are retrieved from the reflected lidar signal. This paper presents a comprehensive study of possible retrievals of optical depth of thick clouds using solar background light and treating GLAS as a solar radiometer. To do so we first calibrate the reflected solar radiation received by the photon-counting detectors of GLAS' 532 nm channel, which is the primary channel for atmospheric products. The solar background radiation is regarded as a noise to be subtracted in the retrieval process of the lidar products. However, once calibrated, it becomes a signal that can be used in studying the properties of optically thick clouds. In this paper, three calibration methods are presented: (1) calibration with coincident airborne and GLAS observations; (2) calibration with coincident Geostationary Operational Environmental Satellite (GOES) and GLAS observations of deep convective clouds; (3) calibration from first principles using the optical depth of thin water clouds over ocean retrieved by GLAS active remote sensing. Results from the three methods agree well with each other. Cloud optical depth (COD) is retrieved from the calibrated solar background signal using a one-channel retrieval. Comparison with COD retrieved from GOES during GLAS overpasses shows that the average difference between the two retrievals is 24%. As an example, the COD values retrieved from GLAS solar background are illustrated for a marine stratocumulus cloud field that is too thick to be penetrated by the GLAS laser. Based on this study, optical depths for thick clouds will be provided as a supplementary product to the existing operational GLAS cloud products in future GLAS data releases.

  15. Identification of CSF fistulas by radionuclide counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Y.; Kunishio, K.; Sunami, N.

    1990-07-01

    A radionuclide counting method, performed with the patient prone and the neck flexed, was used successfully to diagnose CSF rhinorrhea in two patients. A normal radionuclide ratio (radionuclide counts in pledget/radionuclide counts in 1-ml blood sample) was obtained in 11 normal control subjects. Significance was determined to be a ratio greater than 0.37. Use of the radionuclide counting method for determining CSF rhinorrhea is recommended when other methods have failed to locate a site of leakage or when posttraumatic meningitis suggests subclinical CSF rhinorrhea.

  16. NEMA NU 2-2012 performance studies for the SiPM-based ToF-PET component of the GE SIGNA PET/MR system.

    PubMed

    Grant, Alexander M; Deller, Timothy W; Khalighi, Mohammad Mehdi; Maramraju, Sri Harsha; Delso, Gaspar; Levin, Craig S

    2016-05-01

    The GE SIGNA PET/MR is a new whole body integrated time-of-flight (ToF)-PET/MR scanner from GE Healthcare. The system is capable of simultaneous PET and MR image acquisition with sub-400 ps coincidence time resolution. Simultaneous PET/MR holds great potential as a method of interrogating molecular, functional, and anatomical parameters in clinical disease in one study. Despite the complementary imaging capabilities of PET and MRI, their respective hardware tends to be incompatible due to mutual interference. In this work, the GE SIGNA PET/MR is evaluated in terms of PET performance and the potential effects of interference from MRI operation. The NEMA NU 2-2012 protocol was followed to measure PET performance parameters including spatial resolution, noise equivalent count rate, sensitivity, accuracy, and image quality. Each of these tests was performed both with the MR subsystem idle and with continuous MR pulsing for the duration of the PET data acquisition. Most measurements were repeated at three separate test sites where the system is installed. The scanner has achieved an average of 4.4, 4.1, and 5.3 mm full width at half maximum radial, tangential, and axial spatial resolutions, respectively, at 1 cm from the transaxial FOV center. The peak noise equivalent count rate (NECR) of 218 kcps and a scatter fraction of 43.6% are reached at an activity concentration of 17.8 kBq/ml. Sensitivity at the center position is 23.3 cps/kBq. The maximum relative slice count rate error below peak NECR was 3.3%, and the residual error from attenuation and scatter corrections was 3.6%. Continuous MR pulsing had either no effect or a minor effect on each measurement. Performance measurements of the ToF-PET whole body GE SIGNA PET/MR system indicate that it is a promising new simultaneous imaging platform.
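
    The noise-equivalent count rate quoted above combines trues, scatter and randoms into a single figure of merit, NECR = T^2 / (T + S + k*R). A minimal sketch follows; the example rates are illustrative, not the published measurements, and whether k = 1 or k = 2 applies depends on how randoms are estimated:

      def necr(trues, scatter, randoms, k=1.0):
          # NEMA noise-equivalent count rate, NECR = T^2 / (T + S + k*R).
          # k = 1 assumes noiseless (smoothed) randoms estimation;
          # k = 2 is used for a delayed-window randoms measurement.
          return trues**2 / (trues + scatter + k * randoms)

      # Illustrative rates in kcps (not the scanner's measured values).
      print(necr(trues=400.0, scatter=310.0, randoms=180.0))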

  17. BGO as a hybrid scintillator / Cherenkov radiator for cost-effective time-of-flight PET

    NASA Astrophysics Data System (ADS)

    Brunner, S. E.; Schaart, D. R.

    2017-06-01

    Due to detector developments in the last decade, the time-of-flight (TOF) method is now commonly used to improve the quality of positron emission tomography (PET) images. Clinical TOF-PET systems based on L(Y)SO:Ce crystals and silicon photomultipliers (SiPMs) with coincidence resolving times (CRT) between 325 ps and 400 ps FWHM have recently been developed. Before the introduction of L(Y)SO:Ce, BGO was used in many PET systems. In addition to a lower price, BGO offers a superior attenuation coefficient and a higher photoelectric fraction than L(Y)SO:Ce. However, BGO is generally considered an inferior TOF-PET scintillator. In recent years, TOF-PET detectors based on the Cherenkov effect have been proposed. However, the low Cherenkov photon yield, on the order of ~10 photons per event, complicates energy discrimination, a severe disadvantage in clinical PET. The optical characteristics of BGO, in particular its high transparency down to 310 nm and its high refractive index of ~2.15, are expected to make it a good Cherenkov radiator. Here, we study the feasibility of combining event timing based on Cherenkov emission with energy discrimination based on scintillation in BGO, as a potential approach towards a cost-effective TOF-PET detector. Rise time measurements were performed using a time-correlated single photon counting (TCSPC) setup implemented on a digital photon counter (DPC) array, revealing a prompt luminescent component likely to be due to Cherenkov emission. Coincidence timing measurements were performed using BGO crystals with a cross-section of 3 mm × 3 mm and five different lengths between 3 mm and 20 mm, coupled to DPC arrays. Non-Gaussian coincidence spectra with a FWHM of 200 ps were obtained with the 27 mm3 BGO cubes, while FWHM values as good as 330 ps were achieved with the 20 mm long crystals. The FWHM value was found to improve with decreasing temperature, while the FWTM value showed the opposite trend.

  18. BGO as a hybrid scintillator / Cherenkov radiator for cost-effective time-of-flight PET.

    PubMed

    Brunner, S E; Schaart, D R

    2017-06-07

    Due to detector developments in the last decade, the time-of-flight (TOF) method is now commonly used to improve the quality of positron emission tomography (PET) images. Clinical TOF-PET systems based on L(Y)SO:Ce crystals and silicon photomultipliers (SiPMs) with coincidence resolving times (CRT) between 325 ps and 400 ps FWHM have recently been developed. Before the introduction of L(Y)SO:Ce, BGO was used in many PET systems. In addition to a lower price, BGO offers a superior attenuation coefficient and a higher photoelectric fraction than L(Y)SO:Ce. However, BGO is generally considered an inferior TOF-PET scintillator. In recent years, TOF-PET detectors based on the Cherenkov effect have been proposed. However, the low Cherenkov photon yield, on the order of ~10 photons per event, complicates energy discrimination, a severe disadvantage in clinical PET. The optical characteristics of BGO, in particular its high transparency down to 310 nm and its high refractive index of ~2.15, are expected to make it a good Cherenkov radiator. Here, we study the feasibility of combining event timing based on Cherenkov emission with energy discrimination based on scintillation in BGO, as a potential approach towards a cost-effective TOF-PET detector. Rise time measurements were performed using a time-correlated single photon counting (TCSPC) setup implemented on a digital photon counter (DPC) array, revealing a prompt luminescent component likely to be due to Cherenkov emission. Coincidence timing measurements were performed using BGO crystals with a cross-section of 3 mm × 3 mm and five different lengths between 3 mm and 20 mm, coupled to DPC arrays. Non-Gaussian coincidence spectra with a FWHM of 200 ps were obtained with the 27 mm3 BGO cubes, while FWHM values as good as 330 ps were achieved with the 20 mm long crystals. The FWHM value was found to improve with decreasing temperature, while the FWTM value showed the opposite trend.

  19. HETEROTROPHIC PLATE COUNT (HPC) METHODOLOGY IN THE UNITED STATES

    EPA Science Inventory

    ABSTRACT

    In the United States (U.S.), the history of bacterial plate counting methods used for water can be traced largely through Standard Methods for the Examination of Water and Wastewater (Standard Methods). The bacterial count method has evolved from the original St...

  20. Mapping of Bird Distributions from Point Count Surveys

    Treesearch

    John R. Sauer; Grey W. Pendleton; Sandra Orsillo

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes...

  1. Platelet counting using the Coulter electronic counter.

    PubMed

    Eggleton, M J; Sharp, A A

    1963-03-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described. The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.

  2. High-Throughput Method for Automated Colony and Cell Counting by Digital Image Analysis Based on Edge Detection

    PubMed Central

    Choudhry, Priya

    2016-01-01

    Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
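
    The segment-label-count pipeline underlying tools like the macro described above can be sketched in a few lines of scikit-image. This is a stand-in, not the published Cell Colony Edge macro: it uses Otsu thresholding rather than the macro's edge detection, and the synthetic image and minimum-area parameter are illustrative:

      import numpy as np
      from skimage import filters, measure, morphology

      def count_colonies(gray, min_area=30):
          # Segment with Otsu's threshold, remove small specks, label
          # connected components, and return the count and colony areas.
          mask = gray > filters.threshold_otsu(gray)
          mask = morphology.remove_small_objects(mask, min_size=min_area)
          labels = measure.label(mask)
          regions = measure.regionprops(labels)
          return len(regions), [r.area for r in regions]

      rng = np.random.default_rng(5)
      img = rng.normal(0.2, 0.05, (256, 256))   # synthetic background
      img[60:90, 60:90] += 0.5                  # two synthetic "colonies"
      img[160:200, 140:190] += 0.5
      print(count_colonies(img)[0])             # -> 2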

  3. On the measurement of airborne, angular-dependent sound transmission through supercritical bars.

    PubMed

    Shaw, Matthew D; Anderson, Brian E

    2012-10-01

    The coincidence effect is manifested by maximal sound transmission at angles at which trace wave number matching occurs. Coincidence effect theory is well-defined for unbounded thin plates using plane-wave excitation. However, experimental results for finite bars are known to diverge from theory near grazing angles. Prior experimental work has focused on pulse excitation. An experimental setup has been developed to observe coincidence using continuous- wave excitation and phased-array methods. Experimental results with an aluminum bar exhibit maxima at the predicted angles, showing that coincidence is observable using continuous waves. Transmission near grazing angles is seen to diverge from infinite plate theory.

  4. Image-based red cell counting for wild animals blood.

    PubMed

    Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia

    2010-01-01

    An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution acquired on an optical microscope using Neubauer chambers are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua), and the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a fully automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.

  5. A Statistical Study of Spatial Variation of Relativistic Electron Precipitation Energy Spectra With Polar Operational Environmental Satellites

    NASA Astrophysics Data System (ADS)

    Shekhar, S.; Millan, R. M.; Hudson, M. K.

    2018-05-01

    The mechanisms that drive relativistic electron precipitation (REP) from the radiation belts can be better understood with a better knowledge of the particle energies involved. The National Oceanic and Atmospheric Administration Polar Operational Environmental Satellites, being a network of multiple satellites, can provide multiple-point spectral data over a long time period, including the Van Allen Probes era. The number of energy channels is limited, but the particle detectors on Polar Operational Environmental Satellites have a narrow field of view allowing an investigation of bounce loss cone particles. We use the ratio of count rates in the E3 (>300 keV) and the P6 (>700 keV) channels as a parameter to define spectral hardness. Using this parameter, the spatial variation of spectral hardness of REP events was investigated. It was found that very soft events were mostly found in the dusk-midnight-early morning magnetic local time sectors and L ~ 5-7, while the hardest events were located in the postnoon sector, peaking at L ~ 4-5. The hardest events peaked at lower L shells, and less than 20% were coincident with low-energy (30-80 keV) proton precipitation. Further, around 70% of nightside REP coincident with proton precipitation was associated with stretched magnetic field lines, indicating that curvature scattering may have been an important driver. Around 62% of nightside REP coincident with proton precipitation and associated with relaxed magnetic field lines was highly energetic, suggesting a mechanism other than magnetic field curvature scattering.

  6. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    NASA Astrophysics Data System (ADS)

    Stephenson, W. Kirk

    2009-08-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes.

  7. Speaker-independent phoneme recognition with a binaural auditory image model

    NASA Astrophysics Data System (ADS)

    Francis, Keith Ivan

    1997-09-01

    This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested at various levels of noise, and the results are reported in contrast to binaural fusion methods based on various forms of correlation, to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes the smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure the similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.

  8. 17 CFR 275.203(b)(3)-2 - Methods for counting clients in certain private funds.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges 3 2010-04-01 2010-04-01 false Methods for counting clients....203(b)(3)-2 Methods for counting clients in certain private funds. (a) For purposes of section 203(b)(3) of the Act (15 U.S.C. 80b-3(b)(3)), you must count as clients the shareholders, limited partners...

  9. Platelet counting using the Coulter electronic counter

    PubMed Central

    Eggleton, M. J.; Sharp, A. A.

    1963-01-01

    A method for counting platelets in dilutions of platelet-rich plasm using the Coulter electronic counter is described.1 The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed. PMID:16811002

  10. Spontaneous clearance of chronic hepatitis C is rare in HIV-infected patients after effective use of combination antiretroviral therapy

    PubMed Central

    Frias, Mario; Rivero-Juarez, Antonio; Tellez, Francisco; Perez-Perez, Monserrat; Camacho, Angela; Machuca, Isabel; Lorenzo-Moncada, Sandra; Lopez-Lopez, Pedro

    2017-01-01

    Objective To evaluate the rate of spontaneous resolution of chronic hepatitis C (CHC) infection in a cohort of HIV-infected patients. Methods A retrospective analysis of 509 HIV-infected patients with chronic HCV infection was performed at two reference hospitals in Andalusia. The main variable of the study was spontaneous clearance of CHC, defined as a negative HCV RNA result after at least two previous quantitative measurements of HCV RNA separated by a minimum of 12 months. Results Of 509 patients, 3 (0.59%; 95% CI: 0.15%-1.6%) experienced spontaneous clearance of CHC. After combination antiretroviral therapy (cART) initiation, two of three cases experienced an increased CD4+ count, coinciding with HCV viral clearance. All patients were IL28B CC carriers; two were co-infected with HCV genotype 3 (the HCV genotype of the remaining patient was not available). Conclusions Spontaneous clearance of CHC is a rare event in the context of HIV/HCV co-infected patients and may be associated with the effective use of cART and thus HIV suppression. PMID:28472191

  11. The prevalence, intensity and clinical manifestations of Onchocerca volvulus infection in Toro local government area of Bauchi State, Nigeria.

    PubMed

    Anosike, J C; Celestine; Onwuliri, O E; Onwuliri, V A

    2001-07-01

    Between January and October 1994, a study of the prevalence, intensity and clinical manifestations of onchocerciasis in nine communities of Toro local government area of Bauchi State, Nigeria was undertaken using the skin-snip method. Of the 1117 inhabitants examined, 188 (16.8%) were positive for microfilariae of Onchocerca volvulus. The prevalence of onchocerciasis was significantly higher (P < 0.05) among males than females, in subjects 21 years of age and above than in those in the first two decades of life, and in nomads, farmers, hunters and fishermen than in smiths and traders. Intensity of infection was light, not exceeding a geometric mean of 5.3 microfilariae per 2 mm skin snip. The preponderance of positive cases below 20 years of age presented no chronic signs. Conversely, persons above 20 years had higher microfilaria counts, which coincided with the period when most clinical signs manifest. Microfilarial rate and density in relation to age were closely associated (r = 0.75, P < 0.001). The need for a sustained mass distribution of Mectizan in these communities is highlighted.

  12. State preparation and detector effects in quantum measurements of rotation with circular polarization-entangled photons and photon counting

    NASA Astrophysics Data System (ADS)

    Cen, Longzhu; Zhang, Zijing; Zhang, Jiandong; Li, Shuo; Sun, Yifei; Yan, Linyu; Zhao, Yuan; Wang, Feng

    2017-11-01

    Circular polarization-entangled photons can be used to obtain an enhancement of the precision in a rotation measurement. In this paper, the method of entanglement transformation is used to produce NOON states in circular polarization from a readily generated linear polarization-entangled photon source. Detection of N-fold coincidences serves as the postselection, and N-fold superoscillating fringes are obtained simultaneously. A parity strategy and conditional probabilistic statistics contribute to a better fringe, saturating the angle sensitivity at the Heisenberg limit. The impact of imperfect state preparation and detection is discussed both separately and jointly. For the separated case, the influence of each system imperfection is pronounced. For the joint case, the feasibility region for surpassing the standard quantum limit is given. Our work pushes the state preparation of circular polarization-entangled photons to the same level as that in the case of linear polarization. It is also confirmed that entanglement can be transformed into different frames for specific applications, serving as a useful scheme for using entangled sources.
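
    The scaling behind these limits is standard (textbook relations, not derived from the paper's data): with N photons, the standard quantum limit (SQL) and the Heisenberg limit (HL) on the phase (here, rotation angle) uncertainty are

    \[
      \Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}, \qquad
      \Delta\phi_{\mathrm{HL}} = \frac{1}{N}
    \]

    A NOON state produces N-fold fringes of the form p(φ) = [1 + cos(Nφ)]/2, and the N-times-steeper slope of these fringes is what yields the 1/N scaling.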

  13. Ring magnet firing angle control

    DOEpatents

    Knott, M.J.; Lewis, L.G.; Rabe, H.H.

    1975-10-21

    A device is provided for controlling the firing angles of thyratrons (rectifiers) in a ring magnet power supply. A phase lock loop develops a smooth AC signal of frequency equal to, and in phase with, the frequency of the voltage wave developed by the main generator of the power supply. A counter that counts from zero to a particular number each cycle of the main generator voltage wave is synchronized with the smooth AC signal of the phase lock loop. Gates compare the number in the counter with predetermined desired firing angles for each thyratron, and upon coincidence the proper thyratron is fired at the predetermined firing angle.
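
    A minimal sketch of the comparator logic this abstract describes, assuming a counter resolution of 360 counts per generator cycle (the resolution and the firing-angle table below are illustrative values, not taken from the patent):

    COUNTS_PER_CYCLE = 360  # hypothetical counter range; wraps once per AC cycle

    # Hypothetical preset firing angles, in counter units, one per thyratron.
    firing_angles = {"thyratron_1": 30, "thyratron_2": 150, "thyratron_3": 270}

    def on_counter_tick(count, fire):
        """Compare the phase-locked counter value against each preset angle
        and fire the matching thyratron on coincidence."""
        for name, angle in firing_angles.items():
            if count == angle:
                fire(name)

    # One simulated generator cycle:
    for tick in range(COUNTS_PER_CYCLE):
        on_counter_tick(tick, fire=lambda name: print("fire", name))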

  14. Einstein-Podolsky-Rosen-Bohm experiment and Bell inequality violation using Type 2 parametric down conversion

    NASA Technical Reports Server (NTRS)

    Kiess, Thomas E.; Shih, Yan-Hua; Sergienko, A. V.; Alley, Carroll O.

    1994-01-01

    We report a new two-photon polarization correlation experiment for realizing the Einstein-Podolsky-Rosen-Bohm (EPRB) state and for testing Bell-type inequalities. We use the pair of orthogonally-polarized light quanta generated in Type 2 parametric down conversion. Using 1 nm interference filters in front of our detectors, we observe from the output of a 0.5 mm β-BaB2O4 (BBO) crystal the EPRB correlations in coincidence counts, and measure an associated Bell inequality violation of 22 standard deviations. The quantum state of the photon pair is a polarization analog of the spin-1/2 singlet state.
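
    For orientation, the most common Bell-type inequality in such two-photon experiments is the CHSH form (given here as a standard reference; this abstract does not state which variant was tested):

    \[
      S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2
    \]

    Local realism bounds |S| by 2, while the singlet-analog state allows values up to 2√2 at suitable analyzer settings; a violation of 22 standard deviations conventionally means the measured value exceeds the local bound by 22 times its statistical uncertainty.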

  15. Statistical approaches to the analysis of point count data: A little extra information can go a long way

    USGS Publications Warehouse

    Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating the probability a bird is detected during a count into two components: (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend that any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
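
    As a toy illustration of that decomposition (all numbers invented for the example; the paper presents no such calculation), a raw count can be adjusted by the product of the two component probabilities and by the area sampled:

    raw_count = 14          # birds detected during one point count
    p_vocalize = 0.6        # P(bird present vocalizes during the count) - assumed
    p_heard = 0.8           # P(vocalization is detected by the observer) - assumed
    area_sampled_ha = 3.14  # 100-m fixed-radius plot: pi * (100 m)^2 = 3.14 ha

    p_detect = p_vocalize * p_heard           # overall detection probability
    abundance = raw_count / p_detect          # estimated birds within the plot
    density = abundance / area_sampled_ha     # birds per hectare
    print(round(abundance, 1), round(density, 1))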

  16. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    ERIC Educational Resources Information Center

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)

  17. Inter-rater reliability of malaria parasite counts and comparison of methods

    PubMed Central

    2009-01-01

    Background The introduction of artemesinin-based treatment for falciparum malaria has led to a shift away from symptom-based diagnosis. Diagnosis may be achieved by using rapid non-microscopic diagnostic tests (RDTs), of which there are many available. Light microscopy, however, has a central role in parasite identification and quantification and remains the main method of parasite-based diagnosis in clinic and hospital settings and is necessary for monitoring the accuracy of RDTs. The World Health Organization has prepared a proficiency testing panel containing a range of malaria-positive blood samples of known parasitaemia, to be used for the assessment of commercially available malaria RDTs. Different blood film and counting methods may be used for this purpose, which raises questions regarding accuracy and reproducibility. A comparison was made of the established methods for parasitaemia estimation to determine which would give the least inter-rater and inter-method variation. Methods Experienced malaria microscopists counted asexual parasitaemia on different slides using three methods: the thin film method using the total erythrocyte count, the thick film method using the total white cell count, and the Earle and Perez method. All the slides were stained using Giemsa pH 7.2. Analysis of variance (ANOVA) models were used to find the inter-rater reliability for the different methods. The paired t-test was used to assess any systematic bias between the two methods, and a regression analysis was used to see if there was a changing bias with parasite count level. Results The thin blood film gave parasite counts around 30% higher than those obtained by the thick film and Earle and Perez methods, but exhibited a loss of sensitivity with low parasitaemia. The thick film and Earle and Perez methods showed little or no bias in counts between the two methods; however, estimated inter-rater reliability was slightly better for the thick film method. Conclusion The thin film method gave results closer to the true parasite count but is not feasible at a parasitaemia below 500 parasites per microlitre. The thick film method was both reproducible and practical for this project. The determination of malarial parasitaemia must be performed by skilled operators using standardized techniques. PMID:19939271
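
    The thick film calculation referred to above is conventionally done as follows (a common convention assuming 8,000 leukocytes per microlitre; this abstract does not state which assumed value the study used):

    def thick_film_parasitaemia(parasites_counted,
                                wbc_counted=200,
                                assumed_wbc_per_ul=8000):
        """Estimate asexual parasites per microlitre from a thick blood film."""
        return parasites_counted / wbc_counted * assumed_wbc_per_ul

    # 57 parasites seen against 200 white cells -> 2280 parasites/uL
    print(thick_film_parasitaemia(57))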

  18. Evaluation of petrifilm series 2000 as a possible rapid method to count coliforms in foods.

    PubMed

    Priego, R; Medina, L M; Jordano, R

    2000-08-01

    This research note is a preliminary comparison between the Petrifilm 2000 method and a widely used traditional enumeration method (on violet red bile agar); six batches of different foods (egg, frozen green beans, fresh sausage, a bakery product, raw minced meat, and raw milk) were studied. The reliability of the presumptive counts taken at 10, 12, and 14 h of incubation using this method was also verified by comparing the counts with the total confirmed counts at 24 h. In all the batches studied, results obtained with Petrifilm 2000 showed a close correlation to those obtained using violet red bile agar (r = 0.860) and greater sensitivity (93.33% of the samples displayed higher counts on Petrifilm 2000), showing that this method is a reliable and efficient alternative. The count taken at 10 h of incubation is of clear interest as an early indicator of results in microbiological food control, since it accounted for 90% of the final count in all the batches analyzed. Counts taken at 12 and 14 h bore a greater similarity to those taken at 24 h. The Petrifilm 2000 method provides results in less than 12 h of incubation, making it a possible rapid method that adapts well to the hazard analysis critical control point (HACCP) system by enabling microbiological quality control during processing.

  19. Performance evaluation of an Inveon PET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Constantinescu, Cristian C.; Mukherjee, Jogeshwar

    2009-05-01

    We evaluated the performance of an Inveon preclinical PET scanner (Siemens Medical Solutions), the latest MicroPET system. Spatial resolution was measured with a glass capillary tube (0.26 mm inside diameter, 0.29 mm wall thickness) filled with 18F solution. Transaxial and axial resolutions were measured with the source placed parallel and perpendicular to the axis of the scanner. The sensitivity of the scanner was measured with a 22Na point source, placed on the animal bed and positioned at different offsets from the center of the field of view (FOV), as well as at different energy and coincidence windows. The noise equivalent count rates (NECR) and the system scatter fraction were measured using rat-like (Φ = 60 mm, L = 150 mm) and mouse-like (Φ = 25 mm, L = 70 mm) cylindrical phantoms. Line sources filled with high activity 18F (>250 MBq) were inserted parallel to the axes of the phantoms (13.5 and 10 mm offset). For each phantom, list-mode data were collected over 24 h at 350-650 keV and 250-750 keV energy windows and 3.4 ns coincidence window. System scatter fraction was measured when the random event rates were below 1%. Performance phantoms consisting of cylinders with hot rod inserts filled with 18F were imaged. In addition, we performed imaging studies that show the suitability of the Inveon scanner for imaging small structures such as those in mice with a variety of tracers. The radial, tangential and axial resolutions at the center of the FOV were 1.46, 1.49, and 1.15 mm, respectively. At a radial offset of 2 cm, the FWHM values were 1.73, 2.20 and 1.47 mm, respectively. At a coincidence window of 3.4 ns, the sensitivity was 5.75% for EW = 350-650 keV and 7.4% for EW = 250-750 keV. For an energy window of 350-650 keV, the peak NECR was 538 kcps at 131.4 MBq for the rat-like phantom, and 1734 kcps at 147.4 MBq for the mouse-like phantom. The system scatter fraction values were 0.22 for the rat phantom and 0.06 for the mouse phantom. The Inveon system presents high image resolution, low scatter fraction values and improved sensitivity and count rate performance.
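
    Noise-equivalent count rate is conventionally computed as below (the standard formulation, with the caveat that the randoms term appears as 2R when randoms are estimated from a delayed window and as R for a noiseless randoms estimate; this abstract does not say which variant was applied):

    def necr(trues, scatter, randoms, k=2.0):
        """Noise-equivalent count rate, in the same units as the inputs (e.g., kcps)."""
        return trues**2 / (trues + scatter + k * randoms)

    def scatter_fraction(trues, scatter):
        """Scatter fraction S/(S+T), as reported for the two phantoms."""
        return scatter / (scatter + trues)

    # Illustrative rates only, not the paper's measurements:
    print(round(necr(600.0, 170.0, 120.0), 1))        # kcps
    print(round(scatter_fraction(600.0, 170.0), 2))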

  20. Comparison Of 252Cf Time Correlated Induced Fisssion With AmLi Induced Fission On Fresh MTR Research Reactor Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Jay Prakash

    The effective application of international safeguards to research reactors requires verification of spent fuel as well as fresh fuel. To accomplish this goal, various nondestructive and destructive assay techniques have been developed in the US and around the world. The Advanced Experimental Fuel Counter (AEFC) is a nondestructive assay (NDA) system developed at Los Alamos National Laboratory (LANL) combining both neutron and gamma measurement capabilities. Since spent fuel assemblies are stored in water, the system was designed to be watertight to facilitate underwater measurements by inspectors. The AEFC comprises six 3He detectors as well as a shielded and collimated ion chamber. The 3He detectors are used for active and passive neutron coincidence counting while the ion chamber is used for gross gamma counting. Active coincidence measurement data is used to measure residual fissile mass, whereas the passive coincidence measurement data along with passive gamma measurement can provide information about burnup, cooling time, and initial enrichment. In the past, most of the active interrogation systems along with the AEFC used an AmLi neutron interrogation source. Owing to the difficulty in obtaining an AmLi source, a 252Cf spontaneous fission (SF) source was used during a 2014 field trial in Uzbekistan as an alternative. In this study, experiments were performed to calibrate the AEFC instrument and compare use of the 252Cf spontaneous fission source and the AmLi (α,n) neutron emission source. The 252Cf source spontaneously emits bursts of time-correlated prompt fission neutrons that thermalize in the water and induce fission in the fuel assembly. The induced fission (IF) neutrons are also time correlated, resulting in more correlated neutron detections inside the 3He detector, which helps reduce the statistical errors in doubles when using the 252Cf interrogation source instead of the AmLi source. In this work, two MTR fuel assemblies varying both in size and number of fuel plates were measured using 252Cf and AmLi active interrogation sources. This paper analyzes time correlated induced fission (TCIF) from fresh MTR fuel assemblies due to 252Cf and AmLi active interrogation sources.

  1. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
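
    One minimal way to operationalize the precision and proportionality checks described above (a simplified sketch with invented counts; the paper's statistical treatment is more complete) is a zero-intercept fit of observed counts against dilution fraction, plus replicate coefficients of variation:

    import numpy as np

    dilution = np.repeat([1.0, 0.75, 0.5, 0.25], 3)            # target fractions
    counts = np.array([98, 103, 101, 77, 74, 79,               # invented replicate
                       52, 49, 50, 26, 24, 27], dtype=float)   # counts (x1e4 cells/mL)

    slope = np.sum(dilution * counts) / np.sum(dilution ** 2)  # zero-intercept LS fit
    residuals = counts - slope * dilution                      # proportionality check

    # Replicate precision at each dilution level:
    cvs = [counts[dilution == f].std(ddof=1) / counts[dilution == f].mean()
           for f in np.unique(dilution)]
    print(round(slope, 1), [round(cv, 3) for cv in cvs])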

  2. A new method of time difference measurement: The time difference method by dual phase coincidence points detection

    NASA Technical Reports Server (NTRS)

    Zhou, Wei

    1993-01-01

    In the highly accurate measurement of periodic signals, the greatest common factor frequency and its characteristics have special functions. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is applicable over a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands on the common oscillator are low. This method and instrument can be used widely.
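
    The role of the greatest common factor frequency can be illustrated numerically (a toy computation with made-up frequencies, not a model of the instrument's detection circuitry): for two oscillators at integer frequencies f1 and f2, phase coincidence points recur with the period of their greatest common factor frequency.

    from math import gcd

    f1, f2 = 10_000_000, 10_000_100   # two oscillators, Hz (made-up values)

    f_gcf = gcd(f1, f2)               # greatest common factor frequency, Hz
    t_coincidence = 1.0 / f_gcf       # interval between phase coincidence points
    print(f_gcf, t_coincidence)       # -> 100 Hz, 0.01 s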

  3. Importance of interpolation and coincidence errors in data fusion

    NASA Astrophysics Data System (ADS)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  4. Evaluation of double photon coincidence Compton imaging method with GEANT4 simulation

    NASA Astrophysics Data System (ADS)

    Yoshihara, Yuri; Shimazoe, Kenji; Mizumachi, Yuki; Takahashi, Hiroyuki

    2017-11-01

    Compton imaging has been used for various applications including astronomical observations, radioactive waste management, and biomedical imaging. The positions of radioisotopes are determined from the intersections of multiple cone traces accumulated over a large number of events, which reduces the signal-to-noise ratio (SNR) of the images. We have developed an advanced Compton imaging method, double photon coincidence Compton imaging, which localizes radioisotopes with high SNR by using the interaction information from Compton scattering of two gamma rays detected at the same time. The targeted radioisotopes of this imaging method are nuclides that emit several gamma rays simultaneously, such as 60Co, 134Cs, and 111In. Since their locations are determined from the intersections of two Compton cones, most of the cone traces disappear in the three-dimensional space, which enhances the SNR and angular resolution. In this paper, the double photon coincidence Compton imaging method and the single photon Compton imaging method are compared using GEANT4 Monte Carlo simulation.

  5. Comparison of plate counts, Petrifilm, dipslides, and adenosine triphosphate bioluminescence for monitoring bacteria in cooling-tower waters.

    PubMed

    Mueller, Sherry A; Anderson, James E; Kim, Byung R; Ball, James C

    2009-04-01

    Effective bacterial control in cooling-tower systems requires accurate and timely methods to count bacteria. Plate-count methods are difficult to implement on-site, because they are time- and labor-intensive and require sterile techniques. Several field-applicable methods (dipslides, Petrifilm, and adenosine triphosphate [ATP] bioluminescence) were compared with the plate count for two sample matrices: phosphate-buffered saline solution containing a pure culture of Pseudomonas fluorescens, and cooling-tower water containing an undefined mixed bacterial culture. For the pure culture, (1) counts determined on nutrient agar and plate-count agar (PCA) media and expressed as colony-forming units (CFU) per milliliter were equivalent to those on R2A medium (p = 1.0 and p = 1.0, respectively); (2) Petrifilm counts were not significantly different from R2A plate counts (p = 0.99); (3) the dipslide counts were up to 2 log units higher than R2A plate counts, but this discrepancy was not statistically significant (p = 0.06); and (4) a discernable correlation (r2 = 0.67) existed between ATP readings and plate counts. For cooling-tower water samples (n = 62), (1) bacterial counts using R2A medium were higher (but not significantly; p = 0.63) than nutrient agar and significantly higher than tryptone-glucose yeast extract (TGE; p = 0.03) and PCA (p < 0.001); (2) Petrifilm counts were significantly lower than nutrient agar or R2A (p = 0.02 and p < 0.001, respectively), but not statistically different from TGE, PCA, and dipslides (p = 0.55, p = 0.69, and p = 0.91, respectively); (3) the dipslide method yielded bacteria counts 1 to 3 log units lower than nutrient agar and R2A (p < 0.001), but was not significantly different from Petrifilm (p = 0.91), PCA (p = 1.00) or TGE (p = 0.07); (4) the differences between dipslides and the other methods became greater with a 6-day incubation time; and (5) the correlation between ATP readings and plate counts varied from system to system, was poor (r2 values ranged from < 0.01 to 0.47), and the ATP method was not sufficiently sensitive to measure counts below approximately 10^4 CFU/mL.

  6. Tight bounds for the Pearle-Braunstein-Caves chained inequality without the fair-coincidence assumption

    NASA Astrophysics Data System (ADS)

    Jogenfors, Jonathan; Larsson, Jan-Åke

    2017-08-01

    In any Bell test, loopholes can cause issues in the interpretation of the results, since an apparent violation of the inequality may not correspond to a violation of local realism. An important example is the coincidence-time loophole that arises when detector settings might influence the time when detection will occur. This effect can be observed in many experiments where measurement outcomes are to be compared between remote stations because the interpretation of an ostensible Bell violation strongly depends on the method used to decide coincidence. The coincidence-time loophole has previously been studied for the Clauser-Horne-Shimony-Holt and Clauser-Horne inequalities, but recent experiments have shown the need for a generalization. Here, we study the generalized "chained" inequality by Pearle, Braunstein, and Caves (PBC) with N ≥ 2 settings per observer. This inequality has applications in, for instance, quantum key distribution where it has been used to reestablish security. In this paper we give the minimum coincidence probability for the PBC inequality for all N ≥ 2 and show that this bound is tight for a violation free of the fair-coincidence assumption. Thus, if an experiment has a coincidence probability exceeding the critical value derived here, the coincidence-time loophole is eliminated.

  7. Validation of an automated colony counting system for group A Streptococcus.

    PubMed

    Frost, H R; Tsoi, S K; Baker, C A; Laho, D; Sanderson-Smith, M L; Steer, A C; Smeesters, P R

    2016-02-08

    The practice of counting bacterial colony forming units on agar plates has long been used as a method to estimate the concentration of live bacteria in culture. However, due to the laborious and potentially error prone nature of this measurement technique, an alternative method is desirable. Recent technologic advancements have facilitated the development of automated colony counting systems, which reduce errors introduced during the manual counting process and recording of information. An additional benefit is the significant reduction in time taken to analyse colony counting data. Whilst automated counting procedures have been validated for a number of microorganisms, the process has not been successful for all bacteria due to the requirement for a relatively high contrast between bacterial colonies and growth medium. The purpose of this study was to validate an automated counting system for use with group A Streptococcus (GAS). Twenty-one different GAS strains, representative of major emm-types, were selected for assessment. In order to introduce the required contrast for automated counting, 2,3,5-triphenyl-2H-tetrazolium chloride (TTC) dye was added to Todd-Hewitt broth with yeast extract (THY) agar. Growth on THY agar with TTC was compared with growth on blood agar and THY agar to ensure the dye was not detrimental to bacterial growth. Automated colony counts using a ProtoCOL 3 instrument were compared with manual counting to confirm accuracy over the stages of the growth cycle (latent, mid-log and stationary phases) and in a number of different assays. The average percentage differences between plating and counting methods were analysed using the Bland-Altman method. A percentage difference of ±10% was determined as the cut-off for a critical difference between plating and counting methods. All strains measured had an average difference of less than 10% when plated on THY agar with TTC. This consistency was also observed over all phases of the growth cycle and when plated in blood following bactericidal assays. Agreement between these methods suggests that the use of an automated colony counting technique for GAS will significantly reduce the time spent counting bacteria, enabling a more efficient and accurate measurement of bacterial concentration in culture.
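
    The Bland-Altman comparison used above works on pairwise percentage differences; a compact sketch with invented counts (not the study's data) looks like this:

    import numpy as np

    manual = np.array([112.0, 86.0, 240.0, 57.0, 198.0, 133.0])
    automated = np.array([108.0, 90.0, 232.0, 55.0, 205.0, 129.0])

    # Percentage difference relative to the pairwise mean:
    pct_diff = 100.0 * (automated - manual) / ((automated + manual) / 2.0)

    bias = pct_diff.mean()                 # average percentage difference
    loa = 1.96 * pct_diff.std(ddof=1)      # 95% limits-of-agreement half-width
    print(round(bias, 1), round(bias - loa, 1), round(bias + loa, 1))
    # The study's acceptance criterion: average difference within +/-10%.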

  8. 29 CFR 2590.701-5 - Evidence of creditable coverage.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... under paragraph (b)(2) of this section (relating to the alternative method of counting creditable... benefits described in § 2590.701-4(c) (relating to the alternative method of counting creditable coverage... using the alternative method of counting creditable coverage—(1) In general. After an individual...

  9. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
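
    In the spirit of the Bayesian method preferred above, an upper credible limit for a Poisson source with known mean background b can be computed numerically with a flat prior on the source strength s ≥ 0 (a sketch of the general idea; it does not reproduce the paper's tabulated values):

    import numpy as np
    from scipy.stats import poisson

    def bayesian_upper_limit(n_obs, b, cl=0.90, s_max=50.0, n_grid=20001):
        """Smallest s_up with posterior P(s <= s_up | n_obs, b) >= cl."""
        s = np.linspace(0.0, s_max, n_grid)
        post = poisson.pmf(n_obs, s + b)      # likelihood x flat prior
        cdf = np.cumsum(post)
        cdf /= cdf[-1]                        # normalize over the grid
        return s[np.searchsorted(cdf, cl)]

    # e.g., 3 counts observed with 1.2 expected background counts:
    print(round(bayesian_upper_limit(3, 1.2), 2))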

  10. [A case of radiation pneumonitis with eosinophilia in bronchoalveolar lavage fluid].

    PubMed

    Kawai, Seiko; Baba, Kenji; Tanaka, Hiroyuki; Takahashi, Daisuke; Yagi, Takeo; Hattori, Tsutomu; Etsuro, Yamaguchi

    2008-01-01

    A 78-year-old man was admitted to our hospital for irradiation therapy of non-resectable primary lung squamous cell carcinoma of the right middle lobe (T3N2M0). Linac irradiation through two opposing gates (2 Gy per day, 60 Gy in total) was performed to the affected area including the metastatic right hilar and mediastinal lymphadenopathy. One week after completing the irradiation therapy, fever developed with infiltrates in the area from the right middle lobe to the right lower lobe, which did not necessarily coincide with the irradiated area. Antibiotic therapies were not effective. Both the serum LDH level and the eosinophil count in the peripheral blood increased. Bronchoalveolar lavage was performed at the right B8, and differential cell counts of the lavage fluid were: macrophages, 17%; lymphocytes, 60%; neutrophils, 5%; and eosinophils, 18%. No significant organisms were obtained by culture of the lavage fluid. The %VC and DLCO/VA became lower than before the irradiation therapy. Thus, the patient was given a diagnosis of radiation pneumonitis. Treatment with 40 mg/day oral prednisolone was commenced with a stepwise dose reduction (5 mg every two weeks) until reaching the maintenance dose of 15 mg/day. The serum LDH level and blood eosinophil count recovered promptly to the normal range. The pulmonary infiltrates and the lung functions substantially improved. There have been few reports of radiation pneumonitis in which eosinophil counts increased in peripheral blood and bronchoalveolar lavage fluid after irradiation therapy. In the present case report, the possible mechanisms of the irradiation-induced eosinophilia were also reviewed.

  11. Development of a homogeneous pulse shape discriminating flow-cell radiation detection system

    NASA Astrophysics Data System (ADS)

    Hastie, K. H.; DeVol, T. A.; Fjeld, R. A.

    1999-02-01

    A homogeneous flow-cell radiation detection system which utilizes coincidence counting and pulse shape discrimination circuitry was assembled and tested with five commercially available liquid scintillation cocktails. Two of the cocktails, Ultima Flo (Packard) and Mono Flow 5 (National Diagnostics), have low viscosities and are intended for flow applications; and three of the cocktails, Optiphase HiSafe 3 (Wallac), Ultima Gold AB (Packard), and Ready Safe (Beckman), have higher viscosities and are intended for static applications. The low viscosity cocktails were modified with 1-methylnaphthalene to increase their capability for alpha/beta pulse shape discrimination. The sample loading and pulse shape discriminator setting were optimized to give the lowest minimum detectable concentration for alpha radiation in a 30 s count time. Of the higher viscosity cocktails, Optiphase HiSafe 3 had the lowest minimum detectable activities for alpha and beta radiation, 0.2 and 0.4 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 70% and the corresponding alpha/beta spillover was 5.5%. Of the low viscosity cocktails, Mono Flow 5 modified with 2.5% (by volume) 1-methylnaphthalene resulted in the lowest minimum detectable activities for alpha and beta radiation; 0.3 and 0.5 Bq/ml for 233U and 90Sr/90Y, respectively, for a 30 s count time. The sample loading was 50%, and the corresponding alpha/beta spillover was 16.6%. HiSafe 3 at a 10% sample loading was used to evaluate the system under simulated flow conditions.
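
    Minimum detectable activities like those quoted above are commonly computed from the Currie formulation (shown here as the generic textbook expression; this abstract does not spell out the exact formula the authors used):

    from math import sqrt

    def currie_mda(background_counts, efficiency, count_time_s, volume_ml):
        """Minimum detectable activity (Bq/mL) for a paired-blank measurement."""
        detectable_counts = 2.71 + 4.65 * sqrt(background_counts)
        return detectable_counts / (efficiency * count_time_s * volume_ml)

    # e.g., 40 background counts in a 30 s count, 90% efficiency, 1 mL cell:
    print(round(currie_mda(40.0, 0.90, 30.0, 1.0), 2))   # -> 1.19 Bq/mL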

  12. Perfect count: a novel approach for the single platform enumeration of absolute CD4+ T-lymphocytes.

    PubMed

    Storie, Ian; Sawle, Alex; Goodfellow, Karen; Whitby, Liam; Granger, Vivian; Ward, Rosalie Y; Peel, Janet; Smart, Theresa; Reilly, John T; Barnett, David

    2004-01-01

    The derivation of reliable CD4(+) T lymphocyte counts is vital for the monitoring of disease progression and therapeutic effectiveness in HIV(+) individuals. Flow cytometry has emerged as the method of choice for CD4(+) T lymphocyte enumeration, with single-platform technology, coupled with reference counting beads, fast becoming the "gold standard." However, although single-platform, bead-based sample acquisition requires the ratio of beads to cells to remain unchanged, until recently there was no available method to monitor this. Perfect Count beads have been developed to address this issue and incorporate two bead populations, with different densities, to allow the detection of inadequate mixing. Comparison of the relative proportions of both beads with the manufacturer's defined limits enables an internal QC check during sample acquisition. In this study, we have compared CD4(+) T lymphocyte counts, obtained from 104 HIV(+) patients, using TruCount beads with MultiSet software (defined as the predicate method) and the new Perfect Count beads, incorporating an in-house sequential gating strategy. We have demonstrated an excellent degree of correlation between the predicate method and the Perfect Count system (r2 = 0.9955; Bland-Altman bias +27 CD4(+) T lymphocytes/microl). The Perfect Count system is a robust method for performing single platform absolute counts and has the added advantage of having internal QC checks. Such an approach enables the operator to identify potential problems during sample preparation, acquisition and analysis. Copyright 2003 Wiley-Liss, Inc.
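
    Single-platform bead-based enumeration rests on the ratio calculation below (the generic calculation; the bead concentration shown is an illustrative value, not the Perfect Count kit's actual specification):

    def absolute_count(cell_events, bead_events, beads_per_ul):
        """Absolute cell concentration (cells/uL) from single-platform data."""
        return cell_events / bead_events * beads_per_ul

    # 5200 CD4+ events against 9800 bead events with a nominal 1000 beads/uL:
    print(round(absolute_count(5200, 9800, 1000.0)))     # -> 531 cells/uL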

  13. Apparatus and method for quantitative assay of generic transuranic wastes from nuclear reactors

    DOEpatents

    Caldwell, J.T.; Kunz, W.E.; Atencio, J.D.

    1982-03-31

    A combination of passive and active neutron measurements which yields quantitative information about the isotopic composition of transuranic wastes from nuclear power or weapons material manufacture reactors is described. From the measurement of prompt and delayed neutron emission and the incidence of two coincidentally emitted neutrons from induced fission of fissile material in the sample, one can quantify 233U, 235U and 239Pu isotopes in waste samples. Passive coincidence counting, including neutron multiplicity measurement and determination of the overall passive neutron flux, additionally enables the separate quantitative evaluation of spontaneous fission isotopes such as 240Pu, 244Cm and 252Cf, and the spontaneous alpha particle emitter 241Am. These seven isotopes are the most important constituents of wastes from nuclear power reactors, and once the mass of each isotope present is determined by the apparatus and method of the instant invention, the overall alpha particle activity can be determined to better than 1 nCi/g from known radioactivity data. Therefore, in addition to the quantitative analysis of the waste sample useful for later reclamation purposes, the alpha particle activity can be determined to decide whether permanent low-level burial is appropriate for the waste sample.

  14. Apparatus and method for quantitative assay of generic transuranic wastes from nuclear reactors

    DOEpatents

    Caldwell, John T.; Kunz, Walter E.; Atencio, James D.

    1984-01-01

    A combination of passive and active neutron measurements which yields quantitative information about the isotopic composition of transuranic wastes from nuclear power or weapons material manufacture reactors is described. From the measurement of prompt and delayed neutron emission and the incidence of two coincidentally emitted neutrons from induced fission of fissile material in the sample, one can quantify 233U, 235U and 239Pu isotopes in waste samples. Passive coincidence counting, including neutron multiplicity measurement and determination of the overall passive neutron flux, additionally enables the separate quantitative evaluation of spontaneous fission isotopes such as 240Pu, 244Cm and 252Cf, and the spontaneous alpha particle emitter 241Am. These seven isotopes are the most important constituents of wastes from nuclear power reactors, and once the mass of each isotope present is determined by the apparatus and method of the instant invention, the overall alpha particle activity can be determined to better than 1 nCi/g from known radioactivity data. Therefore, in addition to the quantitative analysis of the waste sample useful for later reclamation purposes, the alpha particle activity can be determined to decide whether "permanent" low-level burial is appropriate for the waste sample.

  15. Coincidence probability as a measure of the average phase-space density at freeze-out

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Czyz, W.; Zalewski, K.

    2006-02-01

    It is pointed out that the average semi-inclusive particle phase-space density at freeze-out can be determined from the coincidence probability of the events observed in multiparticle production. The method of measurement is described and its accuracy examined.

  16. Coincident site lattice-matched growth of semiconductors on substrates using compliant buffer layers

    DOEpatents

    Norman, Andrew

    2016-08-23

    A method of producing semiconductor materials, and devices that incorporate those materials, is provided. In particular, a method is provided of producing a semiconductor material, such as a III-V semiconductor, on a silicon substrate using a compliant buffer layer, and devices such as photovoltaic cells that incorporate the semiconductor materials. The compliant buffer material and semiconductor materials may be deposited using coincident site lattice-matching epitaxy, resulting in a close degree of lattice matching between the substrate material and deposited material for a wide variety of material compositions. The coincident site lattice matching epitaxial process, as well as the use of a ductile buffer material, reduce the internal stresses and associated crystal defects within the deposited semiconductor materials fabricated using the disclosed method. As a result, the semiconductor devices provided herein possess enhanced performance characteristics due to a relatively low density of crystal defects.

  17. 21 CFR 1210.16 - Method of bacterial count.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... FEDERAL IMPORT MILK ACT Inspection and Testing § 1210.16 Method of bacterial count. The bacterial count of milk and cream refers to the number of viable bacteria as determined by the standard plate method of...

  18. 21 CFR 1210.16 - Method of bacterial count.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... FEDERAL IMPORT MILK ACT Inspection and Testing § 1210.16 Method of bacterial count. The bacterial count of milk and cream refers to the number of viable bacteria as determined by the standard plate method of...

  19. 45 CFR 146.115 - Certification and disclosure of previous coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... to the alternative method of counting creditable coverage). Moreover, if the individual's coverage... benefits described in § 146.113(c) (relating to the alternative method of counting creditable coverage... of coverage to a plan or issuer using the alternative method of counting creditable coverage—(1) In...

  20. 26 CFR 54.9801-5 - Evidence of creditable coverage.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... under paragraph (b)(2) of this section (relating to the alternative method of counting creditable... in § 54.9801-4(c) (relating to the alternative method of counting creditable coverage). However, if... Act. (b) Disclosure of coverage to a plan or issuer using the alternative method of counting...

  1. Comparison of culture and qPCR methods in detection of mycobacteria from drinking waters.

    PubMed

    Räsänen, Noora H J; Rintala, Helena; Miettinen, Ilkka T; Torvinen, Eila

    2013-04-01

    Environmental mycobacteria are common bacteria in man-made water systems and may cause infections and hypersensitivity pneumonitis via exposure to water. We compared a generally used cultivation method and a quantitative polymerase chain reaction (qPCR) method to detect mycobacteria in 3 types of drinking waters: surface water, ozone-treated surface water, and groundwater. There was a correlation between the numbers of mycobacteria obtained by cultivation and qPCR methods, but the ratio of the counts obtained by the 2 methods varied among the types of water. The qPCR counts in the drinking waters produced from surface or groundwater were 5 to 34 times higher than culturable counts. In ozone-treated surface waters, both methods gave similar counts. The ozone-treated drinking waters had the highest concentration of assimilable organic carbon, which may explain the good culturability. In warm tap waters, qPCR gave 43 times higher counts than cultivation, but both qPCR counts and culturable counts were lower than those in the drinking waters collected from the same sites. The TaqMan qPCR method is a rapid and sensitive tool for total quantitation of mycobacteria in different types of clean waters. The raw water source and treatments affect both culturability and total numbers of mycobacteria in drinking waters.

  2. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations.

    PubMed

    Takeshita, Kazutaka; Ikeda, Takashi; Takahashi, Hiroshi; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko; Kaji, Koichi

    2016-01-01

    Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk of observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were lost from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. The increased overestimation in drive count estimates after the winter mass mortality event may be due to double counting caused by increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer deer abundance needs to be reconsidered.

  4. Counting pollen grains using readily available, free image processing and analysis software.

    PubMed

    Costa, Clayton M; Yang, Suann

    2009-10-01

    Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
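
    A rough Python analogue of the workflow described above (threshold, de-noise, label, count), using scikit-image rather than ImageJ; the threshold choice and the minimum-size filter are illustrative assumptions, not the paper's parameters:

    import numpy as np
    from skimage import filters, measure, morphology

    def count_grains(gray_image, min_area=30):
        """Count bright grain-like objects in a grayscale image."""
        mask = gray_image > filters.threshold_otsu(gray_image)            # binarize
        mask = morphology.remove_small_objects(mask, min_size=min_area)   # de-noise
        labels = measure.label(mask)                                      # connected components
        return int(labels.max())

    # Synthetic check: three bright 10x10 "grains" on a dark background.
    img = np.zeros((200, 200))
    for r, c in [(40, 40), (100, 120), (160, 60)]:
        img[r:r + 10, c:c + 10] = 1.0
    print(count_grains(img))   # -> 3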

  5. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method

    PubMed Central

    Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-01-01

    Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478
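
    Tile (box) counting estimates the fractal dimension as the slope of log N versus log(1/s), where N is the number of tiles of side s that contain structure; a generic sketch follows (it does not reproduce the paper's preprocessing of the radiographs, and the random test pattern is a stand-in for a binarized trabecular image):

    import numpy as np

    def tile_count_dimension(binary, tile_sizes=(2, 4, 8, 16, 32)):
        """Slope of log N(occupied tiles) vs log(1/tile size) for a binary image."""
        counts = []
        for t in tile_sizes:
            h = (binary.shape[0] // t) * t
            w = (binary.shape[1] // t) * t
            tiles = binary[:h, :w].reshape(h // t, t, w // t, t)
            counts.append(int(tiles.any(axis=(1, 3)).sum()))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(tile_sizes)),
                              np.log(counts), 1)
        return slope

    rng = np.random.default_rng(0)
    img = rng.random((256, 256)) < 0.02    # sparse random stand-in pattern
    print(round(tile_count_dimension(img), 2))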

  6. Can reliable sage-grouse lek counts be obtained using aerial infrared technology

    USGS Publications Warehouse

    Gillette, Gifford L.; Coates, Peter S.; Petersen, Steven; Romero, John P.

    2013-01-01

    More effective methods for counting greater sage-grouse (Centrocercus urophasianus) are needed to better assess population trends through enumeration or location of new leks. We describe an aerial infrared technique for conducting sage-grouse lek counts and compare this method with conventional ground-based lek count methods. During the breeding period in 2010 and 2011, we surveyed leks from fixed-winged aircraft using cryogenically cooled mid-wave infrared cameras and surveyed the same leks on the same day from the ground following a standard lek count protocol. We did not detect significant differences in lek counts between surveying techniques. These findings suggest that using a cryogenically cooled mid-wave infrared camera from an aerial platform to conduct lek surveys is an effective alternative technique to conventional ground-based methods, but further research is needed. We discuss multiple advantages to aerial infrared surveys, including counting in remote areas, representing greater spatial variation, and increasing the number of counted leks per season. Aerial infrared lek counts may be a valuable wildlife management tool that releases time and resources for other conservation efforts. Opportunities exist for wildlife professionals to refine and apply aerial infrared techniques to wildlife monitoring programs because of the increasing reliability and affordability of this technology.

  7. Comparing census methods for the endangered Kirtland's Warbler

    Treesearch

    John R. Probst; Deahn M. Donner; Mike Worland; Jerry Weinrich; Phillip Huber; Kenneth R. Ennis

    2005-01-01

    We compared transect counts used for the annual official count of male Kirtland's Warblers (Dendroica kirtlandii) to an observation-based mapping method of individually sighted males in 155 stands over 10 years. The annual census count almost tripled from 1990 to 1999. The transect and observation-based mapping method showed the same increasing trend...

  8. EVALUATION OF THE USE OF DIFFERENT ANTIBIOTICS IN THE DIRECT VIABLE COUNT METHOD TO DETECT FECAL ENTEROCOCCI

    EPA Science Inventory

    The detection of fecal pollution is performed via culturing methods in spite of the fact that culturable counts can severely underestimate the densities of fecal microorganisms. One approach that has been used to enumerate bacteria is the direct viable count method (DVC). The ob...

  9. NATALIE: A 32 detector integrated acquisition system to characterize laser produced energetic particles with nuclear techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarisien, M.; Plaisir, C.; Gobet, F.

    2011-02-15

    We present a stand-alone system to characterize the high-energy particles emitted in the interaction of ultrahigh intensity laser pulses with matter. According to the laser and target characteristics, electrons or protons are produced with energies higher than a few mega electron volts. Selected material samples can, therefore, be activated via nuclear reactions. A multidetector, named NATALIE, has been developed to count the β+ activity of these irradiated samples. The coincidence technique used, designed in an integrated system, results in very low background in the data, which is required for low activity measurements. It therefore allows good precision on the nuclear activation yields of the produced radionuclides. The system allows high counting rates and online correction of the dead time. It also provides, online, a quick control of the experiment. Geant4 simulations are used at different steps of the data analysis to deduce, from the measured activities, the energy and angular distributions of the laser-induced particle beams. Two applications are presented to illustrate the characterization of electrons and protons.
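
    The abstract mentions high counting rates with online dead-time correction but does not specify the model. As a purely illustrative aside, the classic non-paralyzable correction often used for such rates looks like the following sketch; the model choice and the rates are our assumptions, not details of the NATALIE system.

    ```python
    # Hedged sketch: non-paralyzable dead-time correction (an assumed model,
    # not NATALIE's actual firmware logic).

    def true_rate(measured_rate_cps: float, dead_time_s: float) -> float:
        """Correct a measured count rate for a non-paralyzable dead time tau:
        n = m / (1 - m * tau)."""
        loss_fraction = measured_rate_cps * dead_time_s
        if loss_fraction >= 1.0:
            raise ValueError("Measured rate saturates the assumed dead-time model")
        return measured_rate_cps / (1.0 - loss_fraction)

    # Example: 50 kcps measured with a 2 microsecond dead time.
    print(true_rate(5e4, 2e-6))  # ~55.6 kcps
    ```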

  10. Effect of a delta tab on fine scale mixing in a turbulent two-stream shear layer

    NASA Technical Reports Server (NTRS)

    Foss, J. K.; Zaman, K. B. M. Q.

    1996-01-01

    The fine scale mixing produced by a delta tab in a shear layer has been studied experimentally. The tab was placed at the trailing edge of a splitter plate which produced a turbulent two-stream mixing layer. The tab apex tilted downstream and into the high speed stream. Hot-wire measurements in the 3-D space behind the tab detailed the three velocity components as well as the small scale population distributions. These small scale eddies, which represent the peak in the dissipation spectrum, were identified and counted using the Peak-Valley-Counting technique. It was found that the small scale populations were greater in the shear region behind the tab, with the greatest increase occurring where the shear layer underwent a sharp turn. This location was near, but not coincident with, the core of the streamwise vortex, and away from the region exhibiting maximum turbulence intensity. Moreover, the tab increased the most probable frequency and strain rate of the small scales. It made the small scales smaller and more energetic.

  11. Tips and tricks for flow cytometry-based analysis and counting of microparticles.

    PubMed

    Poncelet, Philippe; Robert, Stéphane; Bailly, Nicolas; Garnache-Ottou, Francine; Bouriche, Tarik; Devalet, Bérangère; Segatchian, Jerard H; Saas, Philippe; Mullier, François

    2015-10-01

    Submicron-sized extra-cellular vesicles generated by budding from the external cell membranes, microparticles (MPs) are important actors in transfusion as well as in other medical specialties. After briefly positioning their role in the characterization of labile blood products, this technically oriented chapter aims to review practical points that need to be considered when trying to use flow cytometry for the analysis, characterization and absolute counting of MP subsets. Subjects of active discussions relative to instrumentation will include the choice of the trigger parameter, possible standardization approaches requiring instrument quality-control, origin and control of non-specific background and of coincidence artifacts, choice of the type of electronic signals, optimal sheath fluid and sample speed. Questions related to reagents will cover target antigens and receptors, multi-color reagents, negative controls, enumeration of MPs and limiting artifacts due to unexpected (micro-) coagulation of plasma samples. Newly detected problems are generating innovative solutions and flow cytometry will continue to remain the technology of choice for the analysis of MPs, in the domain of transfusion as well as in many diverse specialties. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. A Reconfigurable Instrument System for Nuclear and Particle Physics Experiments

    NASA Astrophysics Data System (ADS)

    Sang, Ziru; Li, Feng; Jiang, Xiao; Jin, Ge

    2014-04-01

    We developed a reconfigurable nuclear instrument system (RNIS) that could satisfy the requirements of diverse nuclear and particle physics experiments, as well as inertial confinement fusion diagnostics. Benefiting from the reconfigurable hardware structure and digital pulse processing technology, RNIS shakes off the restrictions of cumbersome crates and miscellaneous modules. It retains all the advantages of conventional nuclear instruments and is more flexible and portable. RNIS is primarily composed of a field programmable hardware board and relevant PC software. Separate analog channels are designed to provide different functions, such as amplifiers, ADC, fast discriminators and Schmitt discriminators for diverse experimental purposes. The high-performance field programmable gate array can perform high-precision time interval measurement, histogram accumulation, counting, and coincidence/anticoincidence measurement. To illustrate the prospects of RNIS, a series of applications to the experiments are described in this paper. The first, for which RNIS was originally developed, involves nuclear energy spectrum measurement with a scintillation detector and photomultiplier. The second experiment applies RNIS to a G-M tube counting experiment, and in the third, it is applied to a quantum communication experiment through reconfiguration.
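
    As an illustration of the coincidence logic such an FPGA implements in hardware, here is a hedged software analogy that matches two sorted timestamp streams within a coincidence window; the window width and the synthetic data are invented for the example and are not RNIS firmware.

    ```python
    # Hedged sketch: offline coincidence counting between two timestamp streams,
    # a software analogy to FPGA coincidence logic (not the RNIS implementation).
    import numpy as np

    def count_coincidences(t_a: np.ndarray, t_b: np.ndarray, window_s: float) -> int:
        """Count events in channel A that have at least one channel-B event
        within +/- window_s. Both arrays must be sorted."""
        idx = np.searchsorted(t_b, t_a)
        left = np.abs(t_b[np.clip(idx - 1, 0, len(t_b) - 1)] - t_a) <= window_s
        right = np.abs(t_b[np.clip(idx, 0, len(t_b) - 1)] - t_a) <= window_s
        return int(np.count_nonzero(left | right))

    rng = np.random.default_rng(0)
    a = np.sort(rng.uniform(0.0, 1.0, 1000))          # channel A timestamps (s)
    b = np.sort(a[:500] + rng.normal(0, 5e-9, 500))   # correlated channel B events
    print(count_coincidences(a, b, 20e-9))
    ```

    Anticoincidence counting is the complement: the A events for which no B event falls inside the window.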

  13. The influence of an arduous military training program on immune function and upper respiratory tract infection incidence.

    PubMed

    Whitham, Martin; Laing, Stewart J; Dorrington, Melanie; Walters, Robert; Dunklin, Steve; Bland, Duncan; Bilzon, James L J; Walsh, Neil P

    2006-08-01

    The effects of the first 19 weeks of U.K. Parachute Regiment (PARA) training on upper respiratory tract infection (URTI) incidence and immune function (circulating leukocyte counts, lymphocyte subsets, lipopolysaccharide-stimulated neutrophil degranulation, and salivary immunoglobulin A concentrations) were investigated for 14 PARA recruits and 12 control subjects. No significant differences were reported between groups for the number or duration of URTIs, lymphocyte subsets, or salivary immunoglobulin A concentrations during training. URTI incidence was greater in the PARA group at weeks 2 and 3 (p < 0.05), coinciding with a decrease in circulating leukocyte and lymphocyte counts (p < 0.05). Neutrophil degranulation was similar in the PARA and control groups at weeks 0 and 19. Decreases in saliva flow rate occurred in the PARA group at week 15 and weeks 18 to 20 (p < 0.05). These results show a limited effect of PARA training on URTI incidence and immune function. The progressive decrease in saliva flow rate during PARA training may indicate an ensuing state of hypohydration.

  14. The impact of BMI on sperm parameters and the metabolite changes of seminal plasma concomitantly.

    PubMed

    Guo, Dan; Wu, Wei; Tang, Qiuqin; Qiao, Shanlei; Chen, Yiqiu; Chen, Minjian; Teng, Mengying; Lu, Chuncheng; Ding, Hongjuan; Xia, Yankai; Hu, Lingqing; Chen, Daozhen; Sha, Jiahao; Wang, Xinru

    2017-07-25

    The prevalence of male infertility has increased rapidly worldwide, coinciding with the epidemic of obesity. However, the impact of weight abnormalities on sperm quality is still contested. To assess the correlation between BMI and sperm parameters, we searched relevant articles in PubMed, Embase, Web of Science, and the Wanfang database published until June 2015, without language restriction. In addition, we recruited participants attending a fertility clinic as well as some members of the general population for this report. We performed a systematic review and meta-analysis of BMI and sperm parameters comprising total sperm count, concentration, semen volume and sperm motility (overall and progressive). Metabolomic analysis of seminal plasma was performed to explore the mechanism from a new perspective. This study found that standardized weighted mean differences (SMD) in sperm parameters (total sperm count, sperm concentration, and semen volume) of abnormal weight groups decreased to differing degrees compared with the normal weight group. Dose-response analysis found that the SMD of sperm count, sperm concentration and semen volume fell by 2.4%, 1.3% and 2.0%, respectively, compared with normal weight for every 5-unit increase in BMI. Metabolomic analysis of seminal plasma showed that spermidine and spermine are likely to play a vital role in the spermatogenesis process. This systematic review with meta-analysis has confirmed a relationship between BMI and sperm quality, suggesting obesity may be a detrimental factor in male infertility.

  15. Comparison of the Performance Evaluation of the MicroPET R4 Scanner According to NEMA Standards NU 4-2008 and NU 2-2001

    NASA Astrophysics Data System (ADS)

    Popota, Fotini D.; Aguiar, Pablo; Herance, J. Raúl; Pareto, Deborah; Rojas, Santiago; Ros, Domènec; Pavia, Javier; Gispert, Juan Domingo

    2012-10-01

    The purpose of this work was to evaluate the performance of the microPET R4 system for rodents according to the NU 4-2008 standards of the National Electrical Manufacturers Association (NEMA) for small-animal positron emission tomography (PET) systems and to compare it against its previous evaluation according to the adapted clinical NEMA NU 2-2001 standards. The performance parameters evaluated here were spatial resolution, sensitivity, scatter fraction, counting rates for rat- and mouse-sized phantoms, and image quality. Spatial resolution and sensitivity were measured with a 22Na point source, while scatter fraction and count rate performance were determined using mouse- and rat-sized phantoms with an 18F line source. The image quality of the system was assessed using the NEMA image quality phantom. Assessment of attenuation correction was performed using γ-ray transmission and computed tomography (CT)-based attenuation correction methods. At the center of the field of view, a spatial resolution of 2.12 mm at full width at half maximum (FWHM) (radial), 2.66 mm FWHM (tangential), and 2.23 mm FWHM (axial) was measured. The absolute sensitivity was found to be 1.9% at the center of the scanner. The scatter fraction for the mouse-sized phantom was 8.5%, and the peak count rate was 311 kcps at 153.5 MBq. The rat scatter fraction was 22%, and the peak count rate was 117 kcps at 123.24 MBq. Image uniformity showed better results with 2-D filtered back projection (FBP), while an overestimation of the recovery coefficients was observed when using the 2-D and 3-D OSEM MAP reconstruction algorithms. All measurements were made for an energy window of 350-650 keV and a coincidence window of 6 ns. Histogramming and reconstruction parameters were used according to the manufacturer's recommendations. The microPET R4 scanner was fully characterized according to the NEMA NU 4-2008 standards. Our results diverge considerably from those previously reported with an adapted version of the NEMA NU 2-2001 clinical standards. These discrepancies can be attributed to the modifications in NEMA methodology, thereby highlighting the relevance of specific small-animal standards for the performance evaluation of PET systems.

  16. A new 4π(LS)-γ coincidence counter at NCBJ RC POLATOM with TDCR detector in the beta channel.

    PubMed

    Ziemek, T; Jęczmieniowski, A; Cacko, D; Broda, R; Lech, E

    2016-03-01

    A new 4π(LS)-γ coincidence system (TDCRG) was built at the NCBJ RC POLATOM. The counter consists of a TDCR detector in the beta channel and scintillation detector with NaI(Tl) crystal in the gamma channel. The system is equipped with a digital board with FPGA, which records and analyses coincidences in the TDCR detector and coincidences between the beta and gamma channels. The characteristics of the system and a scheme of the FPGA implementation with behavioral simulation are given. The TDCRG counter was validated by activity measurements on (14)C and (60)Co solutions standardized in RC POLATOM using previously validated methods. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Mapping of bird distributions from point count surveys

    USGS Publications Warehouse

    Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.

  18. Rapid enumeration of viable bacteria by image analysis

    NASA Technical Reports Server (NTRS)

    Singh, A.; Pyle, B. H.; McFeters, G. A.

    1989-01-01

    A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
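
    A minimal sketch of the automated counting step described above, assuming a thresholded epifluorescence image in which "enlarged" viable cells are separated by an area cutoff; the threshold values and the scipy-based implementation are illustrative choices, not the study's image-analysis software.

    ```python
    # Hedged sketch: threshold a fluorescence image, label connected objects,
    # and count those above an assumed "enlarged cell" area cutoff.
    import numpy as np
    from scipy import ndimage

    def count_enlarged_cells(image: np.ndarray, intensity_thresh: float,
                             min_area_px: int) -> int:
        mask = image > intensity_thresh            # segment fluorescent objects
        labels, n = ndimage.label(mask)            # connected-component labeling
        areas = ndimage.sum(mask, labels, index=range(1, n + 1))
        return int(np.count_nonzero(np.asarray(areas) >= min_area_px))

    # Synthetic demo image with two bright blobs of different sizes:
    img = np.zeros((64, 64))
    img[10:14, 10:14] = 1.0   # enlarged object (16 px)
    img[30:32, 30:32] = 1.0   # small object (4 px)
    print(count_enlarged_cells(img, 0.5, 10))  # -> 1
    ```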

  19. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P. W.

    2016-01-01

    Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts. PMID:27529701

  20. An automated LS(β)- NaI(Tl)(γ) coincidence system as absolute standard for radioactivity measurements.

    PubMed

    Joseph, Leena; Das, A P; Ravindra, Anuradha; Kulkarni, D B; Kulkarni, M S

    2018-07-01

    The 4πβ-γ coincidence method is a powerful and widely used method to determine the absolute activity concentration of radioactive solutions. A new automated liquid scintillator based coincidence system has been designed, developed, tested and established as an absolute standard for radioactivity measurements. The automation is achieved using a PLC (programmable logic controller) and SCADA (supervisory control and data acquisition). A radioactive solution of 60Co was standardized to compare the performance of the automated system with the proportional counter based absolute standard maintained in the laboratory. The activity concentrations determined using these two systems were in very good agreement; the new automated system can be used for absolute measurement of the activity concentration of radioactive solutions. Copyright © 2018. Published by Elsevier Ltd.
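
    For readers unfamiliar with why such systems are "absolute", the textbook 4πβ-γ relation they exploit is N0 ≈ NβNγ/Nc, in which the detection efficiencies cancel. A minimal sketch with invented count rates (not data from this paper):

    ```python
    # Hedged sketch of the textbook 4π(beta)-gamma coincidence relation,
    # with illustrative numbers.

    def coincidence_activity(n_beta: float, n_gamma: float, n_coinc: float) -> float:
        """Absolute disintegration rate from background-corrected rates:
        N_beta = N0*eff_b, N_gamma = N0*eff_g, N_c = N0*eff_b*eff_g,
        so the efficiencies cancel in N_beta*N_gamma/N_c."""
        return n_beta * n_gamma / n_coinc

    # Example: eff_b = 0.8, eff_g = 0.3 for a 10 kBq source.
    print(coincidence_activity(8.0e3, 3.0e3, 2.4e3))  # -> 10000.0 Bq
    ```

    Real standardizations add corrections (dead time, resolving time, decay-scheme effects) and, for beta emitters, the efficiency-extrapolation step; the cancellation above is only the core idea.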

  1. A novel concentration and viability detection method for Brettanomyces using the Cellometer image cytometry.

    PubMed

    Martyniak, Brian; Bolton, Jason; Kuksin, Dmitry; Shahin, Suzanne M; Chan, Leo Li-Ying

    2017-01-01

    Brettanomyces spp. can present unique cell morphologies comprised of excessive pseudohyphae and budding, leading to difficulties in enumerating cells. The current cell counting methods include manual counting of methylene blue-stained yeasts or measuring optical densities using a spectrophotometer. However, manual counting can be time-consuming and has high operator-dependent variations due to subjectivity. Optical density measurement can also introduce uncertainties because, instead of individual cells being counted, an average of a cell population is measured. In contrast, by utilizing the fluorescence capability of an image cytometer to detect acridine orange and propidium iodide viability dyes, individual cell nuclei can be counted directly in the pseudohyphae chains, which can improve the accuracy and efficiency of cell counting, as well as eliminating the subjectivity of manual counting. In this work, two experiments were performed to demonstrate the capability of the Cellometer image cytometer to monitor Brettanomyces concentrations, viabilities, and budding/pseudohyphae percentages. First, a yeast propagation experiment was conducted to optimize software counting parameters for monitoring the growth of Brettanomyces clausenii, Brettanomyces bruxellensis, and Brettanomyces lambicus, which showed increasing cell concentrations and varying pseudohyphae percentages. The pseudohyphae formed during propagation were counted either as multiple nuclei or a single multi-nuclei organism, where the results of counting the yeast as a single multi-nuclei organism were directly compared to manual counting. Second, a yeast fermentation experiment was conducted to demonstrate that the proposed image cytometric analysis method can monitor the growth pattern of B. lambicus and B. clausenii during beer fermentation. The results from both experiments displayed different growth patterns, viability, and budding/pseudohyphae percentages for each Brettanomyces species. The proposed Cellometer image cytometry method can improve efficiency and eliminate operator-dependent variations of cell counting compared with the traditional methods, which can potentially improve the quality of beverage products employing Brettanomyces yeasts.

  2. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination.

    PubMed

    Kitamura, Yutaka; Watanabe, Taisuke; Nakamura, Masayuki; Isobe, Kazushige; Kawabata, Hideo; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Tanaka, Takaaki; Kawase, Tomoyuki

    2018-01-01

    Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rate of platelets and white blood cells was 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix.

  3. Platelet Counts in Insoluble Platelet-Rich Fibrin Clots: A Direct Method for Accurate Determination

    PubMed Central

    Kitamura, Yutaka; Watanabe, Taisuke; Nakamura, Masayuki; Isobe, Kazushige; Kawabata, Hideo; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Tanaka, Takaaki; Kawase, Tomoyuki

    2018-01-01

    Platelet-rich fibrin (PRF) clots have been used in regenerative dentistry most often, with the assumption that growth factor levels are concentrated in proportion to the platelet concentration. Platelet counts in PRF are generally determined indirectly by platelet counting in other liquid fractions. This study shows a method for direct estimation of platelet counts in PRF. To validate this method by determination of the recovery rate, whole-blood samples were obtained with an anticoagulant from healthy donors, and platelet-rich plasma (PRP) fractions were clotted with CaCl2 by centrifugation and digested with tissue-plasminogen activator. Platelet counts were estimated before clotting and after digestion using an automatic hemocytometer. The method was then tested on PRF clots. The quality of platelets was examined by scanning electron microscopy and flow cytometry. In PRP-derived fibrin matrices, the recovery rate of platelets and white blood cells was 91.6 and 74.6%, respectively, after 24 h of digestion. In PRF clots associated with small and large red thrombi, platelet counts were 92.6 and 67.2% of the respective total platelet counts. These findings suggest that our direct method is sufficient for estimating the number of platelets trapped in an insoluble fibrin matrix and for determining that platelets are distributed in PRF clots and red thrombi roughly in proportion to their individual volumes. Therefore, we propose this direct digestion method for more accurate estimation of platelet counts in most types of platelet-enriched fibrin matrix. PMID:29450197

  4. Comparison of Drive Counts and Mark-Resight As Methods of Population Size Estimation of Highly Dense Sika Deer (Cervus nippon) Populations

    PubMed Central

    Takeshita, Kazutaka; Yoshida, Tsuyoshi; Igota, Hiromasa; Matsuura, Yukiko

    2016-01-01

    Assessing temporal changes in abundance indices is an important issue in the management of large herbivore populations. The drive counts method has been frequently used as a deer abundance index in mountainous regions. However, despite an inherent risk for observation errors in drive counts, which increase with deer density, evaluations of the utility of drive counts at a high deer density remain scarce. We compared the drive counts and mark-resight (MR) methods in the evaluation of a highly dense sika deer population (MR estimates ranged between 11 and 53 individuals/km2) on Nakanoshima Island, Hokkaido, Japan, between 1999 and 2006. This deer population experienced two large reductions in density; approximately 200 animals in total were taken from the population through a large-scale population removal and a separate winter mass mortality event. Although the drive counts tracked temporal changes in deer abundance on the island, they overestimated the counts for all years in comparison to the MR method. Increased overestimation in drive count estimates after the winter mass mortality event may be due to a double count derived from increased deer movement and recovery of body condition secondary to the mitigation of density-dependent food limitations. Drive counts are unreliable because they are affected by unfavorable factors such as bad weather, and they are cost-prohibitive to repeat, which precludes the calculation of confidence intervals. Therefore, the use of drive counts to infer the deer abundance needs to be reconsidered. PMID:27711181
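
    As background, simple mark-resight abundance estimates are often built on Lincoln-Petersen-type estimators; the sketch below shows Chapman's bias-corrected form with invented numbers. The paper's actual MR model may be more elaborate, so this is only the underlying arithmetic, not the authors' analysis.

    ```python
    # Hedged sketch: Chapman's bias-corrected Lincoln-Petersen estimator,
    # a common basis for simple mark-resight abundance estimates.

    def chapman_estimate(marked: int, resighted_total: int,
                         marked_resighted: int) -> float:
        """N_hat = (M+1)(C+1)/(R+1) - 1, where M animals carry marks,
        C animals are observed on a resight survey, and R of those are marked."""
        return (marked + 1) * (resighted_total + 1) / (marked_resighted + 1) - 1

    # Example: 50 marked deer, 120 observed on a survey, 20 of them marked.
    print(chapman_estimate(marked=50, resighted_total=120, marked_resighted=20))
    # -> ~293 animals
    ```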

  5. Recommended methods for monitoring change in bird populations by counting and capture of migrants

    Treesearch

    David J. T. Hussell; C. John Ralph

    2005-01-01

    Counts and banding captures of spring or fall migrants can generate useful information on the status and trends of the source populations. To do so, the counts and captures must be taken and recorded in a standardized and consistent manner. We present recommendations for field methods for counting and capturing migrants at intensively operated sites, such as bird...

  6. Mid-infrared coincidence measurements based on intracavity frequency conversion

    NASA Astrophysics Data System (ADS)

    Piccione, S.; Mancinelli, M.; Trenti, A.; Fontana, G.; Dam, J.; Tidemand-Lichtenberg, P.; Pedersen, C.; Pavesi, L.

    2018-02-01

    In recent years, the mid-infrared (MIR) spectral region has attracted the attention of many areas of science and technology, opening the way to important applications such as molecular imaging, remote sensing, free-space communication and environmental monitoring. However, the development of new light sources, such as the quantum cascade laser, has not been matched by adequate improvements in MIR detection systems able to meet current challenges. Here we demonstrate the single-photon counting capability of a new detection system, based on efficient up-converter modules, by proving the correlated nature of twin photon pairs at about 3.1 μm, opening the way to the extension of quantum optics experiments into the MIR.

  7. A Compton scatter attenuation gamma ray spectrometer

    NASA Technical Reports Server (NTRS)

    Austin, W. E.

    1972-01-01

    A Compton scatter attenuation gamma ray spectrometer conceptual design is discussed for performing gamma spectral measurements in monodirectional gamma fields from 100 R per hour to 1,000,000 R per hour. Selectable Compton targets are used to scatter gamma photons onto an otherwise heavily shielded detector with changeable scattering efficiencies such that the count rate is maintained between 500 and 10,000 per second. Use of two sum-Compton coincident detectors, one for energies up to 1.5 MeV and the other for 600 keV to 10 MeV, allows good peak-to-tail pulse-height ratios to be obtained over the entire spectrum and reduces the neutron recoil background rate.

  8. Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.

    PubMed

    Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo

    2011-06-01

    This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
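
    A minimal sketch of the tile (box) counting idea on a binarized image: count the tiles of side s that contain any foreground pixel, then take the slope of log N(s) versus log(1/s) as the fractal dimension. Tile sizes here are in pixels and purely illustrative; the paper reports its optimal sizes in millimetres, and its preprocessing of the radiographs is not reproduced.

    ```python
    # Hedged sketch: tile (box) counting estimate of fractal dimension
    # on a binarized image (illustrative tile sizes and test pattern).
    import numpy as np

    def box_count(binary: np.ndarray, size: int) -> int:
        h, w = binary.shape
        count = 0
        for i in range(0, h, size):
            for j in range(0, w, size):
                if binary[i:i + size, j:j + size].any():
                    count += 1
        return count

    def fractal_dimension(binary: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
        counts = [box_count(binary, s) for s in sizes]
        # D is the slope of log N(s) vs log(1/s):
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    pattern = rng.random((256, 256)) > 0.5   # stand-in for a binarized radiograph
    print(fractal_dimension(pattern))        # ~2 for a dense random field
    ```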

  9. Current automated 3D cell detection methods are not a suitable replacement for manual stereologic cell counting

    PubMed Central

    Schmitz, Christoph; Eastwood, Brian S.; Tappan, Susan J.; Glaser, Jack R.; Peterson, Daniel A.; Hof, Patrick R.

    2014-01-01

    Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) “cell counting” approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38 and 99% and false-positive rates from 3.6 to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections. PMID:24847213

  10. Single-molecule two-colour coincidence detection to probe biomolecular associations.

    PubMed

    Orte, Angel; Clarke, Richard; Klenerman, David

    2010-08-01

    Two-colour coincidence detection (TCCD) is a form of single-molecule fluorescence developed to sensitively detect and characterize associated biomolecules without any separation, in solution, on the cell membrane and in live cells. In the present short review, we first explain the principles of the method and then describe the application of TCCD to a range of biomedical problems and how this method may be developed further in the future to try to monitor biological processes in live cells.

  11. A matrix-inversion method for gamma-source mapping from gamma-count data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adsley, Ian; Burgess, Claire; Bull, Richard K

    In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq 60Co source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
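
    A hedged sketch of the underlying idea, assuming a bare inverse-square response model and least-squares inversion; the authors' response models, detector efficiency, and geometry are not reproduced here, and the grid, heights, and noise level below are invented.

    ```python
    # Hedged sketch: model counts c = R a, where R[i, j] is the response at
    # map position i to a unit source in cell j, then recover the source
    # vector a by least squares. Inverse-square, attenuation-free geometry.
    import numpy as np

    grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])  # source cells
    dets = grid + np.array([0.0, 0.0, 0.5])   # detector 0.5 m above each cell

    d2 = ((dets[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    R = 1.0 / d2                              # inverse-square response matrix

    truth = np.zeros(25)
    truth[12] = 1.0e5                         # single source in the centre cell
    counts = R @ truth + np.random.default_rng(2).normal(0.0, 10.0, 25)

    recovered, *_ = np.linalg.lstsq(R, counts, rcond=None)
    print(int(np.argmax(recovered)))          # -> 12, the source cell
    ```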

  12. Bird biodiversity assessments in temperate forest: the value of point count versus acoustic monitoring protocols.

    PubMed

    Klingbeil, Brian T; Willig, Michael R

    2015-01-01

    Effective monitoring programs for biodiversity are needed to assess trends in biodiversity and evaluate the consequences of management. This is particularly true for birds and faunas that occupy interior forest and other areas of low human population density, as these are frequently under-sampled compared to other habitats. For birds, Autonomous Recording Units (ARUs) have been proposed as a supplement or alternative to point counts made by human observers to enhance monitoring efforts. We employed two strategies (i.e., simultaneous-collection and same-season) to compare point count and ARU methods for quantifying species richness and composition of birds in temperate interior forests. The simultaneous-collection strategy compares surveys by ARUs and point counts, with methods matched in time, location, and survey duration such that the person and machine simultaneously collect data. The same-season strategy compares surveys from ARUs and point counts conducted at the same locations throughout the breeding season, but methods differ in the number, duration, and frequency of surveys. This second strategy more closely follows the ways in which monitoring programs are likely to be implemented. Site-specific estimates of richness (but not species composition) differed between methods; however, the nature of the relationship was dependent on the assessment strategy. Estimates of richness from point counts were greater than estimates from ARUs in the simultaneous-collection strategy. Woodpeckers in particular, were less frequently identified from ARUs than point counts with this strategy. Conversely, estimates of richness were lower from point counts than ARUs in the same-season strategy. Moreover, in the same-season strategy, ARUs detected the occurrence of passerines at a higher frequency than did point counts. Differences between ARU and point count methods were only detected in site-level comparisons. Importantly, both methods provide similar estimates of species richness and composition for the region. Consequently, if single visits to sites or short-term monitoring are the goal, point counts will likely perform better than ARUs, especially if species are rare or vocalize infrequently. However, if seasonal or annual monitoring of sites is the goal, ARUs offer a viable alternative to standard point-count methods, especially in the context of large-scale or long-term monitoring of temperate forest birds.

  13. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring requires counting the number of vehicles passing on a road, particularly for highway transportation management. Therefore, it is necessary to develop a system that can count the number of vehicles automatically, and video processing methods make this possible. This research developed a vehicle counting system for toll roads. The system includes processes of video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on gray scale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy was in the evening, at 21.43%. The difference between the morning and evening results is caused by the different illumination at those times, which changes the image pixel values.
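
    A minimal sketch of such a pipeline, assuming OpenCV 4; the MOG2 background model, kernel size, area threshold, and the input file name are illustrative stand-ins, not the authors' implementation.

    ```python
    # Hedged sketch: grayscale conversion, background subtraction,
    # morphological cleanup, and per-frame blob counting with OpenCV 4.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("toll_road.avi")            # hypothetical input file
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((5, 5), np.uint8)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg = bg.apply(gray)                                 # foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # fill holes
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        vehicles = [c for c in contours if cv2.contourArea(c) > 500]
        print(f"vehicles in frame: {len(vehicles)}")
    cap.release()
    ```

    The strong illumination dependence the abstract reports is plausible with this design, since both the background model and the fixed area threshold assume stable pixel statistics.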

  14. Evaluation of the platelet counting by Abbott CELL-DYN SAPPHIRE haematology analyser compared with flow cytometry.

    PubMed

    Grimaldi, E; Del Vecchio, L; Scopacasa, F; Lo Pardo, C; Capone, F; Pariante, S; Scalia, G; De Caterina, M

    2009-04-01

    The Abbott CELL-DYN Sapphire is a new-generation haematology analyser. The system uses optical/fluorescence flow cytometry in combination with electronic impedance to produce a full blood count. Optical and impedance are the default methods for platelet counting, while automated CD61-immunoplatelet analysis can be run as a selectable test. The aim of this study was to determine the platelet count performance of the three counting methods available on the instrument and to compare the results with those provided by a Becton Dickinson FACSCalibur flow cytometer used as the reference method. A lipid interference experiment was also performed. Linearity, carryover and precision were good, and satisfactory agreement with the reference method was found for the impedance, optical and CD61-immunoplatelet analyses, although the latter provided the closest results in comparison with flow cytometry. In the lipid interference experiment, a moderate inaccuracy of optical and immunoplatelet counts was observed starting from a very high lipid value.

  15. voom: precision weights unlock linear model analysis tools for RNA-seq read counts

    PubMed Central

    2014-01-01

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249

  16. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    PubMed

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
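
    A much-simplified sketch of the voom idea in Python: estimate a smooth mean-variance trend on log-CPM values and convert it into inverse-variance precision weights. The real voom lives in the limma R package and fits a lowess trend to residual standard deviations from a linear model; the polynomial trend, the CPM offsets, and the toy data below are our simplifications.

    ```python
    # Hedged, much-simplified numpy sketch of voom-style precision weights
    # (not the limma implementation).
    import numpy as np

    def voom_like_weights(counts: np.ndarray) -> np.ndarray:
        """counts: genes x samples matrix of raw read counts."""
        lib = counts.sum(axis=0, keepdims=True)
        logcpm = np.log2((counts + 0.5) / (lib + 1.0) * 1e6)
        mean_lc = logcpm.mean(axis=1)
        sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))
        trend = np.poly1d(np.polyfit(mean_lc, sqrt_sd, deg=2))  # stand-in for lowess
        fitted = np.clip(trend(logcpm), 1e-3, None)             # per-observation sqrt-sd
        return fitted ** -4.0                                   # = 1 / variance

    rng = np.random.default_rng(3)
    toy = rng.poisson(lam=rng.gamma(2.0, 50.0, size=(200, 1)), size=(200, 6))
    print(voom_like_weights(toy).shape)   # (200, 6) observation-level weights
    ```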

  17. Search for transient gravitational waves in coincidence with short-duration radio transients during 2007-2013

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. 
L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, K. N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Pereira, R.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. 
M.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stiles, D.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Archibald, A. M.; Banaszak, S.; Berndsen, A.; Boyles, J.; Cardoso, R. F.; Chawla, P.; Cherry, A.; Dartez, L. P.; Day, D.; Epstein, C. R.; Ford, A. J.; Flanigan, J.; Garcia, A.; Hessels, J. W. T.; Hinojosa, J.; Jenet, F. A.; Karako-Argaman, C.; Kaspi, V. M.; Keane, E. F.; Kondratiev, V. I.; Kramer, M.; Leake, S.; Lorimer, D.; Lunsford, G.; Lynch, R. S.; Martinez, J. G.; Mata, A.; McLaughlin, M. A.; McPhee, C. A.; Penucci, T.; Ransom, S.; Roberts, M. S. E.; Rohr, M. D. W.; Stairs, I. H.; Stovall, K.; van Leeuwen, J.; Walker, A. N.; Wells, B. L.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

    We present an archival search for transient gravitational-wave bursts in coincidence with 27 single-pulse triggers from Green Bank Telescope pulsar surveys, using the LIGO, Virgo, and GEO interferometer network. We also discuss a check for gravitational-wave signals in coincidence with Parkes fast radio bursts using similar methods. Data analyzed in these searches were collected between 2007 and 2013. Possible sources of emission of both short-duration radio signals and transient gravitational-wave emission include starquakes on neutron stars, binary coalescence of neutron stars, and cosmic string cusps. While no evidence for gravitational-wave emission in coincidence with these radio transients was found, the current analysis serves as a prototype for similar future searches using more sensitive second-generation interferometers.

  18. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    PubMed

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts.
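
    One hedged way to compute such object-level agreement is to match the two observers' detections within a distance tolerance and tally matched versus unmatched objects; the tolerance and the greedy nearest-pair matching rule below are illustrative choices, not the paper's protocol.

    ```python
    # Hedged sketch: object-level agreement between two sets of point
    # annotations via greedy nearest-pair matching within a tolerance.
    import numpy as np
    from scipy.spatial.distance import cdist

    def object_level_agreement(pts_a: np.ndarray, pts_b: np.ndarray,
                               tol_px: float = 25.0):
        d = cdist(pts_a, pts_b)
        matched = 0
        while d.size and d.min() <= tol_px:
            i, j = np.unravel_index(np.argmin(d), d.shape)
            matched += 1
            d[i, :] = np.inf      # each object may be matched only once
            d[:, j] = np.inf
        # Returns (agreed objects, objects seen only by A, only by B):
        return matched, len(pts_a) - matched, len(pts_b) - matched

    a = np.array([[10, 10], [50, 50], [90, 10]], float)
    b = np.array([[12, 11], [52, 49], [200, 200]], float)
    print(object_level_agreement(a, b))   # -> (2, 1, 1)
    ```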

  19. Effective count rates for PET scanners with reduced and extended axial field of view

    NASA Astrophysics Data System (ADS)

    MacDonald, L. R.; Harrison, R. L.; Alessio, A. M.; Hunter, W. C. J.; Lewellen, T. K.; Kinahan, P. E.

    2011-06-01

    We investigated the relationship between noise equivalent count (NEC) and axial field of view (AFOV) for PET scanners with AFOVs ranging from one-half to twice those of current clinical scanners. PET scanners with longer or shorter AFOVs could fulfill different clinical needs depending on exam volumes and site economics. Using previously validated Monte Carlo simulations, we modeled true, scattered and random coincidence counting rates for a PET ring diameter of 88 cm with 2, 4, 6, and 8 rings of detector blocks (AFOV 7.8, 15.5, 23.3, and 31.0 cm). Fully 3D acquisition mode was compared to full collimation (2D) and partial collimation (2.5D) modes. Counting rates were estimated for a 200 cm long version of the 20 cm diameter NEMA count-rate phantom and for an anthropomorphic object based on a patient scan. We estimated the live-time characteristics of the scanner from measured count-rate data and applied that estimate to the simulated results to obtain NEC as a function of object activity. We found NEC increased as a quadratic function of AFOV for 3D mode, and linearly in 2D mode. Partial collimation provided the highest overall NEC on the 2-block system and fully 3D mode provided the highest NEC on the 8-block system for clinically relevant activities. On the 4- and 6-block systems 3D mode NEC was highest up to ~300 MBq in the anthropomorphic phantom, above which 3D NEC dropped rapidly, and 2.5D NEC was highest. Projected total scan time to achieve NEC-density that matches current clinical practice in a typical oncology exam averaged 9, 15, 24, and 61 min for the 8-, 6-, 4-, and 2-block ring systems, when using optimal collimation. Increasing the AFOV should provide a greater than proportional increase in NEC, potentially benefiting patient throughput-to-cost ratio. Conversely, by using appropriate collimation, a two-ring (7.8 cm AFOV) system could acquire whole-body scans achieving NEC-density levels comparable to current standards within long, but feasible, scan times.
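
    A rough sense of the scaling reported above can be had from the standard noise-equivalent-count formula, NEC = T^2/(T + S + R). The Python sketch below applies it to toy true (T), scatter (S) and random (R) rates that grow quadratically with AFOV, as the abstract describes for fully 3D mode; all rate constants are invented for illustration and are not the simulated values from this study.

      # Toy NEC-vs-AFOV model; every rate constant here is hypothetical.
      def nec(t, s, r):
          """Noise equivalent count rate, NEC = T^2 / (T + S + R)."""
          return t**2 / (t + s + r)

      base_afov = 15.5  # cm, the 4-ring geometry taken as reference
      for afov in (7.8, 15.5, 23.3, 31.0):
          g = afov / base_afov        # relative axial coverage
          t = 100e3 * g**2            # trues: ~quadratic in AFOV (3D mode)
          s = 40e3 * g**2             # scatters assumed to scale with trues
          r = 60e3 * g**2             # randoms: same toy scaling
          print(f"AFOV {afov:5.1f} cm: NEC = {nec(t, s, r) / 1e3:6.1f} kcps")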

  20. Gamma-ray momentum reconstruction from Compton electron trajectories by filtered back-projection

    DOE PAGES

    Haefner, A.; Gunter, D.; Plimley, B.; ...

    2014-11-03

    Gamma-ray imaging utilizing Compton scattering has traditionally relied on measuring coincident gamma-ray interactions to map directional information of the source distribution. This coincidence requirement makes it an inherently inefficient process. We present an approach to gamma-ray reconstruction from Compton scattering that requires only a single electron tracking detector, thus removing the coincidence requirement. From the Compton scattered electron momentum distribution, our algorithm analytically computes the incident photon's correlated direction and energy distributions. Because this method maps the source energy and location, it is useful in applications where prior information about the source distribution is unknown. We demonstrate this method with electron tracks measured in a scientific Si charge-coupled device. While this method was demonstrated with electron tracks in a Si-based detector, it is applicable to any detector that can measure electron direction and energy, or equivalently the electron momentum. For example, it can increase the sensitivity to obtain energy and direction in gas-based systems that suffer from limited efficiency.

  1. Stability and dissociation dynamics of N{sub 2}{sup ++} ions following core ionization studied by an Auger-electron–photoion coincidence method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwayama, H.; Shigemasa, E.; SOKENDAI, Nishigonaka 38, Myodaiji, Okazaki 444-8585

    An Auger-electron–photoion coincidence (AEPICO) method has been applied to study the stability and dissociation dynamics of dicationic states after the N K-shell photoionization of nitrogen molecules. From time-of-flight and kinetic energy analyses of the product ions, we have obtained coincident Auger spectra associated with metastable states of N{sub 2}{sup ++} ions and dissociative states leading to N{sub 2}{sup ++} → N{sup +} + N{sup +} and N{sup ++} + N. To investigate the production of dissociative states, we present two-dimensional AEPICO maps which reveal the correlations between the binding energies of the Auger final states and the ion kinetic energy release. These correlations have been used to determine the dissociation limits of individual Auger final states.

  2. Enhanced PET resolution by combining pinhole collimation and coincidence detection

    NASA Astrophysics Data System (ADS)

    DiFilippo, Frank P.

    2015-10-01

    Spatial resolution of clinical PET scanners is limited by detector design and photon non-colinearity. Although dedicated small animal PET scanners using specialized high-resolution detectors have been developed, enhancing the spatial resolution of clinical PET scanners is of interest as a more available alternative. Multi-pinhole 511 keV SPECT is capable of high spatial resolution but requires heavily shielded collimators to avoid significant background counts. A practical approach with clinical PET detectors is to combine multi-pinhole collimation with coincidence detection. In this new hybrid modality, there are three locations associated with each event, namely those of the two detected photons and the pinhole aperture. These three locations over-determine the line of response and provide redundant information that is superior to coincidence detection or pinhole collimation alone. Multi-pinhole collimation provides high resolution and avoids non-colinearity error but is subject to collimator penetration and artifacts from overlapping projections. However, the coincidence information, though at lower resolution, is valuable for determining whether the photon passed near a pinhole within the cone acceptance angle and for identifying through which pinhole the photon passed. This information allows most photons penetrating through the collimator to be rejected and avoids overlapping projections. With much improved event rejection, a collimator with minimal shielding may be used, and a lightweight add-on collimator for high resolution imaging is feasible for use with a clinical PET scanner. Monte Carlo simulations were performed of an 18F hot-rods phantom and a 54-pinhole unfocused whole-body mouse collimator with a clinical PET scanner. Based on coincidence information and pinhole geometry, events were accepted or rejected, and pinhole-specific crystal-map projections were generated. Tomographic images then were reconstructed using a conventional pinhole SPECT algorithm. Hot rods of 1.4 mm diameter were resolved easily in a simulated phantom. System sensitivity was 0.09% for a simulated 70-mm line source corresponding to the NEMA NU-4 mouse phantom. Higher resolution is expected with further optimization of pinhole design, and higher sensitivity is expected with a focused and denser pinhole configuration. The simulations demonstrate high spatial resolution and feasibility of small animal imaging with an add-on multi-pinhole collimator for a clinical PET scanner. Further work is needed to develop geometric calibration and quantitative data corrections and, eventually, to construct a prototype device and produce images with physical phantoms.
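
    The event test described above is easy to picture geometrically: an event is kept only if the line of response (LOR) through the two detected photon positions passes close to one pinhole aperture. The sketch below is a minimal, hypothetical version of that test; the detector positions, pinhole coordinates and the 2 mm tolerance are all invented, not taken from the paper.

      import numpy as np

      def point_to_line_distance(p, a, b):
          """Distance from point p to the infinite line through a and b."""
          d = b - a
          return np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d)

      def assign_pinhole(det1, det2, pinholes, tol_mm=2.0):
          """Index of the pinhole the LOR passes near, or None (reject)."""
          dists = [point_to_line_distance(ph, det1, det2) for ph in pinholes]
          i = int(np.argmin(dists))
          return i if dists[i] <= tol_mm else None

      pinholes = [np.array([x, 0.0, 25.0]) for x in (-20.0, 0.0, 20.0)]  # mm
      det1 = np.array([5.0, 3.0, 400.0])     # first photon hit (mm)
      det2 = np.array([-4.0, -2.5, -400.0])  # opposing coincident photon
      print(assign_pinhole(det1, det2, pinholes))  # -> 1 (central pinhole)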

  3. Comparison of cell counting methods in rodent pulmonary toxicity studies: automated and manual protocols and considerations for experimental design

    PubMed Central

    Zeidler-Erdely, Patti C.; Antonini, James M.; Meighan, Terence G.; Young, Shih-Houng; Eye, Tracy J.; Hammer, Mary Ann; Erdely, Aaron

    2016-01-01

    Pulmonary toxicity studies often use bronchoalveolar lavage (BAL) to investigate potential adverse lung responses to a particulate exposure. The BAL cellular fraction is counted, using automated (i.e. Coulter Counter®), flow cytometry or manual (i.e. hemocytometer) methods, to determine inflammatory cell influx. The goal of the study was to compare the different counting methods to determine which is optimal for examining BAL cell influx after exposure by inhalation or intratracheal instillation (ITI) to different particles with varying inherent pulmonary toxicities in both rat and mouse models. General findings indicate that total BAL cell counts using the automated and manual methods tended to agree after inhalation or ITI exposure to particle samples that are relatively nontoxic or at later time points after exposure to a pneumotoxic particle when the response resolves. However, when the initial lung inflammation and cytotoxicity was high after exposure to a pneumotoxic particle, significant differences were observed when comparing cell counts from the automated, flow cytometry and manual methods. When using total BAL cell count for differential calculations from the automated method, depending on the cell diameter size range cutoff, the data suggest that the number of lung polymorphonuclear leukocytes (PMN) varies. Importantly, the automated counts, regardless of the size cutoff, still indicated a greater number of total lung PMN when compared with the manual method, which agreed more closely with flow cytometry. The results suggest that either the manual method or flow cytometry would be better suited for BAL studies where cytotoxicity is an unknown variable. PMID:27251196

  5. Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Cañadas, M.; Arce, P.; Rato Mendes, P.

    2011-01-01

    Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL-1 and the simulated peak was 247.1 kcps at 0.87 MBq mL-1). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom obtaining images with similar quality using iterative reconstruction methods. We concluded that the overall performance of the simulation showed good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommends it as an excellent tool to optimize design features or image reconstruction techniques.

  6. Evaluation of surveillance methods for monitoring house fly abundance and activity on large commercial dairy operations.

    PubMed

    Gerry, Alec C; Higginbotham, G E; Periera, L N; Lam, A; Shelton, C R

    2011-06-01

    Relative house fly, Musca domestica L., activity at three large dairies in central California was monitored during the peak fly activity period from June to August 2005 by using spot cards, fly tapes, bait traps, and Alsynite traps. Counts for all monitoring methods were significantly related at two of three dairies, with spot card counts significantly related to fly tape counts recorded the same week, and both spot card counts and fly tape counts significantly related to bait trap counts 1-2 wk later. Mean fly counts differed significantly between dairies, but a significant interaction between dairies sampled and monitoring methods used demonstrates that between-dairy comparisons are unwise. Estimate precision was determined by the coefficient of variability (CV) (or SE/mean). Using a CV = 0.15 as a desired level of estimate precision and assuming an integrated pest management (IPM) action threshold near the peak house fly activity measured by each monitoring method, house fly monitoring at a large dairy would require 12 spot cards placed in midafternoon shaded fly resting sites near cattle or seven bait traps placed in open areas near cattle. Software (FlySpotter; http://ucanr.org/sites/FlySpotter/download/) using computer vision technology was developed to count fly spots on a scanned image of a spot card to dramatically reduce time invested in monitoring house flies. Counts provided by the FlySpotter software were highly correlated with visual counts. The use of spot cards for monitoring house flies is recommended for dairy IPM programs.

  7. Comparison of methods for estimating density of forest songbirds from point counts

    Treesearch

    Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey

    2011-01-01

    New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...

  8. An analytical model of the effects of pulse pileup on the energy spectrum recorded by energy resolved photon counting x-ray detectors

    PubMed Central

    Taguchi, Katsuyuki; Frey, Eric C.; Wang, Xiaolan; Iwanczyk, Jan S.; Barber, William C.

    2010-01-01

    Purpose: Recently, novel CdTe photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed. When such detectors are operated under a high x-ray flux, however, coincident pulses distort the recorded energy spectrum. These distortions are called pulse pileup effects. It is essential to compensate for these effects on the recorded energy spectrum in order to take full advantage of the spectral information PCXDs provide. Such compensation can be achieved by incorporating a pileup model into the image reconstruction process for computed tomography, that is, as a part of the forward imaging process, and iteratively estimating either the imaged object or the line integrals using, e.g., a maximum likelihood approach. The aim of this study was to develop a new analytical pulse pileup model for both peak and tail pileup effects for nonparalyzable detectors. Methods: The model takes into account the following factors: The bipolar shape of the pulse, the distribution function of time intervals between random events, and the input probability density function of photon energies. The authors used Monte Carlo simulations to evaluate the model. Results: The recorded spectra estimated by the model were in an excellent agreement with those obtained by Monte Carlo simulations for various levels of pulse pileup effects. The coefficients of variation (i.e., the root mean square difference divided by the mean of measurements) were 5.3%–10.0% for deadtime losses of 1%–50% with a polychromatic incident x-ray spectrum. Conclusions: The proposed pulse pileup model can predict recorded spectrum with relatively good accuracy. PMID:20879558
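
    For readers who want to see the distortion emerge, the sketch below is a brute-force Monte Carlo stand-in (not the paper's analytical model): photons arrive as a Poisson process, and every photon landing within the resolving time of the first photon of an event is summed into a single recorded count, a simple nonparalyzable picture. The flux, resolving time and two-line input spectrum are all invented.

      import numpy as np

      rng = np.random.default_rng(0)
      rate, tau, n = 5e6, 100e-9, 200_000   # cps, resolving time (s), photons
      arrivals = np.cumsum(rng.exponential(1.0 / rate, n))
      energies = rng.choice([60.0, 80.0], n)  # keV, toy two-line spectrum

      recorded = []
      i = 0
      while i < n:
          e, t0 = energies[i], arrivals[i]
          j = i + 1
          while j < n and arrivals[j] - t0 < tau:
              e += energies[j]          # peak pileup: coincident pulses sum
              j += 1
          recorded.append(e)
          i = j

      piled = 1.0 - len(recorded) / n
      print(f"{n} photons -> {len(recorded)} counts; pileup fraction {piled:.1%}")
      # Sum peaks (120, 140, 160 keV) now contaminate the recorded spectrum.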

  9. Comparison of fluorescence microscopy and solid-phase cytometry methods for counting bacteria in water

    USGS Publications Warehouse

    Lisle, John T.; Hamilton, Martin A.; Willse, Alan R.; McFeters, Gordon A.

    2004-01-01

    Total direct counts of bacterial abundance are central in assessing the biomass and bacteriological quality of water in ecological and industrial applications. Several factors have been identified that contribute to the variability in bacterial abundance counts when using fluorescent microscopy, the most significant of which is retaining an adequate number of cells per filter to ensure an acceptable level of statistical confidence in the resulting data. Previous studies that have assessed the components of total-direct-count methods that contribute to this variance have attempted to maintain a bacterial cell abundance value per filter of approximately 10^6 cells per filter. In this study we have established the lower limit for the number of bacterial cells per filter at which the statistical reliability of the abundance estimate is no longer acceptable. Our results indicate that when the numbers of bacterial cells per filter were progressively reduced below 10^5, the microscopic methods increasingly overestimated the true bacterial abundance (range, 15.0 to 99.3%). The solid-phase cytometer only slightly overestimated the true bacterial abundances and was more consistent over the same range of bacterial abundances per filter (range, 8.9 to 12.5%). The solid-phase cytometer method for conducting total direct counts of bacteria was less biased and performed significantly better than any of the microscope methods. It was also found that microscopic count data from counting 5 fields on three separate filters were statistically equivalent to data from counting 20 fields on a single filter.

  10. Understanding Poisson regression.

    PubMed

    Hayat, Matthew J; Higgins, Melinda

    2014-04-01

    Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
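
    A compact way to see the issue the article addresses: fit the same overdispersed counts with Poisson and negative binomial models. The sketch below uses simulated data (the ENSPIRE data are not reproduced here) and the statsmodels package; it illustrates the modelling choice, not the article's analysis.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 500
      x = rng.normal(size=n)
      mu = np.exp(0.3 + 0.5 * x)                 # log link, true mean model
      # Negative binomial draws: variance exceeds the mean (overdispersion).
      y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

      X = sm.add_constant(x)
      poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
      negbin_fit = sm.NegativeBinomial(y, X).fit(disp=False)

      print(poisson_fit.params)   # slope ~0.5, but SEs are too optimistic
      print(negbin_fit.params)    # includes the dispersion parameter alpha
      print("Pearson chi2/df:", poisson_fit.pearson_chi2 / poisson_fit.df_resid)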

  11. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  12. A THUMBNAIL HISTORY OF HETEROTROPHIC PLATE COUNT (HPC) METHODOLOGY IN THE UNITED STATES

    EPA Science Inventory

    Over the past 100 years, the method of determining the number of bacteria in water, foods or other materials has been termed variously as: bacterial plate count, total plate count, total viable plate count, aerobic plate count, standard plate count and more recently, heterotrophi...

  13. Use of Surveillance Data on HIV Diagnoses with HIV-Related Symptoms to Estimate the Number of People Living with Undiagnosed HIV in Need of Antiretroviral Therapy

    PubMed Central

    van Sighem, Ard; Sabin, Caroline A.; Phillips, Andrew N.

    2015-01-01

    Background It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). Methods The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms which lead to presentation for care, and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. Results For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150–199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived in people with undiagnosed HIV with CD4 count 150–199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29–100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. Conclusions The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART. PMID:25768925
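
    The arithmetic of the worked example is short enough to script. Below, a sketch reproduces the 13/0.216 = 60 calculation and attaches a simulation interval by resampling the observed number of diagnoses as a Poisson count; the abstract does not specify its simulation in detail, so treat this as one plausible reading rather than the authors' exact procedure.

      import numpy as np

      diagnoses, rate = 13, 0.216   # diagnoses/yr; symptom rate per person-year
      estimate = diagnoses / rate   # person-years undiagnosed in the stratum

      rng = np.random.default_rng(42)
      sims = rng.poisson(diagnoses, 100_000) / rate
      lo, hi = np.percentile(sims, [2.5, 97.5])
      print(f"estimate {estimate:.0f}, 95% interval ({lo:.0f}, {hi:.0f})")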

  14. Status Report on the Passive Neutron Enrichment Meter (PNEM) for UF6 Cylinder Assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Karen A.; Swinhoe, Martyn T.; Menlove, Howard O.

    2012-05-02

    The Passive Neutron Enrichment Meter (PNEM) is a nondestructive assay (NDA) system being developed at Los Alamos National Laboratory (LANL). It was designed to determine {sup 235}U mass and enrichment of uranium hexafluoride (UF{sub 6}) in product, feed, and tails cylinders (i.e., 30B and 48Y cylinders). These cylinders are found in the nuclear fuel cycle at uranium conversion, enrichment, and fuel fabrication facilities. The PNEM is a {sup 3}He-based neutron detection system that consists of two briefcase-sized detector pods. A photograph of the system during characterization at LANL is shown in Fig. 1. Several signatures are currently being studied to determine the most effective measurement and data reduction technique for unfolding {sup 235}U mass and enrichment. The system collects total neutron and coincidence data for both bare and cadmium-covered detector pods. The measurement concept grew out of the success of the Uranium Cylinder Assay System (UCAS), which is an operator system at Rokkasho Enrichment Plant (REP) that uses total neutron counting to determine {sup 235}U mass in UF{sub 6} cylinders. The PNEM system was designed with higher efficiency than the UCAS in order to add coincidence counting functionality for the enrichment determination. A photograph of the UCAS with a 48Y cylinder at REP is shown in Fig. 2, and the calibration measurement data for 30B product and 48Y feed and tails cylinders are shown in Fig. 3. The data were collected in a low-background environment, meaning there is very little scatter in the data. The PNEM measurement concept was first presented at the 2010 Institute of Nuclear Materials Management (INMM) Annual Meeting. The physics design and uncertainty analysis were presented at the 2010 International Atomic Energy Agency (IAEA) Safeguards Symposium, and the mechanical and electrical designs and characterization measurements were published in the ESARDA Bulletin in 2011.

  15. Systemic immune response and virus persistence after foot-and-mouth disease virus infection of naïve cattle and cattle vaccinated with a homologous adenovirus-vectored vaccine

    DOE PAGES

    Eschbaumer, Michael; Stenfeldt, Carolina; Rekant, Steven I.; ...

    2016-09-15

    In order to investigate host factors associated with the establishment of persistent foot-and-mouth disease virus (FMDV) infection, the systemic response to vaccination and challenge was studied in 47 steers. Eighteen steers that had received a recombinant FMDV A vaccine 2 weeks earlier and 29 non-vaccinated steers were challenged by intra-nasopharyngeal deposition of FMDV A24. For up to 35 days after challenge, host factors including complete blood counts with T lymphocyte subsets, type I/III interferon (IFN) activity, neutralizing and total FMDV-specific antibody titers in serum, as well as antibody-secreting cells (in 6 non-vaccinated animals) were characterized in the context of viral infection dynamics. Vaccination generally induced a strong antibody response. There was a transient peak of FMDV-specific serum IgM in non-vaccinated animals after challenge, while IgM levels in vaccinated animals did not increase further. Both groups had a lasting increase of specific IgG and neutralizing antibody after challenge. Substantial systemic IFN activity in non-vaccinated animals coincided with viremia, and no IFN or viremia was detected in vaccinated animals. After challenge, circulating lymphocytes decreased in non-vaccinated animals, coincident with viremia, IFN activity, and clinical disease, whereas lymphocyte and monocyte counts in vaccinated animals were unaffected by vaccination but transiently increased after challenge. The CD4+/CD8+ T cell ratio in non-vaccinated animals increased during acute infection, driven by an absolute decrease of CD8+ cells. In conclusion, the incidence of FMDV persistence was 61.5% in non-vaccinated and 54.5% in vaccinated animals. Overall, the systemic factors examined were not associated with the FMDV carrier/non-carrier divergence; however, significant differences were identified between responses of non-vaccinated and vaccinated cattle.

  16. The System Design, Engineering Architecture, and Preliminary Results of a Lower-Cost High-Sensitivity High-Resolution Positron Emission Mammography Camera.

    PubMed

    Zhang, Yuxuan; Ramirez, Rocio A; Li, Hongdi; Liu, Shitao; An, Shaohui; Wang, Chao; Baghaei, Hossain; Wong, Wai-Hoi

    2010-02-01

    A lower-cost high-sensitivity high-resolution positron emission mammography (PEM) camera is developed. It consists of two detector modules with a planar detector bank of 20 × 12 cm(2). Each bank has 60 low-cost PMT-Quadrant-Sharing (PQS) LYSO blocks arranged in a 10 × 6 array with two types of geometries. One is the symmetric 19.36 × 19.36 mm(2) block made of 1.5 × 1.5 × 10 mm(3) crystals in a 12 × 12 array. The other is the 19.36 × 26.05 mm(2) asymmetric block made of 1.5 × 1.9 × 10 mm(3) crystals in a 12 × 13 array. One row (10) of the elongated blocks is used along one side of the bank to reclaim the half-empty PMT photocathode in the regular PQS design and thereby reduce the dead area at the edge of the module. The bank has a high overall crystal packing fraction of 88%, which results in a very high sensitivity. Mechanical design and electronics have been developed for low cost, compactness, and stability. Each module has four Anger-HYPER decoding electronics that can handle a count rate of 3 Mcps for single events. A simple two-module coincidence board with a hardware delay window for random coincidences has been developed, with an adjustable window of 6 to 15 ns. Some of the performance parameters have been studied by preliminary tests and Monte Carlo simulations, including the crystal decoding map and the 17% energy resolution of the detectors, the point source sensitivity of 11.5% with 50 mm bank-to-bank distance, the 1.2 mm spatial resolution, the 42 kcps peak noise equivalent count rate at 7.0 mCi total activity in a human body, and the resolution phantom images. These results show that the design goal of building a lower-cost, high-sensitivity, high-resolution PEM detector is achieved.

  18. Concurrent epizootic hyperinfections of sea lice (predominantly Caligus chiastos) and blood flukes (Cardicola forsteri) in ranched Southern Bluefin tuna.

    PubMed

    Hayward, Craig J; Ellis, David; Foote, Danielle; Wilkinson, Ryan J; Crosbie, Phillip B B; Bott, Nathan J; Nowak, Barbara F

    2010-10-11

    Peaks in epizootics of sea lice (mostly Caligus chiastos) and blood flukes (Cardicola forsteri) among Southern Bluefin tuna (Thunnus maccoyii) appear to coincide with the onset of an increased mortality. The mortality event occurs 6-12 weeks after T. maccoyii have been transferred into static ranching pontoons from the wild. However, to date, available data on parasite occurrence before commercial harvesting begins are scant. This research gathered epizootiological data from weeks 4 to 13 post-transfer, for 153 T. maccoyii sampled from two research and four commercial pontoons. Counts of both parasites in the research pontoons reached levels far heavier than previously documented in ranched T. maccoyii. For sea lice, the prevalence in most pontoons was 100%; the highest intensity reached 495 individuals, and mean counts at the peak of the infection exceeded 265 lice per fish. Almost all of the 5407 individual lice counted were identified as adult C. chiastos (89.44% female, 10.14% male); adult females of two other species were also present, C. amblygenitalis (0.13%), in addition to an undescribed species, C. sp. (0.04%). Lice counts were correlated positively with gross eye pathology scores (r(s,151df)=0.3394, p=0.0000), negatively correlated with condition index (r(s,151df)=-0.5396, p=0.0000), and positively correlated with plasma cortisol (r(s,131df)=0.3906, p=0.0000) and glucose (r(s,131df)=0.2240, p=0.0096). For the blood fluke, prevalences were less uniform than those of sea lice, with lower rates of infection at the beginning (ranging from 10% to 40%), reaching 100% mid-study, and declining again (40% in one pontoon). The highest intensity reached 441 individual flukes. Fluke counts were negatively correlated with plasma haemoglobin (r(s,151df)=-0.2436, p=0.0051) and positively with lysozyme (r(s,151df)=0.3013, p=0.0019). Fluke counts were also correlated with sea lice counts (r(s,150df)=0.3143, p=0.0000). Peaks in these epizootics occurred near the onset of elevated mortalities, which started after 7 weeks of ranching. Copyright 2010 Elsevier B.V. All rights reserved.

  19. Sensitivity booster for DOI-PET scanner by utilizing Compton scattering events between detector blocks

    NASA Astrophysics Data System (ADS)

    Yoshida, Eiji; Tashima, Hideaki; Yamaya, Taiga

    2014-11-01

    In a conventional PET scanner, coincidence events are measured with a limited energy window for detection of photoelectric events in order to reject Compton scatter events that occur in a patient, but Compton scatter events caused in the detector crystals are also rejected. Scatter events within the patient cause scatter coincidences, but inter-crystal scattering (ICS) events carry useful information for determining an activity distribution. Some researchers have reported the feasibility of PET scanners based on a Compton camera for tracking ICS in the detector. However, these scanners require expensive semiconductor detectors for high energy resolution. In the Anger-type block detector, single photons interacting with multiple detectors can be recorded with each interaction position, giving complete information just as for photoelectric events in a single detector. ICS events within a single detector have been used to form coincidences, but single photons interacting with multiple detectors have not. In this work, we evaluated the sensitivity improvement obtained by using Compton kinematics in several types of DOI-PET scanners. The proposed method improves the sensitivity by using coincidence events built from single photons interacting with multiple detectors, in which the first interaction (FI) is identified. FI estimation accuracy can be improved by checking FI validity against the correlation between Compton scatter angles calculated on the coincidence line-of-response. We simulated an animal PET scanner consisting of 42 detectors. Each detector block consists of three types of scintillator crystals (LSO, GSO and GAGG). After the simulation, coincidence events were formed for several depth-of-interaction (DOI) resolutions. From the simulation results, we concluded that the proposed method improves the sensitivity considerably when the effective atomic number of the scintillator is low. We also showed that FI estimation accuracy improves as DOI resolution increases.
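
    The FI (first interaction) choice rests on Compton kinematics: for a candidate ordering in which energy E1 is deposited first out of a total E0, the scatter angle must satisfy cos(theta) = 1 - m_e c^2 (1/(E0 - E1) - 1/E0), and orderings giving |cos(theta)| > 1 are impossible. The snippet below is a toy version of that consistency test, not the authors' estimator; the example energies are arbitrary.

      ME_C2 = 511.0  # electron rest energy, keV

      def compton_cos_theta(e_first, e_total):
          """Kinematic cos(theta) if e_first (keV) was deposited first."""
          e_scattered = e_total - e_first
          return 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)

      for e1 in (100.0, 340.0, 360.0):
          c = compton_cos_theta(e1, 511.0)
          print(f"first deposit {e1} keV: cos(theta) = {c:+.3f}, "
                f"allowed = {abs(c) <= 1.0}")   # 360 keV exceeds the Compton edge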

  20. A new method to detect anisotropic electron events with SOHO/EPHIN

    NASA Astrophysics Data System (ADS)

    Banjac, Saša; Kühl, Patrick; Heber, Bernd

    2016-07-01

    The EPHIN instrument (Electron Proton Helium INstrument) forms a part of the COSTEP experiment (COmprehensive SupraThermal and Energetic Particle Analyzer) within the CEPAC collaboration on board the SOHO spacecraft (SOlar and Heliospheric Observatory). The EPHIN sensor is a stack of six solid-state detectors surrounded by an anti-coincidence. It measures energy spectra of electrons in the range 250 keV to >8.7 MeV, and of hydrogen and helium isotopes in the range 4 MeV/n to >53 MeV/n. In order to improve the isotopic resolution, the first two detectors have been segmented: five segments form a ring enclosing a central segment. This not only allows the energy losses in the detectors to be corrected for the different path lengths but also allows an estimation of the arrival direction of the particles with respect to the sensor axis. Utilizing an extensive GEANT4 Monte Carlo simulation of the sensor head, we computed the scattering-induced modifications to the input angular distribution and developed an inversion method that takes into account the poor counting statistics by optimizing the corresponding algorithm. This improvement makes it possible for the first time to detect long-lasting anisotropies in the 1-3 MeV electron flux with a single telescope on a three-axis-stabilized spacecraft. We present the method and its application to several events with strong anisotropies. For validation, we compare our data with the WIND-3DP results.

  1. A Method for Counting Moving People in Video Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following this second approach, that is based on the use of SURF features and of an ε-SVR regressor to provide an estimate of this count. The algorithm specifically takes into account problems due to partial occlusions and to perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The provided results confirm that the proposed method yields an improved accuracy, while retaining the robustness of Albiol's algorithm.
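
    In the same spirit as the feature-to-count mapping above, the sketch below detects local interest points in each frame and regresses the people count on simple feature statistics. It is not the paper's implementation: ORB stands in for SURF (which lives in OpenCV's non-free module), sklearn's SVR plays the ε-SVR role, and the frames and labels are random placeholders you would replace with PETS 2009 data; the occlusion and perspective corrections are omitted.

      import cv2
      import numpy as np
      from sklearn.svm import SVR

      orb = cv2.ORB_create(nfeatures=1000)

      def frame_features(gray):
          """Interest-point count and mean response for one grayscale frame."""
          kps = orb.detect(gray, None)
          if not kps:
              return [0.0, 0.0]
          return [float(len(kps)), float(np.mean([k.response for k in kps]))]

      # Placeholder training data: (frame, people count) pairs.
      frames = [np.random.randint(0, 255, (240, 320), np.uint8) for _ in range(20)]
      counts = np.random.randint(0, 30, 20)

      X = np.array([frame_features(f) for f in frames])
      model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, counts)
      print(model.predict(X[:3]))   # estimated counts for the first frames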

  2. Ultra-fast photon counting with a passive quenching silicon photomultiplier in the charge integration regime

    NASA Astrophysics Data System (ADS)

    Zhang, Guoqing; Lina, Liu

    2018-02-01

    An ultra-fast photon counting method is proposed based on the charge integration of the output electrical pulses of passive-quenching silicon photomultipliers (SiPMs). The results of a numerical analysis with actual SiPM parameters show that the maximum photon counting rate of a state-of-the-art passive-quenching SiPM can reach ~THz levels, much higher than that of existing photon counting devices. An experimental procedure is proposed based on this method. This photon counting regime of SiPMs is promising in many fields, such as light power detection over a large dynamic range.
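
    The counting principle is a one-line formula: with pixel gain G, the number of fired pixels (and hence detected photons, below saturation) is N ≈ Q/(G·e), where Q is the integrated output charge. The sketch below applies this to a fabricated current waveform; the gain, window and waveform are invented, and a real measurement would integrate the actual SiPM output.

      import numpy as np

      e_charge, gain = 1.602e-19, 1.0e6     # electron charge (C), SiPM gain
      t = np.linspace(0.0, 1e-6, 10_001)    # 1-microsecond integration window
      dt = t[1] - t[0]
      # Fabricated output current (A); replace with a digitized SiPM trace.
      current = np.abs(np.random.default_rng(2).normal(3e-6, 1e-7, t.size))
      q_total = np.sum(current) * dt        # rectangle-rule charge integral
      print(round(q_total / (gain * e_charge)))   # estimated photon count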

  3. [Count of salivary Streptococci mutans in pregnant women of the metropolitan region of Chile: cross-sectional study].

    PubMed

    Villagrán, E; Linossier, A; Donoso, E

    1999-02-01

    Salivary Streptococci mutans contamination is considered the main microbiological risk factor for the initiation of caries. The aim was to assess the oral health of pregnant women by counting salivary Streptococci mutans. One hundred seventy-four pregnant women, in the first, second and third trimester of pregnancy, aged 27 +/- 5 years old, consulting at a public primary health center, were studied. Puerperal women who had had their delivery two months before were studied as a control group. Salivary samples were obtained and Streptococci mutans colonies were counted using quantitative and semiquantitative methods. There was good concordance between both counting methods. No differences in Streptococci mutans counts were observed among the three groups of pregnant women, but the latter as a group had higher counts than puerperal women. Women with more than 5 caries also had higher counts. Semiquantitative Streptococci mutans counts are easy, rapid and non-invasive and show good concordance with quantitative counts in saliva.

  4. BMPix and PEAK tools: New methods for automated laminae recognition and counting—Application to glacial varves from Antarctic marine sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.

    2010-03-01

    We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
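
    The zero-crossing algorithm is simple enough to state in a few lines: subtract a wide moving average from the gray scale curve and count the positive-going crossings, each marking one bright/dark couplet. The sketch below does this on a synthetic curve; the window length and the test signal are arbitrary stand-ins, not the PEAK tool's defaults.

      import numpy as np

      def count_couplets(gray, window=41):
          """Count upward crossings of the curve through its moving average."""
          baseline = np.convolve(gray, np.ones(window) / window, mode="same")
          above = gray > baseline
          return int(np.count_nonzero(~above[:-1] & above[1:]))

      x = np.arange(2000)
      rng = np.random.default_rng(3)
      curve = np.sin(2 * np.pi * x / 50) + 0.1 * rng.normal(size=x.size)
      print(count_couplets(curve))   # ~40 couplets (2000 samples / 50 per cycle)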

  5. Effect of distance-related heterogeneity on population size estimates from point counts

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.

    2009-01-01

    Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, the bias was often 50% or more for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
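
    The core mechanism is easy to reproduce: birds scattered uniformly in a disc of radius w are detected per occasion with half-normal probability g(0)·exp(-d²/2σ²), so distant birds are systematically missed. The sketch below shows the resulting undercount for one arbitrary parameter set; it is a cartoon of the paper's design, not its full nine-method comparison.

      import numpy as np

      rng = np.random.default_rng(7)
      w, sigma, n_birds, occasions = 100.0, 50.0, 200, 4

      d = w * np.sqrt(rng.uniform(size=n_birds))   # uniform over the disc
      p_occ = np.exp(-d**2 / (2 * sigma**2))       # half-normal, g(0) = 1
      detected = rng.uniform(size=(occasions, n_birds)) < p_occ
      ever_seen = int(detected.any(axis=0).sum())
      print(f"true N = {n_birds}, counted = {ever_seen}, "
            f"missed = {1 - ever_seen / n_birds:.0%}")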

  6. Comparison of Dry Medium Culture Plates for Mesophilic Aerobic Bacteria in Milk, Ice Cream, Ham, and Codfish Fillet Products

    PubMed Central

    Park, Junghyun; Kim, Myunghee

    2013-01-01

    This study was performed to compare the performance of the Sanita-Kun dry medium culture plate with those of a traditional culture medium and the Petrifilm dry medium culture plate for the enumeration of mesophilic aerobic bacteria in milk, ice cream, ham, and codfish fillet. Mesophilic aerobic bacteria were comparatively evaluated in milk, ice cream, ham, and codfish fillet using Sanita-Kun aerobic count (SAC), Petrifilm aerobic count (PAC), and traditional plate count agar (PCA) media. According to the results, all methods showed high correlations of 0.989-1.000 and no significant differences were observed for enumerating the mesophilic aerobic bacteria in the tested food products. The SAC method was easier to perform and allowed colonies to be counted more efficiently than the PCA and PAC methods. Therefore, we concluded that the SAC method offers an acceptable alternative to the PCA and PAC methods for counting the mesophilic aerobic bacteria in milk, ice cream, ham, and codfish fillet products. PMID:24551829

  7. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
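
    For the simple and partial variants, the construction is a few lines of linear algebra: the map is cov(X_i, Y_j) = <X_i Y_j> - <X_i><Y_j> over shots, and partial covariance subtracts the component explained by a monitored fluctuation parameter. The sketch below fabricates correlated spectra driven by a shared shot-to-shot intensity, so the ordinary map is dominated by pulse fluctuations while the partial map suppresses them; all numbers are synthetic.

      import numpy as np

      rng = np.random.default_rng(11)
      shots, bins = 5000, 64
      pulse = rng.gamma(2.0, 1.0, size=shots)          # shot-to-shot intensity
      X = rng.poisson(pulse[:, None] * np.linspace(0.5, 2.0, bins))
      Y = rng.poisson(pulse[:, None] * np.linspace(2.0, 0.5, bins))

      cov_map = (X.T @ Y) / shots - np.outer(X.mean(0), Y.mean(0))

      # Partial covariance: remove the part correlated with the pulse parameter.
      cxi = (X * pulse[:, None]).mean(0) - X.mean(0) * pulse.mean()
      cyi = (Y * pulse[:, None]).mean(0) - Y.mean(0) * pulse.mean()
      pcov_map = cov_map - np.outer(cxi, cyi) / pulse.var()

      print(np.abs(cov_map).max(), np.abs(pcov_map).max())  # fluctuations suppressed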

  8. Evaluation of the performance of a point-of-care method for total and differential white blood cell count in clozapine users.

    PubMed

    Bui, H N; Bogers, J P A M; Cohen, D; Njo, T; Herruer, M H

    2016-12-01

    We evaluated the performance of the HemoCue WBC DIFF, a point-of-care device for total and differential white cell counts, primarily to test its suitability for the mandatory white blood cell monitoring in clozapine use. Leukocyte count and 5-part differentiation were performed by the point-of-care device and by the routine laboratory method in venous EDTA-blood samples from 20 clozapine users, 20 neutropenic patients, and 20 healthy volunteers. From the volunteers, a capillary sample was also drawn. Intra-assay reproducibility and drop-to-drop variation were tested. The correlation between both methods in venous samples was r > 0.95 for leukocyte, neutrophil, and lymphocyte counts. The correlation between point-of-care (capillary sample) and routine (venous sample) methods for these cells was 0.772, 0.817 and 0.798, respectively. Only for leukocyte and neutrophil counts was the intra-assay reproducibility sufficient. The point-of-care device can be used to screen for leukocyte and neutrophil counts. Because of the relatively high measurement uncertainty and poor correlation with venous samples, we recommend repeating the measurement with a venous sample if cell counts are in the lower reference range. In the case of clozapine therapy, neutropenia can probably be excluded if high neutrophil counts are found, and patients can continue their therapy. © 2016 John Wiley & Sons Ltd.

  9. Survey of predators and sampling method comparison in sweet corn.

    PubMed

    Musser, Fred R; Nyrop, Jan P; Shelton, Anthony M

    2004-02-01

    Natural predation is an important component of integrated pest management that is often overlooked because it is difficult to quantify and perceived to be unreliable. To begin incorporating natural predation into sweet corn, Zea mays L., pest management, a predator survey was conducted and then three sampling methods were compared for their ability to accurately monitor the most abundant predators. A predator survey on sweet corn foliage in New York between 1999 and 2001 identified 13 species. Orius insidiosus (Say), Coleomegilla maculata (De Geer), and Harmonia axyridis (Pallas) were the most numerous predators in all years. To determine the best method for sampling adult and immature stages of these predators, comparisons were made among nondestructive field counts, destructive counts, and yellow sticky cards. Field counts were correlated with destructive counts for all populations, but field counts of small insects were biased. Sticky cards underrepresented immature populations. Yellow sticky cards were more attractive to C. maculata adults than H. axyridis adults, especially before pollen shed, making coccinellid population estimates based on sticky cards unreliable. Field counts were the most precise method for monitoring adult and immature stages of the three major predators. Future research on predicting predation of pests in sweet corn should be based on field counts of predators because these counts are accurate, have no associated supply costs, and can be made quickly.

  10. Estimating consumer familiarity with health terminology: a context-based approach.

    PubMed

    Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz

    2008-01-01

    Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.

  11. A Vision-Based Counting and Recognition System for Flying Insects in Intelligent Agriculture.

    PubMed

    Zhong, Yuanhong; Gao, Junyuan; Lei, Qilun; Zhou, Yao

    2018-05-09

    Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, the classification method and fine counting based on Support Vector Machines (SVM) using global features are designed. Finally, the insect counting and recognition system is implemented on Raspberry PI. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and average classifying accuracy is 90.18% on Raspberry PI. The proposed system is easy-to-use and provides efficient and accurate recognition data, therefore, it can be used for intelligent agriculture applications.

  13. Better Than Counting: Density Profiles from Force Sampling

    NASA Astrophysics Data System (ADS)

    de las Heras, Daniel; Schmidt, Matthias

    2018-05-01

    Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting events of particle occurrence at (histogram-resolved) points in space. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, therefore reducing the computation time.
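
    A one-dimensional sketch of the force-sampling idea, assuming the equilibrium sum rule kT dρ/dz = F(z), where F(z) is the average one-body force density: histogram forces instead of positions, then integrate spatially. Function and variable names are illustrative.

    ```python
    # Force sampling in 1D: histogram the force density, then integrate the
    # sum rule kT * drho/dz = F(z) to obtain the density profile.
    import numpy as np

    def density_from_forces(z, f, n_snapshots, bins=100, kT=1.0, rho0=0.0):
        """z, f: particle positions and total forces pooled over all snapshots."""
        f_sum, edges = np.histogram(z, bins=bins, weights=f)
        dz = np.diff(edges)
        force_density = f_sum / (dz * n_snapshots)       # average F(z) per bin
        rho = rho0 + np.cumsum(force_density * dz / kT)  # simple spatial integration
        return 0.5 * (edges[:-1] + edges[1:]), rho       # bin centers, profile
    ```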

  14. A Scintillation Counter System Design To Detect Antiproton Annihilation using the High Performance Antiproton Trap(HiPAT)

    NASA Technical Reports Server (NTRS)

    Martin, James J.; Lewis, Raymond A.; Stanojev, Boris

    2003-01-01

    The High Performance Antiproton Trap (HiPAT), a system designed to hold up to 10(exp 12) charged particles with a storage half-life of approximately 18 days, is a tool to support basic antimatter research. NASA's interest stems from the energy density represented by the annihilation of matter with antimatter, 10(exp 2) MJ/g. The HiPAT is configured with a Penning-Malmberg style electromagnetic confinement region with field strengths up to 4 Tesla and voltages up to 20 kV. To date, a series of normal-matter experiments using positive and negative ions has been performed to evaluate the design's performance prior to operations with antiprotons. The primary methods of detecting and monitoring stored normal-matter ions and antiprotons within the trap include a destructive extraction technique that makes use of a microchannel plate (MCP) device and a non-destructive radio frequency scheme tuned to key particle frequencies. However, an independent means of detecting stored antiprotons is possible by making use of the actual annihilation products as a unique indicator. The immediate yield of the annihilation event includes photons and pi mesons, emanating spherically from the point of annihilation. To "count" these events, a hardware system of scintillators, discriminators, coincidence meters, and multichannel scalers (MCS) has been configured to surround much of the HiPAT. Signal coincidence with voting logic is an essential part of this system, necessary to weed out single cosmic ray events from the multi-particle annihilation shower. This system can be operated in a variety of modes accommodating various conditions. The first is a low-speed sampling interval that monitors the background loss or "evaporation" rate of antiprotons held in the trap during long storage periods, providing an independent method of validating particle lifetimes. The second is a high-speed sample rate accumulating information on a microsecond time-scale, useful when trapped antiparticles are extracted against a target, providing an indication of quantity. This paper details the layout of this system, the setup of the hardware components around HiPAT, and applicable checkouts using normal-matter radioactive sources.
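
    The voting-logic idea lends itself to a short sketch: require hits on several distinct detectors inside a narrow time window before counting an annihilation event. The window width and vote threshold below are placeholder values, not the HiPAT settings.

    ```python
    # N-of-M voting coincidence on discriminator timestamps: an annihilation
    # shower fires several paddles nearly simultaneously; a lone cosmic ray
    # usually does not, so single-detector windows are rejected.
    import numpy as np

    def coincident_events(times_ns, det_ids, window_ns=50.0, votes=3):
        """Count windows in which at least `votes` distinct detectors fire."""
        order = np.argsort(times_ns)
        t = np.asarray(times_ns, dtype=float)[order]
        d = np.asarray(det_ids)[order]
        events, i = 0, 0
        while i < len(t):
            sel = (t >= t[i]) & (t < t[i] + window_ns)
            if len(set(d[sel])) >= votes:   # voting logic: distinct detectors
                events += 1
                i += int(sel.sum())         # consume the whole window
            else:
                i += 1
        return events
    ```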

  15. Learning to Count: School Finance Formula Count Methods and Attendance-Related Student Outcomes

    ERIC Educational Resources Information Center

    Ely, Todd L.; Fermanich, Mark L.

    2013-01-01

    School systems are under increasing pressure to improve student performance. Several states have recently explored adopting student count methods for school funding purposes that incentivize school attendance and continuous enrollment by adjusting funding for changes in enrollment or attendance over the course of the school year. However, no…

  16. COMPARISON OF LABORATORY SUBSAMPLING METHODS OF BENTHIC SAMPLES FROM BOATABLE RIVERS USING ACTUAL AND SIMULATED COUNT DATA

    EPA Science Inventory

    We examined the effects of using a fixed-count subsample of 300 organisms on metric values using macroinvertebrate samples collected with 3 field sampling methods at 12 boatable river sites. For each sample, we used metrics to compare an initial fixed-count subsample of approxima...

  17. Evaluation of seven aquatic sampling methods for amphibians and other aquatic fauna

    USGS Publications Warehouse

    Gunzburger, M.S.

    2007-01-01

    To design effective and efficient research and monitoring programs, researchers must have a thorough understanding of the capabilities and limitations of their sampling methods. Few direct comparative studies exist for aquatic sampling methods for amphibians. The objective of this study was to simultaneously employ seven aquatic sampling methods in 10 wetlands to compare amphibian species richness and number of individuals detected with each method. Four sampling methods allowed counts of individuals (metal dipnet, D-frame dipnet, box trap, crayfish trap), whereas the other three methods allowed detection of species (visual encounter, aural, and froglogger). Amphibian species richness was greatest with froglogger, box trap, and aural samples. For anuran species, the sampling methods by which each life stage was detected were related to relative length of larval and breeding periods and tadpole size. Detection probability of amphibians varied across sampling methods. Box trap sampling resulted in the most precise amphibian count, but the precision of all four count-based methods was low (coefficient of variation > 145 for all methods). The efficacy of the four count sampling methods at sampling fish and aquatic invertebrates was also analyzed because these predatory taxa are known to be important predictors of amphibian habitat distribution. Species richness and counts were similar for fish with the four methods, whereas invertebrate species richness and counts were greatest in box traps. An effective wetland amphibian monitoring program in the southeastern United States should include multiple sampling methods to obtain the most accurate assessment of species community composition at each site. The combined use of frogloggers, crayfish traps, and dipnets may be the most efficient and effective amphibian monitoring protocol. © 2007 Brill Academic Publishers.

  18. Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order

    DOE PAGES

    Favalli, Andrea; Croft, Stephen; Santi, Peter

    2015-06-15

    Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time correlation analysis (making use of a coincidence gate) methods of multiplicity shift register logic and Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations – the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This study represents the first necessary step towards determining whether higher order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
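
    As an illustration of the multiplet vector the abstract refers to, the following sketch estimates reduced factorial moments (singlets, doublets, triplets, ...) from counts recorded in repeated coincidence gates, Feynman-sampling style. The normalization m_k = <n(n-1)...(n-k+1)>/k! per gate is one common convention, assumed here; it is not necessarily the paper's.

    ```python
    # Reduced factorial moments of per-gate counts, up to a chosen order.
    import numpy as np
    from math import factorial

    def reduced_factorial_moments(gate_counts, order=5):
        """Estimate [singlets, doublets, triplets, ...] from per-gate counts."""
        n = np.asarray(gate_counts, dtype=float)
        moments = []
        for k in range(1, order + 1):
            falling = np.ones_like(n)
            for j in range(k):                  # falling factorial n(n-1)...(n-k+1)
                falling *= (n - j)
            moments.append(falling.mean() / factorial(k))
        return moments
    ```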

  19. Experimental Study for Automatic Colony Counting System Based Onimage Processing

    NASA Astrophysics Data System (ADS)

    Fang, Junlong; Li, Wenzhe; Wang, Guoxin

    Colony counting in many colony experiments is currently performed manually, a method that is difficult to execute quickly and accurately. A new automatic colony counting system was developed. Making use of image-processing technology, a study was made of the feasibility of objectively distinguishing white bacterial colonies from clear plates according to RGB color theory. An optimal chromatic value was obtained from a large number of experiments on the distribution of chromatic values. It has been proved that the method greatly improves the accuracy and efficiency of colony counting, and that the counting result is not affected by the inoculation, shape, or size of the colonies. The results indicate that automatic detection of colony quantity using image-processing technology can be an effective approach.
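
    A hedged sketch of the colony-counting approach described above: select whitish pixels with an RGB chromatic criterion, then count connected blobs. The threshold and minimum blob size are placeholders, not the paper's optimal chromatic value.

    ```python
    # RGB-threshold colony counting via connected-component labeling.
    import numpy as np
    from scipy import ndimage

    def count_colonies(rgb_image, threshold=(180, 180, 180), min_pixels=20):
        """rgb_image: HxWx3 uint8 array; returns the number of colonies."""
        mask = np.all(rgb_image > np.array(threshold), axis=-1)  # whitish pixels
        labels, n = ndimage.label(mask)                          # connected blobs
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return int(np.sum(sizes >= min_pixels))  # ignore sub-colony specks
    ```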

  20. Recent trends in counts of migrant hawks from northeastern North America

    USGS Publications Warehouse

    Titus, K.; Fuller, M.R.

    1990-01-01

    Using simple regression, pooled-sites route-regression, and nonparametric rank-trend analyses, we evaluated trends in counts of hawks migrating past 6 eastern hawk lookouts from 1972 to 1987. The indexing variable was the total count for a season. Bald eagle (Haliaeetus leucocephalus), peregrine falcon (Falco peregrinus), merlin (F. columbarius), osprey (Pandion haliaetus), and Cooper's hawk (Accipiter cooperii) counts increased using route-regression and nonparametric methods (P < 0.10). We found no consistent trends (P > 0.10) in counts of sharp-shinned hawks (A. striatus), northern goshawks (A. gentilis), red-shouldered hawks (Buteo lineatus), red-tailed hawks (B. jamaicensis), rough-legged hawks (B. lagopus), and American kestrels (F. sparverius). Broad-winged hawk (B. platypterus) counts declined (P < 0.05) based on the route-regression method. Empirical comparisons of our results with those for well-studied species such as the peregrine falcon, bald eagle, and osprey indicated agreement with nesting surveys. We suggest that counts of migrant hawks are a useful and economical method for detecting long-term trends in species across regions, particularly for species that otherwise cannot be easily surveyed.

  1. Study of the Residual Background Events in Ground Data from the ASTRO-H SXS Microcalorimeter

    NASA Technical Reports Server (NTRS)

    Kilbourne, Caroline A.; Boyce, Kevin R.; Chiao, M. P.; Eckart, M. E.; Kelley, R. L.; Leutenegger, M. A.; Porter, F. S.; Watanabe, T.; Ishisaki, Y.; Yamada, S.; et al.

    2015-01-01

    The measured instrumental background of the XRS calorimeter spectrometer of Suzaku had several sources, including primary cosmic rays and secondary particles interacting with the pixels and with the silicon structure of the array. Prior to the launch of Suzaku, several data sets were taken without x-ray illumination to study the characteristics and timing of background signals produced in the array and anti-coincidence detector. Even though the source of the background in the laboratory was different from that in low-earth orbit (muons and environmental gamma-rays on the ground versus Galactic cosmic-ray (GCR) protons and alpha particles in space), the study of correlations and properties of populations of rare events was useful for establishing the preliminary screening parameters needed for selection of good science data. Sea-level muons are singly charged minimum-ionizing particles, like the GCR protons, and thus were good probes of the effectiveness of screening via the signals from the anti-coincidence detector. Here we present the first analysis of the on-ground background of the SXS calorimeter of Astro-H. On XRS, the background prior to screening was completely dominated by coincident events on many pixels resulting from the temperature pulse arising from each large energy deposition (greater than 200 keV) into the silicon frame around the array. The improved heat-sinking of the SXS array compared with XRS eliminated these thermal disturbances, greatly reducing the measured count rate in the absence of illumination. The removal of these events has made it easier to study the nature of the residual background and to look for additional event populations. We compare the SXS residual background to that measured in equivalent ground data for XRS and discuss these preliminary results.

  2. Evaluation of Pulse Counting for the Mars Organic Mass Analyzer (MOMA) Ion Trap Detection Scheme

    NASA Technical Reports Server (NTRS)

    Van Amerom, Friso H.; Short, Tim; Brinckerhoff, William; Mahaffy, Paul; Kleyner, Igor; Cotter, Robert J.; Pinnick, Veronica; Hoffman, Lars; Danell, Ryan M.; Lyness, Eric I.

    2011-01-01

    The Mars Organic Mass Analyzer is being developed at Goddard Space Flight Center to identify organics and possible biological compounds on Mars. In the process of characterizing mass spectrometer size, weight, and power consumption, the use of pulse counting was considered for ion detection. Pulse counting has advantages over analog-mode amplification of the electron multiplier signal. Some advantages are reduced size of electronic components, low power consumption, ability to remotely characterize detector performance, and avoidance of analog circuit noise. The use of pulse counting as a detection method with ion trap instruments is relatively rare. However, with the recent development of high performance electrical components, this detection method is quite suitable and can demonstrate significant advantages over analog methods. Methods: A prototype quadrupole ion trap mass spectrometer with an internal electron ionization source was used as a test setup to develop and evaluate the pulse-counting method. The anode signal from the electron multiplier was preamplified. The amplified signal was fed into a fast comparator for pulse-level discrimination. The output of the comparator was fed directly into a Xilinx FPGA development board. Verilog HDL software was written to bin the counts at user-selectable intervals. This system was able to count pulses at rates in the GHz range. The stored ion count number per bin was transferred to custom ion trap control software. Pulse-counting mass spectra were compared with mass spectra obtained using the standard analog-mode ion detection. Preliminary Data: Preliminary mass spectra have been obtained for both analog mode and pulse-counting mode under several sets of instrument operating conditions. Comparison of the spectra revealed better peak shapes for pulse-counting mode. Noise levels are as good as, or better than, analog-mode detection noise levels. To artificially force ion pile-up conditions, the ion trap was overfilled and ions were ejected at very high scan rates. Pile-up of ions was not significant for the ion trap under investigation even though the ions are ejected in so-called 'ion micro packets'. It was found that pulse-counting mode had higher dynamic range than analog mode, and that the first amplification stage in analog mode can distort mass peaks. The inherent speed of the pulse-counting method also proved to be beneficial to ion trap operation and ion ejection characterization. Very high scan rates were possible with pulse counting since the digital circuitry response time is so much smaller than with the analog method. Careful investigation of the pulse-counting data also allowed observation of the applied resonant ejection frequency during mass analysis. Ejection of ion micro packets could be clearly observed in the binned data. A second oscillation frequency, much lower than the secular frequency, was also observed. Such an effect was earlier attributed to the oscillation of the total plasma cloud in the ion trap. While the components used to implement pulse counting are quite advanced, due to their prevalence in consumer electronics, the cost of this detection system is no more than that of an analog-mode system. Total pulse-counting detection system electronics cost is under $250.
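
    The binning step performed in the FPGA reduces to a simple histogram of discriminator pulse timestamps into user-selectable intervals. A Python sketch of the equivalent computation (bin width and span are example values):

    ```python
    # Bin discriminator pulse timestamps into counts per fixed interval.
    import numpy as np

    def bin_pulses(pulse_times_us, bin_width_us=1.0, span_us=1000.0):
        """Accumulate discriminator pulse timestamps into fixed time bins."""
        edges = np.arange(0.0, span_us + bin_width_us, bin_width_us)
        counts, _ = np.histogram(pulse_times_us, bins=edges)
        return edges[:-1], counts   # bin start times, ion counts per bin
    ```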

  3. Reduction of CMOS Image Sensor Read Noise to Enable Photon Counting.

    PubMed

    Guidash, Michael; Ma, Jiaju; Vogelsang, Thomas; Endsley, Jay

    2016-04-09

    Recent activity in photon counting CMOS image sensors (CIS) has been directed to reduction of read noise. Many approaches and methods have been reported. This work is focused on providing sub 1 e(-) read noise by design and operation of the binary and small signal readout of photon counting CIS. Compensation of transfer gate feed-through was used to provide substantially reduced CDS time and source follower (SF) bandwidth. SF read noise was reduced by a factor of 3 with this method. This method can be applied broadly to CIS devices to reduce the read noise for small signals to enable use as a photon counting sensor.

  4. Use of surveillance data on HIV diagnoses with HIV-related symptoms to estimate the number of people living with undiagnosed HIV in need of antiretroviral therapy.

    PubMed

    Lodwick, Rebecca K; Nakagawa, Fumiyo; van Sighem, Ard; Sabin, Caroline A; Phillips, Andrew N

    2015-01-01

    It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms, which lead to presentation for care and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop is estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150-199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person-years lived by people with undiagnosed HIV with CD4 count 150-199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29-100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART.
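
    The worked example in the abstract can be reproduced in a few lines. The Poisson resampling below is one plausible reading of the "simple simulation method" for the confidence interval, not necessarily the authors' exact procedure.

    ```python
    # Undiagnosed person-years = symptomatic diagnoses / CD4-specific rate,
    # with a simulated interval (Poisson resampling of the observed count).
    import numpy as np

    rng = np.random.default_rng(0)
    diagnoses = 13    # symptomatic HIV diagnoses in one year, CD4 150-199
    rate = 0.216      # HIV-related symptoms per person-year in this stratum

    estimate = diagnoses / rate                    # about 60 person-years
    sims = rng.poisson(diagnoses, 100_000) / rate  # resample the count
    low, high = np.percentile(sims, [2.5, 97.5])
    print(f"{estimate:.0f} (95% CI {low:.0f}-{high:.0f})")
    ```

    Running this reproduces the point estimate of 60 person-years and an interval close to the quoted 29-100.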

  5. aCORN Beta Spectrometer and Electrostatic Mirror

    NASA Astrophysics Data System (ADS)

    Hassan, Md; aCORN Collaboration

    2013-10-01

    aCORN uses a high efficiency backscatter suppressed beta spectrometer to measure the electron-antineutrino correlation in neutron beta decay. We measure the correlation by counting protons and beta electrons in coincidence with precisely determined electron energy. There are 19 photomultiplier tubes arranged in a hexagonal array coupled to a single phosphor-doped polystyrene scintillator. The magnetic field is shaped so that electrons that backscatter without depositing their full energy strike a tulip-shaped array of scintillator paddles, and these events are vetoed. The detailed construction, performance and calibration of this beta spectrometer will be presented. I will also present the simulation, construction, and features of our novel electrostatic mirror. This work was supported by the National Science Foundation and the NIST Center for Neutron Research.

  6. Highly charged ion based time of flight emission microscope

    DOEpatents

    Barnes, Alan V.; Schenkel, Thomas; Hamza, Alex V.; Schneider, Dieter H.; Doyle, Barney

    2001-01-01

    A highly charged ion based time-of-flight emission microscope has been designed, which improves the surface sensitivity of static SIMS measurements because of the higher ionization probability of highly charged ions. Slow, highly charged ions are produced in an electron beam ion trap and are directed to the sample surface. The sputtered secondary ions and electrons pass through a specially designed objective lens to a microchannel plate detector. This new instrument permits high surface sensitivity (10^10 atoms/cm^2), high spatial resolution (100 nm), and chemical structural information due to the high molecular ion yields. The high secondary ion yield permits coincidence counting, which can be used to enhance determination of chemical and topological structure and to correlate specific molecular species.

  7. Improvements in Boron Plate Coating Technology for Higher Efficiency Neutron Detection and Coincidence Counting Error Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menlove, Howard Olsen; Henzlova, Daniela

    This informal report presents the measurement data and information to document the performance of the advanced Precision Data Technology, Inc. (PDT) sealed cell boron-10 plate neutron detector that makes use of the advanced coating materials and procedures. In 2015, PDT changed the boron coating materials and application procedures to significantly increase the efficiency of their basic corrugated plate detector performance. A prototype sealed cell unit was supplied to LANL for testing and comparison with prior detector cells. Also, LANL had reference detector slabs from the original neutron collar (UNCL) and the new Antech UNCL with the removable 3He tubes. The comparison data is presented in this report.

  8. Entangled-Pair Transmission Improvement Using Distributed Phase-Sensitive Amplification

    NASA Astrophysics Data System (ADS)

    Agarwal, Anjali; Dailey, James M.; Toliver, Paul; Peters, Nicholas A.

    2014-10-01

    We demonstrate the transmission of time-bin entangled photon pairs through a distributed optical phase-sensitive amplifier (OPSA). We utilize four-wave mixing at telecom wavelengths in a 5-km dispersion-shifted fiber OPSA operating in the low-gain limit. Measurements of two-photon interference curves show no statistically significant degradation in the fringe visibility at the output of the OPSA. In addition, coincidence counting rates are higher than with direct passive transmission because of constructive interference between amplitudes of input photon pairs and those generated in the OPSA. Our results suggest that application of distributed phase-sensitive amplification to transmission of entangled photon pairs could be highly beneficial towards advancing the rate and scalability of future quantum communications systems.

  9. Essential Thrombocythaemia and Peripheral Gangrene

    PubMed Central

    Preston, F. E.; Emmanuel, I. G.; Winfield, D. A.; Malia, R. G.

    1974-01-01

    Six patients are described in whom gangrene of one or more toes occurred as the presenting feature of essential thrombocythaemia. Spontaneous platelet aggregation was observed in platelet-rich plasma from four patients and platelet aggregation after the addition of adenosine diphosphate and collagen was highly abnormal in samples from all six. All of the patients described dramatic relief of pain within six hours of ingestion of aspirin and this coincided with disappearance of the spontaneous platelet aggregation and collagen-induced platelet aggregation. Treatment with phosphorus-32 corrected the platelet count and there were no further recurrences of peripheral vascular disease. Platelet function tests performed at the time all gave normal results. It is concluded that essential thrombocythaemia is an important and treatable cause of peripheral vascular disease. PMID:4472103

  10. Count data, detection probabilities, and the demography, dynamics, distribution, and decline of amphibians.

    PubMed

    Schmidt, Benedikt R

    2003-08-01

    The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.

  11. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    PubMed

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count technique on the estimated microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. [Reassessment of a combination of cerebrospinal fluid scintigraphy and nasal pledget counts in patients with suspected rhinorrhea].

    PubMed

    Kosuda, S; Arai, S; Hohshito, Y; Tokumitsu, H; Kusano, S; Ishihara, S; Shima, K

    1998-07-01

    A combination study of cerebrospinal fluid scintigraphy and nasal pledget counts was performed using 37 MBq of 111In-DTPA in 12 patients with suspected rhinorrhea. A pledget was inserted into each nasal cavity and left in place for 6 hours, with the patient prone for at least 30 minutes. A total of 18 studies was performed, and the nasal pledget counting method successfully diagnosed all cases of CSF rhinorrhea. Diagnosis was possible when pledget counts were greater than 1 kcpm. In patients with persistent, intermittent, and occult/no nasal discharge, rhinorrhea was found in 100% (5/5), 60% (3/5), and 25% (2/8) of cases, respectively. Only two cases exhibited positive scintigraphy. MRI or CT cisternography should be performed first in patients with persistent discharge, but in patients with intermittent or occult discharge the pledget counting method might take priority over other diagnostic modalities. In conclusion, the nasal pledget counting method is a simple and useful tool for detecting rhinorrhea.

  13. Search for Correlated Fluctuations in the Beta+ Decay of Na-22

    NASA Astrophysics Data System (ADS)

    Silverman, M. P.; Strange, W.

    2008-10-01

    Claims for a "cosmogenic" force that correlates otherwise independent stochastic events have been made for at least 10 years, based largely on visual inspection of time series of histograms whose shapes were interpreted as suggestive of recurrent patterns with semi-diurnal, diurnal, and monthly periods. Building on our earlier work to test randomness of different nuclear decay processes, we have searched for correlations in the time series of coincident positron-electron annihilations deriving from beta+ decay of Na-22. Disintegrations were counted within a narrow time window over a period of 7 days, leading to a time series of more than 1 million events. Statistical tests were performed on the raw time series, its correlation function, and its Fourier transform to search for cyclic correlations indicative of deviations from Poisson statistics that would violate quantum mechanics. The time series was then partitioned into a sequence of 167 "bags", each of 8192 events. A histogram was made of the events of each bag, where contiguous frequency classes differed by a single count. The chronological sequence of histograms was then tested for correlations within classes. In all cases the results of the tests were in accord with statistical control, giving no evidence of correlated fluctuations.

  14. Microchannel plate special nuclear materials sensor

    NASA Astrophysics Data System (ADS)

    Feller, W. B.; White, P. L.; White, P. B.; Siegmund, O. H. W.; Martin, A. P.; Vallerga, J. V.

    2011-10-01

    Nova Scientific Inc. is developing, for the Domestic Nuclear Detection Office (DNDO SBIR #HSHQDC-08-C-00190), a solid-state, high-efficiency neutron detection alternative to 3He gas tubes, using neutron-sensitive microchannel plates (MCPs) containing 10B and/or Gd. This work directly supports DNDO development of technologies designed to detect and interdict nuclear weapons or illicit nuclear materials. Neutron-sensitized MCPs have been shown, theoretically and more recently experimentally, to be capable of thermal neutron detection efficiencies equivalent to 3He gas tubes. Although solid-state neutron detectors typically have an intrinsic gamma sensitivity orders of magnitude higher than that of 3He gas detectors, we dramatically reduce gamma sensitivity by combining a novel electronic coincidence rejection scheme, employing a separate but enveloping gamma scintillator. This has already resulted in a measured gamma rejection ratio equal to a small 3He tube, without in principle sacrificing neutron detection efficiency. Ongoing improvements to the MCP performance as well as the coincidence counting geometry will be described. Repeated testing and validation with a 252Cf source has been underway throughout the Phase II SBIR program, with ongoing comparisons to a small commercial 3He gas tube. Finally, further component improvements and efforts toward integration maturity are underway, with the goal of establishing functional prototypes for SNM field testing.

  15. A high-efficiency HPGe coincidence system for environmental analysis.

    PubMed

    Britton, R; Davies, A V; Burnett, J L; Jackson, M J

    2015-08-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT) is supported by a network of certified laboratories which must meet certain sensitivity requirements for CTBT relevant radionuclides. At the UK CTBT Radionuclide Laboratory (GBL15), a high-efficiency, dual-detector gamma spectroscopy system has been developed to improve the sensitivity of measurements for treaty compliance, greatly reducing the time required for each sample. Utilising list-mode acquisition, each sample can be counted once, and processed multiple times to further improve sensitivity. For the 8 key radionuclides considered, Minimum Detectable Activities (MDAs) were improved by up to 37% in standard mode (when compared to a typical CTBT detector system), with the acquisition time required to achieve the CTBT sensitivity requirements reduced from 6 days to only 3. When utilising the system in coincidence mode, the MDA for 60Co in a high-activity source was improved by a factor of 34 when compared to a standard CTBT detector, and a factor of 17 when compared to the dual-detector system operating in standard mode. These MDA improvements will allow the accurate and timely quantification of radionuclides that decay via both singular and cascade γ emission, greatly enhancing the effectiveness of CTBT laboratories. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  16. Spatial distribution of diesel transit bus emissions and urban populations: implications of coincidence and scale on exposure.

    PubMed

    Gouge, Brian; Ries, Francis J; Dowlatabadi, Hadi

    2010-09-15

    Macroscale emissions modeling approaches have been widely applied in impact assessments of mobile source emissions. However, these approaches poorly characterize the spatial distribution of emissions and have been shown to underestimate emissions of some pollutants. To quantify the implications of these limitations on exposure assessments, CO, NO(X), and HC emissions from diesel transit buses were estimated at 50 m intervals along a bus rapid transit route using a microscale emissions modeling approach. The impacted population around the route was estimated using census, pedestrian count and transit ridership data. Emissions exhibited significant spatial variability. In intervals near major intersections and bus stops, emissions were 1.6-3.0 times higher than average. The coincidence of these emission hot spots and peaks in pedestrian populations resulted in a 20-40% increase in exposure compared to estimates that assumed homogeneous spatial distributions of emissions and/or populations along the route. An additional 19-30% increase in exposure resulted from the underestimate of CO and NO(X) emissions by macroscale modeling approaches. The results of this study indicate that macroscale modeling approaches underestimate exposure due to poor characterization of the influence of vehicle activity on the spatial distribution of emissions and total emissions.

  17. The Salinas Airshower Learning And Discovery Project (SALAD)

    NASA Astrophysics Data System (ADS)

    Hernandez, Victor; Niduaza, Rommel; Ruiz Castruita, Daniel; Knox, Adrian; Ramos, Daniel; Fan, Sewan; Fatuzzo, Laura

    2015-04-01

    The SALAD project partners community college and high school STEM students in order to develop and investigate cosmic ray detector telescopes and the physical concepts, using a new light sensor technology based on silicon photomultiplier (SiPM) detectors. Replacing the conventional photomultiplier with the SiPM offers notable advantages in cost and facilitates more in-depth, hands-on laboratory learning activities. The students in the SALAD project design, construct, and extensively evaluate the SiPM detector modules. These SiPM modules can be completed in a short time utilizing cost-effective components. We describe our research to implement SiPMs as read-out light detectors for plastic scintillators in a cosmic ray detector telescope for use in high schools. In particular, we describe our work in the design, evaluation, and assembly of (1) a fast preamplifier, (2) a simple coincidence circuit using fast comparators to discriminate the SiPM noise signal pulses, and (3) a monovibrator circuit to shape the singles plus the AND logic pulses for subsequent processing. To store the singles and coincidence counts data, an Arduino micro-controller running program sketches can be used. Results and findings from our work are described and presented. US Department of Education Title V Grant Award PO31S090007

  18. Reviving common standards in point-count surveys for broad inference across studies

    USGS Publications Warehouse

    Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.

    2014-01-01

    We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.

  19. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components, before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adapted the tools to rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer, and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
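
    A minimal sketch of the zero-crossing algorithm as described: count passages of the gray-scale curve through a wide moving average, with one bright and one dark half-passage per annual couplet. The window width is an assumed example parameter.

    ```python
    # Zero-crossing layer counting on a 1D gray-scale curve.
    import numpy as np

    def zero_crossing_count(gray, window=101):
        """Count years as pairs of passages through a wide moving average."""
        kernel = np.ones(window) / window
        baseline = np.convolve(gray, kernel, mode="same")  # wide moving average
        sign = np.sign(gray - baseline)
        crossings = np.sum(sign[1:] * sign[:-1] < 0)       # all half-passages
        return crossings / 2.0   # one bright + one dark passage per year
    ```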

  20. Automated vehicle counting using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand. However, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane, and generate an accurate count of vehicle movements. The method utilized in this paper would use background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids fatigue issues that are encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
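
    A rough sketch of the procedure described above, assuming OpenCV's MOG2 background subtractor as a stand-in for the paper's background-subtraction step and a simple occupied-to-empty transition rule in a lane region of interest; the paper's machine learning classifier is not reproduced here.

    ```python
    # Background subtraction flags lane occupancy; each occupied->empty
    # transition registers one vehicle.
    import cv2

    def count_vehicles(video_path, roi, occupancy_threshold=0.3):
        """roi = (x, y, w, h): lane region of interest in the frame."""
        x, y, w, h = roi
        subtractor = cv2.createBackgroundSubtractorMOG2()
        count, occupied = 0, False
        cap = cv2.VideoCapture(video_path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = subtractor.apply(frame)[y:y + h, x:x + w]
            frac = (fg > 127).mean()          # foreground fraction in the lane
            if frac > occupancy_threshold and not occupied:
                occupied = True               # a vehicle has entered the ROI
            elif frac <= occupancy_threshold and occupied:
                occupied = False
                count += 1                    # it has left: count one vehicle
        cap.release()
        return count
    ```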

  1. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting

    PubMed Central

    2012-01-01

    Background: The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use and European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. Methods: As the cell count is a potency test, we checked accuracy, precision, and linearity, according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked precision and linearity using only the Fast Read 102®. The data were statistically analyzed by average, standard deviation, and coefficient of variation percentages, inter- and intra-operator. Results: All the tests performed met the established acceptance criteria of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best range of dilution, to obtain a slope line value very similar to 1, was between 1:8 and 1:128. Conclusions: Our data demonstrated that the Fast Read 102® count method is accurate, precise and ensures the linearity of the results obtained in a range of cell dilution. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that in a Cell Factory. In a good manufacturing practice setting, the disposable cell counting devices allow a single use of the count chamber, which can then be thrown away, thus avoiding the waste disposal of vital dye (e.g. Trypan Blue) or lysing solution (e.g. Tuerk solution). PMID:22650233

  2. Topological methods for the comparison of structures using LDR-brachytherapy of the prostate as an example.

    PubMed

    Schiefer, H; von Toggenburg, F; Seelentag, W W; Plasswilm, L; Ries, G; Schmid, H-P; Leippold, T; Krusche, B; Roth, J; Engeler, D

    2009-08-21

    The dose coverage of low dose rate (LDR)-brachytherapy for localized prostate cancer is monitored 4-6 weeks after intervention by contouring the prostate on computed tomography and/or magnetic resonance imaging sets. Dose parameters for the prostate (V100, D90 and D80) provide information on the treatment quality. These depend strongly on the delineation of the prostate contours. We therefore systematically investigated the contouring process for 21 patients with five examiners. The prostate structures were compared with one another using topological procedures based on Boolean algebra. The coincidence number C(V) measures the agreement within a set of structures. The mutual coincidence C(i, j) measures the agreement between two structures i and j, and the mean coincidence C(i) compares a selected structure i with the remaining structures in a set. All coincidence parameters have a value of 1 for complete coincidence of contouring and 0 for complete absence of coincidence. The five patients with the lowest C(V) values were discussed, and rules for contouring the prostate have been formulated. The contouring and assessment were repeated after 3 months for the same five patients. All coincidence parameters improved after instruction. This shows objectively that training resulted in more consistent contouring across examiners.

  3. On precise phase difference measurement approach using border stability of detection resolution.

    PubMed

    Bai, Lina; Su, Xin; Zhou, Wei; Ou, Xiaojuan

    2015-01-01

    For precise phase difference measurement, this paper develops an improved dual phase coincidence detection method. The measurement resolution of digital phase coincidence detection circuits is always limited, for example to the nanosecond level. This paper reveals a new way to improve phase difference measurement precision by using the border stability of the circuit detection fuzzy areas. When a common oscillator signal is used to detect phase coincidence with the two comparison signals, two detection fuzzy areas surround the strict phase coincidence because of the finite detection resolution. The border stability of the fuzzy areas and the fluctuation difference between the two fuzzy areas can be even finer than the picosecond level. It is shown that the system resolution obtained depends only on the stability of the circuit measurement resolution, which is much better than the measurement device resolution itself.

  4. Search for Transient Gravitational Waves in Coincidence with Short-Duration Radio Transients During 2007-2013

    NASA Technical Reports Server (NTRS)

    Abbott, B. P.; Hughey, Brennan; Zanolin, Michele; Szczepanczyk, Marek; Gill, Kiranjyot; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; et al.

    2016-01-01

    We present an archival search for transient gravitational-wave bursts in coincidence with 27 single-pulse triggers from Green Bank Telescope pulsar surveys, using the LIGO (Laser Interferometer Gravitational-Wave Observatory), Virgo, and GEO interferometer network. We also discuss a check for gravitational-wave signals in coincidence with Parkes fast radio bursts using similar methods. Data analyzed in these searches were collected between 2007 and 2013. Possible sources of emission of both short-duration radio signals and transient gravitational-wave emission include star quakes on neutron stars, binary coalescence of neutron stars, and cosmic string cusps. While no evidence for gravitational-wave emission in coincidence with these radio transients was found, the current analysis serves as a prototype for similar future searches using more sensitive second-generation interferometers.

  5. A rapid and universal bacteria-counting approach using CdSe/ZnS/SiO2 composite nanoparticles as fluorescence probe.

    PubMed

    Fu, Xin; Huang, Kelong; Liu, Suqin

    2010-02-01

    In this paper, a rapid, simple, and sensitive method was described for detection of the total bacterial count using SiO(2)-coated CdSe/ZnS quantum dots (QDs) as a fluorescence marker covalently coupled to bacteria using glutaraldehyde as the crosslinker. Highly luminescent CdSe/ZnS QDs were prepared by applying cadmium oxide and zinc stearate as precursors instead of pyrophoric organometallic precursors. A reverse-microemulsion technique was used to synthesize CdSe/ZnS/SiO(2) composite nanoparticles with a SiO(2) surface coating. Our results showed that CdSe/ZnS/SiO(2) composite nanoparticles prepared with this method possessed highly luminescent, biologically functional, and monodispersive characteristics, and could successfully be covalently conjugated with the bacteria. As a demonstration, it was found that the method had higher sensitivity and could detect bacteria at 3 x 10(2) CFU/mL, a lower limit than the conventional plate counting and organic dye-based methods. A linear relationship between the fluorescence peak intensity (Y) and the total bacterial count (X) was established over the range 3 x 10(2)-10(7) CFU/mL using the equation Y = 374.82X - 938.27 (R = 0.99574). The results of the determination of the total bacterial count in seven real samples were identical to those of the conventional plate count method, and the standard deviation was satisfactory.
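
    Inverting the reported calibration line gives an estimate of bacterial load from a measured fluorescence peak intensity. Given the five-decade range, X is plausibly on a log10 scale; the abstract does not say, so that reading is flagged as an assumption below.

    ```python
    # Invert the abstract's calibration line to estimate bacterial count.
    def count_from_intensity(y, x_is_log10=True):
        """Invert Y = 374.82*X - 938.27 from the reported calibration."""
        x = (y + 938.27) / 374.82
        return 10 ** x if x_is_log10 else x   # log10 reading is an assumption
    ```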

  6. The use of portable 2D echocardiography and 'frame-based' bubble counting as a tool to evaluate diving decompression stress.

    PubMed

    Germonpré, Peter; Papadopoulou, Virginie; Hemelryck, Walter; Obeid, Georges; Lafère, Pierre; Eckersley, Robert J; Tang, Meng-Xing; Balestra, Costantino

    2014-03-01

    'Decompression stress' is commonly evaluated by scoring circulating bubble numbers post dive using Doppler or cardiac echography. This information may be used to develop safer decompression algorithms, assuming that the lower the numbers of venous gas emboli (VGE) observed post dive, the lower the statistical risk of decompression sickness (DCS). Current echocardiographic evaluation of VGE, using the Eftedal and Brubakk method, has some disadvantages as it is less well suited for large-scale evaluation of recreational diving profiles. We propose and validate a new 'frame-based' VGE-counting method which offers a continuous scale of measurement. Nine 'raters' of varying familiarity with echocardiography were asked to grade 20 echocardiograph recordings using both the Eftedal and Brubakk grading and the new 'frame-based' counting method. They were also asked to count the number of bubbles in 50 still-frame images, some of which were randomly repeated. A Wilcoxon Spearman ρ calculation was used to assess test-retest reliability of each rater for the repeated still frames. For the video images, weighted kappa statistics, with linear and quadratic weightings, were calculated to measure agreement between raters for the Eftedal and Brubakk method. Bland-Altman plots and intra-class correlation coefficients were used to measure agreement between raters for the frame-based counting method. Frame-based counting showed a better inter-rater agreement than the Eftedal and Brubakk grading, even with relatively inexperienced assessors, and has good intra- and inter-rater reliability. Frame-based bubble counting could be used to evaluate post-dive decompression stress, and offers possibilities for computer-automated algorithms to allow near-real-time counting.

  7. Detecting swift fox: Smoked-plate scent stations versus spotlighting

    Treesearch

    Daniel W. Uresk; Kieth E. Severson; Jody Javersak

    2003-01-01

    We compared two methods of detecting presence of swift fox: smoked-plate scent stations and spotlight counts. Tracks were counted on ten 1-mile (1.6-km) transects with bait/tracking plate stations every 0.1 mile (0.16 km). Vehicle spotlight counts were conducted on the same transects. Methods were compared with Spearman's rank order correlation. Repeated measures...

  9. Comparison of point counts and territory mapping for detecting effects of forest management on songbirds

    USGS Publications Warehouse

    Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently

    2013-01-01

    Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.

  10. An automated approach for annual layer counting in ice cores

    NASA Astrophysics Data System (ADS)

    Winstrup, M.; Svensson, A.; Rasmussen, S. O.; Winther, O.; Steig, E.; Axelrod, A.

    2012-04-01

    The temporal resolution of some ice cores is sufficient to preserve seasonal information in the ice core record. In such cases, annual layer counting represents one of the most accurate methods to produce a chronology for the core. Yet, manual layer counting is a tedious and sometimes ambiguous job. As reliable layer recognition becomes more difficult, a manual approach increasingly relies on human interpretation of the available data. Thus, much may be gained by an automated and therefore objective approach for annual layer identification in ice cores. We have developed a novel method for automated annual layer counting in ice cores, which relies on Bayesian statistics. It uses algorithms from the statistical framework of Hidden Markov Models (HMM), originally developed for use in machine speech recognition. The strength of this layer detection algorithm lies in the way it is able to imitate the manual procedures for annual layer counting, while being based on purely objective criteria for annual layer identification. With this methodology, it is possible to determine the most likely position of multiple layer boundaries in an entire section of ice core data at once. It provides a probabilistic uncertainty estimate of the resulting layer count, hence ensuring a proper treatment of ambiguous layer boundaries in the data. Furthermore, multiple data series can be incorporated at once, allowing for a full multi-parameter annual layer counting method similar to a manual approach. In this study, the automated layer counting algorithm has been applied to data from the NGRIP ice core, Greenland. The NGRIP ice core has very high temporal resolution with depth, and hence the potential to be dated by annual layer counting far back in time. In previous studies [Andersen et al., 2006; Svensson et al., 2008], manual layer counting has been carried out back to 60 kyr BP. A comparison between the counted annual layers based on the two approaches will be presented and their differences discussed. Within the estimated uncertainties, the two methodologies agree. This shows the potential for a fully automated annual layer counting method to be operational for data sections where the annual layering is unknown.

  11. Steam versus hot-water scalding in reducing bacterial loads on the skin of commercially processed poultry.

    PubMed

    Patrick, T E; Goodwin, T L; Collins, J A; Wyche, R C; Love, B E

    1972-04-01

    A comparison of two types of scalders was conducted to determine their effectiveness in reducing bacterial contamination of poultry carcasses. A conventional hot-water scalder and a prototype model of a steam scalder were tested under commercial conditions. Total plate counts from steam-scalded birds were significantly lower than the counts of water-scalded birds immediately after scalding and again after picking. No differences in the two methods could be found after chilling. Coliform counts from steam-scalded birds were significantly lower than the counts from water-scalded birds immediately after scalding. No significant differences in coliform counts were detected when the two scald methods were compared after defeathering and chilling.

  12. A Comparison of the OSHA Modified NIOSH Physical and Chemical Analytical Method (P and CAM) 304 and the DustTrak Photometric Aerosol Sampler for o-Chlorobenzylidene Malononitrile

    DTIC Science & Technology

    2013-04-02

    This study compared a direct-reading, non-specific, rapid photometric particle counting instrument (DustTrak, TSI) to the established OSHA-modified NIOSH P&CAM 304 method to determine the correlation between the two methods.

  13. Can simple mobile phone applications provide reliable counts of respiratory rates in sick infants and children? An initial evaluation of three new applications.

    PubMed

    Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick

    2015-05-01

    Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings, and phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (together providing the four counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the applications included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60 s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients (ICCs) and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC, the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. Users' initial impressions were positive, with little difference found between the applications. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods are the most reliable, outperforming the 60-second count. For children with raised respiratory rates, the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.
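
    The Once-per-Breath calculation reduces to simple arithmetic on tap timestamps: n breath intervals span taps[0] through taps[n], so the rate is 60·n divided by the elapsed time. A sketch under that assumption, with hypothetical timestamps:

    ```python
    # Hypothetical tap timestamps (seconds), one tap per breath observed.
    taps = [0.0, 1.5, 3.1, 4.4, 6.0, 7.4, 9.1, 10.5, 12.0, 13.4, 15.1]

    def rate_after_n_breaths(taps, n):
        """Once-per-Breath style estimate: n breath intervals span taps[0]..taps[n],
        so the rate in breaths/min is 60 * n over the elapsed time."""
        if len(taps) < n + 1:
            raise ValueError("not enough taps recorded")
        return 60.0 * n / (taps[n] - taps[0])

    print(round(rate_after_n_breaths(taps, 10), 1))   # ~39.7 breaths/min
    ```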

  14. Performance evaluation of the new hematology analyzer Sysmex XN-series.

    PubMed

    Seo, J Y; Lee, S-T; Kim, S-H

    2015-04-01

    The Sysmex XN-series is a new automated hematology analyzer designed to improve the accuracy of cell counts and the specificity of flagging events. The basic characteristics and the performance of the new measurement channels of the XN were evaluated and compared with the Sysmex XE-2100 and the manual method. The fluorescent platelet count (PLT-F) was compared with the flow cytometric method. The low WBC mode and body fluid mode were also evaluated. For workflow analysis, 1005 samples were analyzed on both the XN and the XE-2100, and manual review rates were compared. All parameters measured by the XN correlated well with the XE-2100. PLT-F showed better correlation with the flow cytometric method (r² = 0.80) than the optical platelet count (r² = 0.73) for platelet counts <70 × 10⁹/L. The low WBC mode reported accurate leukocyte differentials for samples with a WBC count <0.5 × 10⁹/L. Relatively good correlation was found for WBC counts between the manual method and the body fluid mode (r = 0.88). The XN raised fewer flags than the XE-2100, while the sensitivities of both instruments were comparable. The XN provided reliable results on low cell counts, as well as reduced manual blood film reviews, while maintaining a proper level of diagnostic sensitivity. © 2014 John Wiley & Sons Ltd.

  15. An Evaluation of the Accuracy of the Subtraction Method Used for Determining Platelet Counts in Advanced Platelet-Rich Fibrin and Concentrated Growth Factor Preparations

    PubMed Central

    Watanabe, Taisuke; Isobe, Kazushige; Suzuki, Taiji; Kawabata, Hideo; Nakamura, Masayuki; Tsukioka, Tsuneyuki; Okudera, Toshimitsu; Okudera, Hajime; Uematsu, Kohya; Okuda, Kazuhiro; Nakata, Koh; Kawase, Tomoyuki

    2017-01-01

    Platelet concentrates should be quality-assured for purity and identity prior to clinical use. Unlike for the liquid form of platelet-rich plasma, platelet counts cannot be directly determined in solid fibrin clots and are instead calculated by subtracting the counts in other liquid or semi-clotted fractions from those in whole blood samples. Having long questioned the validity of this method, we herein examined the possible loss of platelets in the preparation process. Blood samples collected from healthy male donors were immediately centrifuged for advanced platelet-rich fibrin (A-PRF) and concentrated growth factors (CGF) according to recommended centrifugal protocols. Blood cells in liquid and semi-clotted fractions were directly counted. Platelets aggregated on clot surfaces were observed by scanning electron microscopy. A higher centrifugal force increased the numbers of platelets and platelet aggregates in the liquid red blood cell fraction and the semi-clotted red thrombus in the presence and absence of the anticoagulant, respectively. Nevertheless, the calculated platelet counts in A-PRF/CGF preparations were much higher than expected, rendering the currently accepted subtraction method inaccurate for determining platelet counts in fibrin clots. To ensure the quality of solid types of platelet concentrates chairside in a timely manner, a simple and accurate platelet-counting method should be developed immediately. PMID:29563413
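
    The subtraction method under question is plain arithmetic: the clot is credited with every platelet not directly counted elsewhere, so any platelets lost during preparation inflate the clot estimate. A minimal illustration with hypothetical numbers:

    ```python
    # Platelet totals per fraction in 10**9 (hypothetical numbers for illustration).
    whole_blood  = 2.50   # counted directly in the whole blood sample
    liquid_serum = 0.40   # counted in the liquid (serum) fraction
    red_thrombus = 0.90   # counted in the semi-clotted red blood cell fraction

    # Subtraction method: the fibrin clot is credited with every platelet that
    # was not directly counted elsewhere, so platelets lost during preparation
    # (e.g., aggregated on surfaces) silently inflate this estimate.
    clot_platelets = whole_blood - liquid_serum - red_thrombus
    print(clot_platelets)   # 1.20
    ```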

  16. Learning linear transformations between counting-based and prediction-based word embeddings

    PubMed Central

    Hayashi, Kohei; Kawarabayashi, Ken-ichi

    2017-01-01

    Despite the growing interest in prediction-based word embedding learning methods, it remains unclear how the vector spaces learnt by the prediction-based methods differ from those of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629
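
    A linear transformation between two embedding sets can be learnt by ordinary least squares; the sketch below is one simple estimator, not necessarily the paper's, with random stand-in matrices for the two embedding spaces.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_words, d = 1000, 50                     # hypothetical vocabulary size, dimension
    C = rng.standard_normal((n_words, d))     # counting-based embeddings
    M_true = rng.standard_normal((d, d))
    P = C @ M_true + 0.01 * rng.standard_normal((n_words, d))   # prediction-based

    # Learn M minimizing ||C M - P||_F by ordinary least squares.
    M, *_ = np.linalg.lstsq(C, P, rcond=None)

    # Predict a prediction-based embedding for a "novel" word from its counting vector.
    novel_word_vec = rng.standard_normal(d)
    predicted = novel_word_vec @ M
    print(float(np.linalg.norm(C @ M - P) / np.linalg.norm(P)))   # small relative residual
    ```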

  17. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

    Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordance in spot counts. To address counting errors arising from overlapping spots, this study proposes a Gaussian Mixture Model (GMM)-based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
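
    As a simplified illustration of why a GMM helps here: fitting mixtures with increasing component counts and comparing an information criterion can separate overlapping spots that a single blob analysis would merge. The paper feeds the AIC/BIC values to a random forest classifier; the sketch below uses BIC alone for model selection, on synthetic pixel coordinates.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic (x, y) coordinates of bright pixels from two overlapping spots.
    pixels = np.vstack([
        rng.normal([10.0, 10.0], 1.2, size=(150, 2)),
        rng.normal([13.0, 10.5], 1.2, size=(150, 2)),
    ])

    # Fit GMMs with k = 1..4 components; the BIC minimum suggests the spot count.
    bics = [GaussianMixture(n_components=k, random_state=0).fit(pixels).bic(pixels)
            for k in range(1, 5)]
    print("BIC-selected spot count:", int(np.argmin(bics)) + 1)
    ```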

  18. [Left ventricular volume determination by first-pass radionuclide angiocardiography using a semi-geometric count-based method].

    PubMed

    Kinoshita, S; Suzuki, T; Yamashita, S; Muramatsu, T; Ide, M; Dohi, Y; Nishimura, K; Miyamae, T; Yamamoto, I

    1992-01-01

    A new radionuclide technique for the calculation of left ventricular (LV) volume by the first-pass (FP) method was developed and examined. Using a semi-geometric count-based method, the LV volume can be measured by the following equations: CV = CM/(L/d) and V = (CT/CV) × d³ = (CT/CM) × L × d², where V = LV volume, CV = voxel count, CM = the maximum LV count, CT = the total LV count, L = the LV depth where the maximum count was obtained, and d = the pixel size. This relation was applied to FP LV images obtained in the 30-degree right anterior oblique position. Frame-mode acquisition was performed and the LV end-diastolic maximum count and total count were obtained. The maximum LV depth was obtained as the maximum width of the LV on the FP end-diastolic image, using the assumption that the LV cross-section is circular. These values were substituted in the above equation and the LV end-diastolic volume (FP-EDV) was calculated. A routine equilibrium (EQ) study was done, and the end-diastolic maximum count and total count were obtained. The LV maximum depth was measured on the FP end-diastolic frame, as the maximum length of the LV image. Using these values, the EQ-EDV was calculated and the FP-EDV was compared to the EQ-EDV. The correlation coefficient for these two values was r = 0.96 (n = 23, p < 0.001), and the standard error of the estimated volume was 10 ml. (ABSTRACT TRUNCATED AT 250 WORDS)
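
    A worked example of the volume equation, with hypothetical count values and dimensions in cm:

    ```python
    def lv_volume(total_count, max_count, depth_cm, pixel_cm):
        """Semi-geometric count-based LV volume: V = (CT/CM) * L * d**2."""
        return (total_count / max_count) * depth_cm * pixel_cm ** 2

    # Hypothetical end-diastolic values: total count CT, maximum pixel count CM,
    # depth L = 8 cm, pixel size d = 0.3 cm.
    print(lv_volume(total_count=4.5e5, max_count=2.0e3, depth_cm=8.0, pixel_cm=0.3), "ml")  # 162.0 ml
    ```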

  19. Tracking and people counting using Particle Filter Method

    NASA Astrophysics Data System (ADS)

    Sulistyaningrum, D. R.; Setiyono, B.; Rizky, M. S.

    2018-03-01

    In recent years, technology has developed rapidly in the field of object tracking, and tracking becomes considerably harder when the objects are people and their number is large. The purpose of this research is to apply the particle filter method to tracking and counting people in a certain area. Tracking people is difficult in the presence of obstacles, one of which is occlusion. The stages of the tracking and people-counting scheme in this study include pre-processing, segmentation using a Gaussian Mixture Model (GMM), tracking using a particle filter, and counting based on centroids. The particle filter propagates its state estimates using the motion model included in the tracker. The test results show that tracking and people counting can be done well, with average accuracies of 89.33% and 77.33%, respectively, over six test videos. In the tracking of people, results are good under partial occlusion and in the absence of occlusion.
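
    A minimal bootstrap particle filter for a single 2-D track, sketching the predict-weight-resample loop the abstract refers to; the motion and observation models and all parameters are illustrative, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, observation, motion_std=2.0, obs_std=5.0):
        """One bootstrap particle-filter update for a 2-D position tracker."""
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0.0, motion_std, particles.shape)
        # Update: weight by Gaussian likelihood of the observed position.
        d2 = np.sum((particles - observation) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / obs_std ** 2)
        weights = weights / weights.sum()
        # Resample (systematic resampling keeps the particle set diverse).
        positions = (rng.random() + np.arange(len(weights))) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights

    # Track a person walking diagonally across the frame (hypothetical detections).
    truth = np.array([[10.0 + 3 * t, 20.0 + 2 * t] for t in range(20)])
    obs = truth + rng.normal(0, 3.0, truth.shape)
    particles = rng.uniform(0, 100, (500, 2))
    weights = np.full(500, 1 / 500)
    for z in obs:
        particles, weights = particle_filter_step(particles, weights, z)
    print("final estimate:", particles.mean(axis=0), "truth:", truth[-1])
    ```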

  20. Precision wildlife monitoring using unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hodgson, Jarrod C.; Baylis, Shane M.; Mott, Rowan; Herrod, Ashley; Clarke, Rohan H.

    2016-03-01

    Unmanned aerial vehicles (UAVs) represent a new frontier in environmental research. Their use has the potential to revolutionise the field if they prove capable of improving data quality or the ease with which data are collected beyond traditional methods. We apply UAV technology to wildlife monitoring in tropical and polar environments and demonstrate that UAV-derived counts of colony nesting birds are an order of magnitude more precise than traditional ground counts. The increased count precision afforded by UAVs, along with their ability to survey hard-to-reach populations and places, will likely drive many wildlife monitoring projects that rely on population counts to transition from traditional methods to UAV technology. Careful consideration will be required to ensure the coherence of historic data sets with new UAV-derived data and we propose a method for determining the number of duplicated (concurrent UAV and ground counts) sampling points needed to achieve data compatibility.

  1. Cascaded systems analysis of photon counting detectors.

    PubMed

    Xu, J; Zbijewski, W; Gang, G; Stayman, J W; Taguchi, K; Lundqvist, M; Fredenberg, E; Carrino, J A; Siewerdsen, J H

    2014-10-01

    Photon counting detectors (PCDs) are an emerging technology with applications in spectral and low-dose radiographic and tomographic imaging. This paper develops an analytical model of PCD imaging performance, including the system gain, modulation transfer function (MTF), noise-power spectrum (NPS), and detective quantum efficiency (DQE). A cascaded systems analysis model describing the propagation of quanta through the imaging chain was developed. The model was validated in comparison to the physical performance of a silicon-strip PCD implemented on an experimental imaging bench. The signal response, MTF, and NPS were measured and compared to theory as a function of exposure conditions (70 kVp, 1-7 mA), detector threshold, and readout mode (i.e., the option for coincidence detection). The model sheds new light on the dependence of spatial resolution, charge sharing, and additive noise effects on threshold selection and was used to investigate the factors governing PCD performance, including the fundamental advantages and limitations of PCDs in comparison to energy-integrating detectors (EIDs) in the linear regime for which pulse pileup can be ignored. The detector exhibited highly linear mean signal response across the system operating range and agreed well with theoretical prediction, as did the system MTF and NPS. The DQE analyzed as a function of kilovolt (peak), exposure, detector threshold, and readout mode revealed important considerations for system optimization. The model also demonstrated the important implications of false counts from both additive electronic noise and charge sharing and highlighted the system design and operational parameters that most affect detector performance in the presence of such factors: for example, increasing the detector threshold from 0 to 100 (arbitrary units of pulse height threshold roughly equivalent to 0.5 and 6 keV energy threshold, respectively) increased the f50 (spatial frequency at which the MTF falls to a value of 0.50) by ∼30%, with a corresponding improvement in DQE. The range in exposure and additive noise for which PCDs yield intrinsically higher DQE was quantified, showing performance advantages under conditions of very low dose, high additive noise, and high-fidelity rejection of coincident photons. The model for PCD signal and noise performance agreed with measurements of detector signal, MTF, and NPS and provided a useful basis for understanding complex dependencies in PCD imaging performance and the potential advantages (and disadvantages) in comparison to EIDs, as well as an important guide to task-based optimization in developing new PCD imaging systems.
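
    The core of a cascaded systems analysis at zero spatial frequency is the propagation of mean and variance through serial gain stages via the Burgess variance theorem. A minimal sketch with hypothetical stage parameters, ignoring spatial blur, thresholding, and pileup:

    ```python
    def cascade_gain_stages(n_in, stages):
        """Propagate mean and variance through serial gain stages.

        Each stage is (gain_mean, gain_variance) per incident quantum.
        Burgess variance theorem: var_out = g**2 * var_in + mean_in * var_g.
        """
        mean, var = n_in, n_in              # incident quanta assumed Poisson
        for g, var_g in stages:
            var = g ** 2 * var + mean * var_g
            mean = g * mean
        return mean, var

    n0 = 1.0e4                              # incident photons (hypothetical)
    stages = [
        (0.8, 0.8 * 0.2),                   # interaction (binomial: var = g(1 - g))
        (500.0, 300.0 ** 2),                # conversion gain with spread (hypothetical)
        (0.9, 0.9 * 0.1),                   # collection efficiency (binomial)
    ]
    mean, var = cascade_gain_stages(n0, stages)
    dqe0 = (mean ** 2 / var) / n0           # DQE(0) = SNR_out**2 / SNR_in**2
    print(round(dqe0, 3))                   # ~0.59 here, below the 0.8 interaction efficiency
    ```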

  2. Enumeration procedure for monitoring test microbe populations on inoculated carriers in AOAC use-dilution methods.

    PubMed

    Tomasino, Stephen F; Fiumara, Rebecca M; Cottrill, Michele P

    2006-01-01

    The AOAC Use-Dilution methods do not provide procedures to enumerate the test microbe on stainless steel carriers (penicylinders) or guidance on the expected target populations of the test microbe (i.e., a performance standard). This report describes the procedures used by the U.S. Environmental Protection Agency to enumerate the test microbe (carrier counts) associated with conducting the Use-Dilution method with Staphylococcus aureus (Method 955.15) and Pseudomonas aeruginosa (Method 964.02) and the examination of historical data. The carrier count procedure involves the random selection of carriers, shearing bacterial cells from the carrier surface through sonication, and plating of serially diluted inoculum on trypticase soy agar. For each Use-Dilution test conducted, the official AOAC method was strictly followed for carrier preparation, culture initiation, test culture preparation, and carrier inoculation steps. Carrier count data from 78 Use-Dilution tests conducted over a 6-year period were compiled and analyzed. A mean carrier count of 6.6 logs (approximately 4.0 × 10⁶ colony-forming units/carrier) was calculated for both S. aureus and P. aeruginosa. Of the mean values, 95% fell within ±2 repeatability standard deviations. The enumeration procedure and target carrier counts are desirable for standardizing the Use-Dilution methods, increasing their reproducibility, and ensuring the quality of the data.

  3. How many fish in a tank? Constructing an automated fish counting system by using PTV analysis

    NASA Astrophysics Data System (ADS)

    Abe, S.; Takagi, T.; Takehara, K.; Kimura, N.; Hiraishi, T.; Komeyama, K.; Torisawa, S.; Asaumi, S.

    2017-02-01

    Because escape from a net cage and mortality are constant problems in fish farming, health control and management of facilities are important in aquaculture. In particular, the development of an accurate fish counting system has been strongly desired for the Pacific Bluefin tuna farming industry owing to the high market value of these fish. The current fish counting method, which involves human counting, results in poor accuracy; moreover, the method is cumbersome because the aquaculture net cage is so large that fish can only be counted when they move to another net cage. Therefore, we have developed an automated fish counting system by applying particle tracking velocimetry (PTV) analysis to a shoal of swimming fish inside a net cage. In essence, we treated the swimming fish as tracer particles and estimated the number of fish by analyzing the corresponding motion vectors. The proposed fish counting system comprises two main components: image processing and motion analysis, where the image-processing component extracts the foreground and the motion analysis component traces each individual's motion. In this study, we developed a Region Extraction and Centroid Computation (RECC) method and a Kalman filter and Chi-square (KC) test for the two main components. To evaluate the efficiency of our method, we constructed a closed system, placed an underwater video camera with a spherical curved lens at the bottom of the tank, and recorded a 360° view of a swimming school of Japanese rice fish (Oryzias latipes). Our study showed that almost all fish could be extracted by the RECC method and the motion vectors could be calculated by the KC test. The recognition rate was approximately 90% when more than 180 individuals were observed within the frame of the video camera. These results suggest that the presented method has potential application as a fish counting system for industrial aquaculture.

  4. CoinCalc-A new R package for quantifying simultaneities of event series

    NASA Astrophysics Data System (ADS)

    Siegmund, Jonatan F.; Siegmund, Nicole; Donner, Reik V.

    2017-01-01

    We present the new R package CoinCalc for performing event coincidence analysis (ECA), a novel statistical method to quantify the simultaneity of events contained in two series of observations, either as simultaneous or lagged coincidences within a user-specified temporal tolerance window. The package also provides different analytical as well as surrogate-based significance tests (valid under different assumptions about the nature of the observed event series) as well as an intuitive visualization of the identified coincidences. We demonstrate the usage of CoinCalc based on two typical geoscientific example problems addressing the relationship between meteorological extremes and plant phenology as well as that between soil properties and land cover.
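
    CoinCalc itself is an R package; the Python sketch below shows one plausible reading of the basic event coincidence rate (the fraction of events in one series preceded, within a tolerance window delta_t and lag tau, by an event in the other series), on hypothetical event times.

    ```python
    import numpy as np

    def event_coincidence_rate(series_a, series_b, delta_t, tau=0.0):
        """Fraction of events in series A for which at least one event in
        series B falls inside the window [a - tau - delta_t, a - tau]."""
        a = np.asarray(series_a, dtype=float)
        b = np.asarray(series_b, dtype=float)
        hits = sum(bool(np.any((b >= t - tau - delta_t) & (b <= t - tau))) for t in a)
        return hits / len(a)

    # Hypothetical event times, e.g. heat extremes vs. phenological responses.
    extremes  = [3, 10, 22, 40, 41, 57]
    responses = [5, 11, 24, 30, 43, 60]
    print(event_coincidence_rate(responses, extremes, delta_t=3))   # 5/6 ~ 0.83
    ```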

  5. Methods of detecting and counting raptors: A review

    USGS Publications Warehouse

    Fuller, M.R.; Mosher, J.A.; Ralph, C. John; Scott, J. Michael

    1981-01-01

    Most raptors are wide-ranging, secretive, and occur at relatively low densities. These factors, in conjunction with the nocturnal activity of owls, cause the counting of raptors by most standard census and survey efforts to be very time consuming and expensive. This paper reviews the most common methods of detecting and counting raptors. It is hoped that it will be of use to the ever-increasing number of biologists, land-use planners, and managers who must determine the occurrence, density, or population dynamics of raptors. Road counts of fixed-station or continuous-transect design are often used to sample large areas. Detection of spontaneous or elicited vocalizations, especially those of owls, provides a means of detecting and estimating raptor numbers. Searches for nests are accomplished from foot surveys, observations from automobiles and boats, or from aircraft when nest structures are conspicuous (e.g., Osprey). Knowledge of nest habitat, historic records, and inquiries of local residents are useful for locating nests. Often several of these techniques are combined to help find nest sites. Aerial searches have also been used to locate or count large raptors (e.g., eagles), or those that may be conspicuous in open habitats (e.g., tundra). Counts of birds entering or leaving nest colonies or colonial roosts have been attempted on a limited basis. Results from Christmas Bird Counts have provided an index of the abundance of some species. Trapping and banding have generally proven to be inefficient methods of detecting raptors or estimating their populations. Concentrations of migrants at strategically located points around the world afford the best opportunity to count many raptors in a relatively short period of time, but the influence of many unquantified variables has inhibited extensive interpretation of these counts. Few data exist to demonstrate the effectiveness of these methods. We believe more research on sampling techniques, rather than complete counts or intensive searches, will provide adequate yet affordable estimates of raptor numbers in addition to providing methods for detecting the presence of raptors in areas of interest to researchers and managers.

  6. Comparison of point-of-care methods for preparation of platelet concentrate (platelet-rich plasma).

    PubMed

    Weibrich, Gernot; Kleis, Wilfried K G; Streckbein, Philipp; Moergel, Maximilian; Hitzler, Walter E; Hafner, Gerd

    2012-01-01

    This study analyzed the concentrations of platelets and growth factors in platelet-rich plasma (PRP), which are likely to depend on the method used for its production. The cellular composition and growth factor content of platelet concentrates (platelet-rich plasma) produced by six different procedures were quantitatively analyzed and compared. Platelet and leukocyte counts were determined on an automatic cell counter, and analysis of growth factors was performed using enzyme-linked immunosorbent assay. The principal differences between the analyzed PRP production methods (the blood bank method of an intermittent-flow centrifuge system/platelet apheresis and the five point-of-care methods) and the resulting platelet concentrates were evaluated with regard to the resulting platelet, leukocyte, and growth factor levels. The platelet counts in both whole blood and PRP were generally higher in women than in men; no differences were observed with regard to age. Statistical analysis of platelet-derived growth factor AB (PDGF-AB) and transforming growth factor β1 (TGF-β1) showed no differences with regard to age or gender. Platelet counts and TGF-β1 concentration correlated closely, as did platelet counts and PDGF-AB levels. There were only rare correlations between leukocyte counts and PDGF-AB levels, but comparison of leukocyte counts and PDGF-AB levels demonstrated certain parallel tendencies. TGF-β1 levels derive in substantial part from platelets, and the findings emphasize the role of leukocytes, in addition to that of platelets, as a source of growth factors in PRP. All methods of producing PRP showed high variability in platelet counts and growth factor levels. The highest growth factor levels were found in the PRP prepared using the Platelet Concentrate Collection System manufactured by Biomet 3i.

  7. A conceptual guide to detection probability for point counts and other count-based survey methods

    Treesearch

    D. Archibald McCallum

    2005-01-01

    Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...

  8. Validation of analytical methods in GMP: the disposable Fast Read 102® device, an alternative practical approach for cell counting.

    PubMed

    Gunetti, Monica; Castiglia, Sara; Rustichelli, Deborah; Mareschi, Katia; Sanavio, Fiorella; Muraro, Michela; Signorino, Elena; Castello, Laura; Ferrero, Ivana; Fagioli, Franca

    2012-05-31

    The quality and safety of advanced therapy products must be maintained throughout their production and quality control cycle to ensure their final use in patients. We validated the cell count method according to the International Conference on Harmonization of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) and the European Pharmacopoeia, considering the tests' accuracy, precision, repeatability, linearity and range. As the cell count is a potency test, we checked accuracy, precision, and linearity according to ICH Q2. Briefly, our experimental approach was first to evaluate the accuracy of the Fast Read 102® compared to the Bürker chamber. Once the accuracy of the alternative method was demonstrated, we checked the precision and linearity tests using only the Fast Read 102®. The data were statistically analyzed by average, standard deviation, and inter- and intra-operator coefficient of variation percentages. All the tests performed met the established acceptance criterion of a coefficient of variation of less than ten percent. For the cell count, the precision reached by each operator had a coefficient of variation of less than ten percent (total cells) and under five percent (viable cells). The best dilution range, to obtain a slope very close to 1, was between 1:8 and 1:128. Our data demonstrated that the Fast Read 102® count method is accurate and precise and ensures the linearity of the results obtained in a range of cell dilutions. Under our standard method procedures, this assay may thus be considered a good quality control method for the cell count as a batch release quality control test. Moreover, the Fast Read 102® chamber is a plastic, disposable device that allows a number of samples to be counted in the same chamber. Last but not least, it overcomes the problem of chamber washing after use and so allows a cell count in a clean environment such as that of a Cell Factory. In a good manufacturing practice setting, the disposable cell counting devices allow a single use of the counting chamber, which can then be discarded, thus avoiding the waste disposal of vital dye (e.g., Trypan Blue) or lysing solution (e.g., Tuerk solution).

  9. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
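
    The calibration step amounts to fitting a nonlinear function mapping measured to simulated counts for each energy bin. The paper does not specify the functional form, so the sketch below fits an illustrative low-order polynomial with scipy, on hypothetical calibration data.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical calibration pairs for one energy bin: measured (distorted)
    # counts vs. simulated incident counts, across phantom thicknesses.
    measured  = np.array([950.0, 1800.0, 3200.0, 5100.0, 7000.0])
    simulated = np.array([1000.0, 2100.0, 4200.0, 7800.0, 12500.0])

    def model(m, a, b, c):
        # Illustrative low-order polynomial; the paper's functional form is not
        # specified here, only that a nonlinear fit links the two count scales.
        return a * m + b * m ** 2 + c

    params, _ = curve_fit(model, measured, simulated)
    print(np.round(model(measured, *params), 0))   # corrected counts per bin
    ```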

  10. A critical evaluation of a flow cytometer used for detecting enterococci in recreational waters.

    PubMed

    King, Dawn N; Brenner, Kristen P; Rodgers, Mark R

    2007-06-01

    The current U.S. Environmental Protection Agency-approved method for enterococci (Method 1600) in recreational water is a membrane filter (MF) method that takes 24 hours to obtain results. If the recreational water is not in compliance with the standard, the risk of exposure to enteric pathogens may occur before the water is identified as hazardous. Because flow cytometry combined with specific fluorescent antibodies has the potential to be used as a rapid detection method for microorganisms, this technology was evaluated as a rapid, same-day method to detect enterococci in bathing beach waters. The flow cytometer chosen for this study was a laser microbial detection system designed to detect labeled antibodies. A comparison of MF counts with flow cytometry counts of enterococci in phosphate buffer and sterile-filtered recreational water showed good agreement between the two methods. However, when flow cytometry was used, the counts were several orders of magnitude higher than the MF counts with no correlation to Enterococcus spike concentrations. The unspiked sample controls frequently had higher counts than the samples spiked with enterococci. Particles within the spiked water samples were probably counted as target cells by the flow cytometer because of autofluorescence or non-specific adsorption of antibody and carryover to subsequent samples. For these reasons, this technology may not be suitable for enterococci detection in recreational waters. Improvements in research and instrument design that will eliminate high background and carryover may make this a viable technology in the future.

  11. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows both the dead-time value and the true detector counts to be obtained from the measured detector counts.
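
    For the steady-rate case the abstract mentions, the standard nonparalyzable dead-time correction is a one-line formula; the backward extrapolation method for the pulsed case is not reproduced here. A sketch with hypothetical numbers:

    ```python
    def true_rate_nonparalyzable(measured_rate, dead_time):
        """Steady-rate dead-time correction n = m / (1 - m * tau). Valid only when
        the average count rate is constant; for a pulsed source the rate changes
        sharply and this simple formula no longer applies."""
        return measured_rate / (1.0 - measured_rate * dead_time)

    m, tau = 9.0e4, 2.0e-6        # measured counts/s and dead time in s (hypothetical)
    print(round(true_rate_nonparalyzable(m, tau), 1))   # ~109756.1 true counts/s
    ```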

  12. Clustering method for counting passengers getting in a bus with single camera

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include, first, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences simultaneously at a frame rate of 25 fps, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
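
    A toy version of the second idea: trajectories projected to (appearance time, disappearance time) pairs cluster naturally by passenger, and a greedy online clusterer counts the clusters. The coordinates, radius, and clustering rule below are illustrative only.

    ```python
    import numpy as np

    # Hypothetical feature-point trajectories reduced to (appear, disappear) frame
    # times; points tracked on the same passenger enter and leave the view together.
    traj = np.array([[10, 55], [11, 54], [12, 56],     # passenger 1
                     [40, 90], [41, 92], [39, 91],     # passenger 2
                     [70, 120], [71, 119]])            # passenger 3

    def count_by_online_clustering(points, radius=5.0):
        """Greedy online clustering in the 2-D (appear, disappear) space."""
        centers, sizes = [], []
        for p in points.astype(float):
            if centers:
                d = np.linalg.norm(np.array(centers) - p, axis=1)
                j = int(np.argmin(d))
                if d[j] < radius:                      # join the nearest cluster
                    sizes[j] += 1
                    centers[j] += (p - centers[j]) / sizes[j]
                    continue
            centers.append(p.copy())                   # open a new cluster
            sizes.append(1)
        return len(centers)

    print("passenger count:", count_by_online_clustering(traj))   # 3
    ```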

  13. Pulse pileup statistics for energy discriminating photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir

    Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N₀, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single-material and material-decomposition contrast detection tasks via the Cramér-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for the contrast detection tasks considered, but when the input count rate exceeds 20% of N₀, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual-kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and to determine how best to take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
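
    The multinomial approximation is straightforward to state: given the mean detected spectrum p over energy bins and N total detected counts, the bin counts are modeled as Multinomial(N, p), whose mean and covariance follow in closed form. A sketch with a hypothetical spectrum:

    ```python
    import numpy as np

    # Hypothetical mean detected spectrum after pileup (bin probabilities) and
    # total detected counts.
    p = np.array([0.35, 0.30, 0.20, 0.10, 0.05])
    N = 200000

    # Multinomial model: per-bin means and the full covariance follow directly
    # from the mean spectrum alone.
    mean = N * p
    cov = N * (np.diag(p) - np.outer(p, p))
    print(mean)
    print(np.round(np.sqrt(np.diag(cov)), 1))   # per-bin standard deviations
    ```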

  14. Tutorial on Using Regression Models with Count Outcomes Using R

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander; Morgan, Grant B.

    2016-01-01

    Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers that study these variables use typical regression methods (i.e., ordinary least-squares) either with or without transforming the count variables. In either case, using typical regression for count data can…
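
    The tutorial itself uses R; the sketch below shows the equivalent Poisson regression (a standard count-data model) in Python with statsmodels, on hypothetical data with illustrative variable names.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    hours_support = rng.uniform(0, 10, n)            # hypothetical predictor
    mu = np.exp(1.2 - 0.15 * hours_support)          # true mean of the count outcome
    absences = rng.poisson(mu)                       # e.g., number of absences

    X = sm.add_constant(hours_support)
    fit = sm.GLM(absences, X, family=sm.families.Poisson()).fit()
    print(fit.params)                                # roughly [1.2, -0.15]
    ```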

  15. Sources and magnitude of sampling error in redd counts for bull trout

    Treesearch

    Jason B. Dunham; Bruce Rieman

    2001-01-01

    Monitoring of salmonid populations often involves annual redd counts, but the validity of this method has seldom been evaluated. We conducted redd counts of bull trout Salvelinus confluentus in two streams in northern Idaho to address four issues: (1) relationships between adult escapements and redd counts; (2) interobserver variability in redd...

  16. CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.

    PubMed

    Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T

    2017-12-28

    In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and the resulting normalized gene counts are then used as input for parametric or non-parametric differential gene expression tests. A distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information determined from the concentration of an RNA sample to estimate the posterior distribution of a true gene count. Our method has better or comparable performance compared to NOISeq and GFOLD, according to the results from simulations and experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq) and is available at https://github.com/joel-lzb/CORNAS.
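
    As a generic illustration of the idea (not CORNAS's exact model, which ties the coverage parameter to RNA concentration): if each true transcript is assumed to be counted with probability equal to the coverage, a flat prior over the true count gives a posterior that can be evaluated on a grid.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical observed gene count and an effective coverage: the assumed
    # probability that any given true transcript is actually sequenced and counted.
    observed, coverage = 50, 0.6

    # With a flat prior on the true count T and X | T ~ Binomial(T, coverage),
    # the posterior over T can be evaluated on a grid of candidate values.
    true_grid = np.arange(observed, observed * 5)
    loglik = stats.binom.logpmf(observed, true_grid, coverage)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    print(round(float(np.sum(true_grid * post)), 1))   # posterior mean, near observed/coverage
    ```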

  17. A gravimetric simplified method for nucleated marrow cell counting using an injection needle.

    PubMed

    Saitoh, Toshiki; Fang, Liu; Matsumoto, Kiyoshi

    2005-08-01

    A simplified gravimetric marrow cell counting method for rats is proposed as a regular screening method. After fresh bone marrow was aspirated with an injection needle, the marrow cells were suspended in carbonate-buffered saline. The nucleated marrow cell count (NMC) was measured by an automated multi-blood cell analyzer. When this gravimetric method was applied to rats, the NMC of the left and right femurs had essentially identical values with careful handling. The NMC at 4 to 10 weeks of age in male and female Crj:CD(SD)IGS rats was 2.72 to 1.96 and 2.75 to 1.98 (×10⁶ counts/mg), respectively. More useful information for evaluation could be obtained by using this gravimetric method in addition to myelogram examination. However, some difficulties with this method include low NMC due to blood contamination and variation of NMC due to handling. Therefore, the utility of this gravimetric method for screening will be clarified by the accumulation of data from myelotoxicity studies using this method.

  18. Three-dimensional computed tomographic volumetry precisely predicts the postoperative pulmonary function.

    PubMed

    Kobayashi, Keisuke; Saeki, Yusuke; Kitazawa, Shinsuke; Kobayashi, Naohiro; Kikuchi, Shinji; Goto, Yukinobu; Sakai, Mitsuaki; Sato, Yukio

    2017-11-01

    It is important to accurately predict the patient's postoperative pulmonary function. The aim of this study was to compare the accuracy of predictions of the postoperative residual pulmonary function obtained with three-dimensional computed tomographic (3D-CT) volumetry with that of predictions obtained with the conventional segment-counting method. Fifty-three patients scheduled to undergo lung cancer resection, pulmonary function tests, and computed tomography were enrolled in this study. The postoperative residual pulmonary function was predicted based on the segment-counting and 3D-CT volumetry methods. The predicted postoperative values were compared with the results of postoperative pulmonary function tests. Regarding the linear correlation coefficients between the predicted postoperative values and the measured values, those obtained using the 3D-CT volumetry method tended to be higher than those acquired using the segment-counting method. In addition, the variations between the predicted and measured values were smaller with the 3D-CT volumetry method than with the segment-counting method. These results were more obvious in COPD patients than in non-COPD patients. Our findings suggested that the 3D-CT volumetry was able to predict the residual pulmonary function more accurately than the segment-counting method, especially in patients with COPD. This method might lead to the selection of appropriate candidates for surgery among patients with a marginal pulmonary function.
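
    The conventional segment-counting prediction scales preoperative function by the fraction of segments remaining (19 total segments is a common convention, though other totals are also used); volumetry replaces the segment fraction with a measured functional-volume fraction. A sketch with hypothetical values; the volumetry function is an illustrative analogue, not the paper's algorithm.

    ```python
    def ppo_fev1_segment_counting(preop_fev1_l, segments_resected, total_segments=19):
        """Segment-counting prediction: scale FEV1 by the fraction of segments left."""
        return preop_fev1_l * (total_segments - segments_resected) / total_segments

    def ppo_fev1_volumetry(preop_fev1_l, resected_volume_ml, functional_volume_ml):
        """Volumetry analogue: scale FEV1 by the fraction of functional volume left."""
        return preop_fev1_l * (1.0 - resected_volume_ml / functional_volume_ml)

    # Hypothetical patient: FEV1 = 2.40 L, right lower lobectomy (5 of 19 segments).
    print(round(ppo_fev1_segment_counting(2.40, 5), 2))       # 1.77 L
    print(round(ppo_fev1_volumetry(2.40, 900.0, 4200.0), 2))  # 1.89 L
    ```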

  19. Enumeration of total aerobic microorganisms in foods by SimPlate Total Plate Count-Color Indicator methods and conventional culture methods: collaborative study.

    PubMed

    Feldsine, Philip T; Leung, Stephanie C; Lienau, Andrew H; Mui, Linda A; Townsend, David E

    2003-01-01

    The relative efficacy of the SimPlate Total Plate Count-Color Indicator (TPC-CI) method (SimPlate 35 degrees C) was compared with the AOAC Official Method 966.23 (AOAC 35 degrees C) for enumeration of total aerobic microorganisms in foods. The SimPlate TPC-CI method, incubated at 30 degrees C (SimPlate 30 degrees C), was also compared with the International Organization for Standardization (ISO) 4833 method (ISO 30 degrees C). Six food types were analyzed: ground black pepper, flour, nut meats, frozen hamburger patties, frozen fruits, and fresh vegetables. All foods tested were naturally contaminated. Nineteen laboratories throughout North America and Europe participated in the study. Three method comparisons were conducted. In general, there was <0.3 mean log count difference in recovery among the SimPlate methods and their corresponding reference methods. Mean log counts between the 2 reference methods were also very similar. Repeatability (Sr) and reproducibility (SR) standard deviations were similar among the 3 method comparisons. The SimPlate method (35 degrees C) and the AOAC method were comparable for enumerating total aerobic microorganisms in foods. Similarly, the SimPlate method (30 degrees C) was comparable to the ISO method when samples were prepared and incubated according to the ISO method.

  1. Statistical study of muon count rates in different directions, observed at the Brazilian Southern Space Observatory

    NASA Astrophysics Data System (ADS)

    Grams, Guilherme; Schuch, Nelson Jorge; Braga, Carlos Roberto; Purushottam Kane, Rajaram; Echer, Ezequiel; Ronan Coelho Stekel, Tardelli

    Cosmic rays are charged particles, mostly protons, that reach the Earth's magnetosphere from interplanetary space with velocities greater than the solar wind. When they impinge on the atmosphere, they interact with atmospheric constituents and decay into sub-particles, forming an atmospheric shower. Muons are the sub-particles that normally maintain the original direction of the primary cosmic ray. A multi-directional muon detector (MMD) was installed in 2001 and upgraded in 2005, through an international cooperation between Brazil, Japan and the USA, and has operated since then at the Southern Space Observatory - SSO/CRS/CCR/INPE - MCT (29.4° S, 53.8° W, 480 m a.s.l.), São Martinho da Serra, RS, Brazil. The main objective of this work is to present a statistical analysis of the intensity of muons, with energies between 50 and 170 GeV, in different directions, measured by the SSO's multi-directional muon detector. The analysis was performed with data from 2006 and 2007 collected by the SSO's MMD. The MMD consists of two layers of 4×7 detectors with a total observation area of 28 m². The counting of muons in each directional channel is made by the coincidence of pulse pairs, one from a detector in the upper layer and the other from a detector in the lower layer. The SSO's MMD is equipped with 119 directional channels for muon count rate measurement and is capable of detecting muons incident at zenith angles between 0° and 75.53°. A statistical analysis was made of the MMD muon count rate for all directional channels. The average and the standard deviation of the muon count rate in each directional component were calculated. The results show lower count rates for the channels with larger zenith angles, and higher count rates for those with smaller zenith angles, as expected from the production and propagation of muons in the atmosphere. It is also possible to identify the Störmer cone. The SSO's MMD is also a detector component of the Global Muon Detector Network (GMDN), which has been developed in an international collaboration led by Shinshu University, Japan.

  2. Automated cell counts on CSF samples: A multicenter performance evaluation of the GloCyte system.

    PubMed

    Hod, E A; Brugnara, C; Pilichowska, M; Sandhaus, L M; Luu, H S; Forest, S K; Netterwald, J C; Reynafarje, G M; Kratz, A

    2018-02-01

    Automated cell counters have replaced manual enumeration of cells in blood and most body fluids. However, due to the unreliability of automated methods at very low cell counts, most laboratories continue to perform labor-intensive manual counts on many or all cerebrospinal fluid (CSF) samples. This multicenter clinical trial investigated if the GloCyte System (Advanced Instruments, Norwood, MA), a recently FDA-approved automated cell counter, which concentrates and enumerates red blood cells (RBCs) and total nucleated cells (TNCs), is sufficiently accurate and precise at very low cell counts to replace all manual CSF counts. The GloCyte System concentrates CSF and stains RBCs with fluorochrome-labeled antibodies and TNCs with nucleic acid dyes. RBCs and TNCs are then counted by digital image analysis. Residual adult and pediatric CSF samples obtained for clinical analysis at five different medical centers were used for the study. Cell counts were performed by the manual hemocytometer method and with the GloCyte System following the same protocol at all sites. The limits of the blank, detection, and quantitation, as well as precision and accuracy of the GloCyte, were determined. The GloCyte detected as few as 1 TNC/μL and 1 RBC/μL, and reliably counted as low as 3 TNCs/μL and 2 RBCs/μL. The total coefficient of variation was less than 20%. Comparison with cell counts obtained with a hemocytometer showed good correlation (>97%) between the GloCyte and the hemocytometer, including at very low cell counts. The GloCyte instrument is a precise, accurate, and stable system to obtain red cell and nucleated cell counts in CSF samples. It allows for the automated enumeration of even very low cell numbers, which is crucial for CSF analysis. These results suggest that GloCyte is an acceptable alternative to the manual method for all CSF samples, including those with normal cell counts. © 2017 John Wiley & Sons Ltd.

  3. A fully automated and scalable timing probe-based method for time alignment of the LabPET II scanners

    NASA Astrophysics Data System (ADS)

    Samson, Arnaud; Thibaudeau, Christian; Bouchard, Jonathan; Gaudin, Émilie; Paulin, Caroline; Lecomte, Roger; Fontaine, Réjean

    2018-05-01

    A fully automated time alignment method based on a positron timing probe was developed to correct the channel-to-channel coincidence time dispersion of the LabPET II avalanche photodiode-based positron emission tomography (PET) scanners. The timing probe was designed to directly detect positrons and generate an absolute time reference. The probe-to-channel coincidences are recorded and processed using firmware embedded in the scanner hardware to compute the time differences between detector channels. The time corrections are then applied in real-time to each event in every channel during PET data acquisition to align all coincidence time spectra, thus enhancing the scanner time resolution. When applied to the mouse version of the LabPET II scanner, the calibration of 6 144 channels was performed in less than 15 min and showed a 47% improvement on the overall time resolution of the scanner, decreasing from 7 ns to 3.7 ns full width at half maximum (FWHM).
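
    Schematically, the calibration reduces to estimating one offset per channel from the probe-coincidence time differences and subtracting it from subsequent event timestamps. The sketch below is illustrative only, not the LabPET II firmware; the channel count comes from the abstract, and all other numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels, n_events = 6144, 200   # channel count from the abstract; events hypothetical

    # Hypothetical per-channel offsets (ns) and probe-coincidence time differences.
    true_offset = rng.uniform(-3.0, 3.0, n_channels)
    measured = true_offset[:, None] + rng.normal(0.0, 0.5, (n_channels, n_events))

    # Calibration: estimate each channel's correction as its mean time difference
    # against the absolute reference provided by the positron timing probe.
    correction = measured.mean(axis=1)

    # Applying the calibration in real time amounts to subtracting the stored
    # correction from every event timestamp of the corresponding channel.
    residual = true_offset - correction
    print(round(float(np.std(residual)) * 1000, 1), "ps residual misalignment")
    ```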

  4. Evaluation of ICT filariasis card test using whole capillary blood: comparison with Knott's concentration and counting chamber methods.

    PubMed

    Njenga, S M; Wamae, C N

    2001-10-01

    An immunochromatographic card test (ICT) that uses fingerprick whole blood instead of serum for diagnosis of bancroftian filariasis has recently been developed. The card test was validated in the field in Kenya by comparing its sensitivity to the combined sensitivity of the Knott's concentration and counting chamber methods. A total of 102 (14.6%) and 117 (16.7%) persons were found to be microfilaremic by the Knott's concentration and counting chamber methods, respectively. The geometric mean intensities (GMI) were 74.6 microfilariae (mf)/ml and 256.5 mf/ml by the Knott's concentration and counting chamber methods, respectively. All infected individuals detected by both the Knott's concentration and counting chamber methods were also antigen positive by the ICT filariasis card test (100% sensitivity). Further, of 97 parasitologically amicrofilaremic persons, 24 (24.7%) were antigen positive by the ICT. The overall prevalence of antigenemia was 37.3%. Of 100 control persons from a nonendemic area, none was found to be filarial antigen positive (100% specificity). The results show that the new version of the ICT filariasis card test is a simple, sensitive, specific, and rapid test that is convenient in field settings.

  6. Inspection Methods in Programming.

    DTIC Science & Technology

    1981-06-01

    Counting is a specialization of Iterative-generation in which the generating function is Oneplus and the initial input is 1; this is Waters' second category of plan-building methods. The recoverable plan definition reads: TemporalPlan counting; specialization: iterative-generation; roles: .action (a function), .tail (counting); constraints: .action.op = oneplus and .action.input = 1.

  7. Performance Equivalence and Validation of the Soleris Automated System for Quantitative Microbial Content Testing Using Pure Suspension Cultures.

    PubMed

    Limberg, Brian J; Johnstone, Kevin; Filloon, Thomas; Catrenich, Carl

    2016-09-01

    Using United States Pharmacopeia-National Formulary (USP-NF) general method <1223> guidance, the Soleris® automated system and reagents (Nonfermenting Total Viable Count for bacteria and Direct Yeast and Mold for yeast and mold) were validated, using a performance equivalence approach, as an alternative to plate counting for total microbial content analysis using five representative microbes: Staphylococcus aureus, Bacillus subtilis, Pseudomonas aeruginosa, Candida albicans, and Aspergillus brasiliensis. Detection times (DTs) in the alternative automated system were linearly correlated with CFU/sample (R² = 0.94-0.97) with ≥70% accuracy per USP General Chapter <1223> guidance. The LOD and LOQ of the automated system were statistically similar to those of the traditional plate count method. This system was significantly more precise than plate counting (RSD 1.2-2.9% for DT vs. 7.8-40.6% for plate counts), was statistically comparable to plate counting with respect to variations in analyst, vial lots, and instruments, and was robust when variations in the operating detection thresholds (dTs; ±2 units) were used. The automated system produced accurate results, was more precise and less labor-intensive, and met or exceeded criteria for a valid alternative quantitative method, consistent with USP-NF general method <1223> guidance.

  8. Classicality condition on a system observable in a quantum measurement and a relative-entropy conservation law

    NASA Astrophysics Data System (ADS)

    Kuramochi, Yui; Ueda, Masahito

    2015-03-01

    We consider the information flow on a system observable X corresponding to a positive-operator-valued measure under a quantum measurement process Y described by a completely positive instrument from the viewpoint of the relative entropy. We establish a sufficient condition for the relative-entropy conservation law, which states that the average decrease in the relative entropy of the system observable X equals the relative entropy of the measurement outcome of Y, i.e., the information gain due to measurement. This sufficient condition is interpreted as an assumption of classicality in the sense that there exists a sufficient statistic in a joint successive measurement of Y followed by X such that the probability distribution of the statistic coincides with that of a single measurement of X for the premeasurement state. We show that in the case when X is a discrete projection-valued measure and Y is discrete, the classicality condition is equivalent to the relative-entropy conservation for arbitrary states. The general theory of relative-entropy conservation is applied to typical quantum measurement models, namely, quantum nondemolition measurement, destructive sharp measurements on two-level systems, photon counting, quantum counting, and homodyne and heterodyne measurements. These examples, except for the nondemolition and photon-counting measurements, do not satisfy the known Shannon-entropy conservation law proposed by Ban [M. Ban, J. Phys. A: Math. Gen. 32, 1643 (1999), 10.1088/0305-4470/32/9/012], implying that our approach based on the relative entropy is applicable to a wider class of quantum measurements.

  9. Aerobiological and phenological study of Pistacia in Córdoba city (Spain).

    PubMed

    Velasco-Jiménez, María José; Arenas, Manuel; Alcázar, Purificación; Galán, Carmen; Domínguez-Vilches, Eugenio

    2015-02-01

    Pistacia species grow in temperate regions, and are widespread in the Mediterranean area. Two species can be found in the Iberian Peninsula: Pistacia lentiscus L. and Pistacia terebinthus L. Airborne pollen from these species, recorded in some Spanish provinces, is regarded by some authors as potentially allergenic, and therefore should be of particular interest, given that these species are actually being introduced as ornamentals in parks and gardens. This paper deals with a study of daily and seasonal Pistacia airborne pollen counts in Córdoba city, analysed in parallel with field flowering phenology data. The study was carried out in Córdoba, using a volumetric Hirst-type sampler in accordance with Spanish Aerobiology Network guidelines. Phenological monitoring was performed weekly from January to May at 7 sites in the mountain areas north of Córdoba city. The Pistacia pollen season lasted an average of 41 days, from mid-March to end of April. Higher pollen counts were recorded in evening hours. The pollen index increased over the study period, and the pollen season coincided with phenological observations. Some airborne pollen grains were recorded once flowering had finished, due to re-suspension or transport from other locations. Pistacia pollen counts in Córdoba were low, but sufficient to identify seasonal and daily patterns. This pollen type should be taken into account in pollen calendars, in order to fully inform potential allergy-sufferers. The number of trees introduced as ornamentals should be carefully controlled, since widespread planting could increase airborne pollen levels. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. SURVIVAL OF SALMONELLA SPECIES IN RIVER WATER.

    EPA Science Inventory

    The survival of four Salmonella strains in river water microcosms was monitored using culturing techniques, direct counts, whole cell hybridization, scanning electron microscopy, and resuscitation techniques via the direct viable count method and flow cytometry. Plate counts of...

  11. Growth of coincident site lattice matched semiconductor layers and devices on crystalline substrates

    DOEpatents

    Norman, Andrew G; Ptak, Aaron J

    2013-08-13

    Methods of fabricating a semiconductor layer or device and said devices are disclosed. The methods include but are not limited to providing a substrate having a crystalline surface with a known lattice parameter (a). The method further includes growing a crystalline semiconductor layer on the crystalline substrate surface by coincident site lattice matched epitaxy, without any buffer layer between the crystalline semiconductor layer and the crystalline surface of the substrate. The crystalline semiconductor layer will be prepared to have a lattice parameter (a') that is related to the substrate lattice parameter (a). The lattice parameter (a') may be related to the lattice parameter (a) by a scaling factor derived from a geometric relationship between the respective crystal lattices.

  12. Smart fast blood counting of trace volumes of body fluids from various mammalian species using a compact custom-built microscope cytometer (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Smith, Zachary J.; Gao, Tingjuan; Lin, Tzu-Yin; Carrade-Holt, Danielle; Lane, Stephen M.; Matthews, Dennis L.; Dwyre, Denis M.; Wachsmann-Hogiu, Sebastian

    2016-03-01

    Cell counting in human body fluids such as blood, urine, and CSF is a critical step in the diagnostic process for many diseases. Current automated methods for cell counting are based on flow cytometry systems. However, these automated methods are bulky, costly, require significant user expertise, and are not well suited to counting cells in fluids other than blood. Therefore, their use is limited to large central laboratories that process enough volume of blood to recoup the significant capital investment these instruments require. We present in this talk a combination of a (1) low-cost microscope system, (2) simple sample preparation method, and (3) fully automated analysis designed for providing cell counts in blood and body fluids. We show results on both humans and companion and farm animals, showing that accurate red cell, white cell, and platelet counts, as well as hemoglobin concentration, can be accurately obtained in blood, as well as a 3-part white cell differential in human samples. We can also accurately count red and white cells in body fluids with a limit of detection ~3 orders of magnitude smaller than current automated instruments. This method uses less than 1 microliter of blood, and less than 5 microliters of body fluids to make its measurements, making it highly compatible with finger-stick style collections, as well as appropriate for small animals such as laboratory mice where larger volume blood collections are dangerous to the animal's health.

  13. The 12C(n, 2n)11C cross section from threshold to 26.5 MeV

    PubMed Central

    Eckert, T.; Hartshaw, G.; Padalino, S. J.; Polsin, D. N.; Russ, M.; Simone, A. T.; Brune, C. R.; Massey, T. N.; Parker, C. E.; Fitzgerald, R.; Sangster, T. C.; Regan, S. P.

    2018-01-01

    The 12C(n, 2n)11C cross section was measured from just below threshold to 26.5 MeV using the Pelletron accelerator at Ohio University. Monoenergetic neutrons, produced via the 3H(d,n)4He reaction, were allowed to strike targets of polyethylene and graphite. Activation of both targets was measured by counting positron annihilations resulting from the β+ decay of 11C. Annihilation gamma rays were detected, both in coincidence and singly, using back-to-back NaI detectors. The incident neutron flux was determined indirectly via 1H(n,p) protons elastically scattered from the polyethylene target. Previous measurements fall into upper and lower bands; the results of the present measurement are consistent with the upper band. PMID:29732443

  14. The 12C(n, 2n)11C cross section from threshold to 26.5 MeV.

    PubMed

    Yuly, M; Eckert, T; Hartshaw, G; Padalino, S J; Polsin, D N; Russ, M; Simone, A T; Brune, C R; Massey, T N; Parker, C E; Fitzgerald, R; Sangster, T C; Regan, S P

    2018-02-01

    The 12C(n, 2n)11C cross section was measured from just below threshold to 26.5 MeV using the Pelletron accelerator at Ohio University. Monoenergetic neutrons, produced via the 3H(d,n)4He reaction, were allowed to strike targets of polyethylene and graphite. Activation of both targets was measured by counting positron annihilations resulting from the β+ decay of 11C. Annihilation gamma rays were detected, both in coincidence and singly, using back-to-back NaI detectors. The incident neutron flux was determined indirectly via 1H(n,p) protons elastically scattered from the polyethylene target. Previous measurements fall into upper and lower bands; the results of the present measurement are consistent with the upper band.

  15. 12C(n, 2n)11C cross section from threshold to 26.5 MeV

    NASA Astrophysics Data System (ADS)

    Yuly, M.; Eckert, T.; Hartshaw, G.; Padalino, S. J.; Polsin, D. N.; Russ, M.; Simone, A. T.; Brune, C. R.; Massey, T. N.; Parker, C. E.; Fitzgerald, R.; Sangster, T. C.; Regan, S. P.

    2018-02-01

    The 12C(n, 2n)11C cross section was measured from just below threshold to 26.5 MeV using the Pelletron accelerator at Ohio University. Monoenergetic neutrons, produced via the 3H(d,n)4He reaction, were allowed to strike targets of polyethylene and graphite. Activation of both targets was measured by counting positron annihilations resulting from the β+ decay of 11C. Annihilation gamma rays were detected, both in coincidence and singly, using back-to-back NaI detectors. The incident neutron flux was determined indirectly via 1H(n,p) protons elastically scattered from the polyethylene target. Previous measurements fall into upper and lower bands; the results of the present measurement are consistent with the upper band.

  16. Investigation of large α production in reactions involving weakly bound 7Li

    NASA Astrophysics Data System (ADS)

    Pandit, S. K.; Shrivastava, A.; Mahata, K.; Parkar, V. V.; Palit, R.; Keeley, N.; Rout, P. C.; Kumar, A.; Ramachandran, K.; Bhattacharyya, S.; Nanal, V.; Palshetkar, C. S.; Nag, T. N.; Gupta, Shilpi; Biswas, S.; Saha, S.; Sethi, J.; Singh, P.; Chatterjee, A.; Kailas, S.

    2017-10-01

    The origin of the large α-particle production cross sections in systems involving weakly bound 7Li projectiles has been investigated by measuring the cross sections of all possible fragment-capture as well as complete fusion using the particle-γ coincidence, in-beam, and off-beam γ-ray counting techniques for the 7Li+93Nb system at near Coulomb barrier energies. Almost all of the inclusive α-particle yield has been accounted for. While the t-capture mechanism is found to be dominant (~70%), compound nuclear evaporation and breakup processes contribute ~15% each to the inclusive α-particle production in the measured energy range. Systematic behavior of the t-capture and inclusive α cross sections for reactions involving 7Li over a wide mass range is also reported.

  17. Method selection and adaptation for distributed monitoring of infectious diseases for syndromic surveillance.

    PubMed

    Xing, Jian; Burkom, Howard; Tokars, Jerome

    2011-12-01

    Automated surveillance systems require statistical methods to recognize increases in visit counts that might indicate an outbreak. In prior work we presented methods to enhance the sensitivity of C2, a commonly used time series method. In this study, we compared the enhanced C2 method with five regression models. We used emergency department chief complaint data from the US CDC BioSense surveillance system, aggregated by city (total of 206 hospitals, 16 cities) during 5/2008-4/2009. Data for six syndromes (asthma, gastrointestinal, nausea and vomiting, rash, respiratory, and influenza-like illness) were used and were stratified by mean count (1-19, 20-49, ≥50 per day) into 14 syndrome-count categories. We compared the sensitivity for detecting single-day artificially added increases in syndrome counts. Four modifications of the C2 time series method, and five regression models (two linear and three Poisson), were tested. A constant alert rate of 1% was used for all methods. Among the regression models tested, we found that a Poisson model controlling for the logarithm of total visits (i.e., visits both meeting and not meeting a syndrome definition), day of week, and 14-day time period was best. Among the 14 syndrome-count categories, time series and regression methods produced approximately the same sensitivity (<5% difference) in six; in six categories, the regression method had higher sensitivity (range 6-14% improvement), and in two categories the time series method had higher sensitivity. When automated data are aggregated to the city level, a Poisson regression model that controls for total visits produces the best overall sensitivity for detecting artificially added visit counts. This improvement was achieved without increasing the alert rate, which was held constant at 1% for all methods. These findings will improve our ability to detect outbreaks in automated surveillance system data. Published by Elsevier Inc.
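
    The best-performing model above is a Poisson generalized linear model controlling for log total visits, day of week, and 14-day period. A minimal Python sketch with statsmodels on synthetic stand-in data (the column names, rates, and alerting rule are illustrative assumptions, not the BioSense implementation):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf
      from scipy.stats import poisson

      # Synthetic stand-in for one city/syndrome daily series.
      rng = np.random.default_rng(0)
      n = 365
      df = pd.DataFrame({
          "date": pd.date_range("2008-05-01", periods=n),
          "total_visits": rng.integers(800, 1200, size=n).astype(float),
      })
      df["dow"] = df["date"].dt.dayofweek      # day-of-week effect
      df["period14"] = np.arange(n) // 14      # 14-day time period
      df["syndrome_count"] = rng.poisson(0.03 * df["total_visits"])

      # Poisson regression controlling for log(total visits),
      # day of week, and 14-day period.
      fit = smf.glm(
          "syndrome_count ~ np.log(total_visits) + C(dow) + C(period14)",
          data=df, family=sm.families.Poisson(),
      ).fit()

      # A 1% alert rate: flag days whose observed count exceeds the
      # 99th percentile of the fitted Poisson distribution.
      threshold = poisson.ppf(0.99, fit.fittedvalues)
      alerts = df.loc[df["syndrome_count"] > threshold, "date"]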

  18. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of each parameter to the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded a significant relative reduction in standard deviation of 12%-29% and 32%-70% for 50 × 10^6 and 10 × 10^6 detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40-50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms; it yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
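
    The PCG iteration itself is standard; below is a textbook linear-system sketch in Python with a diagonal preconditioner. The paper applies the same machinery to a penalized Poisson log-likelihood, with the diagonal entries described above playing the role of Minv_diag; all names here are illustrative:

      import numpy as np

      def pcg(A, b, Minv_diag, x0=None, iters=50, tol=1e-8):
          # Preconditioned conjugate gradient for A x = b, with the
          # preconditioner given as the diagonal of M^-1.
          x = np.zeros_like(b) if x0 is None else x0.copy()
          r = b - A @ x
          z = Minv_diag * r
          p = z.copy()
          rz = r @ z
          for _ in range(iters):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = Minv_diag * r          # apply the preconditioner
              rz_new = r @ z
              p = z + (rz_new / rz) * p  # update the search direction
              rz = rz_new
          return x

    A diagonal preconditioner costs almost nothing per iteration while rescaling parameters of very different magnitudes, which is what drives the faster convergence reported above.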

  19. Comparison of Kato-Katz thick-smear and McMaster egg counting method for the assessment of drug efficacy against soil-transmitted helminthiasis in school children in Jimma Town, Ethiopia.

    PubMed

    Bekana, Teshome; Mekonnen, Zeleke; Zeynudin, Ahmed; Ayana, Mio; Getachew, Mestawet; Vercruysse, Jozef; Levecke, Bruno

    2015-10-01

    There is a paucity of studies that compare the efficacy of drugs as measured by different diagnostic methods. We compared the efficacy of a single oral dose of albendazole (400 mg), measured as egg reduction rate, against soil-transmitted helminth infections in 210 school children (Jimma Town, Ethiopia) using both the Kato-Katz thick-smear and McMaster egg counting methods. Our results indicate that differences in sensitivity and faecal egg counts did not imply a significant difference in egg reduction rate estimates. The choice of a diagnostic method to assess drug efficacy should not be based on sensitivity and faecal egg counts only. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Quick counting method for estimating the number of viable microbes on food and food processing equipment.

    PubMed

    Winter, F H; York, G K; el-Nakhal, H

    1971-07-01

    A rapid method for estimating the extent of microbial contamination on food and on food processing equipment is described. Microbial cells are rinsed from food or swab samples with sterile diluent and concentrated on the surface of membrane filters. The filters are incubated on a suitable bacteriological medium for 4 hr at 30 C, heated at 105 C for 5 min, and stained. The membranes are then dried at 60 C for 15 min, rendered transparent with immersion oil, and examined microscopically. Data obtained by the rapid method were compared with counts of the same samples determined by the standard plate count method. Over 60 comparisons resulted in a correlation coefficient of 0.906. Because the rapid technique can provide reliable microbiological count information in extremely short times, it can be a most useful tool in the routine evaluation of microbial contamination of food processing facilities and for some foods.

  1. Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.

    PubMed

    Uhl, Wolfgang; Schaule, Gabriela

    2004-05-01

    Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases or when substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as most probable causes and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient rich agar according to German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and to demonstrate the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were three times less than after 7 days. As the precision of plate count methods is very poor for counts less than 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events HPC(R2A) gives a further margin of about 2 weeks for reaction before HPC(GDWR). Copyright 2003 Elsevier B.V.

  2. SURVIVAL OF SALMONELLA SPECIES IN RIVER WATER

    EPA Science Inventory

    The survival of four Salmonella strains in river water microcosms was monitored by culturing techniques, direct counts, whole-cell hybridization, scanning electron microscopy, and resuscitation techniques via the direct viable count method and flow cytometry. Plate counts of bact...

  3. Event coincidence analysis for quantifying statistical interrelationships between event time series. On the role of flood events as triggers of epidemic outbreaks

    NASA Astrophysics Data System (ADS)

    Donges, J. F.; Schleussner, C.-F.; Siegmund, J. F.; Donner, R. V.

    2016-05-01

    Studying event time series is a powerful approach for analyzing the dynamics of complex dynamical systems in many fields of science. In this paper, we describe the method of event coincidence analysis, which provides a framework for quantifying the strength, directionality, and time lag of statistical interrelationships between event series. Event coincidence analysis allows one to formulate and test null hypotheses on the origin of the observed interrelationships, including tests based on Poisson processes or, more generally, stochastic point processes with a prescribed inter-event time distribution and other higher-order properties. Applying the framework to country-level observational data yields evidence that flood events have acted as triggers of epidemic outbreaks globally since the 1950s. Facing projected future changes in the statistics of climatic extreme events, statistical techniques such as event coincidence analysis will be relevant for investigating the impacts of anthropogenic climate change on human societies and ecosystems worldwide.
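
    A minimal Python sketch of the basic coincidence rate at the heart of the method, with window delta_t and optional lag tau (the notation follows common event coincidence analysis usage and need not match the paper's):

      import numpy as np

      def trigger_coincidence_rate(a_times, b_times, delta_t, tau=0.0):
          # Fraction of events in series A (e.g., floods) followed
          # within (tau, tau + delta_t] by at least one event in
          # series B (e.g., outbreaks).
          a = np.asarray(a_times, dtype=float)
          b = np.asarray(b_times, dtype=float)
          hits = sum(np.any((b - t > tau) & (b - t <= tau + delta_t))
                     for t in a)
          return hits / len(a)

    Under a Poisson null with B a rate-lambda process, each A event is "hit" independently with probability 1 - exp(-lambda * delta_t), so the number of coincidences is binomial and an analytic p-value follows.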

  4. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    PubMed

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  5. Coincidence theory: Seeking a perceptual preference for just intonation, equal temperament, and Pythagorean intonation in excerpts for wind instruments

    NASA Astrophysics Data System (ADS)

    Long, Derle Ray

    Coincidence theory states that when the components of harmony are in enhanced alignment, the sound will be more consonant to the human auditory system. One objective way to examine the components of harmony is to investigate the mathematical alignment of a particular sound or harmony. The study examined preference responses to excerpts tuned in just intonation, Pythagorean intonation, and equal temperament. Musical excerpts were presented in pairs, and study subjects simply picked the version from each pair that they perceived as the most consonant. Results of the study revealed an overall preference for equal temperament, in contradiction to coincidence theory. Several additional areas for research are suggested to further investigate the results of this study.

  6. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
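
    A compact Python sketch of the scheme described above: reverse linear interpolation between the EU limits for the starting value, then Newton-Raphson iterations on the counts-to-EU calibration polynomial (names and the fixed iteration count are illustrative assumptions, not the Dryden FORTRAN):

      import numpy as np

      def eu_to_counts(cal_coeffs, eu, count_limits, eu_limits, iters=8):
          # cal_coeffs: counts -> EU calibration polynomial
          # coefficients, lowest order first (up to sixth order).
          p = np.polynomial.Polynomial(cal_coeffs)
          dp = p.deriv()
          c_lo, c_hi = count_limits
          e_lo, e_hi = eu_limits
          # Reverse linear interpolation between the EU limits gives
          # the initial count estimate.
          x = c_lo + (eu - e_lo) * (c_hi - c_lo) / (e_hi - e_lo)
          # Newton-Raphson refinement: solve p(x) - eu = 0.
          for _ in range(iters):
              x = x - (p(x) - eu) / dp(x)
          return int(round(x))

    No interpolation table is needed: the starting guess is cheap and Newton-Raphson converges quadratically near the root, which is what makes the sub-one-count accuracy achievable in real time.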

  7. Comparison of a new GIS-based technique and a manual method for determining sinkhole density: An example from Illinois' sinkhole plain

    USGS Publications Warehouse

    Angel, J.C.; Nelson, D.O.; Panno, S.V.

    2004-01-01

    A new Geographic Information System (GIS) method was developed as an alternative to the hand-counting of sinkholes on topographic maps for density and distribution studies. Sinkhole counts were prepared by hand and compared to those generated from USGS DLG data using ArcView 3.2 and the ArcInfo Workstation component of ArcGIS 8.1 software. The study area for this investigation, chosen for its great density of sinkholes, included the 42 public land survey sections that reside entirely within the Renault Quadrangle in southwestern Illinois. Differences between the sinkhole counts derived from the two methods for the Renault Quadrangle study area were negligible. Although the initial development and refinement of the GIS method required considerably more time than counting sinkholes by hand, the flexibility of the GIS method is expected to provide significant long-term benefits and time savings when mapping larger areas and expanding research efforts. © 2004 by The National Speleological Society.

  8. A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.

    PubMed

    Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi

    2010-04-01

    The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers, and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades energy resolution and positioning accuracy. Event pileups normally exist in a high count-rate system, and baseline drift creates errors in the event pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many methods have been reported for BLR, from classic analog methods to digital filter solutions. However, a single-channel BLR using analog methods can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. This method can self-track the baseline without a micro-controller involved. The circuit consists of two digital counter/timers, one comparator, one register, and one subtraction unit. Simulations show a single channel works at a 30 Mcps count rate under pileup conditions. A total of 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
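
    In software terms, the idea reduces to a running baseline estimate built only from samples that cannot belong to a gamma pulse, subtracted from every sample. A minimal Python sketch (the threshold, update gain, and structure are illustrative assumptions, not the published FPGA circuit):

      import numpy as np

      def restore_baseline(samples, pulse_threshold, alpha=1e-3):
          # Samples below pulse_threshold are treated as baseline-only
          # and folded into a running (exponential moving average)
          # estimate; the estimate is subtracted from every sample.
          baseline = 0.0
          restored = np.empty(len(samples), dtype=float)
          for i, s in enumerate(samples):
              if s < pulse_threshold:                 # exclude gamma pulses
                  baseline += alpha * (s - baseline)  # running update
              restored[i] = s - baseline
          return restored

    An exponential moving average needs only one register and one multiply-accumulate per sample, which is consistent with the very small hardware footprint described above.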

  9. In situ DNA hybridized chain reaction (FISH-HCR) as a better method for quantification of bacteria and archaea within marine sediment

    NASA Astrophysics Data System (ADS)

    Buongiorno, J.; Lloyd, K. G.; Shumaker, A.; Schippers, A.; Webster, G.; Weightman, A.; Turner, S.

    2015-12-01

    Nearly 75% of the Earth's surface is covered by marine sediment that is home to an estimated 2.9 × 10^29 microbial cells. A substantial impediment to understanding the abundance and distribution of cells within marine sediment is the lack of a consistent and reliable method for their taxon-specific quantification. Catalyzed reporter fluorescent in situ hybridization (CARD-FISH) provides taxon-specific enumeration, but this process requires passing a large enzyme through cell membranes, decreasing its precision relative to general cell counts using a small DNA stain. In 2015, Yamaguchi et al. developed FISH hybridization chain reaction (FISH-HCR) as an in situ whole-cell detection method for environmental microorganisms. FISH-HCR amplifies the fluorescent signal, as does CARD-FISH, but it allows for milder cell permeation methods that might prevent yield loss. To compare FISH-HCR to CARD-FISH, we examined bacteria and archaea cell counts within two sediment cores, Lille Belt (~78 meters deep) and Landsort Deep (90 meters deep), which were retrieved from the Baltic Sea Basin during IODP Expedition 347. Preliminary analysis shows that CARD-FISH counts are below the quantification limit for most depths across both cores. By contrast, quantification of cells was possible with FISH-HCR at all examined depths. When quantification with CARD-FISH was above the limit of detection, counts with FISH-HCR were up to 11-fold higher for Bacteria and 3-fold higher for Archaea from the same sediment sample. Further, FISH-HCR counts closely follow the trends of the onboard counts, indicating that FISH-HCR may better reflect the cellular abundance within marine sediment than other quantification methods, including qPCR. Using FISH-HCR, we found that archaeal cell counts were on average greater than bacterial cell counts, but within the same order of magnitude.

  10. Comparison of McMaster and FECPAKG2 methods for counting nematode eggs in the faeces of alpacas.

    PubMed

    Rashid, Mohammed H; Stevenson, Mark A; Waenga, Shea; Mirams, Greg; Campbell, Angus J D; Vaughan, Jane L; Jabbar, Abdul

    2018-05-02

    This study aimed to compare the FECPAKG2 and McMaster techniques for counting gastrointestinal nematode eggs in the faeces of alpacas using two floatation solutions (saturated sodium chloride and sucrose solutions). Faecal egg counts from both techniques were compared using Lin's concordance correlation coefficient and Bland and Altman statistics. Results showed moderate to good agreement between the two methods, with better agreement achieved when saturated sugar is used as the floatation fluid, particularly when faecal egg counts are less than 1000 eggs per gram of faeces. To the best of our knowledge, this is the first study to assess agreement between the McMaster and FECPAKG2 methods for estimating faecal egg counts in South American camelids.
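
    For reference, Lin's concordance correlation coefficient has a simple closed form; a minimal Python sketch (input naming is illustrative):

      import numpy as np

      def lins_ccc(x, y):
          # Lin's concordance correlation coefficient for paired
          # measurements, e.g., McMaster vs. FECPAKG2 eggs-per-gram.
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          s_xy = np.mean((x - x.mean()) * (y - y.mean()))
          return 2 * s_xy / (x.var() + y.var()
                             + (x.mean() - y.mean()) ** 2)

    Unlike Pearson correlation, the denominator penalizes both scale and location differences, so the coefficient measures agreement with the 45-degree line rather than mere linear association.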

  11. The diabetes nutrition education study randomized controlled trial: A comparative effectiveness study of approaches to nutrition in diabetes self-management education.

    PubMed

    Bowen, Michael E; Cavanaugh, Kerri L; Wolff, Kathleen; Davis, Dianne; Gregory, Rebecca P; Shintani, Ayumi; Eden, Svetlana; Wallston, Ken; Elasy, Tom; Rothman, Russell L

    2016-08-01

    To compare the effectiveness of different approaches to nutrition education in diabetes self-management education and support (DSME/S). We randomized 150 adults with type 2 diabetes to either certified diabetes educator (CDE)-delivered DSME/S with carbohydrate gram counting or the modified plate method versus general health education. The primary outcome was change in HbA1C over 6 months. At 6 months, HbA1C improved within the plate method [-0.83% (-1.29, -0.33), P<0.001] and carbohydrate counting [-0.63% (-1.03, -0.18), P=0.04] groups but not the control group [P=0.34]. Change in HbA1C from baseline between the control and intervention groups was not significant at 6 months (carbohydrate counting, P=0.36; modified plate method, P=0.08). In a pre-specified subgroup analysis of patients with a baseline HbA1C 7-10%, change in HbA1C from baseline improved in the carbohydrate counting [-0.86% (-1.47, -0.26), P=0.006] and plate method groups [-0.76% (-1.33, -0.19), P=0.01] compared to controls. CDE-delivered DSME/S focused on carbohydrate counting or the modified plate method improved glycemic control in patients with an initial HbA1C between 7 and 10%. Both carbohydrate counting and the modified plate method improve glycemic control as part of DSME/S. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Accurate measurement of peripheral blood mononuclear cell concentration using image cytometry to eliminate RBC-induced counting error.

    PubMed

    Chan, Leo Li-Ying; Laverty, Daniel J; Smith, Tim; Nejad, Parham; Hei, Hillary; Gandhi, Roopali; Kuksin, Dmitry; Qiu, Jean

    2013-02-28

    Peripheral blood mononuclear cells (PBMCs) have been widely researched in the fields of immunology, infectious disease, oncology, transplantation, hematological malignancy, and vaccine development. Specifically, in immunology research, PBMCs have been utilized to monitor concentration, viability, proliferation, and cytokine production from immune cells, which are critical for both clinical trials and biomedical research. The viability and concentration of isolated PBMCs are traditionally measured by manual counting with trypan blue (TB) using a hemacytometer. One of the common issues of PBMC isolation is red blood cell (RBC) contamination. The RBC contamination can be dependent on the donor sample and/or the technical skill level of the operator. RBC contamination in a PBMC sample can introduce error to the measured concentration, which can pass down to future experimental assays performed on these cells. To resolve this issue, an RBC lysing protocol can be used to eliminate potential error caused by RBC contamination. In recent years, a rapid fluorescence-based image cytometry system has been utilized for bright-field and fluorescence imaging analysis of cellular characteristics (Nexcelom Bioscience LLC, Lawrence, MA). The Cellometer image cytometry system has demonstrated the capability of automated concentration and viability detection in disposable counting chambers of unpurified mouse splenocytes and PBMCs stained with acridine orange (AO) and propidium iodide (PI) under fluorescence detection. In this work, we demonstrate the ability of the Cellometer image cytometry system to accurately measure PBMC concentration, despite RBC contamination, by comparison of five different total PBMC counting methods: (1) manual counting of trypan blue-stained PBMCs in a hemacytometer, (2) manual counting of PBMCs in bright-field images, (3) manual counting of acetic acid lysing of RBCs with TB-stained PBMCs, (4) automated counting of acetic acid lysing of RBCs with PI-stained PBMCs, and (5) the AO/PI dual staining method. The results show comparable total PBMC counting among all five methods, which validates the AO/PI staining method for PBMC measurement in the image cytometry method. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Observation of autoionization in O2 by an electron-electron coincidence method

    NASA Astrophysics Data System (ADS)

    Doering, J. P.; Yang, J.; Cooper, J. W.

    1995-01-01

    A strong transition to an autoionizing state has been observed in O2 at 16.83 ± 0.11 eV by means of a new electron-electron coincidence method. The method uses the fact that electrons arising from autoionizing states appear at a constant energy loss corresponding to the excitation energy of the autoionizing state, rather than at a constant ionization potential as do electrons produced by direct ionization. Comparison of the present data with previous photoionization studies suggests that the autoionizing O2 state is the same state deduced to be responsible for abnormal vibrational intensities in the O2+ X 2Πg ground state when 16.85 eV Ne(I) photons are used. These electron-electron coincidence experiments provide a direct new method for the study of autoionization produced by electron impact.

  14. Count distribution for mixture of two exponentials as renewal process duration with applications

    NASA Astrophysics Data System (ADS)

    Low, Yeh Ching; Ong, Seng Huat

    2016-06-01

    A count distribution is presented by considering a renewal process where the distribution of the duration is a finite mixture of exponential distributions. This distribution is able to model overdispersion, a feature often found in observed count data. The computation of the probabilities and the renewal function (expected number of renewals) is examined. Parameter estimation by the method of maximum likelihood is considered, with applications of the count distribution to real frequency count data exhibiting overdispersion. It is shown that the mixture-of-exponentials count distribution fits overdispersed data better than the Poisson process and serves as an alternative to the gamma count distribution.
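
    For concreteness, a sketch of the assumed duration model in LaTeX, taking a two-component mixture with weight p (the symbols are generic and need not match the paper's notation):

      % Mixture-of-two-exponentials (hyperexponential) renewal duration:
      f(t) = p\,\lambda_1 e^{-\lambda_1 t}
             + (1-p)\,\lambda_2 e^{-\lambda_2 t},
      \qquad 0 \le p \le 1, \; t > 0.
      % Its coefficient of variation c exceeds 1 whenever
      % \lambda_1 \neq \lambda_2, so the induced count N(t) is
      % asymptotically overdispersed relative to Poisson:
      % \operatorname{Var}[N(t)] / \operatorname{E}[N(t)] \to c^2 > 1.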

  15. On the Sensitivity of the HAWC Observatory to Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Hays, E.; McEnery, Julie E.

    2011-01-01

    We present the sensitivity of HAWC to Gamma Ray Bursts (GRBs). HAWC is a very high-energy gamma-ray observatory currently under construction in Mexico at an altitude of 4100 m. It will observe atmospheric air showers via the water Cherenkov method. HAWC will consist of 300 large water tanks instrumented with 4 photomultipliers each. HAWC has two data acquisition (DAQ) systems. The main DAQ system reads out coincident signals in the tanks and reconstructs the direction and energy of individual atmospheric showers. The scaler DAQ counts the hits in each photomultiplier tube (PMT) in the detector and searches for a statistical excess over the noise of all PMTs. We show that HAWC has a realistic opportunity to observe the high-energy power-law components of GRBs that extend at least up to 30 GeV, as has been observed by Fermi LAT. The two DAQ systems have an energy threshold that is low enough to observe events similar to GRB 090510 and GRB 090902b with the characteristics observed by Fermi LAT. HAWC will provide information about the high-energy spectra of GRBs, which in turn will lead to understanding of e-pair attenuation in GRB jets and extragalactic background light absorption, as well as establishing the highest energy to which GRBs accelerate particles.

  16. PET image reconstruction: a robust state space approach.

    PubMed

    Liu, Huafeng; Tian, Yi; Shi, Pengcheng

    2005-01-01

    Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet the basic assumptions of these algorithms. In addition, the difficulty in determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the use of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems into a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of the measurement data. Since the H∞ filter seeks minimax (minimum-maximum) error estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of the measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data with favorable results.

  17. Limitations of on-site dairy farm regulatory debits as milk quality predictors.

    PubMed

    Borneman, Darand L; Stiegert, Kyle; Ingham, Steve

    2015-03-01

    In the United States, compliance with grade A raw fluid milk regulatory standards is assessed via laboratory milk quality testing and by on-site inspection of producers (farms). This study evaluated the correlation between on-site survey debits being marked and somatic cell count (SCC) or standard plate count (SPC) laboratory results for 1,301 Wisconsin grade A dairy farms in 2012. Debits recorded on the survey form were tested as predictors of laboratory results utilizing ordinary least squares regression to determine if results of the current method for on-site evaluation of grade A dairy farms accurately predict SCC and SPC test results. Such a correlation may indicate that current methods of on-site inspection serve the primary intended purpose of assuring availability of high-quality milk. A model for predicting SCC was estimated using ordinary least squares regression methods. Step-wise selected regressors of grouped debit items were able to predict SCC levels with some degree of accuracy (adjusted R² = 0.1432). Specific debit items, seasonality, and farm size were the best predictors of SCC levels. The SPC data presented an analytical challenge because over 75% of the SPC observations were at or below a 25,000 cfu/mL threshold but were recorded by testing laboratories as at the threshold value. This classic censoring problem necessitated the use of a Tobit regression approach. Even with this approach, prediction of SPC values based on on-site survey criteria was much less successful (adjusted R² = 0.034) and provided little support for the on-site survey system as a way to inform farmers about making improvements that would improve SPC. The lower level of correlation with SPC may indicate that factors affecting SPC are more varied and differ from those affecting SCC. Further, unobserved deficiencies in postmilking handling and storage sanitation could enhance bacterial growth and increase SPC, whereas postmilking sanitation will have no effect on SCC because somatic cells do not reproduce in stored milk. Results suggest that close examination, and perhaps redefinition, of survey debits, along with making the survey coincident with SCC and SPC sampling, could make the on-site survey a better tool for ensuring availability of high-quality milk. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Red Blood Cell Count Automation Using Microscopic Hyperspectral Imaging Technology.

    PubMed

    Li, Qingli; Zhou, Mei; Liu, Hongying; Wang, Yiting; Guo, Fangmin

    2015-12-01

    Red blood cell counts have been proven to be one of the most frequently performed blood tests and are valuable for early diagnosis of some diseases. This paper describes an automated red blood cell counting method based on microscopic hyperspectral imaging technology. Unlike light microscopy-based red blood cell counting methods, a combined spatial and spectral algorithm is proposed to identify red blood cells by integrating active contour models and automated two-dimensional k-means with the spectral angle mapper algorithm. Experimental results show that the proposed algorithm performs better than a purely spatial algorithm because the new algorithm can jointly use the spatial and spectral information of blood cells.
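
    The spectral angle mapper step reduces to an angle between spectra; a minimal Python sketch (function and argument names are illustrative, not the authors' code):

      import numpy as np

      def spectral_angle(pixel, reference):
          # Spectral angle mapper: the angle (radians) between a
          # pixel's spectrum and a reference spectrum; small angles
          # indicate spectrally similar material (e.g., hemoglobin).
          a = np.asarray(pixel, dtype=float)
          b = np.asarray(reference, dtype=float)
          cos_t = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
          return float(np.arccos(np.clip(cos_t, -1.0, 1.0)))

    Because the angle is invariant to overall brightness, it separates material identity from illumination, which is why it pairs well with purely spatial cues such as active contours.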

  19. Coincidence and coherent data analysis methods for gravitational wave bursts in a network of interferometric detectors

    NASA Astrophysics Data System (ADS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-11-01

    Network data analysis methods are the only way to properly separate real gravitational wave (GW) transient events from detector noise. They can be divided into two generic classes: the coincidence method and the coherent analysis. The former uses lists of selected events provided by each interferometer belonging to the network and tries to correlate them in time to identify a physical signal. Instead of this binary treatment of detector outputs (signal present or absent), the latter method involves first merging the interferometer data and then looking for a common pattern, consistent with an assumed GW waveform and a given source location in the sky. Thresholds are only applied later, to validate or reject the hypothesis made. As coherent algorithms use more complete information than coincidence methods, they are expected to provide better detection performance, but at a higher computational cost. An efficient filter must yield a good compromise between a low false alarm rate (hence triggering on data at a manageable rate) and a high detection efficiency. Therefore, the comparison of the two approaches is achieved using so-called receiver operating characteristics (ROC), giving the relationship between the false alarm rate and the detection efficiency for a given method. This paper investigates this question via Monte Carlo simulations, using the network model developed in a previous article. Its main conclusions are the following. First, a three-interferometer network such as Virgo-LIGO is found to be too small to reach good detection efficiencies at low false alarm rates: larger configurations are needed to reach a confidence level high enough to validate a detected event as a true GW. In addition, an efficient network must contain interferometers with comparable sensitivities: studying the three-interferometer LIGO network shows that the 2-km interferometer, with half the sensitivity, leads to a strong reduction in performance compared to a network of three interferometers with full sensitivity. Finally, it is shown that coherent analyses are feasible for burst searches and are clearly more efficient than coincidence strategies. Therefore, developing such methods should be an important goal of worldwide collaborative data analysis.

  20. How to Quantify Penile Corpus Cavernosum Structures with Histomorphometry: Comparison of Two Methods

    PubMed Central

    Felix-Patrício, Bruno; De Souza, Diogo Benchimol; Gregório, Bianca Martins; Costa, Waldemar Silva; Sampaio, Francisco José

    2015-01-01

    The use of morphometric tools in biomedical research permits the accurate comparison of specimens subjected to different conditions, and the surface density of structures is commonly used for this purpose. The traditional point-counting method is reliable but time-consuming, with computer-aided methods being proposed as an alternative. The aim of this study was to compare the surface density data of penile corpus cavernosum trabecular smooth muscle in different groups of rats, measured by two observers using the point-counting or color-based segmentation method. Ten normotensive and 10 hypertensive male rats were used in this study. Rat penises were processed to obtain smooth muscle immunostained histological slices, and photomicrographs were captured for analysis. The smooth muscle surface density was measured in both groups by two different observers, by the point-counting method and by the color-based segmentation method. Hypertensive rats showed an increase in smooth muscle surface density by the two methods, and no difference was found between the results of the two observers. However, surface density values were higher by the point-counting method. The use of either method did not influence the final interpretation of the results, and both proved to have adequate reproducibility. However, as differences were found between the two methods, results obtained by one method should not be compared with those obtained by the other. PMID:26413547
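
    The point-counting estimator itself is simple; a minimal Python sketch on a binary mask of the stained structure (the grid step and names are illustrative):

      import numpy as np

      def surface_density(structure_mask, step):
          # Point-counting estimate: overlay a regular grid of test
          # points (every `step` pixels) on a binary mask of the
          # structure and report hits / total test points.
          mask = np.asarray(structure_mask, dtype=bool)
          pts = mask[::step, ::step]
          return pts.sum() / pts.size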
