DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, A K; Koniczek, M; Antonuk, L E
Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed circuit simulations and prototyping is expected. This work was partially supported by NIH grant no. R01-EB000558.
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
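As a generic illustration of the kind of count-data simulation the tutorial above deals with (this is not code from the tutorial, and all parameter values are arbitrary assumptions), the following Python sketch draws per-interval counts from a Poisson model and from a gamma-Poisson (negative binomial) mixture to mimic overdispersion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: mean count per interval, dispersion, and design size.
lam = 3.0          # mean number of events per observation interval
size = 2.0         # negative-binomial shape; variance = lam + lam**2 / size
n_subjects, n_intervals = 500, 10

# Poisson counts: variance equals the mean.
poisson_counts = rng.poisson(lam, size=(n_subjects, n_intervals))

# Overdispersed counts via a gamma-Poisson mixture (negative binomial):
# each subject gets its own rate drawn from a gamma distribution.
subject_rates = rng.gamma(shape=size, scale=lam / size, size=(n_subjects, 1))
nb_counts = rng.poisson(subject_rates, size=(n_subjects, n_intervals))

for name, c in [("Poisson", poisson_counts), ("Negative binomial", nb_counts)]:
    print(f"{name:18s} mean = {c.mean():.2f}   variance = {c.var():.2f}")
```

The printed variances illustrate the basic dispersion diagnostic: variance roughly equal to the mean for the Poisson counts, and larger than the mean for the overdispersed counts.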
Monte Carlo simulation of Ray-Scan 64 PET system and performance evaluation using GATE toolkit
NASA Astrophysics Data System (ADS)
Li, Suying; Zhang, Qiushi; Vuletic, Ivan; Xie, Zhaoheng; Yang, Kun; Ren, Qiushi
2017-02-01
In this study, we aimed to develop a GATE model for the simulation of the Ray-Scan 64 PET scanner and model its performance characteristics. A detailed implementation of the system geometry and physical processes was included in the simulation model. We then modeled the performance characteristics of the Ray-Scan 64 PET system for the first time, based on the National Electrical Manufacturers Association (NEMA) NU-2 2007 protocols, and validated the model against experimental measurements, including spatial resolution, sensitivity, counting rates and noise equivalent count rate (NECR). Moreover, an accurate dead time module was investigated to simulate the counting rate performance. Overall results showed reasonable agreement between simulation and experimental data. The validation results showed the reliability and feasibility of the GATE model to evaluate the major performance characteristics of the Ray-Scan 64 PET system. It provides a useful tool for a wide range of research applications.
Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE
NASA Astrophysics Data System (ADS)
Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.
2006-02-01
A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. Finally, the image quality validation study revealed a good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of different volume spheres and two different (clinical level based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; ...
2015-04-09
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
Pulse pileup statistics for energy discriminating photon counting x-ray detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir
Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N0, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% of N0, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The monoenergetic image with maximum contrast-to-noise ratio from dual energy imaging with ideal photon counting is only slightly better than with dual kVp energy integration, and with a bipolar pulse model, energy integration outperforms photon counting for this particular metric because of the count rate losses. However, the material resolving capability of photon counting can be superior to energy integration with dual kVp even in the presence of pileup because of the energy information available to photon counting. Conclusions: A computationally efficient multinomial approximation of the count statistics that is based on the mean output spectrum can accurately predict imaging performance. This enables photon counting system designers to directly relate the effect of pileup to its impact on imaging statistics and how to best take advantage of the benefits of energy discriminating photon counting detectors, such as material separation with spectral imaging.
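As a rough sketch of the idealized nonparalyzable dead-time behaviour underlying the count statistics discussed above (this is not the authors' analytical derivation; the rate and dead-time values are arbitrary assumptions), one can simulate Poisson arrivals, apply a nonparalyzable dead-time, and compare the simulated count loss with the textbook expression m = n/(1 + n*tau):

```python
import numpy as np

rng = np.random.default_rng(1)

tau = 100e-9        # assumed nonparalyzable dead-time [s]
true_rate = 2.0e6   # assumed incident photon rate [counts/s]
t_total = 0.1       # simulated acquisition time [s]

# Poisson arrivals over the acquisition window.
n_in = rng.poisson(true_rate * t_total)
arrivals = np.sort(rng.uniform(0.0, t_total, n_in))

# Nonparalyzable dead-time: an event is recorded only if it arrives at
# least tau after the previously *recorded* event.
n_out, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:
        n_out += 1
        last = t

loss_sim = 1.0 - n_out / n_in
loss_analytic = 1.0 - 1.0 / (1.0 + true_rate * tau)   # from m = n / (1 + n*tau)
print(f"simulated count loss: {100 * loss_sim:.1f}%")
print(f"analytic count loss:  {100 * loss_analytic:.1f}%")
```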
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
A new clocking method for a charge coupled device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umezu, Rika; Kitamoto, Shunji, E-mail: kitamoto@rikkyo.ac.jp; Murakami, Hiroshi
2014-07-15
We propose and demonstrate a new clocking method for a charge-coupled device (CCD). When a CCD is used as a photon counting detector for X-rays, its weak point is the limitation of its counting rate, because a high counting rate produces non-negligible pile-up of photons. In astronomical usage, this pile-up is especially severe for observations of bright point-like objects. One typical idea to reduce the pile-up is the parallel sum (P-sum) mode. This mode completely loses one-dimensional information. Our new clocking method, panning mode, provides complementary properties between the normal mode and the P-sum mode. We performed a simple simulation in order to investigate the pile-up probability and compared the simulated result with the actually obtained event rates. Using this simulation and the experimental results, we compared the pile-up tolerance of various clocking modes, including our new method, and also compared their other characteristics.
Detecting trends in raptor counts: power and type I error rates of various statistical tests
Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.
1996-01-01
We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
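A toy version of the kind of power simulation described above might look like the following Python sketch; it implements only the upper-tailed linear regression t-test on log counts with independent log-normal sampling error (no autocorrelation), and all parameter values are illustrative rather than the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def trend_test_power(trend, n_years, cv=0.4, n_reps=1000, alpha=0.05):
    """Power of an upper-tailed regression t-test on log counts for an
    exponential population trend with independent log-normal sampling error."""
    sigma = np.sqrt(np.log(1.0 + cv**2))     # log-scale SD giving the desired CV
    years = np.arange(n_years)
    rejections = 0
    for _ in range(n_reps):
        expected = 100.0 * (1.0 + trend) ** years            # exponential trend
        counts = expected * rng.lognormal(0.0, sigma, n_years)
        slope, _, _, _, stderr = stats.linregress(years, np.log(counts))
        p_upper = stats.t.sf(slope / stderr, df=n_years - 2)  # upper-tailed test
        rejections += p_upper < alpha
    return rejections / n_reps

print("power, 10 yr, +5%/yr:", trend_test_power(0.05, 10))
print("power, 50 yr, +5%/yr:", trend_test_power(0.05, 50))
print("type I error, no trend, 10 yr:", trend_test_power(0.0, 10))
```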
Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, L.G.; Norman, P.I.; Leadbeater, T.W.
Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and the assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
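For illustration only, the two ideal dead-time models named above can be applied to a stored pulse train as in the following Python sketch; here the pulse train is simple Poisson arrivals rather than the correlated fission chains relevant to PNMC, and the rate and dead-time values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def nonparalyzable(times, tau):
    """Keep an event only if it arrives >= tau after the last *kept* event."""
    out, last = [], -np.inf
    for t in times:
        if t - last >= tau:
            out.append(t)
            last = t
    return np.array(out)

def paralyzable(times, tau):
    """Keep an event only if it arrives >= tau after the previous *arriving*
    event: every arrival, recorded or not, extends the dead period."""
    gaps = np.diff(times, prepend=-np.inf)
    return times[gaps >= tau]

rate, t_total, tau = 5.0e5, 0.2, 1.0e-6   # assumed source rate, time, dead-time
train = np.sort(rng.uniform(0.0, t_total, rng.poisson(rate * t_total)))

print("input events:          ", train.size)
print("non-paralyzable output:", nonparalyzable(train, tau).size)
print("paralyzable output:    ", paralyzable(train, tau).size)
```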
Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels
ERIC Educational Resources Information Center
Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.
2018-01-01
A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
Anigstein, Robert; Erdman, Michael C.; Ansari, Armin
2017-01-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of 60Co, 137Cs, and 241Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides. PMID:27115229
Anigstein, Robert; Erdman, Michael C; Ansari, Armin
2016-06-01
The detonation of a radiological dispersion device or other radiological incidents could result in the dispersion of radioactive materials and intakes of radionuclides by affected individuals. Transportable radiation monitoring instruments could be used to measure photon radiation from radionuclides in the body for triaging individuals and assigning priorities to their bioassay samples for further assessments. Computer simulations and experimental measurements are required for these instruments to be used for assessing intakes of radionuclides. Count rates from calibrated sources of 60Co, 137Cs, and 241Am were measured on three instruments: a survey meter containing a 2.54 × 2.54-cm NaI(Tl) crystal, a thyroid probe using a 5.08 × 5.08-cm NaI(Tl) crystal, and a portal monitor incorporating two 3.81 × 7.62 × 182.9-cm polyvinyltoluene plastic scintillators. Computer models of the instruments and of the calibration sources were constructed, using engineering drawings and other data provided by the manufacturers. Count rates on the instruments were simulated using the Monte Carlo radiation transport code MCNPX. The computer simulations were within 16% of the measured count rates for all 20 measurements without using empirical radionuclide-dependent scaling factors, as reported by others. The weighted root-mean-square deviations (differences between measured and simulated count rates, added in quadrature and weighted by the variance of the difference) were 10.9% for the survey meter, 4.2% for the thyroid probe, and 0.9% for the portal monitor. These results validate earlier MCNPX models of these instruments that were used to develop calibration factors that enable these instruments to be used for assessing intakes and committed doses from several gamma-emitting radionuclides.
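One plausible reading of the weighted root-mean-square deviation quoted above (percent differences between measured and simulated count rates, weighted by the inverse variance of each difference) is sketched below in Python with placeholder numbers; the exact weighting used in the paper may differ:

```python
import numpy as np

# Placeholder percent differences between measured and simulated count rates,
# and the 1-sigma uncertainty (in percent) of each difference. Illustrative only.
diff_pct = np.array([1.7, -3.5, -4.2, 0.9])
sigma_pct = np.array([1.2, 2.5, 3.0, 0.8])

# Weight each squared difference by the inverse variance of that difference.
w = 1.0 / sigma_pct**2
wrms = np.sqrt(np.sum(w * diff_pct**2) / np.sum(w))
print(f"weighted RMS deviation: {wrms:.1f}%")
```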
Validation of GATE Monte Carlo simulations of the GE Advance/Discovery LS PET scanners.
Schmidtlein, C Ross; Kirov, Assen S; Nehmeh, Sadek A; Erdi, Yusuf E; Humm, John L; Amols, Howard I; Bidaut, Luc M; Ganin, Alex; Stearns, Charles W; McDaniel, David L; Hamacher, Klaus A
2006-01-01
The recently developed GATE (GEANT4 application for tomographic emission) Monte Carlo package, designed to simulate positron emission tomography (PET) and single photon emission computed tomography (SPECT) scanners, provides the ability to model and account for the effects of photon noncollinearity, off-axis detector penetration, detector size and response, positron range, photon scatter, and patient motion on the resolution and quality of PET images. The objective of this study is to validate a model within GATE of the General Electric (GE) Advance/Discovery Light Speed (LS) PET scanner. Our three-dimensional PET simulation model of the scanner consists of 12 096 detectors grouped into blocks, which are grouped into modules as per the vendor's specifications. The GATE results are compared to experimental data obtained in accordance with the National Electrical Manufacturers Association/Society of Nuclear Medicine (NEMA/SNM), NEMA NU 2-1994, and NEMA NU 2-2001 protocols. The respective phantoms are also accurately modeled, thus allowing us to simulate the sensitivity, scatter fraction, count rate performance, and spatial resolution. In-house software was developed to produce and analyze sinograms from the simulated data. With our model of the GE Advance/Discovery LS PET scanner, the ratio of the sensitivities with sources radially offset 0 and 10 cm from the scanner's main axis is reproduced to within 1% of measurements. Similarly, the simulated scatter fraction for the NEMA NU 2-2001 phantom agrees to within less than 3% of measured values (the measured scatter fractions are 44.8% and 40.9 +/- 1.4% and the simulated scatter fraction is 43.5 +/- 0.3%). The simulated count rate curves were made to match the experimental curves by using deadtimes as fit parameters. This resulted in deadtime values of 625 and 332 ns at the Block and Coincidence levels, respectively. The experimental peak true count rate of 139.0 kcps and the peak activity concentration of 21.5 kBq/cc were matched by the simulated results to within 0.5% and 0.1% respectively. The simulated count rate curves also resulted in a peak NECR of 35.2 kcps at 10.8 kBq/cc compared to 37.6 kcps at 10.0 kBq/cc from averaged experimental values. The spatial resolution of the simulated scanner matched the experimental results to within 0.2 mm.
Simulation of HLNC and NCC measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ming-Shih; Teichmann, T.; De Ridder, P.
1994-03-01
This report discusses an automatic method of simulating the results of High Level Neutron Coincidence Counting (HLNC) and Neutron Collar Coincidence Counting (NCC) measurements to facilitate the safeguards inspectors' understanding and use of these instruments under realistic conditions. This would otherwise be expensive and time-consuming, except at sites designed to handle radioactive materials and having the necessary variety of fuel elements and other samples. This simulation must thus include the behavior of the instruments for variably constituted and composed fuel elements (including poison rods and Gd loading), and must display the changes in the count rates as a function of these characteristics, as well as of various instrumental parameters. Such a simulation is an efficient way of accomplishing the required familiarization and training of the inspectors by providing a realistic reproduction of the results of such measurements.
NASA Astrophysics Data System (ADS)
Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud
2018-04-01
Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. An MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measurement of the 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and from their poor robustness against emission-transmission inconsistency. A novel proposed random correction method succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was explained theoretically by its better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of the novel random correction method.
Sensitivity analysis of pulse pileup model parameter in photon counting detectors
NASA Astrophysics Data System (ADS)
Shunhavanich, Picha; Pelc, Norbert J.
2017-03-01
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models is dependent on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the parameter values becomes more crucial. In this work, the sensitivity of model parameter accuracies is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing to the end of the pulse are more important than most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibrations.
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishizawa, T., E-mail: nishizawa@wisc.edu; Nornberg, M. D.; Den Hartog, D. J.
2016-11-15
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier’s cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
Upgrading a high-throughput spectrometer for high-frequency (<400 kHz) measurements
NASA Astrophysics Data System (ADS)
Nishizawa, T.; Nornberg, M. D.; Den Hartog, D. J.; Craig, D.
2016-11-01
The upgraded spectrometer used for charge exchange recombination spectroscopy on the Madison Symmetric Torus resolves emission fluctuations up to 400 kHz. The transimpedance amplifier's cutoff frequency was increased based upon simulations comparing the change in the measured photon counts for time-dynamic signals. We modeled each signal-processing stage of the diagnostic and scanned the filtering frequency to quantify the uncertainty in the photon counting rate. This modeling showed that uncertainties can be calculated based on assuming each amplification stage is a Poisson process and by calibrating the photon counting rate with a DC light source to address additional variation.
Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D
2012-07-07
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.
Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D
2013-01-01
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15–22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 90 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25–31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration. PMID:22678106
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Moses, William W.; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R.; Badawi, Ramsey D.
2012-07-01
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners ranges from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm long uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.
Li, Gang; Xu, Jiayun; Zhang, Jie
2015-01-01
Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose from neutrons is closely related to the neutron energy, and the relationship between them is a complex function of energy. For a low-level neutron radiation field (e.g. from an Am-Be source), commonly used commercial neutron dosimeters cannot always reflect the low-level dose rate, being restricted by their own sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside of the source room are negligible compared with the count rates measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield with different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
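The extrapolation idea described above can be illustrated, very roughly, by fitting exponential decreases of count rate and dose rate with distance and using the two fits to infer dose rates that are below the dosimeter's range; all numbers below are placeholders, not the paper's data:

```python
import numpy as np

# Placeholder measurements in the source room: distance from the source [m],
# 3He counter rate [counts/s], and dosimeter reading [uSv/h]. Illustrative only.
distance = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
count_rate = np.array([5200.0, 2100.0, 900.0, 380.0, 160.0])
dose_rate = np.array([85.0, 34.0, 14.0, 6.0, 2.5])

# Both quantities fall off roughly exponentially with distance, so fit
# log(y) = a + b*x for each.
b_c, a_c = np.polyfit(distance, np.log(count_rate), 1)
b_d, a_d = np.polyfit(distance, np.log(dose_rate), 1)

def dose_from_count(c):
    """Dose rate inferred from a measured count rate via the two exponential fits."""
    x = (np.log(c) - a_c) / b_c          # distance implied by the count rate
    return np.exp(a_d + b_d * x)         # dose rate extrapolated to that distance

# Example: infer a dose rate too low for the dosimeter from a far-field count rate.
print(f"dose rate inferred from 40 counts/s: {dose_from_count(40.0):.2f} uSv/h")
```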
NASA Astrophysics Data System (ADS)
Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua
2016-03-01
Pixelated photon counting detectors with energy discrimination capabilities are of increasing clinical interest for x-ray imaging. Such detectors, presently in clinical use for mammography and under development for breast tomosynthesis and spectral CT, usually employ in-pixel circuits based on crystalline silicon - a semiconductor material that is generally not well-suited for economic manufacture of large-area devices. One interesting alternative semiconductor is polycrystalline silicon (poly-Si), a thin-film technology capable of creating very large-area, monolithic devices. Similar to crystalline silicon, poly-Si allows implementation of the type of fast, complex, in-pixel circuitry required for photon counting - operating at processing speeds that are not possible with amorphous silicon (the material currently used for large-area, active matrix, flat-panel imagers). The pixel circuits of two-dimensional photon counting arrays are generally comprised of four stages: amplifier, comparator, clock generator and counter. The analog front-end (in particular, the amplifier) strongly influences performance and is therefore of interest to study. In this paper, the relationship between incident and output count rate of the analog front-end is explored under diagnostic imaging conditions for a promising poly-Si based design. The input to the amplifier is modeled in the time domain assuming a realistic input x-ray spectrum. Simulations of circuits based on poly-Si thin-film transistors are used to determine the resulting output count rate as a function of input count rate, energy discrimination threshold and operating conditions.
Neutronic analysis of the 1D and 1E banks reflux detection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.
1999-12-21
Two H Canyon neutron monitoring systems for early detection of postulated abnormal reflux conditions in the Second Uranium Cycle 1E and 1D Mixer-Settler Banks have been designed and built. Monte Carlo neutron transport simulations using the general purpose, general geometry, n-particle MCNP code have been performed to model the expected response of the monitoring systems to varying conditions. The confirmatory studies documented herein conclude that the 1E and 1D neutron monitoring systems are able to achieve adequate neutron count rates for various neutron source and detector configurations, thereby eliminating excessive integration count time. Neutron count rate sensitivity studies are also performed. Conversely, the transport studies concluded that the neutron count rates are statistically insensitive to nitric acid content in the aqueous region and to the transition region length. These studies conclude that the 1E and 1D neutron monitoring systems are able to predict the postulated reflux conditions for all examined perturbations in the neutron source and detector configurations. In the cases examined, the relative change in the neutron count rates due to postulated transitions from normal 235U concentration levels to reflux levels remains satisfactorily detectable.
Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses
Myers, Risa B.; Herskovic, Jorge R.
2011-01-01
Proposal and execution of clinical trials, computation of quality measures and discovery of correlation between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review” in which a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our Bayesian framework. Use of these probabilistic techniques will enable more accurate patient counts and better results for applications requiring this metric. PMID:21986292
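A minimal sketch of the "Bayesian Chain" idea described above: billing at each visit is treated as a diagnostic test with an assumed sensitivity and specificity, and Bayes' Theorem updates the probability that the patient truly has the condition after each visit. The prior, test characteristics, visit sequence and decision threshold below are all illustrative assumptions, not values from the study:

```python
def bayes_update(prior, billed, sensitivity, specificity):
    """One Bayes'-theorem update of P(condition) given one visit's billing outcome."""
    if billed:
        num = sensitivity * prior
        den = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    else:
        num = (1.0 - sensitivity) * prior
        den = (1.0 - sensitivity) * prior + specificity * (1.0 - prior)
    return num / den

# Illustrative values: prevalence prior and billing "test" characteristics.
prior = 0.10
sensitivity, specificity = 0.70, 0.95

# Billing outcomes over a sequence of visits for one simulated patient.
visits = [True, False, True, True]

p = prior
for billed in visits:
    p = bayes_update(p, billed, sensitivity, specificity)
    print(f"billed={billed!s:5s} -> P(condition) = {p:.3f}")

# The patient contributes to the cohort count if the posterior exceeds a threshold.
print("counted:", p > 0.5)
```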
A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.
Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi
2010-04-01
The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers, and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades the energy resolution and positioning accuracy. Event pileups normally exist in a high count-rate system, and the baseline drift will create errors in the event pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many methods have been reported for BLR, from classic analog methods to digital filter solutions. However, a single-channel analog BLR can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. This method can self-track the baseline without a micro-controller involved. The circuit consists of two digital counter/timers, one comparator, one register and one subtraction unit. Simulation shows a single channel works at a 30 Mcps count rate with pileup conditions. 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
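A software analogue of the SOBLR idea (not the FPGA implementation) is sketched below in Python: baseline statistics are accumulated only from ADC samples that fall below a pulse threshold, and the running estimate is subtracted from the digitized stream. The synthetic waveform, threshold and smoothing constant are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic free-running ADC stream: a slowly drifting baseline, Gaussian
# noise, and occasional fast-decaying gamma pulses (shapes and rates are made up).
n = 20000
t = np.arange(n)
true_baseline = 50.0 + 10.0 * np.sin(2 * np.pi * t / n)
stream = true_baseline + rng.normal(0.0, 2.0, n)
for p in rng.choice(n - 30, size=200, replace=False):
    stream[p:p + 20] += 400.0 * np.exp(-np.arange(20) / 5.0)

pulse_threshold = 80.0   # samples above this are treated as pulse samples
alpha = 0.002            # smoothing constant of the running baseline estimate

baseline_est = np.empty(n)
est = stream[0]
for i, s in enumerate(stream):
    if s < pulse_threshold:          # update statistics only from non-pulse samples
        est += alpha * (s - est)
    baseline_est[i] = est

restored = stream - baseline_est     # baseline-subtracted waveform
quiet = stream < pulse_threshold
print(f"residual baseline on non-pulse samples: {restored[quiet].mean():.2f} ADC units")
```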
NASA Astrophysics Data System (ADS)
Popota, F. D.; Aguiar, P.; España, S.; Lois, C.; Udias, J. M.; Ros, D.; Pavia, J.; Gispert, J. D.
2015-01-01
In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system’s sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system’s dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.
Popota, F D; Aguiar, P; España, S; Lois, C; Udias, J M; Ros, D; Pavia, J; Gispert, J D
2015-01-07
In this work a comparison between experimental and simulated data using GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison was focused on spatial resolution, sensitivity, scatter fraction and counting rates performance. Both GATE and PeneloPET showed reasonable agreement for the spatial resolution when compared to experimental measurements, although they lead to slight underestimations for the points close to the edge. High accuracy was obtained between experiments and simulations of the system's sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform since each code demands different specifications for the characterization of the system's dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes for the full NEMA NU 4-2008 standards for small animal PET imaging systems.
The Dependence of Tropical Cyclone Count and Size on Rotation Rate
NASA Astrophysics Data System (ADS)
Chavas, D. R.; Reed, K. A.
2017-12-01
Both theory and idealized equilibrium modeling studies indicate that tropical cyclone size decreases with background rotation rate. In contrast, in real-world observations size tends to increase with latitude. Here we seek to resolve this apparent contradiction via a set of reduced-complexity global aquaplanet simulations with varying planetary rotation rates using the NCAR Community Atmosphere Model 5. The latitudinal distribution of both storm count and size are found to vary markedly with rotation rate, yielding insight into the dynamical constraints on tropical cyclone activity on a rotating planet. Moreover, storm size is found to vary non-monotonically with latitude, indicating that non-equilibrium effects are crucial to the life-cycle evolution of size in nature. Results are then compared to experiments in idealized, time-dependent limited-area modeling simulations using CM1 in axisymmetric and three-dimensional geometry. Taken together, this hierarchy of models is used to quantify the role of equilibrium versus transient controls on storm size and the relevance of each to real storms in nature.
Single Photon Counting Detectors for Low Light Level Imaging Applications
NASA Astrophysics Data System (ADS)
Kolb, Kimberly
2015-10-01
This dissertation presents the current state-of-the-art of semiconductor-based photon counting detector technologies. HgCdTe linear-mode avalanche photodiodes (LM-APDs), silicon Geiger-mode avalanche photodiodes (GM-APDs), and electron-multiplying CCDs (EMCCDs) are compared via their present and future performance in various astronomy applications. LM-APDs are studied in theory, based on work done at the University of Hawaii. EMCCDs are studied in theory and experimentally, with a device at NASA's Jet Propulsion Lab. The emphasis of the research is on GM-APD imaging arrays, developed at MIT Lincoln Laboratory and tested at the RIT Center for Detectors. The GM-APD research includes a theoretical analysis of SNR and various performance metrics, including dark count rate, afterpulsing, photon detection efficiency, and intrapixel sensitivity. The effects of radiation damage on the GM-APD were also characterized by introducing a cumulative dose of 50 krad(Si) via 60 MeV protons. Extensive development of Monte Carlo simulations and practical observation simulations was completed, including simulated astronomical imaging and adaptive optics wavefront sensing. Based on theoretical models and experimental testing, both the current state-of-the-art performance and projected future performance of each detector are compared for various applications. LM-APD performance is currently not competitive with other photon counting technologies, so LM-APDs are left out of the application-based comparisons. In the current state-of-the-art, EMCCDs in photon counting mode out-perform GM-APDs for long exposure scenarios, though GM-APDs are better for short exposure scenarios (fast readout) due to clock-induced-charge (CIC) in EMCCDs. In the long term, small improvements in GM-APD dark current will make them superior in both long and short exposure scenarios for extremely low flux. The efficiency of GM-APDs will likely always be less than EMCCDs, however, which is particularly disadvantageous for moderate to high flux rates where dark noise and CIC are insignificant noise sources. Research into decreasing the dark count rate of GM-APDs will lead to development of imaging arrays that are competitive for low light level imaging and spectroscopy applications in the near future.
Enhancing the Linear Dynamic Range in Multi-Channel Single Photon Detector beyond 7OD
Gudkov, Dmytro; Gudkov, George; Gorbovitski, Boris; Gorfinkel, Vera
2015-01-01
We present design, implementation, and characterization of a single photon detector based on a 32-channel PMT sensor [model H7260-20, Hamamatsu]. The developed high speed electronics enables photon counting with a linear dynamic range (LDR) of up to 10^8 counts/s per detector channel. The experimental characterization and Monte-Carlo simulations showed that in the single photon counting mode the LDR of the PMT sensor is limited by (i) a “photon” pulse width (current pulse) of 900 ps and (ii) a substantial decrease of the amplitudes of current pulses for count rates exceeding 10^8 counts/s. The multi-channel architecture of the detector and the developed firm/software allow further expansion of the dynamic range of the device by 32-fold by using appropriate beam shaping. The developed single photon counting detector was tested for the detection of fluorescence labeled microbeads in capillary flow. PMID:27087788
Effective count rates for PET scanners with reduced and extended axial field of view
NASA Astrophysics Data System (ADS)
MacDonald, L. R.; Harrison, R. L.; Alessio, A. M.; Hunter, W. C. J.; Lewellen, T. K.; Kinahan, P. E.
2011-06-01
We investigated the relationship between noise equivalent count (NEC) and axial field of view (AFOV) for PET scanners with AFOVs ranging from one-half to twice those of current clinical scanners. PET scanners with longer or shorter AFOVs could fulfill different clinical needs depending on exam volumes and site economics. Using previously validated Monte Carlo simulations, we modeled true, scattered and random coincidence counting rates for a PET ring diameter of 88 cm with 2, 4, 6, and 8 rings of detector blocks (AFOV 7.8, 15.5, 23.3, and 31.0 cm). Fully 3D acquisition mode was compared to full collimation (2D) and partial collimation (2.5D) modes. Counting rates were estimated for a 200 cm long version of the 20 cm diameter NEMA count-rate phantom and for an anthropomorphic object based on a patient scan. We estimated the live-time characteristics of the scanner from measured count-rate data and applied that estimate to the simulated results to obtain NEC as a function of object activity. We found NEC increased as a quadratic function of AFOV for 3D mode, and linearly in 2D mode. Partial collimation provided the highest overall NEC on the 2-block system and fully 3D mode provided the highest NEC on the 8-block system for clinically relevant activities. On the 4-, and 6-block systems 3D mode NEC was highest up to ~300 MBq in the anthropomorphic phantom, above which 3D NEC dropped rapidly, and 2.5D NEC was highest. Projected total scan time to achieve NEC-density that matches current clinical practice in a typical oncology exam averaged 9, 15, 24, and 61 min for the 8-, 6-, 4-, and 2-block ring systems, when using optimal collimation. Increasing the AFOV should provide a greater than proportional increase in NEC, potentially benefiting patient throughput-to-cost ratio. Conversely, by using appropriate collimation, a two-ring (7.8 cm AFOV) system could acquire whole-body scans achieving NEC-density levels comparable to current standards within long, but feasible, scan times.
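The noise equivalent count rate used throughout the abstract above is conventionally computed as NEC = T^2/(T + S + kR), with T, S and R the true, scattered and random coincidence rates and k = 1 or 2 depending on the randoms-correction scheme; a tiny Python helper along those lines, with made-up rates, is:

```python
def nec(trues, scatter, randoms, k=1.0):
    """Noise equivalent count rate, NEC = T^2 / (T + S + k*R)."""
    return trues**2 / (trues + scatter + k * randoms)

# Illustrative coincidence rates (kcps) for two hypothetical AFOV settings.
for label, (t, s, r) in {"short AFOV": (50.0, 20.0, 30.0),
                         "long AFOV": (180.0, 80.0, 140.0)}.items():
    print(f"{label}: NEC = {nec(t, s, r):.1f} kcps")
```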
NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.
Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron
2018-06-14
This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric: the estimates systematically increase as more stringent local order parameters are used to count members of a cluster. We conclude with tentative suggestions for appropriate clustering criteria.
Simulating the component counts of combinatorial structures.
Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon
2018-02-09
This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
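As an illustration of the Feller coupling mentioned above, the sketch below samples the cycle (component) counts of a uniform random permutation from independent Bernoulli(1/i) variables, reading the cycle lengths off as spacings between successive ones. This is a minimal sketch of the construction for permutations only, not the authors' full implementation for general logarithmic assemblies.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def cycle_counts_feller(n, rng=rng):
    """Cycle counts of a uniform random permutation of n via the Feller
    coupling: xi_i ~ Bernoulli(1/i) independently, and cycle lengths are
    the spacings between successive ones (with a sentinel one appended)."""
    xi = rng.random(n) < 1.0 / np.arange(1, n + 1)   # xi_1 is always 1
    ones = np.flatnonzero(xi)
    spacings = np.diff(np.append(ones, n))
    return Counter(int(s) for s in spacings)         # {component size: count}

# The mean total number of cycles for n = 1000 should be close to the
# harmonic number H_1000 ~ 7.49.
totals = [sum(cycle_counts_feller(1000).values()) for _ in range(2000)]
print(np.mean(totals))
```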
NASA Astrophysics Data System (ADS)
Everett, Samantha
2010-10-01
A transmission curve experiment was carried out to measure the range of beta particles in aluminum in the health physics laboratory located on the campus of Texas Southern University. The transmission count rate through aluminum for varying radiation lengths was measured using beta particles emitted from a low activity (˜1 μCi) Sr-90 source. The count rate intensity was recorded using a Geiger Mueller tube (SGC N210/BNC) with an active volume of 61 cm^3 within a systematic detection accuracy of a few percent. We compared these data with a realistic simulation of the experimental setup using the Geant4 Monte Carlo toolkit (version 9.3). The purpose of this study was to benchmark our Monte Carlo simulation for future experiments as part of a more comprehensive research program. Transmission curves were simulated based on the standard and low-energy electromagnetic physics models, using the radioactive decay module for the electrons' primary energy distribution. To ensure the validity of our measurements, linear extrapolation techniques were employed to determine the in-medium beta particle range from the measured data; the range was found to be 1.87 g/cm^2 (˜0.693 cm), in agreement with literature values. We found that the general shapes of the measured and simulated curves were comparable; however, a discrepancy in the relative count rates was observed. The origin of this disagreement is still under investigation.
Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2007-01-01
This viewgraph presentation reviews the statistical properties of Single Event Effects (SEE) rate calculations. The goal of SEE rate calculation is to bound the SEE rate, though the question is by how much. The presentation covers: (1) Understanding errors on SEE cross sections, (2) Methodology: Maximum Likelihood and Confidence Contours, (3) Tests with Simulated Data, and (4) Applications.
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count frequency also improved precision. These results suggest that the current distribution of count timings available in lek count databases such as that of Wyoming (conducted up to 90 minutes after sunrise) can be used to estimate sage-grouse population trends without reducing precision or accuracy relative to trends from counts conducted within 30 minutes of sunrise. However, only 10% of all Wyoming counts in our sample (1995−2014) were conducted 61−90 minutes after sunrise, and further increasing this percentage may still bias trend estimates because of declining lek attendance.
Particle emission from artificial cometary materials
NASA Technical Reports Server (NTRS)
Koelzer, Gabriele; Kochan, Hermann; Thiel, Klaus
1992-01-01
During KOSI (comet simulation) experiments, mineral-ice mixtures are observed in simulated space conditions. Emission of ice/dust particles from the sample surface is observed by means of different devices. The particle trajectories are recorded with a video system. In the subsequent analysis we extracted the following parameters: particle count rate, spatial distribution of starting points on the sample surface, elevation angle, and particle velocity at distances up to 5 cm from the sample surface. Different kinds of detectors are mounted on a frame in front of the sample to register the emitted particles and to collect their dust residues. By means of these instruments the particle count rates, particle sizes, and particle compositions can be correlated. The results are related to the gas flux density and the temperature on the sample surface during the insolation period. The particle emission is interpreted in terms of phenomena on the sample surface, e.g., formation of a dust mantle.
Influence of photon energy cuts on PET Monte Carlo simulation results.
Mitev, Krasimir; Gerganov, Georgi; Kirov, Assen S; Schmidtlein, C Ross; Madzhunkov, Yordan; Kawrakow, Iwan
2012-07-01
The purpose of this work is to study the influence of photon energy cuts on the results of positron emission tomography (PET) Monte Carlo (MC) simulations. MC simulations of PET scans of a box phantom and the NEMA image quality phantom are performed for 32 photon energy cut values in the interval 0.3-350 keV using a well-validated numerical model of a PET scanner. The simulations are performed with two MC codes, egs_pet and GEANT4 Application for Tomographic Emission (GATE). The effect of photon energy cuts on the recorded number of singles, primary, scattered, random, and total coincidences, as well as on the simulation time and noise-equivalent count rate, is evaluated by comparing the results for higher cuts to those for a 1 keV cut. To evaluate the effect of cuts on the quality of reconstructed images, MC generated sinograms of PET scans of the NEMA image quality phantom are reconstructed with iterative statistical reconstruction. The effects of photon cuts on the contrast recovery coefficients and on the comparison of images by means of commonly used similarity measures are studied. For the scanner investigated in this study, which uses bismuth germanate crystals, the transport of Bi K x rays must be simulated in order to obtain unbiased estimates for the number of singles, true, scattered, and random coincidences, as well as an unbiased estimate of the noise-equivalent count rate. Photon energy cuts higher than 170 keV lead to absorption of Compton scattered photons and strongly increase the number of recorded coincidences of all types and the noise-equivalent count rate. The effect of photon cuts on the reconstructed images and the similarity measures used for their comparison is statistically significant for very high cuts (e.g., 350 keV). The simulation of the transport of characteristic x rays therefore plays an important role if accurate modeling of a PET scanner system is to be achieved. The simulation time decreases only slowly with increasing photon cut, which, combined with the accuracy loss at high cuts, means that high photon energy cuts are not recommended for accelerating MC simulations.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 µCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences, albeit at the expense of true coincidences. A timing window range of 200–500 ns maximizes the NECR at clinically used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically relevant activities. Work is ongoing to assess useful clinical applications.
Silkwood, Justin D; Matthews, Kenneth L; Shikhaliev, Polad M
2013-05-01
Photon counting spectral (PCS) computed tomography (CT) shows promise for breast imaging. Issues with current photon-counting detectors include limited count rate capability, artifacts resulting from nonuniform count rates across the field of view, and suboptimal spectral information. These issues are addressed in part by using tissue-equivalent adaptive filtration of the x-ray beam. The purpose of the study was to investigate the effect of adaptive filtration on different aspects of PCS breast CT. The theoretical formulation for the filter shape was derived for different filter materials and evaluated by simulation, and an experimental prototype of the filter was fabricated from a tissue-like material (acrylic). The PCS CT images of a glandular breast phantom with adipose and iodine contrast elements were simulated at 40, 60, 90, and 120 kVp tube voltages, with and without the adaptive filter. The CT numbers, CT noise, and contrast-to-noise ratio (CNR) were compared for spectral CT images acquired with and without adaptive filters. A similar comparison was made for material-decomposed PCS CT images. The adaptive filter improved the uniformity of CT numbers, CT noise, and CNR in both ordinary and material-decomposed PCS CT images. At the same tube output the average CT noise with the adaptive filter, although uniform, was higher than the average noise without the adaptive filter due to x-ray absorption by the filter. Increasing the tube output, so that the average skin exposure with the adaptive filter was the same as without the filter, made the noise with the adaptive filter comparable to or lower than that without the adaptive filter. Similar effects were observed when energy weighting was applied, and when material decompositions were performed using energy-selective CT data. An adaptive filter decreases the count rate requirements on the photon-counting detectors, which enables PCS breast CT based on commercially available detector technologies. The adaptive filter also improves image quality in PCS breast CT by decreasing beam hardening artifacts and by eliminating spatial nonuniformities of CT numbers, noise, and CNR.
Monte Carlo simulation of the neutron monitor yield function
NASA Astrophysics Data System (ADS)
Mangeard, P.-S.; Ruffolo, D.; Sáiz, A.; Madlee, S.; Nutaro, T.
2016-08-01
Neutron monitors (NMs) are ground-based detectors that measure variations of the Galactic cosmic ray flux at GV range rigidities. Differences in configuration, electronics, surroundings, and location induce systematic effects on the calculation of the yield functions of NMs worldwide. Different estimates of NM yield functions can differ by a factor of 2 or more. In this work, we present new Monte Carlo simulations to calculate NM yield functions and perform an absolute (not relative) comparison with the count rate of the Princess Sirindhorn Neutron Monitor (PSNM) at Doi Inthanon, Thailand, both for the entire monitor and for individual counter tubes. We model the atmosphere using profiles from the Global Data Assimilation System database and the Naval Research Laboratory Mass Spectrometer, Incoherent Scatter Radar Extended model. Using FLUKA software and the detailed geometry of PSNM, we calculated the PSNM yield functions for protons and alpha particles. An agreement better than 9% was achieved between the PSNM observations and the simulated count rate during the solar minimum of December 2009. The systematic effect from the electronic dead time was studied as a function of primary cosmic ray rigidity at the top of the atmosphere up to 1 TV. We show that the effect is not negligible and can reach 35% at high rigidity for a dead time >1 ms. We analyzed the response function of each counter tube at PSNM using its actual dead time, and we provide normalization coefficients between count rates for various tube configurations in the standard NM64 design that are valid to within ˜1% for such stations worldwide.
Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.
Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung
2012-04-10
We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient depends on the states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis for a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events depends on the square of the mean number of reaction events when the measurement time is short compared with the relaxation time scale of rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much longer than the relaxation time of rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for reaction systems with a fluctuating rate coefficient. The simulation results confirm the correctness of the analytic results for the time-dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient, without the bias that can be introduced by assuming a particular kinetic model of conformational dynamics and conformation-dependent reactivity. An exact relationship is established between a higher moment of the reaction event number distribution and the multitime correlation of the reaction rate, for the reaction system with a nonequilibrium initial state distribution as well as for the system with the equilibrium initial state distribution.
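A minimal version of the generalized Gillespie simulation described above can be written for a rate coefficient that jumps between two values. The parameter values below are illustrative, and the two-state environment is an assumed stand-in for the conformational dynamics, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def count_events(T, k=(0.5, 5.0), gamma=1.0, rng=rng):
    """Gillespie-type simulation of one elementary reaction whose rate
    coefficient jumps between two values (illustrative parameters).
    Returns the number of reaction events recorded in [0, T]."""
    t, state, n = 0.0, 0, 0
    while True:
        total = k[state] + gamma                  # reaction + switching rates
        t += rng.exponential(1.0 / total)
        if t > T:
            return n
        if rng.random() < k[state] / total:       # a reaction event fires
            n += 1
        else:                                     # the environment switches
            state = 1 - state

counts = np.array([count_events(T=2.0) for _ in range(5000)])
print("mean:", counts.mean(), "variance:", counts.var())
# For T short relative to 1/gamma the variance carries a term proportional
# to the square of the mean; for long T it approaches the linear (renewal)
# regime described above.
```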
NASA Technical Reports Server (NTRS)
Vallerga, J.; Vanderspek, R. K.; Ricker, G. R.
1982-01-01
To establish the expected sensitivity of a new hard X-ray telescope design, an experiment was conducted to measure the background counting rate at balloon altitudes (40 km) of mercuric iodide, a room temperature solid state X-ray detector. The prototype detector consisted of two thin mercuric iodide (HgI2) detectors surrounded by a large bismuth germanate (Bi4Ge3O12) scintillator operated in anticoincidence. The bismuth germanate shield vetoed most of the background counting rate induced by atmospheric gamma-rays, neutrons and cosmic rays. A balloon-borne gondola containing a prototype detector assembly was designed, constructed and flown twice in the spring of 1982 from Palestine, Texas. The second flight of this instrument established a differential background counting rate of 4.2 ± 0.7 × 10⁻⁵ counts/(sec cm² keV) over the energy range of 40 to 80 keV. This measurement was within 50% of the predicted value. The measured rate is approximately 5 times lower than previously achieved in shielded NaI/CsI or Ge systems operating in the same energy range. The prediction was based on a Monte Carlo simulation of the detector assembly in the radiation environment at float altitude.
Crewe, Tara L; Taylor, Philip D; Lepage, Denis
2015-01-01
The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling) to reduce the probability that individuals will be detected on more than one occasion.
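The compounding effect of a systematic increase in stopover duration can be seen in a back-of-the-envelope simulation: if each individual is counted once per day of its stopover, the yearly total of bird-days inflates relative to the true number of migrants. The sketch below uses invented parameters and a deterministic expected-count model, not the authors' simulation design.

```python
import numpy as np

# Back-of-the-envelope sketch with invented numbers: the true population
# declines at rate lam per year, but mean stopover duration (days counted
# per bird) drifts upward, inflating the apparent trend.
years = np.arange(20)
true_lam = 0.98
migrants = 10_000 * true_lam ** years                 # birds passing per year
stopover_days = np.linspace(2.0, 4.0, years.size)     # mean days counted/bird

expected_counts = migrants * stopover_days            # bird-days counted/year
slope = np.polyfit(years, np.log(expected_counts), 1)[0]
print(f"true lambda = {true_lam}, apparent lambda = {np.exp(slope):.3f}")
```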
Effects of SMEAT on the oral health of crewmen (DTO 71-2). [dental hygiene
NASA Technical Reports Server (NTRS)
Brown, L. R.; Wheatcroft, M. G.
1973-01-01
The oral health status of three astronauts was monitored before, during and after a 56-day simulation of the Skylab mission. Laboratory and clinical parameters which are considered to be ultimately related to dental impairments were evaluated. The most notable changes were observed in increased counts of mycoplasma and S. mutans, decreased counts of enteric bacilli, decreased saliva flow rates, increased secretory IgA and salivary lysozyme levels, and increased clinical scores of dental plaque, calculus and inflammation.
Droplet-counting Microtitration System for Precise On-site Analysis.
Kawakubo, Susumu; Omori, Taichi; Suzuki, Yasutada; Ueta, Ikuo
2018-01-01
A new microtitration system based on the counting of titrant droplets has been developed for precise on-site analysis. The dropping rate was controlled by inserting a capillary tube as a flow resistance in a laboratory-made micropipette. The error of titration was 3% in a simulated titration with 20 droplets. The pre-addition of a titrant was proposed for precise titration within an error of 0.5%. The analytical performances were evaluated for chelate titration, redox titration and acid-base titration.
Na, Ji Ung; Han, Sang Kuk; Choi, Pil Cho; Shin, Dong Hyuk
2017-01-01
Metronome guidance is a feasible and effective feedback technique to improve the quality of cardiopulmonary resuscitation (CPR). The rate of the metronome should be set between 100 and 120 ticks/minute, and the speed of ventilation may have a crucial effect on the quality of ventilation. We compared three different metronome rates (100, 110, 120 ticks/minute) to investigate their effect on the quality of ventilation during metronome-guided 30:2 CPR. This is a prospective, randomized, crossover observational study using a RespiTrainer®. To simulate 30 chest compressions, one investigator counted from 1 to 30 in cadence with the metronome rate (1 count for every 1 tick), and the participant performed 2 consecutive ventilations immediately following the counting of 30. Thirty physicians performed 5 sets of 2 consecutive (total 10) bag-mask ventilations for each metronome rate. Participants were instructed to squeeze the bag over 2 ticks (1.0 to 1.2 seconds depending on the rate of the metronome) and deflate the bag over 2 ticks. The sequence of the three different metronome rates was randomized. Mean tidal volume significantly decreased as the metronome rate was increased from 110 ticks/minute to 120 ticks/minute (343±84 mL vs. 294±90 mL, P = 0.004). Peak airway pressure significantly increased as the metronome rate increased from 100 ticks/minute to 110 ticks/minute (18.7 vs. 21.6 mmHg, P = 0.006). In metronome-guided 30:2 CPR, a higher metronome rate may adversely affect the quality of bag-mask ventilations. In cases of cardiac arrest where adequate ventilation support is necessary, 100 ticks/minute may be better than 110 or 120 ticks/minute to deliver adequate tidal volume during audio-tone-guided 30:2 CPR.
Na, Ji Ung; Han, Sang Kuk; Choi, Pil Cho; Shin, Dong Hyuk
2017-01-01
BACKGROUND: Metronome guidance is a feasible and effective feedback technique to improve the quality of cardiopulmonary resuscitation (CPR). The rate of the metronome should be set between 100 and 120 ticks/minute, and the speed of ventilation may have a crucial effect on the quality of ventilation. We compared three different metronome rates (100, 110, 120 ticks/minute) to investigate their effect on the quality of ventilation during metronome-guided 30:2 CPR. METHODS: This is a prospective, randomized, crossover observational study using a RespiTrainer®. To simulate 30 chest compressions, one investigator counted from 1 to 30 in cadence with the metronome rate (1 count for every 1 tick), and the participant performed 2 consecutive ventilations immediately following the counting of 30. Thirty physicians performed 5 sets of 2 consecutive (total 10) bag-mask ventilations for each metronome rate. Participants were instructed to squeeze the bag over 2 ticks (1.0 to 1.2 seconds depending on the rate of the metronome) and deflate the bag over 2 ticks. The sequence of the three different metronome rates was randomized. RESULTS: Mean tidal volume significantly decreased as the metronome rate was increased from 110 ticks/minute to 120 ticks/minute (343±84 mL vs. 294±90 mL, P=0.004). Peak airway pressure significantly increased as the metronome rate increased from 100 ticks/minute to 110 ticks/minute (18.7 vs. 21.6 mmHg, P=0.006). CONCLUSION: In metronome-guided 30:2 CPR, a higher metronome rate may adversely affect the quality of bag-mask ventilations. In cases of cardiac arrest where adequate ventilation support is necessary, 100 ticks/minute may be better than 110 or 120 ticks/minute to deliver adequate tidal volume during audio-tone-guided 30:2 CPR. PMID:28458759
A multi-purpose readout electronics for CdTe and CZT detectors for x-ray imaging applications
NASA Astrophysics Data System (ADS)
Yue, X. B.; Deng, Z.; Xing, Y. X.; Liu, Y. N.
2017-09-01
A multi-purpose readout electronics based on the DPLMS digital filter has been developed for CdTe and CZT detectors for X-ray imaging applications. Different filter coefficients can be synthesized, optimized either for high energy resolution at relatively low counting rates or for high-rate photon counting with reduced energy resolution. The effects of signal width constraints, sampling rate, and sampling length were studied numerically by Monte Carlo simulation with simple CR-RC shaper input signals. The signal width constraint had a minor effect, and the ENC increased by only 6.5% when the signal width was shortened to 2τc. The sampling rate and length depended on the characteristic time constants of both input and output signals. For simple CR-RC input signals, the minimum number of filter coefficients was 12, with a 10% increase in ENC when the output time constant was close to the input shaping time. A prototype readout electronics was developed for demonstration, using a previously designed analog front-end ASIC and a commercial ADC card. Two different DPLMS filters were successfully synthesized and applied for high-resolution and high-counting-rate applications, respectively. The readout electronics was also tested with a linear-array CdTe detector. The energy resolution at the Am-241 59.5 keV peak was measured to be 6.41% FWHM for the high-resolution filter and 13.58% FWHM for the high-counting-rate filter with a 160 ns signal width constraint.
Performance and capacity analysis of Poisson photon-counting based Iter-PIC OCDMA systems.
Li, Lingbin; Zhou, Xiaolin; Zhang, Rong; Zhang, Dingchen; Hanzo, Lajos
2013-11-04
In this paper, an iterative parallel interference cancellation (Iter-PIC) technique is developed for optical code-division multiple-access (OCDMA) systems relying on shot-noise-limited Poisson photon-counting reception. The novel semi-analytical tool of extrinsic information transfer (EXIT) charts is used for analysing both the bit error rate (BER) performance and the channel capacity of these systems, and the results are verified by Monte Carlo simulations. The proposed Iter-PIC OCDMA system is capable of achieving two orders of magnitude of BER improvement and a 0.1 nat capacity improvement over conventional chip-level OCDMA systems at a coding rate of 1/10.
Physiological responses to simulated firefighter exercise protocols in varying environments.
Horn, Gavin P; Kesler, Richard M; Motl, Robert W; Hsiao-Wecksler, Elizabeth T; Klaren, Rachel E; Ensari, Ipek; Petrucci, Matthew N; Fernhall, Bo; Rosengren, Karl S
2015-01-01
For decades, research to quantify the effects of firefighting activities and personal protective equipment on physiology and biomechanics has been conducted in a variety of testing environments. It is unknown if these different environments provide similar information and comparable responses. A novel Firefighting Activities Station, which simulates four common fireground tasks, is presented for use with an environmental chamber in a controlled laboratory setting. Nineteen firefighters completed three different exercise protocols following common research practices. Simulated firefighting activities conducted in an environmental chamber or live-fire structures elicited similar physiological responses (max heart rate: 190.1 vs 188.0 bpm, core temperature response: 0.047°C/min vs 0.043°C/min) and accelerometry counts. However, the response to a treadmill protocol commonly used in laboratory settings resulted in significantly lower heart rate (178.4 vs 188.0 bpm), core temperature response (0.037°C/min vs 0.043°C/min) and physical activity counts compared with firefighting activities in the burn building. Practitioner Summary: We introduce a new approach for simulating realistic firefighting activities in a controlled laboratory environment for ergonomics assessment of fire service equipment and personnel. Physiological responses to this proposed protocol more closely replicate those from live-fire activities than a traditional treadmill protocol and are simple to replicate and standardise.
McGowan, Conor P.; Gardner, Beth
2013-01-01
Estimating productivity for precocial species can be difficult because young birds leave their nest within hours or days of hatching and detectability thereafter can be very low. Recently, a method for using a modified catch-curve to estimate precocial chick daily survival from age-based count data was presented using Piping Plover (Charadrius melodus) data from the Missouri River. However, many of the assumptions of the catch-curve approach were not fully evaluated for precocial chicks. We developed a simulation model to mimic Piping Plovers, a fairly representative shorebird, and age-based count-data collection. Using the simulated data, we calculated daily survival estimates and compared them with the known daily survival rates from the simulation model. We conducted these comparisons under different sampling scenarios in which the ecological and statistical assumptions had been violated. Overall, the daily survival estimates calculated from the simulated data corresponded well with the true survival rates of the simulation. Violating the accurate-aging and independence assumptions did not result in biased daily survival estimates, whereas unequal detection of younger or older birds and violation of the birth-death equilibrium did result in estimator bias. Assuring that all ages are equally detectable and timing data collection to approximately meet the birth-death equilibrium are key to the successful use of this method for precocial shorebirds.
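The catch-curve idea underlying this method can be sketched as follows: with constant daily survival and an approximately constant hatch rate (the birth-death equilibrium), expected counts at age a decline as phi**a, so a log-linear regression of counts on age recovers phi. The parameters below are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented parameters: constant daily survival phi and a steady hatch rate,
# so expected counts at age a fall off as phi**a (the catch-curve assumption).
phi_true = 0.93
ages = np.arange(20)
counts = rng.poisson(200.0 * phi_true ** ages)        # age-based counts

keep = counts > 0                                      # avoid log(0)
slope = np.polyfit(ages[keep], np.log(counts[keep]), 1)[0]
print(f"true phi = {phi_true}, estimated phi = {np.exp(slope):.3f}")
```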
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joshi, Jay Prakash
The objectives of this project are to calibrate the Advanced Experimental Fuel Counter (AEFC), benchmark MCNP simulations against experimental results, investigate the effects of changes in fuel assembly geometry, and show the boost in doubles count rates with 252Cf active sources due to the time-correlated induced fission (TCIF) effect.
Vijaysegaran, Praveen; Knibbs, Luke D; Morawska, Lidia; Crawford, Ross W
2018-05-01
The role of space suits in the prevention of orthopedic prosthetic joint infection remains unclear. Recent evidence suggests that space suits may in fact contribute to increased infection rates, with bioaerosol emissions from space suits identified as a potential cause. This study aimed to compare the particle and microbiological emission rates (PER and MER) of space suits and standard surgical clothing. A comparison of emission rates between space suits and standard surgical clothing was performed in a simulated surgical environment during 5 separate experiments. Particle counts were analyzed with 2 separate particle counters capable of detecting particles between 0.1 and 20 μm. An Andersen impactor was used to sample bacteria, with culture counts performed at 24 and 48 hours. Four experiments consistently showed statistically significant increases in both PER and MER when space suits are used compared with standard surgical clothing. One experiment showed inconsistent results, with a trend toward increases in both PER and MER when space suits are used compared with standard surgical clothing. Space suits cause increased PER and MER compared with standard surgical clothing. This finding provides mechanistic evidence to support the increased prosthetic joint infection rates observed in clinical studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Goddard, Braden; Croft, Stephen; Lousteau, Angela; ...
2016-05-25
Safeguarding nuclear material is an important and challenging task for the international community. One particular safeguards technique commonly used for uranium assay is active neutron correlation counting. This technique involves irradiating unused uranium with (α,n) neutrons from an Am-Li source and recording the resultant neutron pulse signal, which includes induced fission neutrons. Although this non-destructive technique is widely employed in safeguards applications, the neutron energy spectrum from an Am-Li source is not well known. Several measurements over the past few decades have been made to characterize this spectrum; however, little work has been done comparing the measured spectra of various Am-Li sources to each other. This paper examines fourteen different Am-Li spectra, focusing on how these spectra affect simulated neutron multiplicity results using the code Monte Carlo N-Particle eXtended (MCNPX). Two measurement and simulation campaigns were completed using Active Well Coincidence Counter (AWCC) detectors and uranium standards of varying enrichment. The results of this work indicate that for standard AWCC measurements, the fourteen Am-Li spectra produce similar doubles and triples count rates. The singles count rates, however, varied by as much as 20% between the different spectra, although they are usually not used in quantitative analysis.
Quantitative basis for component factors of gas flow proportional counting efficiencies
NASA Astrophysics Data System (ADS)
Nichols, Michael C.
This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results from the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% with the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining the chemical yield critical to the measurement process; correction of a bias found in the MCNP normalization of beta spectra histograms; clarification of the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.
Cho, Hyo-Min; Ding, Huanjun; Barber, William C; Iwanczyk, Jan S; Molloi, Sabee
2015-07-01
To investigate the feasibility of detecting breast microcalcification (μCa) with a dedicated breast computed tomography (CT) system based on energy-resolved photon-counting silicon (Si) strip detectors. The proposed photon-counting breast CT system and a bench-top prototype photon-counting breast CT system were simulated using a simulation package written in matlab to determine the smallest detectable μCa. A 14 cm diameter cylindrical phantom made of breast tissue with 20% glandularity was used to simulate an average-sized breast. Five different size groups of calcium carbonate grains, from 100 to 180 μm in diameter, were simulated inside of the cylindrical phantom. The images were acquired with a mean glandular dose (MGD) in the range of 0.7-8 mGy. A total of 400 images was used to perform a reader study. Another simulation study was performed using a 1.6 cm diameter cylindrical phantom to validate the experimental results from a bench-top prototype breast CT system. In the experimental study, a bench-top prototype CT system was constructed using a tungsten anode x-ray source and a single line 256-pixels Si strip photon-counting detector with a pixel pitch of 100 μm. Calcium carbonate grains, with diameter in the range of 105-215 μm, were embedded in a cylindrical plastic resin phantom to simulate μCas. The physical phantoms were imaged at 65 kVp with an entrance exposure in the range of 0.6-8 mGy. A total of 500 images was used to perform another reader study. The images were displayed in random order to three blinded observers, who were asked to give a 4-point confidence rating on each image regarding the presence of μCa. The μCa detectability for each image was evaluated by using the average area under the receiver operating characteristic curve (AUC) across the readers. The simulation results using a 14 cm diameter breast phantom showed that the proposed photon-counting breast CT system can achieve high detection accuracy with an average AUC greater than 0.89 ± 0.07 for μCas larger than 120 μm in diameter at a MGD of 3 mGy. The experimental results using a 1.6 cm diameter breast phantom showed that the prototype system can achieve an average AUC greater than 0.98 ± 0.01 for μCas larger than 140 μm in diameter using an entrance exposure of 1.2 mGy. The proposed photon-counting breast CT system based on a Si strip detector can potentially offer superior image quality to detect μCa with a lower dose level than a standard two-view mammography.
Multiplicity counting from fission detector signals with time delay effects
NASA Astrophysics Data System (ADS)
Nagy, L.; Pázsit, I.; Pál, L.
2018-03-01
In recent work, we have developed the theory of using the first three auto- and joint central moments of the currents of up to three fission chambers to extract the singles, doubles and triples count rates of traditional multiplicity counting (Pázsit and Pál, 2016; Pázsit et al., 2016). The objective is to elaborate a method for determining the fissile mass, neutron multiplication, and (α, n) neutron emission rate of an unknown assembly of fissile material from the statistics of the fission chamber signals, analogous to the traditional multiplicity counting methods with detectors in the pulse mode. Such a method would be an alternative to He-3 detector systems, which would be free from the dead time problems that would be encountered in high counting rate applications, for example the assay of spent nuclear fuel. A significant restriction of our previous work was that all neutrons born in a source event (spontaneous fission) were assumed to be detected simultaneously, which is not fulfilled in reality. In the present work, this restriction is eliminated by assuming an independent, identically distributed random time delay for all neutrons arising from one source event. Expressions are derived for the same auto- and joint central moments of the detector current(s) as in the previous case, expressed with the singles, doubles, and triples (S, D and T) count rates. It is shown that if the time dispersion of neutron detections is of the same order of magnitude as the detector pulse width, as it typically is in measurements of fast neutrons, the multiplicity rates can still be extracted from the moments of the detector current, although with more involved calibration factors. The presented formulae, and hence also the performance of the proposed method, are tested both with analytical models of the time delay and with numerical simulations. Methods are suggested also for the modification of the method for large time delay effects (for thermalised neutrons).
Reliability of Multi-Category Rating Scales
ERIC Educational Resources Information Center
Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.
2013-01-01
The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…
Taylor, J M; Law, N
1998-10-30
We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process, one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives prediction intervals that are too narrow, with poor coverage rates. The model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root, and square root all appear to be reasonable choices.
Optimal Irrigation and Debridement of Infected Joint Implants
Schwechter, Evan M.; Folk, David; Varshney, Avanish K.; Fries, Bettina C.; Kim, Sun Jin; Hirsh, David M.
2014-01-01
Acute postoperative and acute, late hematogenous prosthetic joint infections have been treated with 1-stage irrigation and debridement with polyethylene exchange. Success rates, however, are highly variable. Reported studies demonstrate that detergents are effective at decreasing bacterial colony counts on orthopedic implants. Our hypothesis was that the combination of a detergent and an antiseptic would be more effective than a detergent alone at decreasing colony counts from a methicillin-resistant Staphylococcus aureus biofilm-coated titanium alloy disk simulating an orthopedic implant. Of the various agents tested in our study, chlorhexidine gluconate scrub (antiseptic and detergent) was the most effective at decreasing bacterial colony counts both pre- and post-reincubation of the disks; pulse lavage and scrubbing were not more effective than pulse lavage alone. PMID:21641757
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. Redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. Statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving of the half count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice, when simulating half-count (or less) images from full-count images. It simulates correctly the statistical properties, also in the case of rounding off of the images.
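One common way to realize Poisson resampling is binomial thinning, in which each recorded count is independently retained with probability 1/2; this preserves Poisson statistics exactly, whereas redrawing from a Poisson or Gaussian distribution centred on half the observed count adds extra variance. The sketch below illustrates the difference with assumed parameters; it is not the original Matlab code, and binomial thinning is used here as one interpretation of the resampling approach.

```python
import numpy as np

rng = np.random.default_rng(3)

# Compare three half-count constructions on a synthetic Poisson image
# (lambda = 50 per pixel is an arbitrary choice for illustration).
full = rng.poisson(lam=50.0, size=1_000_000)

thinned   = rng.binomial(full, 0.5)                       # binomial thinning
poisson_r = rng.poisson(full / 2.0)                       # Poisson redrawing
gauss_r   = np.round(rng.normal(full / 2.0, np.sqrt(full / 2.0)))

for name, img in (("thinning", thinned), ("Poisson redraw", poisson_r),
                  ("Gaussian redraw", gauss_r)):
    print(f"{name:16s} mean = {img.mean():6.2f}  variance = {img.var():6.2f}")
# A true Poisson(25) image has mean = variance = 25; the redrawing methods
# add the variance of the original image on top of the redrawn noise.
```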
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-12-01
The number of motor unit action potentials (MUAPs) appearing in the surface electromyogram (EMG) signal is directly related to motor unit recruitment and firing rates and therefore offers potentially valuable information about the level of activation of the motoneuron pool. In this paper, based on morphological features of the surface MUAPs, we try to estimate the number of MUAPs present in the surface EMG by counting the negative peaks in the signal. Several signal processing procedures are applied to the surface EMG to facilitate this peak counting process. The MUAP number estimation performance of this approach is first illustrated using surface EMG simulations. Then, by evaluating the peak counting results from EMG records detected with a very selective surface electrode at different contraction levels of the first dorsal interosseous (FDI) muscle, the utility and limitations of such direct peak counts for MUAP number estimation in the surface EMG are further explored.
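A minimal peak-counting pipeline of the kind described above might band-pass filter the signal and count negative peaks above a noise-based threshold. The sketch below is illustrative only: the sampling rate, filter band, threshold rule, and stand-in signal are assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

rng = np.random.default_rng(4)
FS = 10_000                                   # assumed sampling rate [Hz]

# Stand-in signal: one second of noise in place of a real surface EMG record.
emg = rng.normal(0.0, 0.01, FS)

b, a = butter(4, [20, 500], btype="bandpass", fs=FS)    # assumed pass band
filtered = filtfilt(b, a, emg)

sigma = np.median(np.abs(filtered)) / 0.6745            # robust noise level
peaks, _ = find_peaks(-filtered, height=4.0 * sigma,    # negative peaks
                      distance=int(0.003 * FS))         # >= 3 ms apart
print("negative peaks counted:", peaks.size)
```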
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are intrinsic parameters of InGaAs/InP SPADs, because they cannot be improved by different quenching electronics under the same operating conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and its optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such application-oriented optimization provides an effective approach to designing high-performance InGaAs/InP SPADs.
A matrix-inversion method for gamma-source mapping from gamma-count data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adsley, Ian; Burgess, Claire; Bull, Richard K
In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps, and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical problems, such as the segregation of highly active objects amongst fuel-element debris. (authors)
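The matrix-inversion idea can be sketched with a toy example: build a response matrix from a simple inverse-square model of the detector response, then recover the source distribution from a measured count map by solving the resulting linear system. The geometry, response model, and source strength below are invented for illustration and are not the models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy geometry: a 5 x 5 grid of possible source nodes, one detector position
# 0.5 m above each node, and a 1/r^2 response model for the response matrix.
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
det = grid + np.array([0.0, 0.0, 0.5])

r2 = ((det[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
R = 1.0 / r2                                  # response matrix (arbitrary units)

s_true = np.zeros(len(grid))
s_true[12] = 100.0                            # a single source at node 12
counts = rng.poisson(R @ s_true)              # noisy measured count map

s_est, *_ = np.linalg.lstsq(R, counts, rcond=None)
print("strongest recovered node:", int(np.argmax(s_est)), "(true node: 12)")
```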
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Abbene, L; Gerardi, G; Principato, F; Del Sordo, S; Ienzi, R; Raso, G
2010-12-01
Direct measurement of mammographic x-ray spectra under clinical conditions is a difficult task due to the high fluence rate of the x-ray beams and the limits of high-resolution detection systems in a high-counting-rate environment. In this work we present a detection system, based on a CdTe detector and an innovative digital pulse processing (DPP) system, for high-rate x-ray spectroscopy in mammography. The DPP system performs a digital pile-up inspection and a digital pulse height analysis of the detector signals, digitized through a 14-bit, 100 MHz digitizer, for x-ray spectroscopy even at high photon counting rates. We investigated the response of the digital detection system both at low (150 cps) and at high photon counting rates (up to 500 kcps) using monoenergetic x-ray sources and a nonclinical molybdenum-anode x-ray tube. Clinical molybdenum x-ray spectrum measurements were also performed using a pinhole collimator and a custom alignment device. The detection system shows excellent performance up to 512 kcps, with an energy resolution of 4.08% FWHM at 22.1 keV. Despite the high photon counting rate (up to 453 kcps), the molybdenum x-ray spectra measured under clinical conditions are characterized by a low number of pile-up events. The agreement between the attenuation curves and half value layer values obtained from the measured spectra, simulated spectra, and exposure values directly measured with an ionization chamber also demonstrates the accuracy of the measurements. These results make the proposed detection system a very attractive tool for both laboratory research and advanced quality control in mammography.
Liang, Albert K.; Koniczek, Martin; Antonuk, Larry E.; El-Mohri, Youcef; Zhao, Qihua; Street, Robert A.; Lu, Jeng Ping
2017-01-01
Photon counting arrays (PCAs), defined as pixelated imagers which measure the absorbed energy of x-ray photons individually and record this information digitally, are of increasing clinical interest. A number of PCA prototypes with a 1 mm pixel-to-pixel pitch have recently been fabricated with polycrystalline silicon (poly-Si) — a thin-film technology capable of creating monolithic imagers of a size commensurate with human anatomy. In this study, analog and digital simulation frameworks were developed to provide insight into the influence of individual poly-Si transistors on pixel circuit performance — information that is not readily available through empirical means. The simulation frameworks were used to characterize the circuit designs employed in the prototypes. The analog framework, which determines the noise produced by individual transistors, was used to estimate energy resolution, as well as to identify which transistors contribute the most noise. The digital framework, which analyzes how well circuits function in the presence of significant variations in transistor properties, was used to estimate how fast a circuit can produce an output (referred to as output count rate). In addition, an algorithm was developed and used to estimate the minimum pixel pitch that could be achieved for the pixel circuits of the current prototypes. The simulation frameworks predict that the analog component of the PCA prototypes could have energy resolution as low as 8.9% FWHM at 70 keV; and the digital components should work well even in the presence of significant TFT variations, with the fastest component having output count rates as high as 3 MHz. Finally, based on conceivable improvements in the underlying fabrication process, the algorithm predicts that the 1 mm pitch of the current PCA prototypes could be reduced significantly, potentially to between ~240 and 290 μm. PMID:26878107
Liang, Albert K; Koniczek, Martin; Antonuk, Larry E; El-Mohri, Youcef; Zhao, Qihua; Street, Robert A; Lu, Jeng Ping
2016-03-07
Photon counting arrays (PCAs), defined as pixelated imagers which measure the absorbed energy of x-ray photons individually and record this information digitally, are of increasing clinical interest. A number of PCA prototypes with a 1 mm pixel-to-pixel pitch have recently been fabricated with polycrystalline silicon (poly-Si)-a thin-film technology capable of creating monolithic imagers of a size commensurate with human anatomy. In this study, analog and digital simulation frameworks were developed to provide insight into the influence of individual poly-Si transistors on pixel circuit performance-information that is not readily available through empirical means. The simulation frameworks were used to characterize the circuit designs employed in the prototypes. The analog framework, which determines the noise produced by individual transistors, was used to estimate energy resolution, as well as to identify which transistors contribute the most noise. The digital framework, which analyzes how well circuits function in the presence of significant variations in transistor properties, was used to estimate how fast a circuit can produce an output (referred to as output count rate). In addition, an algorithm was developed and used to estimate the minimum pixel pitch that could be achieved for the pixel circuits of the current prototypes. The simulation frameworks predict that the analog component of the PCA prototypes could have energy resolution as low as 8.9% full width at half maximum (FWHM) at 70 keV; and the digital components should work well even in the presence of significant thin-film transistor (TFT) variations, with the fastest component having output count rates as high as 3 MHz. Finally, based on conceivable improvements in the underlying fabrication process, the algorithm predicts that the 1 mm pitch of the current PCA prototypes could be reduced significantly, potentially to between ~240 and 290 μm.
van Sighem, Ard; Sabin, Caroline A.; Phillips, Andrew N.
2015-01-01
Background It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). Methods The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms which lead to presentation for care, and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop are estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. Results For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150–199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person years lived in people with undiagnosed HIV with CD4 count 150–199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29–100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. Conclusions The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART. PMID:25768925
Lodwick, Rebecca K; Nakagawa, Fumiyo; van Sighem, Ard; Sabin, Caroline A; Phillips, Andrew N
2015-01-01
It is important to have methods available to estimate the number of people who have undiagnosed HIV and are in need of antiretroviral therapy (ART). The method uses the concept that a predictable level of occurrence of AIDS or other HIV-related clinical symptoms which lead to presentation for care, and hence diagnosis of HIV, arises in undiagnosed people with a given CD4 count. The method requires surveillance data on numbers of new HIV diagnoses with HIV-related symptoms, and the CD4 count at diagnosis. The CD4 count-specific rate at which HIV-related symptoms develop are estimated from cohort data. 95% confidence intervals can be constructed using a simple simulation method. For example, if there were 13 HIV diagnoses with HIV-related symptoms made in one year with CD4 count at diagnosis between 150-199 cells/mm3, then since the CD4 count-specific rate of HIV-related symptoms is estimated as 0.216 per person-year, the estimated number of person years lived in people with undiagnosed HIV with CD4 count 150-199 cells/mm3 is 13/0.216 = 60 (95% confidence interval: 29-100), which is considered an estimate of the number of people living with undiagnosed HIV in this CD4 count stratum. The method is straightforward to implement within a short period once a surveillance system of all new HIV diagnoses, collecting data on HIV-related symptoms at diagnosis, is in place and is most suitable for estimating the number of undiagnosed people with CD4 count <200 cells/mm3 due to the low rate of developing HIV-related symptoms at higher CD4 counts. A potential source of bias is under-diagnosis and under-reporting of diagnoses with HIV-related symptoms. Although this method has limitations as with all approaches, it is important for prompting increased efforts to identify undiagnosed people, particularly those with low CD4 count, and for informing levels of unmet need for ART.
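As a concrete illustration of the calculation described above, the following Python sketch reproduces the worked example (13 symptomatic diagnoses at a symptom rate of 0.216 per person-year) and attaches a simulation-based 95% confidence interval. The Poisson resampling scheme used here is an assumption for illustration; the paper's own simulation method may differ in detail.

```python
# Minimal sketch of the point estimate and a simple simulation-based CI for the
# number of undiagnosed person-years in one CD4 stratum. The bootstrap scheme
# (Poisson resampling of the diagnosis count) is an assumption, not necessarily
# the paper's exact simulation method.
import numpy as np

rng = np.random.default_rng(0)

diagnoses = 13          # symptomatic HIV diagnoses observed in the stratum in one year
symptom_rate = 0.216    # CD4-specific rate of HIV-related symptoms (per person-year)

point_estimate = diagnoses / symptom_rate   # ~60 person-years undiagnosed

# Parametric bootstrap: resample the observed count as Poisson and re-estimate.
boot = rng.poisson(diagnoses, size=100_000) / symptom_rate
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

print(f"estimate = {point_estimate:.0f} person-years, "
      f"approx. 95% CI = ({ci_low:.0f}, {ci_high:.0f})")
```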
A scaling relation between merger rate of galaxies and their close pair count
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, C. Y.; Jing, Y. P.; Han, Jiaxin, E-mail: ypjing@sjtu.edu.cn
We study how to measure the galaxy merger rate from the observed close pair count. Using a high-resolution N-body/SPH cosmological simulation, we find an accurate scaling relation between galaxy pair counts and merger rates down to a stellar mass ratio of about 1:30. The relation explicitly accounts for the dependence on redshift (or time), on pair separation, and on mass of the two galaxies in a pair. With this relation, one can easily obtain the mean merger timescale for a close pair of galaxies. The use of virial masses, instead of the stellar mass, is motivated by the fact that the dynamical friction timescale is mainly determined by the dark matter surrounding central and satellite galaxies. This fact can also minimize the error induced by uncertainties in modeling star formation in the simulation. Since the virial mass can be determined from the well-established relation between the virial masses and the stellar masses in observations, our scaling relation can easily be applied to observations to obtain the merger rate and merger timescale. For major merger pairs (1:1-1:4) of galaxies above a stellar mass of 4 × 10^10 h^-1 M_☉ at z = 0.1, it takes about 0.31 Gyr to merge for pairs within a projected distance of 20 h^-1 kpc with a stellar mass ratio of 1:1, while the time goes up to 1.6 Gyr for mergers with stellar mass ratio of 1:4. Our results indicate that a single timescale usually used in the literature is not accurate to describe mergers with a stellar mass ratio spanning even a narrow range from 1:1 to 1:4.
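The bookkeeping implied by the abstract, converting a close-pair count into a merger rate through a mean merger timescale, can be sketched as follows. The survey numbers are hypothetical and the paper's full scaling relation (its dependence on redshift, separation, and virial mass) is not reproduced; only the example timescales quoted above (0.31 and 1.6 Gyr) are used.

```python
# Illustrative bookkeeping only: converting an observed close-pair count into a
# merger rate via a mean merger timescale. The timescales are the examples quoted
# in the abstract; the survey numbers below are hypothetical.
def merger_rate_per_galaxy(n_pairs, n_galaxies, t_merge_gyr):
    """Mergers per galaxy per Gyr implied by a close-pair count."""
    return n_pairs / (n_galaxies * t_merge_gyr)

print(merger_rate_per_galaxy(n_pairs=150, n_galaxies=10_000, t_merge_gyr=0.31))  # 1:1 pairs
print(merger_rate_per_galaxy(n_pairs=150, n_galaxies=10_000, t_merge_gyr=1.6))   # 1:4 pairs
```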
A simulator for airborne laser swath mapping via photon counting
NASA Astrophysics Data System (ADS)
Slatton, K. C.; Carter, W. E.; Shrestha, R.
2005-06-01
Commercially marketed airborne laser swath mapping (ALSM) instruments currently use laser rangers with sufficient energy per pulse to work with return signals of thousands of photons per shot. The resulting high signal to noise level virtually eliminates spurious range values caused by noise, such as background solar radiation and sensor thermal noise. However, the high signal level approach requires laser repetition rates of hundreds of thousands of pulses per second to obtain contiguous coverage of the terrain at sub-meter spatial resolution, and with currently available technology, affords little scalability for significantly downsizing the hardware, or reducing the costs. A photon-counting ALSM sensor has been designed by the University of Florida and Sigma Space, Inc. for improved topographic mapping with lower power requirements and weight than traditional ALSM sensors. Major elements of the sensor design are presented along with preliminary simulation results. The simulator is being developed so that data phenomenology and target detection potential can be investigated before the system is completed. Early simulations suggest that precise estimates of terrain elevation and target detection will be possible with the sensor design.
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
NASA Astrophysics Data System (ADS)
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.; Menlove, Howard O.; Flaska, Marek; Pozzi, Sara A.
2017-07-01
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. The NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
Fresh Fuel Measurements With the Differential Die-Away Self-Interrogation Instrument
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trahan, Alexis C.; Belian, Anthony P.; Swinhoe, Martyn T.
The purpose of the Next Generation Safeguards Initiative (NGSI)-Spent Fuel (SF) Project is to strengthen the technical toolkit of safeguards inspectors and/or other interested parties. Thus the NGSI-SF team is working to achieve the following technical goals more easily and efficiently than in the past using nondestructive assay measurements of spent fuel assemblies: 1) verify the initial enrichment, burnup, and cooling time of facility declaration; 2) detect the diversion or replacement of pins; 3) estimate the plutonium mass; 4) estimate decay heat; and 5) determine the reactivity of spent fuel assemblies. The differential die-away self-interrogation (DDSI) instrument is one instrument that was assessed for years regarding its feasibility for robust, timely verification of spent fuel assemblies. The instrument was recently built and was tested using fresh fuel assemblies in a variety of configurations, including varying enrichment, neutron absorber content, and symmetry. The early die-away method, a multiplication determination method developed in simulation space, was successfully tested on the fresh fuel assembly data and determined multiplication with a root-mean-square (RMS) error of 2.9%. The experimental results were compared with MCNP simulations of the instrument as well. Low multiplication assemblies had agreement with an average RMS error of 0.2% in the singles count rate (i.e., total neutrons detected per second) and 3.4% in the doubles count rates (i.e., neutrons detected in coincidence per second). High-multiplication assemblies had agreement with an average RMS error of 4.1% in the singles and 13.3% in the doubles count rates.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1997-07-01
We apply to the specific case of images taken with the ROSAT PSPC detector our wavelet-based X-ray source detection algorithm presented in a companion paper. Such images are characterized by the presence of detector "ribs," strongly varying point-spread function, and vignetting, so that their analysis provides a challenge for any detection algorithm. First, we apply the algorithm to simulated images of a flat background, as seen with the PSPC, in order to calibrate the number of spurious detections as a function of significance threshold and to ascertain that the spatial distribution of spurious detections is uniform, i.e., unaffected by the ribs; this goal was achieved using the exposure map in the detection procedure. Then, we analyze simulations of PSPC images with a realistic number of point sources; the results are used to determine the efficiency of source detection and the accuracy of output quantities such as source count rate, size, and position, upon a comparison with input source data. It turns out that sources with 10 photons or less may be confidently detected near the image center in medium-length (~10^4 s), background-limited PSPC exposures. The positions of sources detected near the image center (off-axis angles < 15') are accurate to within a few arcseconds. Output count rates and sizes are in agreement with the input quantities, within a factor of 2 in 90% of the cases. The errors on position, count rate, and size increase with off-axis angle and for detections of lower significance. We have also checked that the upper limits computed with our method are consistent with the count rates of undetected input sources. Finally, we have tested the algorithm by applying it on various actual PSPC images, among the most challenging for automated detection procedures (crowded fields, extended sources, and nonuniform diffuse emission). The performance of our method in these images is satisfactory and outperforms those of other current X-ray detection techniques, such as those employed to produce the MPE and WGA catalogs of PSPC sources, in terms of both detection reliability and efficiency. We have also investigated the theoretical limit for point-source detection, with the result that even sources with only 2-3 photons may be reliably detected using an efficient method in images with sufficiently high resolution and low background.
Heliospheric Modulation Strength During The Neutron Monitor Era
NASA Astrophysics Data System (ADS)
Usoskin, I. G.; Alanko, K.; Mursula, K.; Kovaltsov, G. A.
Using a stochastic simulation of a one-dimensional heliosphere we calculate galactic cosmic ray spectra at the Earth's orbit for different values of the heliospheric modulation strength. Convoluting these spectra with the specific yield function of a neutron monitor, we obtain the expected neutron monitor count rates for different values of the modulation strength. Finally, inverting this relation, we calculate the modulation strength using the actually recorded neutron monitor count rates. We present the reconstructed annual heliospheric modulation strengths for the neutron monitor era (1953-2000) using several neutron monitors from different latitudes, covering a large range of geomagnetic rigidity cutoffs from polar to equatorial regions. The estimated modulation strengths are shown to be in good agreement with the corresponding estimates reported earlier for some years.
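The forward-then-invert procedure described above can be illustrated with a toy calculation: compute the expected count rate as the integral of a modulated proton spectrum times a yield function, tabulate it over modulation strength, and invert by interpolation. The force-field spectrum and power-law yield function below are simplified stand-ins (assumptions), not the paper's stochastic one-dimensional heliosphere model.

```python
# Schematic forward model and inversion for a neutron-monitor count rate.
# The local interstellar spectrum, force-field modulation, and yield function
# used here are illustrative assumptions only.
import numpy as np

E = np.logspace(-1, 2, 400)          # proton kinetic energy, GeV
E0 = 0.938                           # proton rest energy, GeV

def lis(E):                          # toy local interstellar spectrum
    return 1.9e4 * (E + E0) ** -2.78

def modulated_spectrum(E, phi):      # force-field approximation, phi in GV
    Ei = E + phi
    return lis(Ei) * (E * (E + 2 * E0)) / (Ei * (Ei + 2 * E0))

def yield_function(E):               # toy neutron-monitor yield function
    return np.where(E > 1.0, E ** 1.2, 0.0)

def count_rate(phi):
    y = modulated_spectrum(E, phi) * yield_function(E)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(E))   # trapezoidal integral

# Forward table: count rate versus modulation strength.
phis = np.linspace(0.2, 1.5, 60)
rates = np.array([count_rate(p) for p in phis])

# Inversion: recover phi from an "observed" count rate by interpolation
# (rates decrease with phi, so the table is reversed for np.interp).
observed = count_rate(0.75)
phi_reconstructed = np.interp(observed, rates[::-1], phis[::-1])
print(round(phi_reconstructed, 3))   # ~0.75
```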
Anglaret, Xavier; Scott, Callie A.; Walensky, Rochelle P.; Ouattara, Eric; Losina, Elena; Moh, Raoul; Becker, Jessica E.; Uhler, Lauren; Danel, Christine; Messou, Eugene; Eholié, Serge; Freedberg, Kenneth A.
2013-01-01
Background Initiation of antiretroviral therapy (ART) in all HIV-infected adults, regardless of CD4 count, is a proposed strategy for reducing HIV transmission. We investigated the conditions under which starting ART early could entail more risks than benefits for patients with high CD4 counts. Methods We used a simulation model to compare ART initiation upon entry to care (“immediate ART”) to initiation at CD4 ≤350 cells/μL (“WHO 2010 ART”) in African adults with CD4 counts >500 cells/μL. We varied inputs to determine the combination of parameters (population characteristics, conditions of care, treatment outcomes) that would result in higher 15-year mortality with immediate ART. Results Fifteen-year mortality was 56.7% for WHO 2010 and 51.8% for immediate ART. In one-way sensitivity analysis, lower 15-year mortality was consistently achieved with immediate ART unless the rate of fatal ART toxicity was >1.0/100PY, the rate of withdrawal from care was >1.2-fold higher or the rate of ART failure due to poor adherence was >4.3-fold higher on immediate ART. In multi-way sensitivity analysis, immediate ART led to higher mortality when moderate rates of fatal ART toxicity (0.25/100PY) were combined with rates of withdrawal from care >1.1-fold higher and rates of treatment failure >2.1-fold higher on immediate ART than on WHO 2010 ART. Conclusions In sub-Saharan Africa, ART initiation at entry into care would improve long-term survival of patients with high CD4 counts, unless it is associated with increased withdrawal from care and decreased adherence. In early ART trials, a focus on retention and adherence will be critical. PMID:22809695
Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.
Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram
2017-02-01
In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero cell counts; some of these are "true zeros", indicating that the drug-adverse event pairs cannot occur, and are distinguished from the remaining zero counts, which are modeled zero counts and simply indicate that the drug-adverse event pairs have not occurred yet or have not been reported yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
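As a building block for the method described above, the sketch below writes out the zero-inflated Poisson log-likelihood and performs a simple likelihood ratio test against a plain Poisson fit on simulated counts. The paper's full procedure (cell-specific reporting-rate comparisons, EM estimation, stratification) is not reproduced; this only shows the core ZIP likelihood.

```python
# Building-block sketch only: zero-inflated Poisson (ZIP) log-likelihood and a
# likelihood ratio test of ZIP against a plain Poisson fit on one count vector.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson, chi2

rng = np.random.default_rng(1)

# Simulated counts with 30% structural ("true") zeros.
n, pi_true, lam_true = 2000, 0.30, 1.8
structural_zero = rng.random(n) < pi_true
counts = np.where(structural_zero, 0, rng.poisson(lam_true, n))

def zip_negloglik(params, y):
    pi = 1 / (1 + np.exp(-params[0]))        # keep pi in (0, 1)
    lam = np.exp(params[1])                  # keep lambda positive
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts,), method="Nelder-Mead")
ll_zip = -fit.fun
ll_pois = np.sum(poisson.logpmf(counts, counts.mean()))

lrt = 2 * (ll_zip - ll_pois)
# Note: the null (pi = 0) lies on the parameter boundary, so the chi-square(1)
# reference distribution is a conservative approximation at best.
print("LRT statistic:", round(lrt, 1), " p ~", chi2.sf(lrt, df=1))
```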
Development of a Photon Counting System for Differential Lidar Signal Detection
NASA Technical Reports Server (NTRS)
Elsayed-Ali, Hani
1997-01-01
Photon counting has been chosen as a means to extend the detection range of current airborne DIAL ozone measurements. Lidar backscattered return signals from the on and off-line lasers experience a significant exponential decay. To extract further data from the decaying ozone return signals, photon counting will be used to measure the low light levels, thus extending the detection range. In this application, photon counting will extend signal measurement where the analog return signal is too weak. The current analog measurement range is limited to approximately 25 kilometers from an aircraft flying at 12 kilometers. Photon counting will be able to exceed the current measurement range so as to follow the mid-latitude model of ozone density as a function of height. This report describes the development of a photon counting system. The initial development phase begins with detailed evaluation of individual photomultiplier tubes. The PMT qualities investigated are noise count rates, single electron response peaks, voltage versus gain values, saturation effects, and output signal linearity. These evaluations are followed by analysis of two distinctive tube base gating schemes. The next phase is to construct and operate a photon counting system in a laboratory environment. The laboratory counting simulations are used to determine optimum discriminator setpoints and to continue further evaluations of PMT properties. The final step in the photon counting system evaluation process is the compiling of photon counting measurements on the existing ozone DIAL laser system.
2D dark-count-rate modeling of PureB single-photon avalanche diodes in a TCAD environment
NASA Astrophysics Data System (ADS)
Knežević, Tihomir; Nanver, Lis K.; Suligoj, Tomislav
2018-02-01
PureB silicon photodiodes have nm-shallow p+n junctions with which photons/electrons with penetration depths of a few nanometers can be detected. PureB Single-Photon Avalanche Diodes (SPADs) were fabricated and analysed by 2D numerical modeling as an extension to TCAD software. The very shallow p+-anode has high perimeter curvature that enhances the electric field. In SPADs, noise is quantified by the dark count rate (DCR), which is a measure of the number of false counts triggered by unwanted processes in the non-illuminated device. Just as for desired events, the probability of a dark count increases with increasing electric field, and the perimeter conditions are critical. In this work, the DCR was studied by two 2D methods of analysis: the "quasi-2D" (Q-2D) method, where vertical 1D cross-sections were assumed for calculating the electron/hole avalanche probabilities, and the "ionization-integral 2D" (II-2D) method, where cross-sections were placed where the maximum ionization integrals were calculated. The Q-2D method gave satisfactory results in structures where the peripheral regions had a small contribution to the DCR, such as in devices with conventional deep-junction guard rings (GRs). Otherwise, the II-2D method proved to be much more precise. The results show that the DCR simulation methods are useful for optimizing the compromise between fill-factor and p-/n-doping profile design in SPAD devices. For the experimentally investigated PureB SPADs, excellent agreement of the measured and simulated DCR was achieved. This shows that although an implicit GR is attractively compact, the very shallow pn-junction gives a risk of having such a low breakdown voltage at the perimeter that the DCR of the device may be negatively impacted.
Simulations of a micro-PET system based on liquid xenon
NASA Astrophysics Data System (ADS)
Miceli, A.; Glister, J.; Andreyev, A.; Bryman, D.; Kurchaninov, L.; Lu, P.; Muennich, A.; Retiere, F.; Sossi, V.
2012-03-01
The imaging performance of a high-resolution preclinical micro-positron emission tomography (micro-PET) system employing liquid xenon (LXe) as the gamma-ray detection medium was simulated. The arrangement comprises a ring of detectors consisting of trapezoidal LXe time projection ionization chambers and two arrays of large area avalanche photodiodes for the measurement of ionization charge and scintillation light. A key feature of the LXePET system is the ability to identify individual photon interactions with high energy resolution and high spatial resolution in three dimensions and determine the correct interaction sequence using Compton reconstruction algorithms. The simulated LXePET imaging performance was evaluated by computing the noise equivalent count rate, the sensitivity and point spread function for a point source according to the NEMA-NU4 standard. The image quality was studied with a micro-Derenzo phantom. Results of these simulation studies included noise equivalent count rate peaking at 1326 kcps at 188 MBq (705 kcps at 184 MBq) for an energy window of 450-600 keV and a coincidence window of 1 ns for mouse (rat) phantoms. The absolute sensitivity at the center of the field of view was 12.6%. Radial, tangential and axial resolutions of 22Na point sources reconstructed with a list-mode maximum likelihood expectation maximization algorithm were ⩽0.8 mm (full-width at half-maximum) throughout the field of view. Hot-rod inserts of <0.8 mm diameter were resolvable in the transaxial image of a micro-Derenzo phantom. The simulations show that a LXe system would provide new capabilities for significantly enhancing PET images.
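The figures of merit quoted in this and the other PET abstracts follow the NEMA definitions; as a quick reference, a minimal helper for scatter fraction and noise-equivalent count rate is sketched below with purely illustrative rates. The denominator convention is an assumption in the sense that it depends on how randoms are estimated.

```python
# NEMA-style figures of merit: scatter fraction SF = S / (S + T) and
# noise-equivalent count rate NECR = T^2 / (T + S + R); the denominator becomes
# T + S + 2R when randoms are estimated from a delayed window. Rates below are
# hypothetical, for illustration only.
def scatter_fraction(trues, scatters):
    return scatters / (trues + scatters)

def necr(trues, scatters, randoms, delayed_window=False):
    k = 2.0 if delayed_window else 1.0
    return trues**2 / (trues + scatters + k * randoms)

T, S, R = 900e3, 500e3, 300e3          # counts per second, illustrative only
print(f"SF = {scatter_fraction(T, S):.1%}, NECR = {necr(T, S, R) / 1e3:.0f} kcps")
```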
NASA Astrophysics Data System (ADS)
Hu, Jianwei; Tobin, Stephen J.; LaFleur, Adrienne M.; Menlove, Howard O.; Swinhoe, Martyn T.
2013-11-01
Self-Interrogation Neutron Resonance Densitometry (SINRD) is one of several nondestructive assay (NDA) techniques being integrated into systems to measure spent fuel as part of the Next Generation Safeguards Initiative (NGSI) Spent Fuel Project. The NGSI Spent Fuel Project is sponsored by the US Department of Energy's National Nuclear Security Administration to measure plutonium in, and detect diversion of fuel pins from, spent nuclear fuel assemblies. SINRD shows promising capability in determining the 239Pu and 235U content in spent fuel. SINRD is a relatively low-cost and lightweight instrument, and it is easy to implement in the field. The technique makes use of the passive neutron source existing in a spent fuel assembly, and it uses ratios between the count rates collected in fission chambers that are covered with different absorbing materials. These ratios are correlated to key attributes of the spent fuel assembly, such as the total mass of 239Pu and 235U. Using count rate ratios instead of absolute count rates makes SINRD less vulnerable to systematic uncertainties. Building upon the previous research, this work focuses on the underlying physics of the SINRD technique: quantifying the individual impacts on the count rate ratios of a few important nuclides using the perturbation method; examining new correlations between count rate ratio and mass quantities based on the results of the perturbation study; quantifying the impacts on the energy windows of the filtering materials that cover the fission chambers by tallying the neutron spectra before and after the neutrons go through the filters; and identifying the most important nuclides that cause cooling-time variations in the count rate ratios. The results of these studies show that 235U content has a major impact on the SINRD signal in addition to the 239Pu content. Plutonium-241 and 241Am are the two main nuclides responsible for the variation in the count rate ratio with cooling time. In short, this work provides insights into some of the main factors that affect the performance of SINRD, and it should help improve the hardware design and the algorithm used to interpret the signal for the SINRD technique. In addition, the modeling and simulation techniques used in this work can be easily adopted for analysis of other NDA systems, especially when complex systems like spent nuclear fuel are involved. These studies were conducted at Los Alamos National Laboratory.
Molteni, Matteo; Ferri, Fabio
2016-11-01
A 10 ns time resolution, multi-tau software correlator, capable of computing simultaneous autocorrelation (A-A, B-B) and cross (A-B) correlation functions at count rates up to ∼10 MHz, with no data loss, has been developed in LabVIEW and C++ by using the National Instrument timer/counterboard (NI PCIe-6612) and a fast Personal Computer (PC) (Intel Core i7-4790 Processor 3.60 GHz ). The correlator works by using two algorithms: for large lag times (τ ≳ 1 μs), a classical time-mode scheme, based on the measure of the number of pulses per time interval, is used; differently, for τ ≲ 1 μs a photon-mode (PM) scheme is adopted and the correlation function is retrieved from the sequence of the photon arrival times. Single auto- and cross-correlation functions can be processed online in full real time up to count rates of ∼1.8 MHz and ∼1.2 MHz, respectively. Two autocorrelation (A-A, B-B) and a cross correlation (A-B) functions can be simultaneously processed in full real time only up to count rates of ∼750 kHz. At higher count rates, the online processing takes place in a delayed modality, but with no data loss. When tested with simulated correlation data and latex spheres solutions, the overall performances of the correlator appear to be comparable with those of commercial hardware correlators, but with several nontrivial advantages related to its flexibility, low cost, and easy adaptability to future developments of PC and data acquisition technology.
NASA Astrophysics Data System (ADS)
Molteni, Matteo; Ferri, Fabio
2016-11-01
A 10 ns time resolution, multi-tau software correlator, capable of computing simultaneous autocorrelation (A-A, B-B) and cross (A-B) correlation functions at count rates up to ∼10 MHz, with no data loss, has been developed in LabVIEW and C++ by using the National Instrument timer/counterboard (NI PCIe-6612) and a fast Personal Computer (PC) (Intel Core i7-4790 Processor 3.60 GHz). The correlator works by using two algorithms: for large lag times (τ ≳ 1 μs), a classical time-mode scheme, based on the measure of the number of pulses per time interval, is used; differently, for τ ≲ 1 μs a photon-mode (PM) scheme is adopted and the correlation function is retrieved from the sequence of the photon arrival times. Single auto- and cross-correlation functions can be processed online in full real time up to count rates of ∼1.8 MHz and ∼1.2 MHz, respectively. Two autocorrelation (A-A, B-B) and a cross correlation (A-B) functions can be simultaneously processed in full real time only up to count rates of ∼750 kHz. At higher count rates, the online processing takes place in a delayed modality, but with no data loss. When tested with simulated correlation data and latex spheres solutions, the overall performances of the correlator appear to be comparable with those of commercial hardware correlators, but with several nontrivial advantages related to its flexibility, low cost, and easy adaptability to future developments of PC and data acquisition technology.
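For orientation, the sketch below implements only the simplest linear-tau, time-mode autocorrelation estimator on a simulated photon stream; the multi-tau binning and the photon-mode algorithm used at short lags in the instrument described above are not shown.

```python
# Minimal linear-tau, time-mode autocorrelation estimator for a photon stream:
# g2(k) = <n(t) n(t+k)> / <n>^2 computed from binned counts. The multi-tau and
# photon-mode bookkeeping of the real correlator is not reproduced here.
import numpy as np

rng = np.random.default_rng(2)

# Simulated photon arrival times: an ideal Poisson stream at 200 kHz for 1 s.
rate, T = 2.0e5, 1.0
n_photons = rng.poisson(rate * T)
arrivals = np.sort(rng.uniform(0.0, T, n_photons))

# Time-mode scheme: count photons in fixed bins, then correlate the counts.
bin_width = 1e-6                                   # 1 us bins
counts, _ = np.histogram(arrivals, bins=int(T / bin_width))

max_lag = 50
mean_n = counts.mean()
g2 = np.array([
    np.mean(counts[:-k] * counts[k:]) / mean_n**2 if k else np.mean(counts**2) / mean_n**2
    for k in range(max_lag)
])
print(g2[1:6])   # ~1 at all lags for an uncorrelated (ideal Poisson) source
```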
Jiang, Ailian; Zheng, Lihong
2018-03-29
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.
2018-01-01
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime. PMID:29596336
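A next-hop selection rule of the general ACO form discussed above can be sketched as follows; the heuristic combining hop count and residual energy, and the alpha/beta weights, are illustrative assumptions rather than the exact formulation of the proposed hybrid algorithm.

```python
# Illustrative ACO-style next-hop selection: choice probability proportional to
# pheromone^alpha * heuristic^beta, with a heuristic built from hop count to the
# sink and residual node energy. Weights and heuristic form are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def choose_next_hop(pheromone, hop_count, energy, alpha=1.0, beta=2.0):
    """Pick a neighbour index with probability ~ tau^alpha * eta^beta."""
    eta = energy / (1.0 + hop_count)            # favour short, energy-rich routes
    weights = (pheromone ** alpha) * (eta ** beta)
    probs = weights / weights.sum()
    return rng.choice(len(pheromone), p=probs), probs

# Three candidate neighbours (hypothetical values).
tau = np.array([0.8, 0.5, 0.3])                 # pheromone levels
hops = np.array([3, 2, 4])                      # minimum hop counts to the sink
e_res = np.array([0.6, 0.9, 0.4])               # normalised residual energy

idx, probs = choose_next_hop(tau, hops, e_res)
print("selection probabilities:", np.round(probs, 3), "chosen neighbour:", idx)
```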
NASA Astrophysics Data System (ADS)
Couceiro, Miguel; Crespo, Paulo; Marques, Rui F.; Fonte, Paulo
2014-06-01
Scatter Fraction (SF) and Noise Equivalent Count Rate (NECR) of a 2400 mm wide axial field-of-view Positron Emission Tomography (PET) system based on Resistive Plate Chamber (RPC) detectors with 300 ps Time Of Flight (TOF) resolution were studied by simulation using Geant4. The study followed the NEMA NU2-2001 standards, using the standard 700 mm long phantom and an axially extended one with 1800 mm, modeling the foreseeable use of this PET system. Data were processed based on the actual RPC readout, which requires a 0.2 μs non-paralyzable dead time for timing signals and a paralyzable dead time (τps) for position signals. For NECR, the best coincidence trigger consisted of a multiple time window coincidence sorter retaining single coincidence pairs (involving only two photons) and all possible coincidence pairs obtained from multiple coincidences, keeping only those for which the direct TOF-reconstructed point falls inside a tight region surrounding the phantom. For the 700 mm phantom, the SF was 51.8% and, with τps = 3.0 μs, the peak NECR was 167 kcps at 7.6 kBq/cm3. Using τps = 1.0 μs the NECR was 349 kcps at 7.6 kBq/cm3, and no peak was found. For the 1800 mm phantom, the SF was slightly higher, and the NECR curves were identical to those obtained with the standard phantom, but shifted to lower activity concentrations. Despite the higher SF, the NECR values obtained allow us to conclude that the proposed scanner is expected to outperform current commercial PET systems.
Cosmic Rays with Portable Geiger Counters: From Sea Level to Airplane Cruise Altitudes
ERIC Educational Resources Information Center
Blanco, Francesco; La Rocca, Paola; Riggi, Francesco
2009-01-01
Cosmic ray count rates with a set of portable Geiger counters were measured at different altitudes on the way to a mountain top and aboard an aircraft, between sea level and cruise altitude. Basic measurements may constitute an educational activity even with high school teams. For the understanding of the results obtained, simulations of extensive…
Nielsen, Jace C; Hutmacher, Matthew M; Wesche, David L; Tolbert, Dwain; Patel, Mahlaqa; Kowalski, Kenneth G
2015-01-01
Vigabatrin is an irreversible inhibitor of γ-aminobutyric acid transaminase (GABA-T) and is used as an adjunctive therapy for adult patients with refractory complex partial seizures (rCPS). The purpose of this investigation was to describe the relationship between vigabatrin dosage and daily seizure rate for adults and children with rCPS and identify relevant covariates that might impact seizure frequency. This population dose-response analysis used seizure-count data from three pediatric and two adult randomized controlled studies of rCPS patients. A negative binomial distribution model adequately described daily seizure data. Mean seizure rate decreased with time after first dose and was described using an asymptotic model. Vigabatrin drug effects were best characterized by a quadratic model using normalized dosage as the exposure metric. Normalized dosage was an estimated parameter that allowed for individualized changes in vigabatrin exposure based on body weight. Baseline seizure rate increased with decreasing age, but age had no impact on vigabatrin drug effects after dosage was normalized for body weight differences. Posterior predictive checks indicated the final model was capable of simulating data consistent with observed daily seizure counts. Total normalized vigabatrin dosages of 1, 3, and 6 g/day were predicted to reduce seizure rates 23.2%, 45.6%, and 48.5%, respectively. © 2014, The American College of Clinical Pharmacology.
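A schematic simulation in the spirit of this model is sketched below: daily seizure counts are drawn from a negative binomial distribution whose mean declines asymptotically after the first dose and carries a quadratic dose effect on the log rate. All parameter values and the exact functional forms are placeholders, not the fitted population estimates reported above.

```python
# Schematic negative binomial simulation of daily seizure counts with an
# asymptotic time effect and a quadratic dose effect on the log rate. All
# parameter values and functional forms are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(4)

def daily_counts(days, dose_g, base_rate=3.0, asymptote=0.7, k_time=0.05,
                 b1=-0.20, b2=0.015, dispersion=1.5):
    t = np.arange(days)
    time_effect = asymptote + (1 - asymptote) * np.exp(-k_time * t)   # decline to plateau
    drug_effect = np.exp(b1 * dose_g + b2 * dose_g**2)                # quadratic in dose
    mu = base_rate * time_effect * drug_effect
    p = dispersion / (dispersion + mu)                                # NB2 parameterization
    return rng.negative_binomial(dispersion, p)

for dose in (0, 1, 3, 6):   # g/day
    sims = np.array([daily_counts(90, dose).mean() for _ in range(200)])
    print(f"{dose} g/day: mean daily seizure rate ~ {sims.mean():.2f}")
```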
Koren, Katja; Pišot, Rado; Šimunič, Boštjan
2016-05-01
To determine the effects of a moderate-intensity active workstation on time and error during simulated office work. The aim of the study was to analyse simultaneous work and exercise for non-sedentary office workers. We monitored oxygen uptake, heart rate, sweating stain area, self-perceived effort, typing test time with typing error count, and cognitive performance during 30 min of exercise with no cycling or cycling at 40 and 80 W. Compared with baseline, we found increased physiological responses at 40 and 80 W, which correspond to moderate physical activity (PA). Typing time significantly increased by 7.3% (p = 0.002) in C40W and by 8.9% (p = 0.011) in C80W. Typing error count and cognitive performance were unchanged. Although moderate-intensity exercise performed on a cycling workstation during simulated office tasks increases task execution time, the effect size is moderate and the error rate does not increase. Participants confirmed that such a working design is suitable for achieving the minimum standards for daily PA during work hours. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to have overdispersion. On the other hand, the data might contain zero counts (excess zeros). The simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and hurdle negative binomial models to the simulated data. The model fits were compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from their own distribution, especially when the sample size is larger. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
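A minimal version of this simulation experiment might look like the following: generate overdispersed accident counts from a negative binomial distribution and compare a Poisson fit against a negative binomial fit by AIC. The covariate structure and the hurdle model used in the study are omitted, and all parameter values are illustrative.

```python
# Generate overdispersed counts from a negative binomial and compare Poisson vs.
# negative binomial fits by AIC. Covariates and the hurdle model are omitted.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

n, k_true, mu_true = 200, 0.8, 1.5                  # small dispersion -> overdispersion
p_true = k_true / (k_true + mu_true)
y = rng.negative_binomial(k_true, p_true, size=n)

# Poisson MLE of the mean is just the sample mean.
ll_pois = stats.poisson.logpmf(y, y.mean()).sum()

# Negative binomial: profile the dispersion k, with mu fixed at the sample mean
# (which is also the NB2 maximum likelihood estimate of the mean).
def negll_nb(k):
    p = k / (k + y.mean())
    return -stats.nbinom.logpmf(y, k, p).sum()

k_hat = minimize_scalar(negll_nb, bounds=(1e-3, 100), method="bounded").x
ll_nb = -negll_nb(k_hat)

print(f"AIC Poisson = {2 * 1 - 2 * ll_pois:.1f},  AIC NB = {2 * 2 - 2 * ll_nb:.1f}")
```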
Strategies and limitations for fluorescence detection of XAFS at high flux beamlines
Heald, Steve M.
2015-02-17
The issue of detecting the XAFS signal from dilute samples is discussed in detail with the aim of making best use of high flux beamlines that provide up to 10^13 photons s^-1. Various detection methods are compared, including filters with slits, solid state detectors, crystal analyzers and combinations of these. These comparisons rely on simulations that use experimentally determined parameters. It is found that inelastic scattering places a fundamental limit on detection, and that it is important to take proper account of the polarization dependence of the signals. The combination of a filter–slit system with a solid state detector is a promising approach. With an optimized system good performance can be obtained even if the total count rate is limited to 10^7 Hz. Detection schemes with better energy resolution can help at the largest dilutions if their collection efficiency and count rate limits can be improved.
Strategies and limitations for fluorescence detection of XAFS at high flux beamlines
Heald, Steve M.
2015-01-01
The issue of detecting the XAFS signal from dilute samples is discussed in detail with the aim of making best use of high flux beamlines that provide up to 10^13 photons s^-1. Various detection methods are compared, including filters with slits, solid state detectors, crystal analyzers and combinations of these. These comparisons rely on simulations that use experimentally determined parameters. It is found that inelastic scattering places a fundamental limit on detection, and that it is important to take proper account of the polarization dependence of the signals. The combination of a filter–slit system with a solid state detector is a promising approach. With an optimized system good performance can be obtained even if the total count rate is limited to 10^7 Hz. Detection schemes with better energy resolution can help at the largest dilutions if their collection efficiency and count rate limits can be improved. PMID:25723945
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
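A stripped-down version of this simulation-based power calculation is sketched below, assuming a simple log-linear Poisson model with a single pollutant and no confounder adjustment; all numbers are illustrative and the statsmodels-based fit is one possible implementation, not the one used in the study.

```python
# Simulation-based power estimate for a Poisson time-series association:
# simulate daily counts with a log-linear pollution effect, refit the Poisson
# regression on each series, and take power as the fraction with p < 0.05.
# Time trends, meteorology and other covariates from the study are omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

days, mean_count, beta_true = 365, 40.0, 0.001       # ~1% rate increase per 10 units
pollution = rng.normal(50, 15, size=days)            # synthetic daily pollutant levels
X = sm.add_constant(pollution)

def one_simulation():
    mu = mean_count * np.exp(beta_true * (pollution - pollution.mean()))
    y = rng.poisson(mu)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    return fit.pvalues[1] < 0.05

n_sim = 500
power = np.mean([one_simulation() for _ in range(n_sim)])
print(f"estimated power: {power:.2f}")
```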
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhanced the signal-to-background ratios for gamma-photopeaks, which are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or the variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates.
Microchannel plate life testing for UV spectroscopy instruments
NASA Astrophysics Data System (ADS)
Darling, N. T.; Siegmund, O. H. W.; Curtis, T.; McPhate, J.; Tedesco, J.; Courtade, S.; Holsclaw, G.; Hoskins, A.; Al Dhafri, S.
2017-08-01
The Emirates Mars Mission (EMM) UV Spectrograph (EMUS) is a far ultraviolet (102 nm to 170 nm) imaging spectrograph for characterization of the Martian exosphere and thermosphere. Imaging is accomplished by a photon counting open-face microchannel plate (MCP) detector using a cross delay line (XDL) readout. An MCP gain stabilization ("scrub") followed by lifetime spectral line burn-in simulation has been completed on a bare MCP detector at SSL. Gain and sensitivity stability of better than 7% has been demonstrated for a total dose of 2.5 × 10^12 photons cm^-2 (2 C·cm^-2) at 5.5 kHz mm^-2 counting rates, validating the efficacy of an initial low gain full-field scrub.
Concepts, challenges, and successes in modeling thermodynamics of metabolism.
Cannon, William R
2014-01-01
The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.
Marginalized zero-altered models for longitudinal count data.
Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A
2016-10-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
Marginalized zero-altered models for longitudinal count data
Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.
2015-01-01
Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423
Mixing Enhancement by Tabs in Round Supersonic Jets
NASA Technical Reports Server (NTRS)
Seiner, John M.; Grosch, C. E.
1998-01-01
The objective of this study was to analyze jet plume mass flow entrainment rates associated with the introduction of counter-rotating streamwise vorticity by prism-shaped devices (tabs) located at the lip of the nozzle. We have examined the resulting mixing process through coordinated experimental tests and numerical simulations of the supersonic flow from a model axisymmetric nozzle. In the numerical simulations, the total induced vorticity was held constant while varying the distribution of counter-rotating vorticity around the nozzle lip trailing edge. In the experiment, the number of tabs applied was varied while holding the total projected area constant. Evaluations were also conducted on initial vortex strength. The results of this work show that the initial growth rate of the jet shear layer is increasingly enhanced as more tabs are added, but that the lowest tab count results in the largest entrained mass flow. The numerical simulations confirm these results.
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
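The detection model described above lends itself to a small simulation: birds are placed uniformly in a disc of radius w, per-occasion detection probability falls off with distance as a half-normal with g(0) = 1, and the fraction of the population ever detected over four occasions is tallied. The sigma and w values below are illustrative only.

```python
# Half-normal distance-dependent detection in a fixed-radius point count:
# g(r) = exp(-r^2 / (2 sigma^2)), birds placed uniformly over the disc of
# radius w, four counting occasions. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def detected_fraction(n_birds=10_000, w=100.0, sigma=60.0, occasions=4):
    r = w * np.sqrt(rng.random(n_birds))            # uniform density over the disc
    g = np.exp(-r**2 / (2 * sigma**2))              # per-occasion detection probability
    p_ever = 1 - (1 - g) ** occasions
    return np.mean(rng.random(n_birds) < p_ever)

for sigma in (40, 60, 80):
    print(f"sigma = {sigma} m: fraction of birds counted ~ {detected_fraction(sigma=sigma):.2f}")
```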
NASA Astrophysics Data System (ADS)
Bristow, Quentin
1990-01-01
Part one of this two-part study is concerned with the multiple coincidences in pulse trains from X-ray and gamma radiation detectors which are the cause of pulse pileup. A sequence of pulses with inter-arrival times less than tau, the resolving time of the pulse-height analysis system used to acquire spectra, is called a multiple pulse string. Such strings can be classified on the basis of the number of pulses they contain, or the number of resolving times they cover. The occurrence rates of such strings are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a NaI(Tl) scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Part two of the study is concerned with a theoretical analysis of pulse pileup and the development of a discrete correction algorithm, based on the use of a function to simulate the coincidence spectrum produced by partial sums of pulses. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern ADC's, were used to generate pileup spectra due to coincidences between two pulses, (1st order pileup) and three pulses (2nd order pileup), for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum which can be regarded as an impulse response for a particular pulse shape. The use of a flat spectrum (identical count rates in all channels) in the simulations, and in a parallel theoretical analysis, showed the 1st order pileup distorted the spectrum to a linear ramp with a pileup tail. The correction algorithm was successfully applied to correct entire spectra for 1st and 2nd order pileup; both those generated by Monte Carlo simulations and in addition some real spectra acquired with a laboratory multichannel analysis system.
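As a sketch of the kind of Monte Carlo treatment described above (an illustration only, not a reproduction of the paper's method), the following simulates a Poisson pulse train, counts the fraction of pulses arriving within a resolving time tau of their predecessor, and tallies the sizes of the resulting multiple pulse strings. The rate and resolving time are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

rate = 5.0e4        # mean event rate (counts per second), illustrative value
tau = 2.0e-6        # resolving time of the pulse-height analysis chain (s)
n_events = 1_000_000

# Poisson process: exponential inter-arrival times
dt = rng.exponential(1.0 / rate, size=n_events)

# A pulse "piles up" on its predecessor when the gap is shorter than tau.
piled = dt < tau
frac_simulated = piled.mean()
frac_theory = 1.0 - np.exp(-rate * tau)   # P(next pulse within tau) for Poisson arrivals
print(f"simulated pile-up fraction: {frac_simulated:.5f}")
print(f"theoretical 1 - exp(-n*tau): {frac_theory:.5f}")

# Lengths of multiple-pulse strings: runs of consecutive gaps shorter than tau.
# A run of k short gaps corresponds to a string of k+1 coincident pulses.
run_lengths = np.diff(np.flatnonzero(np.concatenate(([True], ~piled, [True])))) - 1
run_lengths = run_lengths[run_lengths > 0]
strings, counts = np.unique(run_lengths + 1, return_counts=True)
for s, c in zip(strings[:4], counts[:4]):
    print(f"strings of {s} pulses: {c}")
```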
NASA Astrophysics Data System (ADS)
Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon
2018-05-01
A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may cause distortion of the Poisson statistic of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid model obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter; i.e. whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for paralyzable and hybrid models.
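The following minimal sketch (assumed rate, dead time, and sample size; not the GMSIM code) applies the classic nonparalyzable and paralyzable dead time models to a simulated Poisson pulse train and reports the observed count rate together with the skewness and excess kurtosis of the recorded time-interval distribution, the statistics discussed above. For a perfect detector these are approximately 2 and 6; the nonparalyzable model preserves them because it only shifts the exponential interval distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

def apply_dead_time(t, tau, paralyzable):
    """Return the recorded event times after applying a dead-time model."""
    recorded = []
    blocked_until = -np.inf
    for ti in t:
        if ti >= blocked_until:
            recorded.append(ti)
            blocked_until = ti + tau
        elif paralyzable:
            # A paralyzable detector extends its dead period for every event,
            # recorded or not.
            blocked_until = ti + tau
    return np.asarray(recorded)

true_rate, tau, n = 2.0e4, 1.0e-4, 200_000
t = np.cumsum(rng.exponential(1.0 / true_rate, size=n))

for name, par in (("nonparalyzable", False), ("paralyzable", True)):
    rec = apply_dead_time(t, tau, par)
    obs_rate = len(rec) / (t[-1] - t[0])
    gaps = np.diff(rec)
    z = (gaps - gaps.mean()) / gaps.std()
    skew, ex_kurt = np.mean(z**3), np.mean(z**4) - 3.0
    print(f"{name:15s} observed rate {obs_rate:8.0f} cps, "
          f"skewness {skew:4.2f}, excess kurtosis {ex_kurt:4.2f}")
# For a perfect detector the intervals are exponential, with skewness ~2 and
# excess kurtosis ~6, the reference values quoted in the abstract.
```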
Fortune, Brad; Reynaud, Juan; Cull, Grant; Burgoyne, Claude F.; Wang, Lin
2014-01-01
Purpose To evaluate the effect of age on optic nerve axon counts, spectral-domain optical coherence tomography (SDOCT) scan quality, and peripapillary retinal nerve fiber layer thickness (RNFLT) measurements in healthy monkey eyes. Methods In total, 83 healthy rhesus monkeys were included in this study (age range: 1.2–26.7 years). Peripapillary RNFLT was measured by SDOCT. An automated algorithm was used to count 100% of the axons and measure their cross-sectional area in postmortem optic nerve tissue samples (N = 46). Simulation experiments were done to determine the effects of optical changes on measurements of RNFLT. An objective, fully-automated method was used to measure the diameter of the major blood vessel profiles within each SDOCT B-scan. Results Peripapillary RNFLT was negatively correlated with age in cross-sectional analysis (P < 0.01). The best-fitting linear model was RNFLT(μm) = −0.40 × age(years) + 104.5 μm (R2 = 0.1, P < 0.01). Age had very little influence on optic nerve axon count; the result of the best-fit linear model was axon count = −1364 × Age(years) + 1,210,284 (R2 < 0.01, P = 0.74). Older eyes lost the smallest diameter axons and/or axons had an increased diameter in the optic nerve of older animals. There was an inverse correlation between age and SDOCT scan quality (R = −0.65, P < 0.0001). Simulation experiments revealed that approximately 17% of the apparent cross-sectional rate of RNFLT loss is due to reduced scan quality associated with optical changes of the aging eye. Another 12% was due to thinning of the major blood vessels. Conclusions RNFLT declines by 4 μm per decade in healthy rhesus monkey eyes. This rate is approximately three times faster than loss of optic nerve axons. Approximately one-half of this difference is explained by optical degradation of the aging eye reducing SDOCT scan quality and thinning of the major blood vessels. Translational Relevance Current models used to predict retinal ganglion cell losses should be reconsidered. PMID:24932430
NASA Astrophysics Data System (ADS)
Matsuura, Hideharu
2015-04-01
High-resolution silicon X-ray detectors with a large active area are required for effectively detecting traces of hazardous elements in food and soil through the measurement of the energies and counts of X-ray fluorescence photons radially emitted from these elements. The thicknesses and areas of commercial silicon drift detectors (SDDs) are up to 0.5 mm and 1.5 cm2, respectively. We describe 1.5-mm-thick gated SDDs (GSDDs) that can detect photons with energies up to 50 keV. We simulated the electric potential distributions in GSDDs with a Si thickness of 1.5 mm and areas from 0.18 to 168 cm2 at a single high reverse bias. The area of a GSDD could be enlarged simply by increasing all the gate widths by the same multiple, and the capacitance of the GSDD remained small and its X-ray count rate remained high.
Accurate 3D reconstruction by a new PDS-OSEM algorithm for HRRT
NASA Astrophysics Data System (ADS)
Chen, Tai-Been; Horng-Shing Lu, Henry; Kim, Hang-Keun; Son, Young-Don; Cho, Zang-Hee
2014-03-01
State-of-the-art high resolution research tomography (HRRT) provides high resolution PET images with full 3D human brain scanning. However, a short time frame in a dynamic study causes many problems related to the low counts in the acquired data. The PDS-OSEM algorithm was proposed to reconstruct the HRRT image with a high signal-to-noise ratio that provides accurate information for dynamic data. The new algorithm was evaluated by simulated images, empirical phantoms, and real human brain data. Meanwhile, the time activity curve was adopted to compare the reconstruction performance of the PDS-OSEM and OP-OSEM algorithms on dynamic data. According to simulated and empirical studies, the PDS-OSEM algorithm reconstructs images with higher quality, higher accuracy, less noise, and less average sum of square error than those of OP-OSEM. The presented algorithm is useful for providing quality images under the condition of low count rates in dynamic studies with a short scan time.
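For readers unfamiliar with this family of algorithms, the following toy example (a generic MLEM update on a random system matrix, not the PDS-OSEM algorithm of the abstract) illustrates how the statistical quality of the reconstruction degrades as the acquired counts decrease. All sizes and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy system: 64 image voxels, 256 detector bins, random nonnegative system matrix.
n_vox, n_bins = 64, 256
A = rng.uniform(size=(n_bins, n_vox))
x_true = rng.gamma(2.0, 1.0, size=n_vox)

def mlem(y, A, n_iter=50):
    """Plain MLEM: x <- x * A^T (y / Ax) / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # A^T 1, the sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

for scale in (100.0, 1.0):                    # high-count vs low-count acquisition
    y = rng.poisson(scale * (A @ x_true))
    x_hat = mlem(y, A, n_iter=50) / scale
    nrmse = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"mean counts/bin {y.mean():8.1f}  NRMSE {nrmse:.3f}")
```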
Hypothesis tests for the detection of constant speed radiation moving sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir
2015-07-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal to noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on the empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal to noise ratio variations between 2 and 0.8. (authors)
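The contrast drawn above, between a test built on the Poisson model and one built on empirically estimated moments, can be illustrated with a deliberately simple single-channel sketch (not the authors' correlation test; the background level, sample size, and observed count are assumptions):

```python
import numpy as np
from scipy.stats import poisson, norm

def poisson_test(n_obs, b_expected):
    """One-sided p-value that n_obs counts exceed a known Poisson background."""
    return poisson.sf(n_obs - 1, b_expected)      # P(X >= n_obs)

def empirical_test(n_obs, bkg_samples):
    """Gaussian test using the empirically estimated background mean/variance."""
    mu, sd = np.mean(bkg_samples), np.std(bkg_samples, ddof=1)
    return norm.sf((n_obs - mu) / sd)

rng = np.random.default_rng(11)
b = 4.0                                           # mean background counts per window
bkg_samples = rng.poisson(b, size=20)             # short background history
n_obs = 10                                        # counts in the test window
print("Poisson test p-value:  ", poisson_test(n_obs, b))
print("empirical test p-value:", empirical_test(n_obs, bkg_samples))
# At low count rates the exact Poisson tail differs noticeably from the Gaussian
# approximation, which is the motivation for a test built on the Poisson model.
```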
A Bayesian method for detecting pairwise associations in compositional data
Ventz, Steffen; Huttenhower, Curtis
2017-01-01
Compositional data consist of vectors of proportions normalized to a constant sum from a basis of unobserved counts. The sum constraint makes inference on correlations between unconstrained features challenging due to the information loss from normalization. However, such correlations are of long-standing interest in fields including ecology. We propose a novel Bayesian framework (BAnOCC: Bayesian Analysis of Compositional Covariance) to estimate a sparse precision matrix through a LASSO prior. The resulting posterior, generated by MCMC sampling, allows uncertainty quantification of any function of the precision matrix, including the correlation matrix. We also use a first-order Taylor expansion to approximate the transformation from the unobserved counts to the composition in order to investigate what characteristics of the unobserved counts can make the correlations more or less difficult to infer. On simulated datasets, we show that BAnOCC infers the true network as well as previous methods while offering the advantage of posterior inference. Larger and more realistic simulated datasets further showed that BAnOCC performs well as measured by type I and type II error rates. Finally, we apply BAnOCC to a microbial ecology dataset from the Human Microbiome Project, which in addition to reproducing established ecological results revealed unique, competition-based roles for Proteobacteria in multiple distinct habitats. PMID:29140991
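The difficulty motivating this work, namely that normalizing independent counts to proportions induces spurious correlations, can be demonstrated in a few lines. The sketch below illustrates only the closure effect, not the BAnOCC method itself; the Poisson means are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Independent (uncorrelated) underlying counts for three features.
counts = rng.poisson(lam=[200, 300, 400], size=(5000, 3))

# Closure: normalize each sample to proportions summing to one.
props = counts / counts.sum(axis=1, keepdims=True)

print("correlation of raw counts (features 0,1):",
      np.corrcoef(counts[:, 0], counts[:, 1])[0, 1].round(3))
print("correlation after closure (features 0,1):",
      np.corrcoef(props[:, 0], props[:, 1])[0, 1].round(3))
# The underlying counts are independent, yet the proportions show a spurious
# (typically negative) correlation induced purely by the sum constraint.
```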
Probing the Cosmic Gamma-Ray Burst Rate with Trigger Simulations of the Swift Burst Alert Telescope
NASA Technical Reports Server (NTRS)
Lien, Amy; Sakamoto, Takanori; Gehrels, Neil; Palmer, David M.; Barthelmy, Scott D.; Graziani, Carlo; Cannizzo, John K.
2013-01-01
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest bursts need to be dimmer than previously expected to avoid over-producing the number of detections and to match with Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star-formation rate measurements, unless other factors, such as the luminosity evolution, are taken into account. The GRB rate from our best result gives a total of 4568 (+825/-1429) GRBs per year that are beamed toward us in the whole universe.
NASA Astrophysics Data System (ADS)
Herrmann, Christoph; Engel, Klaus-Jürgen; Wiegert, Jens
2010-12-01
The most obvious problem in obtaining spectral information with energy-resolving photon counting detectors in clinical computed tomography (CT) is the huge x-ray flux present in conventional CT systems. At high tube voltages (e.g. 140 kVp), despite the beam shaper, this flux can be close to 10⁹ cps mm⁻² in the direct beam or in regions behind the object, which are close to the direct beam. Without accepting the drawbacks of truncated reconstruction, i.e. estimating missing direct-beam projection data, a photon-counting energy-resolving detector has to be able to deal with such high count rates. Sub-structuring pixels into sub-pixels is not enough to reduce the count rate per pixel to values that today's direct converting Cd[Zn]Te material can cope with (<=10 Mcps in an optimistic view). Below 300 µm pixel pitch, x-ray cross-talk (Compton scatter and K-escape) and the effect of charge diffusion between pixels are problematic. By organising the detector in several different layers, the count rate can be further reduced. However this alone does not limit the count rates to the required level, since the high stopping power of the material becomes a disadvantage in the layered approach: a simple absorption calculation for 300 µm pixel pitch shows that the layer thickness required to stay below 10 Mcps/pixel for the top layers in the direct beam is significantly below 100 µm. In a horizontal multi-layer detector, such thin layers are very difficult to manufacture due to the brittleness of Cd[Zn]Te. In a vertical configuration (also called edge-on illumination (Ludqvist et al 2001 IEEE Trans. Nucl. Sci. 48 1530-6, Roessl et al 2008 IEEE NSS-MIC-RTSD 2008, Conf. Rec. Talk NM2-3)), bonding of the readout electronics (with pixel pitches below 100 µm) is not straightforward although it has already been done successfully (Pellegrini et al 2004 IEEE NSS MIC 2004 pp 2104-9). Obviously, for the top detector layers, materials with lower stopping power would be advantageous. The possible choices are, however, quite limited, since only 'mature' materials, which operate at room temperature and can be manufactured reliably should reasonably be considered. Since GaAs is still known to cause reliability problems, the simplest choice is Si, however with the drawback of strong Compton scatter which can cause considerable inter-pixel cross-talk. To investigate the potential and the problems of Si in a multi-layer detector, in this paper the combination of top detector layers made of Si with lower layers made of Cd[Zn]Te is studied by using Monte Carlo simulated detector responses. It is found that the inter-pixel cross-talk due to Compton scatter is indeed very high; however, with an appropriate cross-talk correction scheme, which is also described, the negative effects of cross-talk are shown to be removed to a very large extent.
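The simple absorption calculation referred to above can be reproduced in outline with the Beer-Lambert law. In the sketch below, the incident flux, the count-rate limit per pixel, and in particular the effective linear attenuation coefficient are assumed round numbers rather than values taken from the paper:

```python
import numpy as np

# Illustrative absorption calculation: how thick may the top detector layer be
# if each pixel must stay below a given count-rate limit in the direct beam?
flux = 1.0e9                # incident photons per s per mm^2 in the direct beam (assumed)
pitch_mm = 0.300            # pixel pitch (300 um)
rate_limit = 10.0e6         # tolerable counts per s per pixel (10 Mcps)
mu_mm = 3.0                 # assumed effective linear attenuation coefficient (1/mm)

pixel_area = pitch_mm ** 2
incident_per_pixel = flux * pixel_area          # ~9e7 photons/s/pixel
# Fraction of incident photons the layer may absorb:
max_absorbed_fraction = rate_limit / incident_per_pixel
# Beer-Lambert: absorbed fraction = 1 - exp(-mu * t)  ->  t = -ln(1 - f) / mu
t_max_mm = -np.log(1.0 - max_absorbed_fraction) / mu_mm
print(f"incident rate per pixel: {incident_per_pixel:.2e} /s")
print(f"maximum layer thickness: {t_max_mm * 1000:.0f} um")
```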
NASA Astrophysics Data System (ADS)
Hsieh, Scott S.; Pelc, Norbert J.
2014-06-01
Photon counting x-ray detectors (PCXDs) offer several advantages compared to standard energy-integrating x-ray detectors, but also face significant challenges. One key challenge is the high count rates required in CT. At high count rates, PCXDs exhibit count rate loss and show reduced detective quantum efficiency in signal-rich (or high flux) measurements. In order to reduce count rate requirements, a dynamic beam-shaping filter can be used to redistribute flux incident on the patient. We study the piecewise-linear attenuator in conjunction with PCXDs without energy discrimination capabilities. We examined three detector models: the classic nonparalyzable and paralyzable detector models, and a ‘hybrid’ detector model which is a weighted average of the two which approximates an existing, real detector (Taguchi et al 2011 Med. Phys. 38 1089-102 ). We derive analytic expressions for the variance of the CT measurements for these detectors. These expressions are used with raw data estimated from DICOM image files of an abdomen and a thorax to estimate variance in reconstructed images for both the dynamic attenuator and a static beam-shaping (‘bowtie’) filter. By redistributing flux, the dynamic attenuator reduces dose by 40% without increasing peak variance for the ideal detector. For non-ideal PCXDs, the impact of count rate loss is also reduced. The nonparalyzable detector shows little impact from count rate loss, but with the paralyzable model, count rate loss leads to noise streaks that can be controlled with the dynamic attenuator. With the hybrid model, the characteristic count rates required before noise streaks dominate the reconstruction are reduced by a factor of 2 to 3. We conclude that the piecewise-linear attenuator can reduce the count rate requirements of the PCXD in addition to improving dose efficiency. The magnitude of this reduction depends on the detector, with paralyzable detectors showing much greater benefit than nonparalyzable detectors.
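For reference, the three detector models named above correspond, in their textbook form, to the expressions in the following sketch; the dead time tau and the hybrid weighting w are illustrative assumptions, and the weighted average is only a simplified stand-in for the hybrid model the authors cite.

```python
import numpy as np

tau = 100e-9                      # detector dead time per event (assumed 100 ns)
w = 0.5                           # hybrid weighting between the two classic models
true_rate = np.logspace(5, 9, 9)  # incident counts per second per pixel

nonparalyzable = true_rate / (1.0 + true_rate * tau)
paralyzable = true_rate * np.exp(-true_rate * tau)
hybrid = w * nonparalyzable + (1.0 - w) * paralyzable

for n, m_np, m_p, m_h in zip(true_rate, nonparalyzable, paralyzable, hybrid):
    print(f"n = {n:9.2e}  nonpar = {m_np:9.2e}  par = {m_p:9.2e}  hybrid = {m_h:9.2e}")
# The paralyzable response rolls over and collapses at high flux, while the
# nonparalyzable response saturates at 1/tau; the hybrid lies in between.
```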
Iwanishi, Katsuhiro; Watabe, Hiroshi; Hayashi, Takuya; Miyake, Yoshinori; Minato, Kotaro; Iida, Hidehiro
2009-06-01
Cerebral blood flow (CBF), cerebral metabolic rate of oxygen (CMRO(2)), oxygen extraction fraction (OEF), and cerebral blood volume (CBV) are quantitatively measured with PET with (15)O gases. Kudomi et al. developed a dual tracer autoradiographic (DARG) protocol that enables the duration of a PET study to be shortened by sequentially administrating (15)O(2) and C(15)O(2) gases. In this protocol, before the sequential PET scan with (15)O(2) and C(15)O(2) gases ((15)O(2)-C(15)O(2) PET scan), a PET scan with C(15)O should be preceded to obtain CBV image. C(15)O has a high affinity for red blood cells and a very slow washout rate, and residual radioactivity from C(15)O might exist during a (15)O(2)-C(15)O(2) PET scan. As the current DARG method assumes no residual C(15)O radioactivity before scanning, we performed computer simulations to evaluate the influence of the residual C(15)O radioactivity on the accuracy of measured CBF and OEF values with DARG method and also proposed a subtraction technique to minimize the error due to the residual C(15)O radioactivity. In the simulation, normal and ischemic conditions were considered. The (15)O(2) and C(15)O(2) PET count curves with the residual C(15)O PET counts were generated by the arterial input function with the residual C(15)O radioactivity. The amounts of residual C(15)O radioactivity were varied by changing the interval between the C(15)O PET scan and (15)O(2)-C(15)O(2) PET scan, and the absolute inhaled radioactivity of the C(15)O gas. Using the simulated input functions and the PET counts, the CBF and OEF were computed by the DARG method. Furthermore, we evaluated a subtraction method that subtracts the influence of the C(15)O gas in the input function and PET counts. Our simulations revealed that the CBF and OEF values were underestimated by the residual C(15)O radioactivity. The magnitude of this underestimation depended on the amount of C(15)O radioactivity and the physiological conditions. This underestimation was corrected by the subtraction method. This study showed the influence of C(15)O radioactivity in DARG protocol, and the magnitude of the influence was affected by several factors, such as the radioactivity of C(15)O, and the physiological condition.
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is related to the fact that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass ( 240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste however may have quite different matrices and source distributions compared to the calibration samples. This often results in a bias of the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique including an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are aimed at being measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory—which will be indicated as Time Interval Analysis (TIA)—is complementary to Time Correlation Analysis (TCA) theories which were developed in the past, but is from the theoretical point of view much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
NASA Astrophysics Data System (ADS)
Eriksson, L.; Wienhard, K.; Eriksson, M.; Casey, M. E.; Knoess, C.; Bruckbauer, T.; Hamill, J.; Mulnix, T.; Vollmar, S.; Bendriem, B.; Heiss, W. D.; Nutt, R.
2002-06-01
The first and second generations of the Exact and Exact HR family of scanners have been evaluated in terms of noise equivalent count rate (NEC) and count-rate capabilities. The new National Electrical Manufacturers Association standard was used for the evaluation. In spite of improved electronics and improved count-rate capabilities, the peak NEC was found to be fairly constant between the generations. The results are discussed in terms of the different electronic solutions for the two generations and their implications for system dead time and NEC count-rate capability.
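The NEC figure of merit used in such NEMA evaluations has a standard definition, reproduced in the short sketch below with purely hypothetical trues, scatters, and randoms rates:

```python
def noise_equivalent_count_rate(trues, scatters, randoms, k=1.0):
    """NEC = T^2 / (T + S + k*R), the standard NEMA figure of merit.

    k is 1 for noiseless (variance-free) randoms estimation and 2 when randoms
    are estimated from a delayed coincidence window.
    """
    return trues ** 2 / (trues + scatters + k * randoms)

# Hypothetical rates (counts per second) for illustration only.
print(noise_equivalent_count_rate(trues=250e3, scatters=120e3, randoms=300e3, k=2.0))
```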
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
Estimating demographic parameters using a combination of known-fate and open N-mixture models.
Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G
2015-10-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.
Compensated count-rate circuit for radiation survey meter
Todd, Richard A.
1981-01-01
A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse rate current applied to a current meter calibrated to indicate dose rate, the compensated circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector so that the current flowing through the meter is varied in accordance with both the frequency and amplitude of the reference voltage pulses applied thereto so that the count rate is compensated ideally to indicate a true count rate within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks true count rate indicative of the radiation dose rate.
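The size of the counting-loss correction such a circuit must supply can be gauged from the classical nonparalyzable dead time relation. The sketch below inverts that relation for a few observed rates; the dead time and rates are assumed values, and the sketch does not model the reference-voltage circuit itself.

```python
def true_rate_nonparalyzable(observed_rate, dead_time):
    """Invert m = n / (1 + n*tau): the classical nonparalyzable correction."""
    return observed_rate / (1.0 - observed_rate * dead_time)

tau = 100e-6                       # assumed G-M dead time, 100 microseconds
for m in (500.0, 2000.0, 5000.0):  # observed count rates (counts per second)
    n = true_rate_nonparalyzable(m, tau)
    dead_fraction = m * tau        # fraction of time the detector is dead
    print(f"observed {m:6.0f} cps -> corrected {n:7.0f} cps "
          f"(detector dead {dead_fraction:.0%} of the time)")
```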
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
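The approach described, Newton-Raphson inversion of the counts-to-EU polynomial seeded by a reverse linear interpolation between the EU limits, can be sketched as follows. The paper presents FORTRAN; the sketch below uses Python for brevity, and the calibration coefficients, count limits, and tolerance are illustrative assumptions.

```python
import numpy as np

def eu_to_counts(eu, coeffs, count_limits, eu_limits, tol=0.5, max_iter=20):
    """Invert a counts->EU calibration polynomial with Newton-Raphson.

    coeffs are polynomial coefficients (lowest order first) giving
    EU = p(counts).  The initial guess is a reverse linear interpolation
    between the EU values at the telemetry count limits.
    """
    p = np.polynomial.Polynomial(coeffs)
    dp = p.deriv()
    c_lo, c_hi = count_limits
    eu_lo, eu_hi = eu_limits
    # Reverse linear interpolation for the starting value.
    counts = c_lo + (eu - eu_lo) * (c_hi - c_lo) / (eu_hi - eu_lo)
    for _ in range(max_iter):
        err = p(counts) - eu
        if abs(err) < tol * abs(dp(counts)):   # within ~tol counts of the root
            break
        counts -= err / dp(counts)
    return int(round(min(max(counts, c_lo), c_hi)))

# Example: a cubic calibration over a 12-bit counter (all values illustrative).
coeffs = [-5.0, 0.03, 1.5e-6, -2.0e-10]
counts = eu_to_counts(eu=40.0, coeffs=coeffs, count_limits=(0, 4095),
                      eu_limits=(-5.0, np.polynomial.Polynomial(coeffs)(4095)))
print(counts)
```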
Molenaar, Heike; Glawe, Martin; Boehm, Robert; Piepho, Hans-Peter
2017-01-01
Ornamental plant variety improvement is limited by current phenotyping approaches and neglected use of experimental designs. The present study was conducted to show the benefits of using an experimental design and corresponding analysis in ornamental breeding regarding simulated response to selection in Pelargonium zonale for production-related traits. This required establishment of phenotyping protocols for root formation and stem cutting counts, with which 974 genotypes were assessed in a two-phase experimental design. The present paper evaluates this protocol. The possibility of varietal improvement through indirect selection on secondary traits such as branch count and flower count was assessed by genetic correlations. Simulated response to selection varied greatly, depending on the genotypic variances of the breeding population and traits. A varietal improvement of over 20% is possible for stem cutting count, root formation, branch count and flower count. In contrast, indirect selection of stem cutting count by branch count or flower count was found to be ineffective. The established phenotypic protocols and two-phase experimental designs are valuable tools for breeding of P. zonale. PMID:28243453
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Katherine A.; Finley, Patrick D.; Moore, Thomas W.
2013-09-01
Infectious diseases can spread rapidly through healthcare facilities, resulting in widespread illness among vulnerable patients. Computational models of disease spread are useful for evaluating mitigation strategies under different scenarios. This report describes two infectious disease models built for the US Department of Veterans Affairs (VA) motivated by a Varicella outbreak in a VA facility. The first model simulates disease spread within a notional contact network representing staff and patients. Several interventions, along with initial infection counts and intervention delay, were evaluated for effectiveness at preventing disease spread. The second model adds staff categories, location, scheduling, and variable contact rates to improve resolution. This model achieved more accurate infection counts and enabled a more rigorous evaluation of comparative effectiveness of interventions.
Statistics of Magnetic Reconnection X-Lines in Kinetic Turbulence
NASA Astrophysics Data System (ADS)
Haggerty, C. C.; Parashar, T.; Matthaeus, W. H.; Shay, M. A.; Wan, M.; Servidio, S.; Wu, P.
2016-12-01
In this work we examine the statistics of magnetic reconnection (x-lines) and their associated reconnection rates in intermittent current sheets generated in turbulent plasmas. Although such statistics have been studied previously for fluid simulations (e.g. [1]), they have not yet been generalized to fully kinetic particle-in-cell (PIC) simulations. A significant problem with PIC simulations, however, is electrostatic fluctuations generated due to numerical particle counting statistics. We find that analyzing gradients of the magnetic vector potential from the raw PIC field data identifies numerous artificial (or non-physical) x-points. Using small Orszag-Tang vortex PIC simulations, we analyze x-line identification and show that these artificial x-lines can be removed using sub-Debye length filtering of the data. We examine how turbulent properties such as the magnetic spectrum and scale dependent kurtosis are affected by particle noise and sub-Debye length filtering. We subsequently apply these analysis methods to a large scale kinetic PIC turbulent simulation. Consistent with previous fluid models, we find a range of normalized reconnection rates as large as ½, but with the bulk of the rates being approximately 0.1 or less. [1] Servidio, S., W. H. Matthaeus, M. A. Shay, P. A. Cassak, and P. Dmitruk (2009), Magnetic reconnection and two-dimensional magnetohydrodynamic turbulence, Phys. Rev. Lett., 102, 115003.
Accelerating a Particle-in-Cell Simulation Using a Hybrid Counting Sort
NASA Astrophysics Data System (ADS)
Bowers, K. J.
2001-11-01
In this article, performance limitations of the particle advance in a particle-in-cell (PIC) simulation are discussed. It is shown that the memory subsystem and cache-thrashing severely limit the speed of such simulations. Methods to implement a PIC simulation under such conditions are explored. An algorithm based on a counting sort is developed which effectively eliminates PIC simulation cache thrashing. Sustained performance gains of 40 to 70 percent are measured on commodity workstations for a minimal 2d2v electrostatic PIC simulation. More complete simulations are expected to have even better results as larger simulations are usually even more memory subsystem limited.
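A minimal sketch of the central idea, reordering particles by cell index with a counting sort so that particles sharing a cell sit contiguously in memory, is given below. It is written in Python with NumPy purely for illustration (the original work targets compiled PIC codes), and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

n_cells, n_particles = 1024, 200_000
cell = rng.integers(0, n_cells, size=n_particles)       # cell index of each particle
x = rng.uniform(size=n_particles)                        # a per-particle quantity

# Counting sort by cell index: O(N + n_cells), stable, no comparisons.
counts = np.bincount(cell, minlength=n_cells)
offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))  # first slot of each cell
order = np.empty(n_particles, dtype=np.int64)
cursor = offsets.copy()
for i, c in enumerate(cell):
    order[cursor[c]] = i
    cursor[c] += 1

cell_sorted, x_sorted = cell[order], x[order]
assert np.all(np.diff(cell_sorted) >= 0)
# After the sort, particles sharing a cell are adjacent in memory, so gathering
# and scattering field data during the particle advance stays cache friendly.
```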
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility surveyed from an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
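To illustrate why time-interval data lend themselves to online detection, the sketch below runs a CUSUM-style sequential test on simulated inter-arrival times and reports the time needed to raise an alarm when a weak source is present. It illustrates only the general idea, not the Bayesian formulation of the abstract, and the rates and error targets are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

bkg_rate = 5.0          # background count rate (cps), illustrative
src_rate = 2.0          # additional rate from a weak source (cps)
alpha, beta = 1e-3, 0.1 # target false-alarm and miss probabilities

lam0, lam1 = bkg_rate, bkg_rate + src_rate
upper = np.log((1.0 - beta) / alpha)    # alarm threshold from Wald's approximation

def time_to_alarm(true_rate, max_events=100_000):
    """CUSUM-style sequential test on inter-arrival times (online)."""
    llr, t_elapsed = 0.0, 0.0
    for _ in range(max_events):
        dt = rng.exponential(1.0 / true_rate)
        t_elapsed += dt
        # Log-likelihood ratio increment for an exponential inter-arrival time.
        llr += np.log(lam1 / lam0) - (lam1 - lam0) * dt
        llr = max(llr, 0.0)             # one-sided: never accumulate evidence for H0
        if llr >= upper:
            return t_elapsed
    return np.inf

with_source = [time_to_alarm(lam1) for _ in range(200)]
print("median time to alarm with source present: %.1f s" % np.median(with_source))
```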
NASA Astrophysics Data System (ADS)
Karaoglanis, K.; Efthimiou, N.; Tsoumpas, C.
2015-09-01
Low count PET data is a challenge for medical image reconstruction. The statistics of a dataset are a key factor in the quality of the reconstructed images. Reconstruction algorithms able to compensate for low count datasets could provide the means to reduce injected patient doses and/or scan times. It has been shown that the use of priors improves the image quality in low count conditions. In this study we compared regularised versus post-filtered OSEM for their performance on challenging simulated low count datasets. Initial visual comparison demonstrated that both algorithms improve the image quality, although the use of regularisation does not introduce the undesired blurring that post-filtering does.
Data-based Considerations in Portal Radiation Monitoring of Cargo Vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weier, Dennis R.; O'Brien, Robert F.; Ely, James H.
2004-07-01
Radiation portal monitoring of cargo vehicles often includes a configuration of four-panel monitors that record gamma and neutron counts from vehicles transporting cargo. As vehicles pass the portal monitors, they generate a count profile over time that can be compared to the average panel background counts obtained just prior to the time the vehicle entered the area of the monitors. Pacific Northwest National Laboratory has accumulated considerable data regarding such background radiation and vehicle profiles from portal installations, as well as in experimental settings using known sources and cargos. Several considerations have a bearing on how alarm thresholds are set in order to maintain sensitivity to radioactive sources while also controlling to a manageable level the rate of false or nuisance alarms. False alarms are statistical anomalies while nuisance alarms occur due to the presence of naturally occurring radioactive material (NORM) in cargo, for example, kitty litter. Considerations to be discussed include:
• Background radiation suppression due to the shadow shielding from the vehicle.
• The impact of the relative placement of the four panels on alarm decision criteria.
• Use of plastic scintillators to separate gamma counts into energy windows.
• The utility of using ratio criteria for the energy window counts rather than simply using total window counts.
• Detection likelihood for these various decision criteria based on computer simulated injections of sources into vehicle profiles.
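The interplay between the background measured just before a vehicle arrives, shadow shielding during the pass, and the alarm threshold can be illustrated with a simple k-sigma gross-count sketch. All rates, the suppression fraction, and the threshold multiplier are assumed values, and the sketch is not the decision logic of any deployed system.

```python
import numpy as np

rng = np.random.default_rng(6)

bkg_rate = 1200.0        # panel background (counts/s) measured before the vehicle
dwell = 0.1              # profile time bin (s)
k_sigma = 4.0            # alarm threshold multiplier chosen for a low false-alarm rate
suppression = 0.15       # fraction of background shadowed by the vehicle (assumed)
source_rate = 800.0      # net counts/s the source adds at closest approach (assumed)

mu_bkg = bkg_rate * dwell
threshold = mu_bkg + k_sigma * np.sqrt(mu_bkg)

# Simulate one profile bin at closest approach, with and without the source.
n_trials = 100_000
profile_rate = bkg_rate * (1.0 - suppression) + source_rate
hits = rng.poisson(profile_rate * dwell, size=n_trials) > threshold
false = rng.poisson(bkg_rate * (1.0 - suppression) * dwell, size=n_trials) > threshold
print(f"threshold: {threshold:.1f} counts per bin")
print(f"detection probability: {hits.mean():.3f}, nuisance/false rate: {false.mean():.5f}")
# Shadow shielding lowers the counts in the profile, so a threshold set from the
# unsuppressed background is effectively more conservative during the vehicle pass.
```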
NASA Astrophysics Data System (ADS)
Aklan, B.; Jakoby, B. W.; Watson, C. C.; Braun, H.; Ritt, P.; Quick, H. H.
2015-06-01
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of variations of the system design on the PET performance, which are not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application deals with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was found to be less than 10%. Varying the detector geometry showed that the system sensitivity and noise equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as to be expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall accuracy error of less than 10% can now be used for further MC simulation applications such as development of hardware components as well as for testing of new PET/MR software algorithms, such as assessment of point-spread function-based reconstruction algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10² random numbers per ray incident on a detector pixel instead of an estimated 10⁸ random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Validation of a Monte Carlo simulation of the Inveon PET scanner using GATE
NASA Astrophysics Data System (ADS)
Lu, Lijun; Zhang, Houjin; Bian, Zhaoying; Ma, Jianhua; Feng, Qiangjin; Chen, Wufan
2016-08-01
The purpose of this study is to validate the application of the GATE (Geant4 Application for Tomographic Emission) Monte Carlo simulation toolkit in order to model the performance characteristics of the Siemens Inveon small animal PET system. The simulation results were validated against experimental/published data in accordance with the NEMA NU-4 2008 protocol for standardized evaluation of spatial resolution, sensitivity, scatter fraction (SF) and noise equivalent counting rate (NECR) of a preclinical PET system. Agreement to within 18% was obtained between the radial, tangential and axial spatial resolutions of the simulated and experimental results. The simulated peak NECR of the mouse-size phantom agreed with the experimental result, while for the rat-size phantom the simulated value was higher than the experimental result. The simulated and experimental SFs of the mouse- and rat-size phantoms both agreed to within 2%. These results demonstrate the feasibility of our GATE model to accurately simulate, within certain limits, all major performance characteristics of the Inveon PET system.
Relationship between salivary flow rates and Candida albicans counts.
Navazesh, M; Wood, G J; Brightman, V J
1995-09-01
Seventy-one persons (48 women, 23 men; mean age, 51.76 years) were evaluated for salivary flow rates and Candida albicans counts. Each person was seen on three different occasions. Samples of unstimulated whole, chewing-stimulated whole, acid-stimulated parotid, and candy-stimulated parotid saliva were collected under standardized conditions. An oral rinse was also obtained and evaluated for Candida albicans counts. Unstimulated and chewing-stimulated whole flow rates were negatively and significantly (p < 0.001) related to the Candida counts. Unstimulated whole saliva significantly (p < 0.05) differed in persons with Candida counts of 0 versus <500 versus > or = 500. Chewing-stimulated saliva was significantly (p < 0.05) different in persons with 0 counts compared with those with a > or = 500 count. Differences in stimulated parotid flow rates were not significant among different levels of Candida counts. The results of this study reveal that whole saliva is a better predictor than parotid saliva in identification of persons with high Candida albicans counts.
Improving gross count gamma-ray logging in uranium mining with the NGRS probe
NASA Astrophysics Data System (ADS)
Carasco, C.; Pérot, B.; Ma, J.-L.; Toubon, H.; Dubille-Auchère, A.
2018-01-01
AREVA Mines and the Nuclear Measurement Laboratory of CEA Cadarache are collaborating to improve the sensitivity and precision of uranium concentration measurement by means of gamma ray logging. The determination of uranium concentration in boreholes is performed with the Natural Gamma Ray Sonde (NGRS) based on a NaI(Tl) scintillation detector. The total gamma count rate is converted into uranium concentration using a calibration coefficient measured in concrete blocks with known uranium concentration in the AREVA Mines calibration facility located in Bessines, France. Until now, to take into account gamma attenuation in a variety of boreholes diameters, tubing materials, diameters and thicknesses, filling fluid densities and compositions, a semi-empirical formula was used to correct the calibration coefficient measured in Bessines facility. In this work, we propose to use Monte Carlo simulations to improve gamma attenuation corrections. To this purpose, the NGRS probe and the calibration measurements in the standard concrete blocks have been modeled with MCNP computer code. The calibration coefficient determined by simulation, 5.3 s-1.ppmU-1 ± 10%, is in good agreement with the one measured in Bessines, 5.2 s-1.ppmU-1. Based on the validated MCNP model, several parametric studies have been performed. For instance, the rock density and chemical composition proved to have a limited impact on the calibration coefficient. However, gamma self-absorption in uranium leads to a nonlinear relationship between count rate and uranium concentration beyond approximately 1% of uranium weight fraction, the underestimation of the uranium content reaching more than a factor 2.5 for a 50 % uranium weight fraction. Next steps will concern parametric studies with different tubing materials, diameters and thicknesses, as well as different borehole filling fluids representative of real measurement conditions.
Kawamura, Takahisa; Kasai, Hidefumi; Fermanelli, Valentina; Takahashi, Toshiaki; Sakata, Yukinori; Matsuoka, Toshiyuki; Ishii, Mika; Tanigawara, Yusuke
2018-06-22
Post-marketing surveillance is useful to collect safety data in real-world clinical settings. In this study, we first applied post-marketing real-world data to a mechanistic model analysis of the neutropenic profiles of eribulin in patients with recurrent or metastatic breast cancer (RBC/MBC). Demographic and safety data were collected using an active surveillance method from eribulin-treated RBC/MBC patients. Changes in neutrophil counts over time were analyzed using a mechanistic pharmacodynamic model. Pathophysiological factors that may affect the severity of neutropenia were investigated and neutropenic patterns were simulated for different treatment schedules. Clinical and laboratory data were collected from 401 patients (5199 neutrophil count measurements) who had not received granulocyte colony stimulating factor and were eligible for pharmacodynamic analysis. The estimated mean parameters were: mean transit time = 104.5 h, neutrophil proliferation rate constant = 0.0377 h⁻¹, neutrophil elimination rate constant = 0.0295 h⁻¹, and linear coefficient of drug effect = 0.0413 mL/ng. Low serum albumin levels and low baseline neutrophil counts were associated with severe neutropenia. The probability of grade ≥3 neutropenia was predicted to be 69%, 27%, and 27% for patients on standard, biweekly, and triweekly treatment scenarios, respectively, based on virtual simulations using the developed pharmacodynamic model. In conclusion, this is the first application of post-marketing surveillance data to a model-based safety analysis. This analysis of safety data reflecting authentic clinical settings will provide useful information on the safe use and potential risk factors of eribulin.
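A transit-compartment (Friberg-type) structure is the usual way such neutropenia time courses are modeled, and the sketch below shows that structure using the mean estimates quoted above where they fit. The number of transit compartments, the feedback exponent, the baseline neutrophil count, and the mono-exponential eribulin concentration profile are all assumptions for illustration; this is not the authors' model or parameterization.

```python
import numpy as np

# Mean estimates quoted in the abstract; everything else below is assumed.
MTT = 104.5            # mean transit time (h)
k_prol = 0.0377        # proliferation rate constant (1/h)
k_circ = 0.0295        # circulating-neutrophil elimination rate constant (1/h)
slope = 0.0413         # linear drug effect (mL/ng)
gamma = 0.17           # feedback exponent (assumed)
k_tr = 4.0 / MTT       # transit rate constant for three transit compartments
circ0 = 3.0            # baseline neutrophil count (10^9/L, assumed)

def conc(t_h, doses_h, c_peak=100.0, t_half=36.0):
    """Assumed mono-exponential drug concentration (ng/mL) after each dose."""
    kel = np.log(2.0) / t_half
    return sum(c_peak * np.exp(-kel * (t_h - td)) for td in doses_h if t_h >= td)

def simulate(doses_h, days=42, dt=0.1):
    """Euler integration of a generic Friberg-type myelosuppression model."""
    t_grid = np.arange(0.0, days * 24.0, dt)
    prol = t1 = t2 = t3 = circ = circ0
    out = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        e_drug = min(slope * conc(t, doses_h), 1.0)   # clip so (1 - E) stays >= 0
        feedback = (circ0 / max(circ, 1e-6)) ** gamma
        d_prol = k_prol * prol * (1.0 - e_drug) * feedback - k_tr * prol
        d_t1, d_t2, d_t3 = k_tr * (prol - t1), k_tr * (t1 - t2), k_tr * (t2 - t3)
        d_circ = k_tr * t3 - k_circ * circ
        prol, t1, t2, t3 = (prol + d_prol * dt, t1 + d_t1 * dt,
                            t2 + d_t2 * dt, t3 + d_t3 * dt)
        circ = circ + d_circ * dt
        out[i] = circ
    return t_grid, out

# Standard schedule (days 1 and 8 of a 21-day cycle) versus a biweekly schedule.
for name, doses in (("day 1 + 8 / 21", [0.0, 7 * 24.0]), ("biweekly", [0.0, 14 * 24.0])):
    t, circ = simulate(doses)
    print(f"{name:15s} nadir = {circ.min():.2f} x10^9/L")
```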
Kennedy, Deirdre; Cronin, Ultan P.; Wilkinson, Martin G.
2011-01-01
Three common food pathogenic microorganisms were exposed to treatments simulating those used in food processing. Treated cell suspensions were then analyzed for reduction in growth by plate counting. Flow cytometry (FCM) and fluorescence-activated cell sorting (FACS) were carried out on treated cells stained for membrane integrity (Syto 9/propidium iodide) or the presence of membrane potential [DiOC2(3)]. For each microbial species, representative cells from various subpopulations detected by FCM were sorted onto selective and nonselective agar and evaluated for growth and recovery rates. In general, treatments giving rise to the highest reductions in counts also had the greatest effects on cell membrane integrity and membrane potential. Overall, treatments that impacted cell membrane permeability did not necessarily have a comparable effect on membrane potential. In addition, some bacterial species with extensively damaged membranes, as detected by FCM, appeared to be able to replicate and grow after sorting. Growth of sorted cells from various subpopulations was not always reflected in plate counts, and in some cases the staining protocol may have rendered cells unculturable. Optimized FCM protocols generated a greater insight into the extent of the heterogeneous bacterial population responses to food control measures than did plate counts. This study underlined the requirement to use FACS to relate various cytometric profiles generated by various staining protocols with the ability of cells to grow on microbial agar plates. Such information is a prerequisite for more-widespread adoption of FCM as a routine microbiological analytical technique. PMID:21602370
A Multi-Contact, Low Capacitance HPGe Detector for High Rate Gamma Spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cox, Christopher
2014-12-04
The detection, identification and non-destructive assay of special nuclear materials and nuclear fission by-products are critically important activities in support of nuclear non-proliferation programs. Both national and international nuclear safeguard agencies recognize that current accounting methods for spent nuclear fuel are inadequate from a safeguards perspective. Radiation detection and analysis by gamma-ray spectroscopy is a key tool in this field, but no instrument exists that can deliver the required performance (energy resolution and detection sensitivity) in the presence of very high background count rates encountered in the nuclear safeguards arena. The work of this project addresses this critical need by developing a unique gamma-ray detector based on high purity germanium that has the previously unachievable property of operating in the 1 million counts-per-second range while achieving state-of-the-art energy resolution necessary to identify and analyze the isotopes of interest. The technical approach was to design and fabricate a germanium detector with multiple segmented electrodes coupled to multi-channel high rate spectroscopy electronics. Dividing the germanium detector's signal electrode into smaller sections offers two advantages; firstly, the energy resolution of the detector is potentially improved, and secondly, the detector is able to operate at higher count rates. The design challenges included the following: determining the optimum electrode configuration to meet the stringent energy resolution and count rate requirements; determining the electronic noise (and therefore energy resolution) of the completed system after multiple signals are recombined; designing the germanium crystal housing and vacuum cryostat; and customizing electronics to perform the signal recombination function in real time. In this phase I work, commercial off-the-shelf electrostatic modeling software was used to develop the segmented germanium crystal geometry, which underwent several iterations before an optimal electrode configuration was found. The model was tested and validated against real-world measurements with existing germanium detectors. Extensive modeling of electronic noise was conducted using established formulae, and real-world measurements were performed on candidate front-end electronic components. This initial work proved the feasibility of the design with respect to expected high count rate and energy resolution performance. Phase I also delivered the mechanical design of the detector housing and vacuum cryostat to be built in Phase II. Finally, a Monte Carlo simulation was created to show the response of the complete design to a Cs-137 source. This development presents a significant advance for nuclear safeguards instrumentation with increased speed and accuracy of detection and identification of special nuclear materials. Other significant applications are foreseen for a gamma-ray detector that delivers high energy resolution (1 keV FWHM noise) at high count rate (1 Mcps), especially in the areas of physics research and materials analysis.
Park, Hye Jung; Lee, Jae-Hyun; Park, Kyung Hee; Kim, Kyu Rang; Han, Mae Ja; Choe, Hosoeng
2016-01-01
Purpose The occurrence of pollen allergy is subject to exposure to pollen, which shows regional and temporal variations. We evaluated the changes in pollen counts and skin positivity rates for 6 years, and explored the correlation between their annual rates of change. Materials and Methods We assessed the number of pollen grains collected in Seoul, and retrospectively reviewed the results of 4442 skin-prick tests conducted at the Severance Hospital Allergy-Asthma Clinic from January 1, 2008 to December 31, 2013. Results For 6 years, the mean monthly total pollen count showed two peaks, one in May and the other in September. Pollen count for grasses also showed the same trend. The pollen counts for trees, grasses, and weeds changed annually, but the changes were not significant. The annual skin positivity rates in response to pollen from grasses and weeds increased significantly over the 6 years. Among trees, the skin positivity rates in response to pollen from walnut, poplar, elm, and alder significantly increased over the 6 years. Further, there was a significant correlation between the annual rate of change in pollen count and the rate of change in skin positivity rate for oak and hop Japanese. Conclusion The pollen counts and skin positivity rates should be monitored, as they have changed annually. Oak and hop Japanese, which showed a significant correlation with the annual rate of change in pollen count and the rate of change in skin positivity rate over the 6 years may be considered the major allergens in Korea. PMID:26996572
Westfall, J M; McGloin, J
2001-05-01
Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. To describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. Analysis of state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease. Validation of our matching algorithm. Patients reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5). Number of patients counted twice for one episode of acute MI. Estimated double count rates ranged from 10% to 15% across all states and increased over the 3 years. Moderate-sized rural counties had the highest estimated double count rates, at 15% to 20%, with a few counties having estimated double count rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P <0.05). Double counting patients has resulted in a significant overestimation in the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double counting and transfer bias should be considered when conducting and interpreting research on ischemic heart disease, particularly in rural regions.
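The abstract above does not spell out its matching algorithm, but a minimal sketch of the general idea is shown below. All field names (dob, sex, dx, admit, discharge) and the one-day transfer window are assumptions for illustration, not the study's actual criteria.

```python
# Illustrative sketch (not the authors' algorithm): flag possible double counts
# in discharge data by self-joining on patient attributes and diagnosis, then
# keeping pairs where a second admission at a different hospital begins within
# one day of the first discharge.  Field names and data are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "record_id": [1, 2, 3],
    "hospital":  ["A", "B", "A"],
    "dob":       pd.to_datetime(["1940-03-02", "1940-03-02", "1955-07-19"]),
    "sex":       ["M", "M", "F"],
    "dx":        ["410", "410", "414"],            # ICD-9 ischemic heart disease codes
    "admit":     pd.to_datetime(["1996-05-01", "1996-05-03", "1996-06-10"]),
    "discharge": pd.to_datetime(["1996-05-03", "1996-05-10", "1996-06-15"]),
})

pairs = records.merge(records, on=["dob", "sex", "dx"], suffixes=("_1", "_2"))
pairs = pairs[
    (pairs.record_id_1 < pairs.record_id_2)
    & (pairs.hospital_1 != pairs.hospital_2)
    & ((pairs.admit_2 - pairs.discharge_1).dt.days.between(0, 1))
]
print(pairs[["record_id_1", "record_id_2"]])   # candidate double-counted episodes
```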
Hill, Andrew; Kelly, Eliza; Horswill, Mark S; Watson, Marcus O
2018-02-01
To investigate whether awareness of manual respiratory rate monitoring affects respiratory rate in adults, and whether count duration influences respiratory rate estimates. Nursing textbooks typically suggest that the patient should ideally be unaware of respiratory rate observations; however, there is little published evidence of the effect of awareness on respiratory rate, and none specific to manual measurement. In addition, recommendations about the length of the respiratory rate count vary from text to text, and the relevant empirical evidence is scant, inconsistent and subject to substantial methodological limitations. Experimental study with awareness of respiration monitoring (aware, unaware; randomised between-subjects) and count duration (60 s, 30 s, 15 s; within-subjects) as the independent variables. Respiratory rate (breaths/minute) was the dependent variable. Eighty-two adult volunteers were randomly assigned to aware and unaware conditions. In the baseline block, no live monitoring occurred. In the subsequent experimental block, the researcher informed aware participants that their respiratory rate would be counted, and did so. Respirations were captured throughout via video recording, and counted by blind raters viewing 60-, 30- and 15-s extracts. The data were collected in 2015. There was no baseline difference between the groups. During the experimental block, the respiratory rates of participants in the aware condition were an average of 2.13 breaths/minute lower compared to unaware participants. Reducing the count duration from 1 min to 15 s caused respiratory rate to be underestimated by an average of 2.19 breaths/minute (and 0.95 breaths/minute for 30-s counts). The awareness effect did not depend on count duration. Awareness of monitoring appears to reduce respiratory rate, and shorter monitoring durations yield systematically lower respiratory rate estimates. When interpreting and acting upon respiratory rate data, clinicians should consider the potential influence of these factors, including cumulative effects. © 2017 The Authors. Journal of Clinical Nursing Published by John Wiley & Sons Ltd.
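The study above reports the duration effect empirically. As a hedged illustration of one plausible mechanism (our assumption, not the authors' explanation), the sketch below shows how counting only complete breath cycles in a short window and extrapolating to breaths per minute biases the estimate low, with the bias growing as the window shrinks.

```python
# Illustrative sketch of one plausible mechanism for the duration effect: if
# the rater counts only complete breath cycles within the window, the count is
# floor(window/period); the truncated fraction is a larger share of a short
# window, so extrapolated rates are biased low.  Numbers are made up, not the
# study's data.
import numpy as np

true_rates = np.arange(10.0, 26.0)          # breaths per minute
for window in (60.0, 30.0, 15.0):
    period = 60.0 / true_rates
    counted = np.floor(window / period)      # complete cycles inside the window
    est = counted * (60.0 / window)          # extrapolate to breaths per minute
    bias = (est - true_rates).mean()
    print(f"{window:4.0f} s window: mean bias {bias:+5.2f} breaths/min")
```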
Probing the cosmic gamma-ray burst rate with trigger simulations of the Swift Burst Alert Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lien, Amy; Cannizzo, John K.; Sakamoto, Takanori
The gamma-ray burst (GRB) rate is essential for revealing the connection between GRBs, supernovae, and stellar evolution. Additionally, the GRB rate at high redshift provides a strong probe of star formation history in the early universe. While hundreds of GRBs are observed by Swift, it remains difficult to determine the intrinsic GRB rate due to the complex trigger algorithm of Swift. Current studies of the GRB rate usually approximate the Swift trigger algorithm by a single detection threshold. However, unlike the previously flown GRB instruments, Swift has over 500 trigger criteria based on photon count rate and an additional image threshold for localization. To investigate possible systematic biases and explore the intrinsic GRB properties, we develop a program that is capable of simulating all the rate trigger criteria and mimicking the image threshold. Our simulations show that adopting the complex trigger algorithm of Swift increases the detection rate of dim bursts. As a result, our simulations suggest that bursts need to be dimmer than previously expected to avoid overproducing the number of detections and to match with Swift observations. Moreover, our results indicate that these dim bursts are more likely to be high redshift events than low-luminosity GRBs. This would imply an even higher cosmic GRB rate at large redshifts than previous expectations based on star formation rate measurements, unless other factors, such as the luminosity evolution, are taken into account. The GRB rate from our best result gives a total number of 4568 (+825/−1429) GRBs per year that are beamed toward us in the whole universe.
A robust hypothesis test for the sensitive detection of constant speed radiation moving sources
NASA Astrophysics Data System (ADS)
Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence
2015-09-01
Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis-testing methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that, in the four relevant configurations (a pedestrian source carrier under high and low count rate radioactive backgrounds, and a vehicle source carrier under the same two backgrounds), the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm probability. It also guarantees that the optimal coverage factor for this compromise remains stable for signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of the background amplitude.
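As a hedged aside (not the paper's specific test), the sketch below contrasts the two broad approaches the abstract mentions: an exact test that exploits the Poisson nature of the counting signal versus a Gaussian approximation built from the empirical background mean and variance. The background rate and observed count are placeholder values.

```python
# Minimal sketch (not the paper's exact test): deciding whether a counting
# channel exceeds a known Poisson background, comparing an exact Poisson
# p-value with a Gaussian approximation based on the background mean/variance.
from scipy import stats

background_rate = 12.0      # expected background counts in the test window (assumed)
observed = 22               # counts registered in the same window (assumed)

# Exact test exploiting the Poisson nature of the signal
p_poisson = stats.poisson.sf(observed - 1, background_rate)

# Gaussian ("empirical") approximation: mean = variance = background_rate
z = (observed - background_rate) / background_rate ** 0.5
p_gauss = stats.norm.sf(z)

print(f"Poisson p-value:  {p_poisson:.4f}")
print(f"Gaussian p-value: {p_gauss:.4f}")
# At low counts the two can differ noticeably, which affects the
# sensitivity/false-alarm trade-off of the detection threshold.
```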
ICESat-2 simulated data from airborne altimetry
NASA Astrophysics Data System (ADS)
Brunt, K. M.; Neumann, T.; Markus, T.; Brenner, A. C.; Barbieri, K.; Field, C.; Sirota, M.
2010-12-01
Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) is scheduled to launch in 2015 and will carry onboard the Advanced Topographic Laser Altimeter System (ATLAS), which represents a new approach to spaceborne determination of surface elevations. Specifically, the current ATLAS design is for a micropulse, multibeam, photon-counting laser altimeter with lower energy, a shorter pulse width, and a higher repetition rate relative to the Geoscience Laser Altimeter (GLAS), the instrument that was onboard ICESat. Given the new and untested technology associated with ATLAS, airborne altimetry data are necessary (1) to test the proposed ATLAS instrument geometry, (2) to validate instrument models, and (3) to assess the atmospheric effects on multibeam altimeters. We present an overview of the airborne instruments and datasets intended to address the ATLAS instrument concept, including data collected over Greenland (July 2009) using an airborne SBIR prototype 100-channel, photon-counting, terrain-mapping altimeter, which addresses the first of these three scientific concerns. Additionally, we present the plan for further simulator data collection over vegetated and ice-covered regions using the Multiple Altimeter Beam Experimental Lidar (MABEL), intended to address the latter two scientific concerns. As the ICESat-2 project is in the design phase, the particular configuration of the ATLAS instrument may change. However, we expect this work to be relevant as long as ATLAS pursues a photon-counting approach.
Rad-hard Dual-threshold High-count-rate Silicon Pixel-array Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Adam
In this program, a Voxtel-led team demonstrates a full-format (192 x 192, 100-µm pitch, VX-810) high-dynamic-range x-ray photon-counting sensor—the Dual Photon Resolved Energy Acquisition (DUPREA) sensor. Within the Phase II program the following tasks were completed: 1) system analysis and definition of the DUPREA sensor requirements; 2) design, simulation, and fabrication of the full-format VX-810 ROIC design; 3) design, optimization, and fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array in the creation of the optically sensitive focal-plane array; 5) development of an evaluation camera; and 6) electrical and optical characterization of the sensor.
Photon Counting Detectors for the 1.0 - 2.0 Micron Wavelength Range
NASA Technical Reports Server (NTRS)
Krainak, Michael A.
2004-01-01
We describe results on the development of greater than 200 micron diameter, single-element photon-counting detectors for the 1-2 micron wavelength range. The technical goals include quantum efficiency in the range 10-70%; detector diameter greater than 200 microns; dark count rate below 100 kilo counts-per-second (cps), and maximum count rate above 10 Mcps.
Accounting for seasonal patterns in syndromic surveillance data for outbreak detection.
Burr, Tom; Graves, Todd; Klamann, Richard; Michalak, Sarah; Picard, Richard; Hengartner, Nicolas
2006-12-04
Syndromic surveillance (SS) can potentially contribute to outbreak detection capability by providing timely, novel data sources. One SS challenge is that some syndrome counts vary with season in a manner that is not identical from year to year. Our goal is to evaluate the impact of inconsistent seasonal effects on performance assessments (false and true positive rates) in the context of detecting anomalous counts in data that exhibit seasonal variation. To evaluate the impact of inconsistent seasonal effects, we injected synthetic outbreaks into real data and into data simulated from each of two models fit to the same real data. Using real respiratory syndrome counts collected in an emergency department from 2/1/94-5/31/03, we varied the length of training data from one to eight years, applied a sequential test to the forecast errors arising from each of eight forecasting methods, and evaluated their detection probabilities (DP) on the basis of 1000 injected synthetic outbreaks. We did the same for each of two corresponding simulated data sets. The less realistic, nonhierarchical model's simulated data set assumed that "one season fits all," meaning that each year's seasonal peak has the same onset, duration, and magnitude. The more realistic simulated data set used a hierarchical model to capture violation of the "one season fits all" assumption. This experiment demonstrated optimistic bias in DP estimates for some of the methods when data simulated from the nonhierarchical model were used for DP estimation, thus suggesting that, at least for some real data sets and methods, it is not adequate to assume that "one season fits all." For the data we analyze, the "one season fits all" assumption is violated, and DP performance claims based on simulated data that assume "one season fits all" tend to be optimistic for the forecast methods considered, except for moving average methods. Moving average methods based on relatively short amounts of training data are competitive on all three data sets, but are particularly competitive on the real data and on data from the hierarchical model, which are the two data sets that violate the "one season fits all" assumption.
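To make the general scheme concrete (this is our own toy sketch, not one of the authors' eight forecasting methods), the example below simulates seasonal daily counts, injects a synthetic outbreak, forecasts each day with a short moving average, and raises an alarm when the standardized forecast error exceeds a threshold. All parameter values are illustrative assumptions.

```python
# Illustrative sketch of the general scheme: moving-average forecast of daily
# syndrome counts, threshold on standardized forecast errors, and a synthetic
# injected outbreak.  Data, window length, and threshold are assumed values.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(3 * 365)
baseline = 20 + 8 * np.sin(2 * np.pi * days / 365)        # simple seasonal mean
counts = rng.poisson(baseline).astype(float)

counts[800:807] += np.array([5, 10, 16, 22, 16, 10, 5])   # injected outbreak

window, threshold = 28, 3.0                                 # moving-average forecaster
alarms = []
for t in range(window, len(counts)):
    train = counts[t - window:t]
    forecast, scale = train.mean(), max(train.std(ddof=1), 1.0)
    if (counts[t] - forecast) / scale > threshold:
        alarms.append(t)

# Days near 800-806 should alarm; any other alarms illustrate the false-positive rate.
print("alarm days:", alarms)
```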
Image-based spectral distortion correction for photon-counting x-ray detectors
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
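A minimal sketch of the calibration idea described above is given below for a single energy bin. The functional form, parameter guesses, and count values are assumptions for illustration only; the paper's own empirical fitting function and calibration data are not reproduced here.

```python
# Minimal sketch of the calibration idea (functional form and numbers are
# assumptions, not the paper's): for one energy bin, fit a nonlinear mapping
# from measured counts to the simulated incident counts over a set of
# calibration measurements, then apply it to correct a new raw measurement.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration data for one energy bin: measured counts are
# depressed at high flux (pile-up, charge sharing) relative to simulation.
measured  = np.array([1.0e4, 4.5e4, 1.6e5, 4.8e5, 1.1e6])
simulated = np.array([1.1e4, 5.3e4, 2.1e5, 7.6e5, 2.4e6])

def correction(m, a, b):
    # simple saturating form: true counts grow faster than measured at high flux
    return a * m * np.exp(b * m)

popt, _ = curve_fit(correction, measured, simulated, p0=[1.0, 1e-7])
raw = 6.0e5                                     # new measurement in this bin
print("corrected counts:", correction(raw, *popt))
```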
Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol; Kim, Seung Ho
2012-12-01
Metronome guidance is a simple and economic feedback method of guiding cardiopulmonary resuscitation (CPR). It has proven useful for regulating the rate of chest compression and ventilation, but it is not yet clear how metronome use may affect compression depth or rescuer fatigue. The aim of this study was to assess the specific effect that metronome guidance has on the quality of CPR and rescuer fatigue. One-person CPRs were performed by senior medical students on Resusci Anne® manikins (Laerdal, Stavanger, Norway) with personal-computer skill-reporting systems. Half of the students performed CPR with metronome guidance and the other half without. CPR performance data, duration, and before-after trial differences in mean arterial pressure (MAP) and heart rate (HR) were compared between groups. Average compression depth (ACD) of the first five cycles, compression rate, no-flow fraction, and ventilation count were significantly lower in the metronome group (p=0.028, < 0.001, 0.001, and 0.041, respectively). Total CPR duration, total work (ACD × total compression count), and the before-after trial differences of the MAP and HR did not differ between the two groups. Metronome guidance is associated with lower chest compression depth of the first five cycles, while reducing the no-flow fraction and the ventilation count in a simulated one-person CPR model. Metronome guidance does not appear to intensify rescuer fatigue. Copyright © 2012 Elsevier Inc. All rights reserved.
500-514 N. Peshtigo Ct, May 2018, Lindsay Light Radiological Survey
The maximum gamma count rate for each lift was recorded on the attached Radiation Survey Forms. Count rates in the excavation ranged from 1,800 cpm to 5,000 cpm. No count rates were found at any time that exceeded the instrument-specific threshold limits.
550 E. Illinois, May 2018, Lindsay Light Radiological Survey
Maximum gamma count rate for each lift was recorded on the attached Radiation Survey Forms. Count rates in the excavation ranged from 1,250 cpm to 4,880 cpm. No count rates were found at any time that exceeded the instrument-specific threshold limits.
Hahn, Robert G
2017-01-01
A high number of blood cells increases the viscosity of the blood. The present study explored whether variations in blood cell counts are relevant to the distribution and elimination of infused crystalloid fluid. On three different occasions, 10 healthy male volunteers received an intravenous infusion of 25 mL/kg of Ringer's acetate, Ringer's lactate, and isotonic saline over 30 min. Blood hemoglobin and urinary excretion were monitored for 4 h and used as input in a two-volume kinetic model, using nonlinear mixed effects software. The covariates used in the kinetic model were red blood cell and platelet counts, the total leukocyte count, the use of isotonic saline, and the arterial pressure. Red blood cell and platelet counts in the upper end of the normal range were associated with a decreased rate of distribution and redistribution of crystalloid fluid. Simulations showed that high counts were correlated with volume expansion of the peripheral (interstitial) fluid space, while the plasma volume was less affected. In contrast, the total leukocyte count had no influence on the distribution, redistribution, or elimination. The use of isotonic saline caused a transient reduction in the systolic arterial pressure (P<0.05) and doubled the half-life of infused fluid in the body when compared to the two Ringer solutions. Isotonic saline did not decrease the serum potassium concentration, despite the fact that saline is potassium-free. High red blood cell and platelet counts are associated with peripheral accumulation of infused crystalloid fluid. Copyright © 2017 The Lithuanian University of Health Sciences. Production and hosting by Elsevier Sp. z o.o. All rights reserved.
Herrmann, Christoph; Engel, Klaus-Jürgen; Wiegert, Jens
2010-12-21
The most obvious problem in obtaining spectral information with energy-resolving photon counting detectors in clinical computed tomography (CT) is the huge x-ray flux present in conventional CT systems. At high tube voltages (e.g. 140 kVp), despite the beam shaper, this flux can be close to 10⁹ cps mm⁻² in the direct beam or in regions behind the object, which are close to the direct beam. Without accepting the drawbacks of truncated reconstruction, i.e. estimating missing direct-beam projection data, a photon-counting energy-resolving detector has to be able to deal with such high count rates. Sub-structuring pixels into sub-pixels is not enough to reduce the count rate per pixel to values that today's direct converting Cd[Zn]Te material can cope with (≤ 10 Mcps in an optimistic view). Below 300 µm pixel pitch, x-ray cross-talk (Compton scatter and K-escape) and the effect of charge diffusion between pixels are problematic. By organising the detector in several different layers, the count rate can be further reduced. However, this alone does not limit the count rates to the required level, since the high stopping power of the material becomes a disadvantage in the layered approach: a simple absorption calculation for a 300 µm pixel pitch shows that the layer thickness required to stay below 10 Mcps/pixel for the top layers in the direct beam is significantly below 100 µm. In a horizontal multi-layer detector, such thin layers are very difficult to manufacture due to the brittleness of Cd[Zn]Te. In a vertical configuration (also called edge-on illumination (Lundqvist et al 2001 IEEE Trans. Nucl. Sci. 48 1530-6, Roessl et al 2008 IEEE NSS-MIC-RTSD 2008, Conf. Rec. Talk NM2-3)), bonding of the readout electronics (with pixel pitches below 100 µm) is not straightforward, although it has already been done successfully (Pellegrini et al 2004 IEEE NSS MIC 2004 pp 2104-9). Obviously, for the top detector layers, materials with lower stopping power would be advantageous. The possible choices are, however, quite limited, since only 'mature' materials, which operate at room temperature and can be manufactured reliably, should reasonably be considered. Since GaAs is still known to cause reliability problems, the simplest choice is Si, however with the drawback of strong Compton scatter, which can cause considerable inter-pixel cross-talk. To investigate the potential and the problems of Si in a multi-layer detector, in this paper the combination of top detector layers made of Si with lower layers made of Cd[Zn]Te is studied by using Monte Carlo simulated detector responses. It is found that the inter-pixel cross-talk due to Compton scatter is indeed very high; however, with an appropriate cross-talk correction scheme, which is also described, the negative effects of cross-talk are shown to be removed to a very large extent.
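A back-of-envelope version of the absorption argument sketched in the abstract is shown below, using t = -ln(1 - f)/μ for the layer thickness that absorbs a fraction f of the beam. The flux, count-rate limit, and attenuation coefficient are placeholder values, not the paper's, and the resulting thickness depends strongly on those assumptions.

```python
# Back-of-envelope version of the absorption argument (flux and attenuation
# coefficient are placeholder values, not the paper's): how thick may a top
# detector layer be if a 300 µm pixel must stay below 10 Mcps in a direct beam?
import math

flux = 1.0e9                   # photons per s per mm^2 in the direct beam (assumed)
pixel_area = 0.3 * 0.3         # mm^2 for a 300 µm pitch
rate_limit = 10.0e6            # counts per s per pixel the electronics can handle
mu = 0.6                       # assumed effective linear attenuation coeff. (1/mm)

incident_rate = flux * pixel_area                   # photons per s hitting the pixel
max_absorbed_fraction = rate_limit / incident_rate  # fraction the layer may stop
thickness = -math.log(1.0 - max_absorbed_fraction) / mu
print(f"max absorbed fraction: {max_absorbed_fraction:.3f}")
print(f"max layer thickness:  {thickness * 1000:.0f} µm")
# The exact limit scales with the assumed flux and attenuation coefficient.
```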
An Exploratory Analysis of Waterfront Force Protection Measures Using Simulation
2002-03-01
APPENDIX B. DESIGN POINT DATA: Table 16. Design Point One Data; Table 17. Design Point Two Data; Table 18. Design Point Three Data. [The tabulated columns (breach count, leakers count, mean numberAvailablePBs, etc.) are not recoverable from the extracted text.]
Zhang, Bei; Wang, Yanping; Tan, Zhongfang; Li, Zongwei; Jiao, Zhen; Huang, Qunce
2016-01-01
In this study, 69 lactobacilli isolated from Tibetan Qula, a raw yak milk cheese, were screened for their potential use as probiotics. The isolates were tested in terms of: their ability to survive at pH 2.0, pH 3.0, and in the presence of 0.3% bile salts; tolerance of simulated gastric and intestinal juices; antimicrobial activity; sensitivity against 11 specific antibiotics; and their cell surface hydrophobicity. The results show that out of the 69 strains, 29 strains (42%) had survival rates above 90% after 2 h of incubation at pH values of 2.0 or 3.0. Of these 29 strains, 21 strains showed a tolerance for 0.3% bile salt. Incubation of these 21 isolates in simulated gastrointestinal fluid for 3 h revealed survival rates above 90%; the survival rate for 20 of these isolates remained above 90% after 4 h of incubation in simulated intestinal fluid. The viable counts of bacteria after incubation in simulated gastric fluid for 3 h and simulated intestinal fluid for 4 h were both significantly different compared with the counts at 0 h (p<0.001). Further screening performed on the above 20 isolates indicated that all 20 lactobacilli strains exhibited inhibitory activity against Micrococcus luteus ATCC 4698, Bacillus subtilis ATCC 6633, Listeria monocytogenes ATCC 19115, and Salmonella enterica ATCC 43971. Moreover, all of the strains were resistant to vancomycin and streptomycin. Of the 20 strains, three were resistant to all 11 selected antibiotics (ciprofloxacin, erythromycin, tetracycline, penicillin G, ampicillin, streptomycin, polymyxin B, vancomycin, chloramphenicol, rifampicin, and gentamicin) in this study, and five were sensitive to more than half of the antibiotics. Additionally, the cell surface hydrophobicity of seven of the 20 lactobacilli strains was above 70%, including strains Lactobacillus casei 1133 (92%), Lactobacillus plantarum 1086-1 (82%), Lactobacillus casei 1089 (81%), Lactobacillus casei 1138 (79%), Lactobacillus buchneri 1059 (78%), Lactobacillus plantarum 1141 (75%), and Lactobacillus plantarum 1197 (71%). Together, these results suggest that these seven strains are good probiotic candidates, and that tolerance against bile acid, simulated gastric and intestinal juices, antimicrobial activity, antibiotic resistance, and cell surface hydrophobicity could be adopted for preliminary screening of potentially probiotic lactobacilli. PMID:26954218
Better Than Counting: Density Profiles from Force Sampling
NASA Astrophysics Data System (ADS)
de las Heras, Daniel; Schmidt, Matthias
2018-05-01
Calculating one-body density profiles in equilibrium via particle-based simulation methods involves counting of events of particle occurrences at (histogram-resolved) space points. Here, we investigate an alternative method based on a histogram of the local force density. Via an exact sum rule, the density profile is obtained with a simple spatial integration. The method circumvents the inherent ideal gas fluctuations. We have tested the method in Monte Carlo, Brownian dynamics, and molecular dynamics simulations. The results carry a statistical uncertainty smaller than that of the standard counting method, reducing therefore the computation time.
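As a toy illustration of the force-sampling idea (our own minimal example under simplifying assumptions, not the paper's simulations), consider non-interacting particles in a 1D harmonic trap: in equilibrium the one-body force density F(x) satisfies k_B T dρ/dx = F(x), so integrating a histogram of the sampled forces recovers the density profile without counting particle occurrences.

```python
# Toy sketch of force sampling: ideal gas in a 1D harmonic trap sampled with
# Metropolis Monte Carlo.  The counting histogram and the integral of the
# force-density histogram both estimate rho(x).
import numpy as np

rng = np.random.default_rng(2)
kT, k_spring, N, L = 1.0, 1.0, 100, 10.0
nbins, nsweeps, nequil = 60, 20000, 2000
edges = np.linspace(-L / 2, L / 2, nbins + 1)
dx = edges[1] - edges[0]

x = rng.uniform(-1.0, 1.0, N)
count_hist = np.zeros(nbins)
force_hist = np.zeros(nbins)

for sweep in range(nsweeps):
    trial = x + rng.normal(0.0, 0.5, N)                      # particle-wise trial moves
    dE = 0.5 * k_spring * (trial ** 2 - x ** 2)              # external potential only
    accept = rng.random(N) < np.exp(-dE / kT)
    x = np.where(accept, trial, x)
    if sweep >= nequil:                                      # sample after equilibration
        idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
        np.add.at(count_hist, idx, 1.0)
        np.add.at(force_hist, idx, -k_spring * x)            # force acting on each particle

nsamp = nsweeps - nequil
rho_count = count_hist / (nsamp * dx)                        # standard counting estimate
force_density = force_hist / (nsamp * dx)                    # one-body force density F(x)
rho_force = np.cumsum(force_density) * dx / kT               # integrate kT * drho/dx = F(x)

print("peak density, counting:     ", rho_count.max())
print("peak density, force sampled:", rho_force.max())
```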
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively high radiation levels. In addition, for cases in which the source is present for only a short time (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
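A minimal conjugate-Bayes sketch of the contrast between the two update schemes is given below (our own illustration, not the authors' R implementation). With a Gamma prior, the posterior for a Poisson rate can be updated either from the total count in a fixed count time or pulse-by-pulse from the time intervals; the rates, prior, and decision threshold are assumed values.

```python
# Minimal conjugate-Bayes sketch: Gamma prior on the count rate, updated either
# pulse-by-pulse from time intervals (exponential likelihood) or once from the
# count in a fixed window (Poisson likelihood).  The interval update allows a
# decision as soon as enough pulses arrive, before the count window closes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
background_rate, source_rate = 5.0, 20.0     # counts per second (assumed values)
a0, b0 = 1.0, 1.0 / background_rate          # weak Gamma prior centred on background

intervals = rng.exponential(1.0 / source_rate, size=200)   # simulated pulse spacings

# --- Bayesian(ti): update after every pulse, stop when confident ---
a, b = a0, b0
for n, dt in enumerate(intervals, start=1):
    a, b = a + 1.0, b + dt                   # Gamma posterior (shape, rate)
    p_elevated = stats.gamma.sf(2.0 * background_rate, a, scale=1.0 / b)
    if p_elevated > 0.95:
        print(f"time-interval test: alarm after {n} pulses "
              f"({intervals[:n].sum():.2f} s)")
        break

# --- Bayesian(cnt): single update from the count in a fixed 1 s window ---
T = 1.0
n_counts = np.sum(np.cumsum(intervals) <= T)
a_cnt, b_cnt = a0 + n_counts, b0 + T
print("fixed-window posterior P(rate > 2x background):",
      stats.gamma.sf(2.0 * background_rate, a_cnt, scale=1.0 / b_cnt))
```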
Compensated count-rate circuit for radiation survey meter
Todd, R.A.
1980-05-12
A count-rate compensating circuit is provided which may be used in a portable Geiger-Mueller (G-M) survey meter to ideally compensate for counting loss errors in the G-M tube detector. In a G-M survey meter, wherein the pulse rate from the G-M tube is converted into a pulse-rate current applied to a current meter calibrated to indicate dose rate, the compensation circuit generates and controls a reference voltage in response to the rate of pulses from the detector. This reference voltage is gated to the current-generating circuit at a rate identical to the rate of pulses coming from the detector, so that the current flowing through the meter varies with both the frequency and amplitude of the reference voltage pulses applied to it; the count rate is thereby ideally compensated, indicating the true count rate to within 1% up to a 50% duty cycle for the detector. A positive feedback circuit is used to control the reference voltage so that the meter output tracks the true count rate, indicative of the radiation dose rate.
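For a sense of the counting losses such a circuit corrects, the standard nonparalyzable dead-time relation is shown below. This is a hedged illustration of the usual idealization, not the patented analog implementation, and the dead-time value is assumed.

```python
# Standard nonparalyzable dead-time model: observed rate m = n / (1 + n*tau),
# so the ideally compensated estimate is n = m / (1 - m*tau).
tau = 100e-6                       # assumed G-M tube dead time, 100 microseconds
for true_rate in (100.0, 1000.0, 5000.0):          # true counts per second
    observed = true_rate / (1.0 + true_rate * tau) # what an uncompensated meter sees
    corrected = observed / (1.0 - observed * tau)  # ideal compensation
    print(f"true {true_rate:7.0f} cps  observed {observed:7.0f} cps  "
          f"corrected {corrected:7.0f} cps")
```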
Characterisation of the Hamamatsu photomultipliers for the KM3NeT Neutrino Telescope
NASA Astrophysics Data System (ADS)
Aiello, S.; Akrame, S. E.; Ameli, F.; Anassontzis, E. G.; Andre, M.; Androulakis, G.; Anghinolfi, M.; Anton, G.; Ardid, M.; Aublin, J.; Avgitas, T.; Baars, M.; Bagatelas, C.; Barbarino, G.; Baret, B.; Barrios-Martí, J.; Belias, A.; Berbee, E.; van den Berg, A.; Bertin, V.; Biagi, S.; Biagioni, A.; Biernoth, C.; Bormuth, R.; Boumaaza, J.; Bourret, S.; Bouwhuis, M.; Bozza, C.; Brânzaş, H.; Briukhanova, N.; Bruijn, R.; Brunner, J.; Buis, E.; Buompane, R.; Busto, J.; Calvo, D.; Capone, A.; Caramete, L.; Celli, S.; Chabab, M.; Cherubini, S.; Chiarella, V.; Chiarusi, T.; Circella, M.; Cocimano, R.; Coelho, J. A. B.; Coleiro, A.; Colomer Molla, M.; Coniglione, R.; Coyle, P.; Creusot, A.; Cuttone, G.; D'Onofrio, A.; Dallier, R.; De Sio, C.; Di Palma, I.; Díaz, A. F.; Distefano, C.; Domi, A.; Donà, R.; Donzaud, C.; Dornic, D.; Dörr, M.; Durocher, M.; Eberl, T.; van Eijk, D.; El Bojaddaini, I.; Elsaesser, D.; Enzenhöfer, A.; Ferrara, G.; Fusco, L. A.; Gal, T.; Garufi, F.; Gauchery, S.; Geißelsöder, S.; Gialanella, L.; Giorgio, E.; Giuliante, A.; Gozzini, S. R.; Ruiz, R. Gracia; Graf, K.; Grasso, D.; Grégoire, T.; Grella, G.; Hallmann, S.; van Haren, H.; Heid, T.; Heijboer, A.; Hekalo, A.; Hernández-Rey, J. J.; Hofestädt, J.; Illuminati, G.; James, C. W.; Jongen, M.; Jongewaard, B.; de Jong, M.; de Jong, P.; Kadler, M.; Kalaczyński, P.; Kalekin, O.; Katz, U. F.; Chowdhury, N. R. Khan; Kieft, G.; Kießling, D.; Koffeman, E. N.; Kooijman, P.; Kouchner, A.; Kreter, M.; Kulikovskiy, V.; Lahmann, R.; Le Breton, R.; Leone, F.; Leonora, E.; Levi, G.; Lincetto, M.; Lonardo, A.; Longhitano, F.; Lotze, M.; Loucatos, S.; Maggi, G.; Mańczak, J.; Mannheim, K.; Margiotta, A.; Marinelli, A.; Markou, C.; Martin, L.; Martínez-Mora, J. A.; Martini, A.; Marzaioli, F.; Mele, R.; Melis, K. W.; Migliozzi, P.; Migneco, E.; Mijakowski, P.; Miranda, L. S.; Mollo, C. M.; Morganti, M.; Moser, M.; Moussa, A.; Muller, R.; Musumeci, M.; Nauta, L.; Navas, S.; Nicolau, C. A.; Nielsen, C.; Organokov, M.; Orlando, A.; Panagopoulos, V.; Papalashvili, G.; Papaleo, R.; Păvălaş, G. E.; Pellegrini, G.; Pellegrino, C.; Pérez Romero, J.; Perrin-Terrin, M.; Piattelli, P.; Pikounis, K.; Pisanti, O.; Poirè, C.; Polydefki, G.; Poma, G. E.; Popa, V.; Post, M.; Pradier, T.; Pühlhofer, G.; Pulvirenti, S.; Quinn, L.; Raffaelli, F.; Randazzo, N.; Razzaque, S.; Real, D.; Resvanis, L.; Reubelt, J.; Riccobene, G.; Richer, M.; Rovelli, A.; Salvadori, I.; Samtleben, D. F. E.; Sánchez Losa, A.; Sanguineti, M.; Santangelo, A.; Sapienza, P.; Schermer, B.; Sciacca, V.; Seneca, J.; Sgura, I.; Shanidze, R.; Sharma, A.; Simeone, F.; Sinopoulou, A.; Spisso, B.; Spurio, M.; Stavropoulos, D.; Steijger, J.; Stellacci, S. M.; Strandberg, B.; Stransky, D.; Stüven, T.; Taiuti, M.; Tatone, F.; Tayalati, Y.; Tenllado, E.; Thakore, T.; Timmer, P.; Trovato, A.; Tsagkli, S.; Tzamariudaki, E.; Tzanetatos, D.; Valieri, C.; Vallage, B.; Van Elewyck, V.; Versari, F.; Viola, S.; Vivolo, D.; Volkert, M.; de Waardt, L.; Wilms, J.; de Wolf, E.; Zaborov, D.; Zornoza, J. D.; Zúñiga, J.
2018-05-01
The Hamamatsu R12199-02 3-inch photomultiplier tube is the photodetector chosen for the first phase of the KM3NeT neutrino telescope. About 7000 photomultipliers have been characterised for dark count rate, timing spread and spurious pulses. The quantum efficiency, the gain and the peak-to-valley ratio have also been measured for a sub-sample in order to determine parameter values needed as input to numerical simulations of the detector.
NASA Astrophysics Data System (ADS)
Krimmer, J.; Angellier, G.; Balleyguier, L.; Dauvergne, D.; Freud, N.; Hérault, J.; Létang, J. M.; Mathez, H.; Pinto, M.; Testa, E.; Zoccarato, Y.
2017-04-01
For the purpose of detecting deviations from the prescribed treatment during particle therapy, the integrals of uncollimated prompt gamma-ray timing distributions are investigated. The intention is to provide information, with a simple and cost-effective setup, independent from monitoring devices of the beamline. Measurements have been performed with 65 MeV protons at a clinical cyclotron. Prompt gamma-rays emitted from the target are identified by means of time-of-flight. The proton range inside the PMMA target has been varied via a modulator wheel. The measured variation of the prompt gamma peak integrals as a function of the modulator position is consistent with simulations. With detectors covering a solid angle of 25 msr (corresponding to a diameter of 3-4 in. at a distance of 50 cm from the beam axis) and 10⁸ incident protons, deviations of a few per cent in the prompt gamma-ray count rate can be detected. For the present configuration, this change in the count rate corresponds to a 3 mm change in the proton range in a PMMA target. Furthermore, simulation studies show that a combination of the signals from multiple detectors may be used to detect a misplacement of the target. A different combination of these signals results in a precise number of the detected prompt gamma rays, which is independent of the actual target position.
Measurement of neutron spectra in the experimental reactor LR-0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prenosil, Vaclav; Mravec, Filip; Veskrna, Martin
2015-07-01
The measurement of fast neutron fluxes is important in many areas of nuclear technology. It affects the stability of the reactor structural components and the performance and behaviour of the fuel. The experiments performed at the LR-0 reactor were in the past focused on the measurement of the neutron field far from the core, in the reactor pressure vessel simulator or in the biological shielding simulator. At present, measurements in regions closer to the core have become more important, especially measurements in structural components such as the reactor baffle. This importance increases with both increases in reactor power and long-term operation. Another important task is the increasing need for measurements close to the fuel. Spectra near the fuel are of interest because of planned measurements with FLiBe salt in FHR/MSR research, where one of the tasks is the measurement of the neutron spectrum in the salt. In both types of experiments there is a strong demand for a high working count rate. The high count rate is caused mainly by the high gamma background and by the high fluxes. The fluxes in the core or in its vicinity are relatively high to ensure safe reactor operation. This requirement is met by the digital spectroscopic apparatus. All experiments were realized in the LR-0 reactor. It is an extremely flexible light-water zero-power research reactor, operated by the Research Center Rez (Czech Republic). (authors)
The X-IFU end-to-end simulations performed for the TES array optimization exercise
NASA Astrophysics Data System (ADS)
Peille, Philippe; Wilms, J.; Brand, T.; Cobo, B.; Ceballos, M. T.; Dauser, T.; Smith, S. J.; Barret, D.; den Herder, J. W.; Piro, L.; Barcons, X.; Pointecouteau, E.; Bandler, S.; den Hartog, R.; de Plaa, J.
2015-09-01
The focal plane assembly of the Athena X-ray Integral Field Unit (X-IFU) includes as the baseline an array of ~4000 single size calorimeters based on Transition Edge Sensors (TES). Other sensor array configurations could however be considered, combining TES of different properties (e.g. size). In attempting to improve the X-IFU performance in terms of field of view, count rate performance, and even spectral resolution, two alternative TES array configurations to the baseline have been simulated, each combining a small and a large pixel array. With the X-IFU end-to-end simulator, a sub-sample of the Athena core science goals, selected by the X-IFU science team as potentially driving the optimal TES array configuration, has been simulated for the results to be scientifically assessed and compared. In this contribution, we will describe the simulation set-up for the various array configurations, and highlight some of the results of the test cases simulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alpert, B. K.; Horansky, R. D.; Bennett, D. A.
Microcalorimeter sensors operated near 0.1 K can measure the energy of individual x- and gamma-ray photons with significantly more precision than conventional semiconductor technologies. Both microcalorimeter arrays and higher per pixel count rates are desirable to increase the total throughput of spectrometers based on these devices. The millisecond recovery time of gamma-ray microcalorimeters and the resulting pulse pileup are significant obstacles to high per pixel count rates. Here, we demonstrate operation of a microcalorimeter detector at elevated count rates by use of convolution filters designed to be orthogonal to the exponential tail of a preceding pulse. These filters allow operation at 50% higher count rates than conventional filters while largely preserving sensor energy resolution.
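A linear-algebra sketch of the orthogonal-filter idea is shown below (our own toy construction, not the authors' implementation). A filter built from the pulse template is projected onto the subspace orthogonal to an exponential decay, so its output is unaffected by the tail of a preceding pulse; pulse shape, time constants, and amplitudes are assumed values.

```python
# Sketch of the idea: project a pulse-shape filter orthogonal to an exponential
# tail so that pile-up from a preceding pulse's decay does not bias the
# amplitude (energy) estimate.
import numpy as np

n = 1000
t = np.arange(n)
tau_rise, tau_fall, tau_tail = 20.0, 200.0, 200.0

pulse = np.exp(-t / tau_fall) - np.exp(-t / tau_rise)    # template pulse shape
pulse /= pulse.max()
tail = np.exp(-t / tau_tail)                              # decay of a previous pulse

w = pulse.copy()                                          # naive template filter
w_orth = w - (w @ tail) / (tail @ tail) * tail            # remove the tail component

rng = np.random.default_rng(4)
record = 3.7 * pulse + 5.0 * tail + rng.normal(0, 0.01, n)   # pulse riding on a tail

for name, filt in (("plain", w), ("orthogonal", w_orth)):
    gain = filt @ pulse                                    # response to a unit pulse
    print(f"{name:10s} filter amplitude estimate: {(filt @ record) / gain:.3f}")
```

With these numbers the plain filter reports an amplitude pulled well above 3.7 by the preceding tail, while the orthogonalized filter recovers roughly 3.7.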
Thompson, W.L.
2003-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled stream units. Violations of these assumptions may produce suspect results. To determine possible sources of the assumption violations, I used data on the abundance of steelhead Oncorhynchus mykiss from Hankin and Reeves' (1988) in a simulation composed of 50,000 repeated, stratified systematic random samples from a spatially clustered distribution. The simulation was used to investigate effects of a range of removal estimates, from 75% to 100% of true fish abundance, on overall stream fish population estimates. The effects of various categories of removal-estimates-to-snorkel-count correlation levels (r = 0.75-1.0) on fish population estimates were also explored. Simulation results indicated that Hankin and Reeves' approach may produce poor results unless removal estimates exceed at least 85% of the true number of fish within sampled units and unless correlations between removal estimates and snorkel counts are at least 0.90. A potential modification to Hankin and Reeves' approach is the inclusion of environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative approach is to use snorkeling combined with line transect sampling to estimate fish densities within stream units. As with any method of population estimation, a pilot study should be conducted to evaluate its usefulness, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating bias and precision of estimators.
Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation
NASA Astrophysics Data System (ADS)
Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.
1997-02-01
The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.
Cryogenic, high-resolution x-ray detector with high count rate capability
Frank, Matthias; Mears, Carl A.; Labov, Simon E.; Hiller, Larry J.; Barfknecht, Andrew T.
2003-03-04
A cryogenic, high-resolution X-ray detector with high count rate capability has been invented. The new X-ray detector is based on superconducting tunnel junctions (STJs), and operates without thermal stabilization at or below 500 mK. The X-ray detector exhibits good resolution (~5-20 eV FWHM) for soft X-rays in the keV region, and is capable of counting at count rates of more than 20,000 counts per second (cps). Simple, FET-based charge amplifiers, current amplifiers, or conventional spectroscopy shaping amplifiers can provide the electronic readout of this X-ray detector.
NASA Astrophysics Data System (ADS)
Nohtomi, Akihiro; Wakabayashi, Genichiro
2015-11-01
We evaluated the accuracy of a self-activation method with iodine-containing scintillators in quantifying 128I generation in an activation detector; the self-activation method was recently proposed for photo-neutron on-line measurements around X-ray radiotherapy machines. Here, we consider the accuracy of determining the initial count rate R0, observed just after termination of neutron irradiation of the activation detector. The value R0 is directly related to the amount of activity generated by incident neutrons; the detection efficiency of radiation emitted from the activity should be taken into account for such an evaluation. Decay curves of 128I activity were numerically simulated by a computer program for various conditions including different initial count rates (R0) and background rates (RB), as well as counting statistical fluctuations. The data points sampled at minute intervals and integrated over the same period were fit by a non-linear least-squares fitting routine to obtain the value R0 as a fitting parameter with an associated uncertainty. The corresponding background rate RB was simultaneously calculated in the same fitting routine. Identical data sets were also evaluated by a well-known integration algorithm used for conventional activation methods and the results were compared with those of the proposed fitting method. When we fixed RB = 500 cpm, the relative uncertainty σR0 /R0 ≤ 0.02 was achieved for R0/RB ≥ 20 with 20 data points from 1 min to 20 min following the termination of neutron irradiation used in the fitting; σR0 /R0 ≤ 0.01 was achieved for R0/RB ≥ 50 with the same data points. Reasonable relative uncertainties to evaluate initial count rates were reached by the decay-fitting method using practically realistic sampling numbers. These results clarified the theoretical limits of the fitting method. The integration method was found to be potentially vulnerable to short-term variations in background levels, especially instantaneous contaminations by spike-like noise. The fitting method easily detects and removes such spike-like noise.
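A numerical sketch along the lines described above is given below: per-minute counts from 128I decay plus a constant background are simulated with Poisson noise and fitted with R(t) = R0 exp(-λt) + RB by nonlinear least squares, with the decay constant fixed at its known value. The particular R0 and RB values are illustrative (the R0/RB = 20 case), not the paper's data.

```python
# Numerical sketch of the decay-fitting method (illustrative values): recover
# the initial count rate R0 and background RB from simulated per-minute counts.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
half_life = 24.99                      # minutes, 128I
lam = np.log(2.0) / half_life

R0_true, RB_true = 10000.0, 500.0      # counts per minute (R0/RB = 20 case)
t = np.arange(1, 21)                   # 20 one-minute samples after irradiation

expected = R0_true * np.exp(-lam * t) + RB_true
counts = rng.poisson(expected).astype(float)

def model(t, R0, RB):
    return R0 * np.exp(-lam * t) + RB

popt, pcov = curve_fit(model, t, counts, p0=[counts[0], counts[-1]],
                       sigma=np.sqrt(counts), absolute_sigma=True)
R0_fit, RB_fit = popt
R0_err = np.sqrt(pcov[0, 0])
print(f"R0 = {R0_fit:.0f} +/- {R0_err:.0f} cpm   RB = {RB_fit:.0f} cpm "
      f"(relative uncertainty {R0_err / R0_fit:.3f})")
```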
NASA Astrophysics Data System (ADS)
Sokół, Justyna M.; Bzowski, Maciej; Kubiak, Marzena A.; Möbius, Eberhard
2016-06-01
We simulated the modulation of the interstellar neutral (ISN) He, Ne, and O density and pick-up ion (PUI) production rate and count rate along the Earth's orbit over the solar cycle (SC) from 2002 to 2013 to verify if SC-related effects may modify the inferred ecliptic longitude of the ISN inflow direction. We adopted the classical PUI model with isotropic distribution function and adiabatic cooling, modified by time- and heliolatitude-dependent ionization rates and non-zero injection speed of PUIs. We found that the ionization losses have a noticeable effect on the derivation of the ISN inflow longitude based on the Gaussian fit to the crescent and cone peak locations. We conclude that the non-zero radial velocity of the ISN flow and the energy range of the PUI distribution function that is accumulated are of importance for a precise reproduction of the PUI count rate along the Earth orbit. However, the temporal and latitudinal variations of the ionization in the heliosphere, and particularly their variation on the SC time-scale, may significantly modify the shape of PUI cone and crescent and also their peak positions from year to year and thus bias by a few degrees the derived longitude of the ISN gas inflow direction.
2015-07-17
This figure shows how the Alice instrument count rate changed over time during the sunset and sunrise observations. The count rate is largest when the line of sight to the sun is outside of the atmosphere at the start and end times. Molecular nitrogen (N2) starts absorbing sunlight in the upper reaches of Pluto's atmosphere, decreasing as the spacecraft approaches the planet's shadow. As the occultation progresses, atmospheric methane and hydrocarbons can also absorb the sunlight and further decrease the count rate. When the spacecraft is totally in Pluto's shadow the count rate goes to zero. As the spacecraft emerges from Pluto's shadow into sunrise, the process is reversed. By plotting the observed count rate in the reverse time direction, it is seen that the atmospheres on opposite sides of Pluto are nearly identical. http://photojournal.jpl.nasa.gov/catalog/PIA19716
Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...
2017-02-17
This paper describes the generation of 39Ar via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting), to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50× atmospheric 39Ar in P10 standard is 3.6% (k=2).
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier-transform-based analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
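A toy version of the unambiguous-ranging idea is sketched below (our own minimal example, not the authors' system): the laser is triggered with a pseudo-random bit sequence, detected photons are histogrammed into clock bins, and the round-trip delay is recovered as the peak of the circular cross-correlation computed with FFTs. Pattern length, delay, and detection probabilities are assumed values.

```python
# Minimal sketch: recover the round-trip delay of a pseudo-random trigger
# pattern from sparse photon returns via FFT-based circular cross-correlation.
import numpy as np

rng = np.random.default_rng(6)
n_bits = 4096                          # pattern length in clock periods
pattern = rng.integers(0, 2, n_bits)   # pseudo-random trigger sequence (0/1)

true_delay = 2517                      # round-trip delay in clock bins
p_signal, p_background = 0.05, 0.005   # photon detection probabilities per bin

# Signal photons appear in bins where the delayed pattern had a pulse;
# background photons appear anywhere.
signal = rng.random(n_bits) < p_signal * np.roll(pattern, true_delay)
background = rng.random(n_bits) < p_background
hist = (signal | background).astype(float)

# Circular cross-correlation of the photon histogram with the known pattern
corr = np.fft.ifft(np.fft.fft(hist) * np.conj(np.fft.fft(pattern))).real
print("recovered delay:", int(np.argmax(corr)), "bins (true:", true_delay, ")")
```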
Simulation on the steel galvanic corrosion and acoustic emission
NASA Astrophysics Data System (ADS)
Yu, Yang; Shi, Xin; Yang, Ping
2015-12-01
Galvanic corrosion is a very destructive form of localized corrosion. Research on galvanic corrosion can help determine the extent of equipment corrosion and prevent accidents. Steel corrosion was studied with mathematical modeling in the COMSOL software. The galvanic corrosion of a steel-aluminum couple submerged in 10% sodium chloride solution was monitored on-line with the PIC-2 acoustic emission system. The results show that the acoustic emission event count per unit time can qualitatively indicate the galvanic corrosion rate, and further corrosion trends can be judged from changes in this value.
Variance in population firing rate as a measure of slow time-scale correlation
Snyder, Adam C.; Morais, Michael J.; Smith, Matthew A.
2013-01-01
Correlated variability in the spiking responses of pairs of neurons, also known as spike count correlation, is a key indicator of functional connectivity and a critical factor in population coding. Underscoring the importance of correlation as a measure for cognitive neuroscience research is the observation that spike count correlations are not fixed, but are rather modulated by perceptual and cognitive context. Yet while this context fluctuates from moment to moment, correlation must be calculated over multiple trials. This property undermines its utility as a dependent measure for investigations of cognitive processes which fluctuate on a trial-to-trial basis, such as selective attention. A measure of functional connectivity that can be assayed on a moment-to-moment basis is needed to investigate the single-trial dynamics of populations of spiking neurons. Here, we introduce the measure of population variance in normalized firing rate for this goal. We show using mathematical analysis, computer simulations and in vivo data how population variance in normalized firing rate is inversely related to the latent correlation in the population, and how this measure can be used to reliably classify trials from different typical correlation conditions, even when firing rate is held constant. We discuss the potential advantages for using population variance in normalized firing rate as a dependent measure for both basic and applied neuroscience research. PMID:24367326
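A toy simulation of the inverse relation described above is shown below (our own illustration, not the paper's in vivo data): when neurons' normalized rates share a common latent fluctuation that induces pairwise correlation r, the across-neuron variance within a single trial has expectation approximately 1 - r. Population size, trial count, and the latent-factor model are assumptions.

```python
# Toy simulation: single-trial population variance of normalized rates is
# inversely related to the latent pairwise correlation r.
import numpy as np

rng = np.random.default_rng(7)
n_neurons, n_trials = 100, 2000

for r in (0.0, 0.1, 0.3, 0.5):
    shared = rng.normal(size=(n_trials, 1))                # latent common fluctuation
    private = rng.normal(size=(n_trials, n_neurons))       # independent variability
    z = np.sqrt(r) * shared + np.sqrt(1.0 - r) * private   # normalized rates
    pop_var = z.var(axis=1, ddof=1)                        # across-neuron variance per trial
    print(f"r = {r:.1f}: mean single-trial population variance = {pop_var.mean():.3f}")
```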
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2016-12-01
We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype followed a Poisson distribution sufficiently well. Furthermore, to examine the validity of the parameters of the Poisson distribution obtained from the experimental bacterial cell counts, we compared these with the parameters of a Poisson distribution estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
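A sketch of the goodness-of-fit step described above is given below, using made-up tallies rather than the study's data: a likelihood-ratio (G) test compares the observed frequency of 0, 1, 2, ... cells per well with a Poisson distribution whose mean is the maximum-likelihood estimate.

```python
# Likelihood-ratio (G) test of observed single-cell counts against a Poisson
# distribution with MLE-estimated mean (tallies are hypothetical).
import numpy as np
from scipy import stats

observed = np.array([110, 225, 215, 140, 60, 25])      # wells with 0,1,2,3,4,5+ cells
k = np.arange(len(observed))
n_wells = observed.sum()

lam_mle = (k * observed).sum() / n_wells                # MLE of the Poisson mean

# Expected frequencies under Poisson(lam_mle); lump the upper tail into the last bin
expected = n_wells * stats.poisson.pmf(k, lam_mle)
expected[-1] += n_wells * stats.poisson.sf(k[-1], lam_mle)

G = 2.0 * np.sum(observed * np.log(observed / expected))
df = len(observed) - 1 - 1                              # categories - 1 - fitted params
p_value = stats.chi2.sf(G, df)
print(f"lambda_MLE = {lam_mle:.2f},  G = {G:.2f},  df = {df},  p = {p_value:.3f}")
```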
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
Kids Count in Indiana: 1996 Data Book.
ERIC Educational Resources Information Center
Erickson, Judith B.
This Kids Count report is the third in a series examining statewide trends in the well-being of Indiana's children. The report combines statistics of special concern in Indiana with 10 national Kids Count well-being indicators: (1) percent low birthweight; (2) infant mortality rate; (3) child death rate; (4) birth rate to unmarried teens ages 15…
Validation of a small-animal PET simulation using GAMOS: a GEANT4-based framework
NASA Astrophysics Data System (ADS)
Cañadas, M.; Arce, P.; Rato Mendes, P.
2011-01-01
Monte Carlo-based modelling is a powerful tool to help in the design and optimization of positron emission tomography (PET) systems. The performance of these systems depends on several parameters, such as detector physical characteristics, shielding or electronics, whose effects can be studied on the basis of realistic simulated data. The aim of this paper is to validate a comprehensive study of the Raytest ClearPET small-animal PET scanner using a new Monte Carlo simulation platform which has been developed at CIEMAT (Madrid, Spain), called GAMOS (GEANT4-based Architecture for Medicine-Oriented Simulations). This toolkit, based on the GEANT4 code, was originally designed to cover multiple applications in the field of medical physics from radiotherapy to nuclear medicine, but has since been applied by some of its users in other fields of physics, such as neutron shielding, space physics, high energy physics, etc. Our simulation model includes the relevant characteristics of the ClearPET system, namely, the double layer of scintillator crystals in phoswich configuration, the rotating gantry, the presence of intrinsic radioactivity in the crystals or the storage of single events for an off-line coincidence sorting. Simulated results are contrasted with experimental acquisitions including studies of spatial resolution, sensitivity, scatter fraction and count rates in accordance with the National Electrical Manufacturers Association (NEMA) NU 4-2008 protocol. Spatial resolution results showed a discrepancy between simulated and measured values equal to 8.4% (with a maximum FWHM difference over all measurement directions of 0.5 mm). Sensitivity results differ less than 1% for a 250-750 keV energy window. Simulated and measured count rates agree well within a wide range of activities, including under electronic saturation of the system (the measured peak of total coincidences, for the mouse-sized phantom, was 250.8 kcps reached at 0.95 MBq mL-1 and the simulated peak was 247.1 kcps at 0.87 MBq mL-1). Agreement better than 3% was obtained in the scatter fraction comparison study. We also measured and simulated a mini-Derenzo phantom obtaining images with similar quality using iterative reconstruction methods. We concluded that the overall performance of the simulation showed good agreement with the measured results and validates the GAMOS package for PET applications. Furthermore, its ease of use and flexibility recommends it as an excellent tool to optimize design features or image reconstruction techniques.
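For context, count-rate studies in such NEMA validations are typically summarized with the noise-equivalent count rate. A minimal sketch of the standard relation follows, with purely illustrative trues/scatter/randoms rates (some protocol variants weight the randoms term by a factor of 2):

```python
def necr(trues, scatter, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + R), all rates in cps."""
    return trues**2 / (trues + scatter + randoms)

# Illustrative values only (cps), not taken from the ClearPET measurements
print(f"NECR = {necr(180e3, 60e3, 40e3) / 1e3:.1f} kcps")
```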
Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner
NASA Astrophysics Data System (ADS)
Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.
2007-02-01
In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easy to scale modular small animal PET camera has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters, such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64 bit, Xeon with 3.0 GHz) and controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.
Bondi Accretion and the Problem of the Missing Isolated Neutron Stars
NASA Technical Reports Server (NTRS)
Perna, Rosalba; Narayan, Ramesh; Rybicki, George; Stella, Luigi; Treves, Aldo
2003-01-01
A large number of neutron stars (NSs), approximately 10(exp 9), populate the Galaxy, but only a tiny fraction of them is observable during the short radio pulsar lifetime. The majority of these isolated NSs, too cold to be detectable by their own thermal emission, should be visible in X-rays as a result of accretion from the interstellar medium. The ROSAT All-Sky Survey has, however, shown that such accreting isolated NSs are very elusive: only a few tentative candidates have been identified, contrary to theoretical predictions that up to several thousand should be seen. We suggest that the fundamental reason for this discrepancy lies in the use of the standard Bondi formula to estimate the accretion rates. We compute the expected source counts using updated estimates of the pulsar velocity distribution, realistic hydrogen atmosphere spectra, and a modified expression for the Bondi accretion rate, as suggested by recent MHD simulations and supported by direct observations in the case of accretion around supermassive black holes in nearby galaxies and in our own. We find that, whereas the inclusion of atmospheric spectra partly compensates for the reduction in the counts due to the higher mean velocities of the new distribution, the modified Bondi formula dramatically suppresses the source counts. The new predictions are consistent with a null detection at the ROSAT sensitivity.
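For reference, the standard Bondi(-Hoyle) accretion rate that the authors argue overestimates the accretion luminosity is, up to a factor of order unity,

```latex
\dot{M}_{\rm Bondi} \simeq \frac{4\pi\,(G M)^{2}\,\rho_{\infty}}{\left(c_{s}^{2}+v^{2}\right)^{3/2}},
```

where rho_infinity and c_s are the ambient density and sound speed and v is the star's velocity through the interstellar medium; the modified, suppressed expression adopted in the paper (motivated by MHD simulations) is not reproduced here.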
Handheld 2-channel impedimetric cell counting system with embedded real-time processing
NASA Astrophysics Data System (ADS)
Rottigni, A.; Carminati, M.; Ferrari, G.; Vahey, M. D.; Voldman, J.; Sampietro, M.
2011-05-01
Lab-on-a-chip systems have been attracting growing attention for the prospect of miniaturization and portability of bio-chemical assays. Here we present the design and characterization of a miniaturized, USB-powered, self-contained, 2-channel instrument for impedance sensing, suitable for label-free tracking and real-time detection of cells flowing in microfluidic channels. This original circuit features a signal generator based on a direct digital synthesizer, a transimpedance amplifier, an integrated square-wave lock-in coupled to a ΣΔ ADC, and a digital processing platform. Real-time automatic peak detection on two channels is implemented in an FPGA. System functionality has been tested with an electronic resistance modulator to simulate 1% impedance variation produced by cells, reaching a time resolution of 50 μs (enabling a count rate of 2000 events/s) with an applied voltage as low as 200 mV. Biological experiments have been carried out counting yeast cells. Statistical analysis of events is in agreement with the expected amplitude and time distributions. 2-channel yeast counting has been performed with concomitant dielectrophoretic cell separation, showing that this novel and ultra-compact sensing system, thanks to the selectivity of the lock-in detector, is compatible with other AC electrical fields applied to the device.
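A minimal sketch of threshold-based event counting of the kind the embedded peak detector performs, on a synthetic impedance trace; the sampling rate, noise level, pulse shape, and threshold are all invented, and the real FPGA implementation is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                                    # sample rate (Hz), i.e. 50 µs resolution
t = np.arange(0, 1.0, 1 / fs)
signal = 0.001 * rng.standard_normal(t.size)   # baseline noise (relative impedance units)

# add a few synthetic cell-transit peaks (Gaussian bumps, ~1% amplitude)
for t0 in (0.12, 0.35, 0.36, 0.71):
    signal += 0.01 * np.exp(-0.5 * ((t - t0) / 0.0005) ** 2)

threshold = 0.005
above = signal > threshold
# rising edges of the above-threshold mask mark individual events
edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
print(f"detected {edges.size} events at t = {t[edges].round(4)} s")
```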
Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y
2012-06-01
Recently pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated versus nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Also, microbial count data were analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first order model was shown to produce count series with 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely-monitored water quality indicators.
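A sketch of the kind of sampling experiment described, using a type-I discrete Weibull with invented parameters (not the fitted values from the paper), showing how the error of the estimated long-term mean shrinks with record length:

```python
import numpy as np

rng = np.random.default_rng(2)

def rdiscrete_weibull(q, beta, size):
    """Type-I discrete Weibull draws with survival P(X >= k) = q**(k**beta)."""
    u = rng.uniform(size=size)
    return np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(int)

q, beta = 0.5, 0.3                                          # illustrative, highly skewed
true_mean = rdiscrete_weibull(q, beta, 2_000_000).mean()    # brute-force reference value

for n in (50, 100, 500, 1000):
    means = np.array([rdiscrete_weibull(q, beta, n).mean() for _ in range(2000)])
    rel_err = np.abs(means / true_mean - 1.0)
    print(f"n = {n:4d}: median |relative error| of the sample mean = {np.median(rel_err):.2f}")
```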
Pulse pile-up in hard X-ray detector systems. [for solar X-rays
NASA Technical Reports Server (NTRS)
Datlowe, D. W.
1975-01-01
When pulse-height spectra are measured by a nuclear detection system at high counting rates, the probability that two or more pulses will arrive within the resolving time of the system is significant. This phenomenon, pulse pile-up, distorts the pulse-height spectrum and must be considered in the interpretation of spectra taken at high counting rates. A computational technique for the simulation of pile-up is developed. The model is examined in the three regimes where (1) the time between pulses is long compared to the detector-system resolving time, (2) the time between pulses is comparable to the resolving time, and (3) many pulses occur within the resolving time. The technique is used to model the solar hard X-ray experiment on the OSO-7 satellite; comparison of the model with data taken during three large flares shows excellent agreement. The paper also describes rule-of-thumb tests for pile-up and identifies the important detector design factors for minimizing pile-up, i.e., thick entrance windows and short resolving times in the system electronics.
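A minimal sketch of the kind of pile-up simulation described: Poisson-distributed arrival times, with pulses closer than the resolving time merged into a single recorded amplitude. The toy exponential input spectrum, rates, and resolving time are invented; the OSO-7 detector model in the paper is far more detailed.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_pileup(rate, resolving_time, duration, draw_energy):
    """Recorded pulse heights when pulses closer than resolving_time merge into one."""
    n = rng.poisson(rate * duration)
    times = np.sort(rng.uniform(0.0, duration, n))
    energies = draw_energy(n)
    recorded = []
    i = 0
    while i < n:
        j = i
        total = energies[i]
        # chain together all pulses separated by less than the resolving time
        while j + 1 < n and times[j + 1] - times[j] < resolving_time:
            j += 1
            total += energies[j]
        recorded.append(total)
        i = j + 1
    return np.array(recorded)

draw = lambda size: rng.exponential(30.0, size)   # toy input spectrum (keV)
for rate in (1e3, 5e4, 2e5):                      # input counts per second
    rec = simulate_pileup(rate, resolving_time=2e-6, duration=1.0, draw_energy=draw)
    loss = 1.0 - rec.size / (rate * 1.0)
    print(f"input {rate:8.0f} cps -> recorded {rec.size:6d} pulses, "
          f"loss {loss:5.1%}, mean recorded height {rec.mean():.1f} keV")
```

At low rates the recorded spectrum tracks the input; as the rate rises, the count loss grows and the mean recorded pulse height inflates, the distortion the paper analyzes.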
NASA Astrophysics Data System (ADS)
Tadesse, Abel; Fredriksson, Hasse
2018-06-01
The graphite nodule count and size distributions for boiling water reactor (BWR) and pressurized water reactor (PWR) inserts were investigated by taking samples at heights of 2160 and 1150 mm, respectively. In each cross section, two locations were taken into consideration for both the microstructural and solidification modeling. The numerical solidification modeling was performed in a two-dimensional model by considering the nucleation and growth in eutectic ductile cast iron. The microstructural results reveal that the nodule size and count distribution along the cross sections are different in each location for both inserts. Finer graphite nodules appear in the thinner sections and close to the mold walls. The coarser nodules are distributed mostly in the last solidified location. The simulation result indicates that the finer nodules are related to a higher cooling rate and a lower degree of microsegregation, whereas the coarser nodules are related to a lower cooling rate and a higher degree of microsegregation. The solidification time interval and the last solidifying locations in the BWR and PWR are also different.
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to detect efficiently sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performances, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pile-up pulse effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is evolved by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to unit impulse pulse shape which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
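A sketch of the generic fast/slow-channel idea, assuming a simple non-paralyzable dead-time model for the fast channel (the paper's correction is instead built on its unit impulse shaping); all rates and the dead time are invented.

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Invert m = n / (1 + n*tau): recover the true input rate n from the measured rate m."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# Illustrative numbers only
fast_measured = 450_000.0      # cps seen by the fast channel
fast_dead_time = 200e-9        # s, effective dead time of the fast channel
slow_measured = 380_000.0      # cps recorded in the spectroscopy (slow) channel

n_true = true_rate_nonparalyzable(fast_measured, fast_dead_time)
correction = n_true / fast_measured            # fractional counting loss seen by the fast channel
slow_corrected = slow_measured * correction    # scale the slow channel by the same factor
print(f"true input rate ~ {n_true:,.0f} cps, corrected slow-channel rate ~ {slow_corrected:,.0f} cps")
```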
We examined the effects of using a fixed-count subsample of 300 organisms on metric values using macroinvertebrate samples collected with 3 field sampling methods at 12 boatable river sites. For each sample, we used metrics to compare an initial fixed-count subsample of approxima...
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times the interval count must be counted down to zero to reach a total of sixty times the frequency of the fixed frequency source (the factor of sixty converts the result to beats per minute).
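The computation in step (2) amounts to rate = 60 x f_clock / interval_count; a tiny sketch with invented numbers:

```python
def heart_rate_bpm(interval_count, clock_hz):
    """Beats per minute from the number of clock cycles counted between two beats."""
    return 60.0 * clock_hz / interval_count

# e.g. a 1 kHz reference clock and 800 cycles between beats -> 75 bpm
print(heart_rate_bpm(800, 1000.0))
```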
Mapping of bird distributions from point count surveys
Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morton, April M; Piburn, Jesse O; McManamay, Ryan A
2017-01-01
Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.
A broken promise: microbiome differential abundance methods do not control the false discovery rate.
Hawinkel, Stijn; Mattiello, Federico; Bijnens, Luc; Thas, Olivier
2017-08-22
High-throughput sequencing technologies allow easy characterization of the human microbiome, but the statistical methods to analyze microbiome data are still in their infancy. Differential abundance methods aim at detecting associations between the abundances of bacterial species and subject grouping factors. The results of such methods are important to identify the microbiome as a prognostic or diagnostic biomarker or to demonstrate efficacy of prodrug or antibiotic drugs. Because of a lack of benchmarking studies in the microbiome field, no consensus exists on the performance of the statistical methods. We have compared a large number of popular methods through extensive parametric and nonparametric simulation as well as real data shuffling algorithms. The results are consistent over the different approaches and all point to an alarming excess of false discoveries. This raises great doubts about the reliability of discoveries in past studies and imperils reproducibility of microbiome experiments. To further improve method benchmarking, we introduce a new simulation tool that allows users to generate correlated count data following any univariate count distribution; the correlation structure may be inferred from real data. Most simulation studies discard the correlation between species, but our results indicate that this correlation can negatively affect the performance of statistical methods.
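One common way to build such a generator (a sketch of the general idea, not the authors' tool) is a Gaussian copula: draw correlated normals, map them to uniforms, then push them through the quantile function of any count marginal; the correlation matrix could be estimated from real data. The taxa count, correlation values, and negative binomial marginal below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def correlated_counts(n_samples, corr, marginal_ppf):
    """Gaussian-copula draws with a given correlation matrix and a count marginal."""
    d = corr.shape[0]
    z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u = stats.norm.cdf(z)          # correlated uniform margins
    return marginal_ppf(u)         # push through the target count quantile function

# three taxa, moderately correlated, overdispersed negative binomial margins
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])
nb_ppf = lambda u: stats.nbinom.ppf(u, n=2, p=0.05).astype(int)
counts = correlated_counts(5000, corr, nb_ppf)
print(np.round(np.corrcoef(counts.T), 2))
```

The realized Pearson correlations of the counts are somewhat below the copula values because the marginal transform is nonlinear, which is one reason such tools often specify the correlation on a latent or rank scale.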
Pneumotachometer counts respiration rate of human subject
NASA Technical Reports Server (NTRS)
Graham, O.
1964-01-01
To monitor breaths per minute, two rate-to-analog converters are used alternately to read and count the respiratory rate from an impedance pneumograph; the rate is sequentially displayed numerically on electroluminescent matrices.
Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O
1994-01-01
The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.
Assessment of frequency and duration of point counts when surveying for golden eagle presence
Skipper, Ben R.; Boal, Clint W.; Tsai, Jo-Szu; Fuller, Mark R.
2017-01-01
We assessed the utility of the recommended golden eagle (Aquila chrysaetos) survey methodology in the U.S. Fish and Wildlife Service 2013 Eagle Conservation Plan Guidance. We conducted 800-m radius, 1-hr point-count surveys broken into 20-min segments, during 2 sampling periods in 3 areas within the Intermountain West of the United States over 2 consecutive breeding seasons during 2012 and 2013. Our goal was to measure the influence of different survey time intervals and sampling periods on detectability and use estimates of golden eagles among different locations. Our results suggest that a less intensive effort (i.e., survey duration shorter than 1 hr and point-count survey radii smaller than 800 m) would likely be inadequate for rigorous documentation of golden eagle occurrence pre- or postconstruction of wind energy facilities. Results from a simulation analysis of detection probabilities and survey effort suggest that greater temporal and spatial effort could make point-count surveys more applicable for evaluating golden eagle occurrence in survey areas; however, increased effort would increase financial costs associated with additional person-hours and logistics (e.g., fuel, lodging). Future surveys can benefit from a pilot study and careful consideration of prior information about counts or densities of golden eagles in the survey area before developing a survey design. If information is lacking, survey planning may be best served by assuming low detection rates and increasing the temporal and spatial effort.
NASA Astrophysics Data System (ADS)
Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.
2014-09-01
A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low count dynamic PET acquisition. The new method termed ‘Complementary Frame Reconstruction’ (CFR) involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition time, where the short time frame data is excluded from the second long frame data before the reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al, as a method to reduce the bias for the maximum likelihood expectation maximization (MLEM) reconstruction of count limited data. CFR uses long scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long frame images, naturally allows negative voxel values and significantly reduces bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images to represent the true activity distribution. Applicability to determine the arterial input function in human and small animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal, cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.
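A toy sketch of the complementary-frame mechanics on a made-up linear system: reconstruct the full long frame and the long frame with the short frame's counts removed using plain MLEM, then subtract; the difference (which may go negative) estimates the short frame. The system matrix, frame durations, and activities are invented and far smaller than any real PET problem.

```python
import numpy as np

rng = np.random.default_rng(5)

def mlem(A, y, n_iter=50):
    """Basic MLEM for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)
        x *= (A.T @ (y / proj)) / sens
    return x

# tiny invented system: 40 detector bins, 10 image voxels
A = rng.uniform(0.0, 1.0, size=(40, 10))
x_true = np.array([5, 5, 5, 5, 50, 50, 5, 5, 5, 5], dtype=float)

# a 60 s acquisition split into a 1 s short frame and the remaining 59 s
y_short = rng.poisson(A @ x_true * 1.0)
y_rest = rng.poisson(A @ x_true * 59.0)
y_long = y_short + y_rest

x_cfr = mlem(A, y_long) - mlem(A, y_rest)   # complementary-frame estimate of the 1 s frame
x_direct = mlem(A, y_short)                 # conventional low-count reconstruction

for name, est in [("CFR", x_cfr), ("direct", x_direct)]:
    err = est - x_true
    print(f"{name:6s}: mean bias = {err.mean():+.2f}, rmse = {np.sqrt((err ** 2).mean()):.2f}")
```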
NASA Astrophysics Data System (ADS)
Singh, Arvind; Desai, Shraddha; Kumar, Arvind; Topkar, Anita
2018-05-01
A novel approach of using thin epitaxial silicon PIN detectors for thermal neutron measurements with reduced γ sensitivity is presented. Monte Carlo simulations showed that there is a significant reduction in the gamma sensitivity for thin detectors with thicknesses of 10-25 μm compared to a detector with a thickness of 300 μm. Epitaxial PIN silicon detectors with thicknesses of 10 μm, 15 μm and 25 μm were fabricated using a custom process. The detectors exhibited low leakage currents of a few nano-amperes. The gamma sensitivity of the detectors was experimentally studied using a 33 μCi, 662 keV, 137Cs source. Considering the count rates, compared to a 300 μm thick detector, the gamma sensitivity of the 10 μm, 15 μm and 25 μm thick detectors was reduced by factors of 1874, 187 and 18, respectively. The detector performance for thermal neutrons was subsequently investigated with a thermal neutron beam using an enriched 10B film as a neutron converter layer. The thermal neutron spectra for all three detectors exhibited three distinct regions corresponding to the 4He and 7Li charge products released in the 10B-n reaction. With a 10B converter, the count rates were 1466 cps, 3170 cps and 2980 cps for the detectors with thicknesses of 10 μm, 25 μm and 300 μm, respectively. The thermal neutron response of the thin detectors with 10 μm and 25 μm thickness showed a significant reduction in the gamma sensitivity compared to that observed for the 300 μm thick detector. Considering the total count rate obtained for thermal neutrons with a 10B converter film, the count rates without the converter layer were about 4%, 7% and 36% of those values for the detectors with thicknesses of 10 μm, 25 μm and 300 μm, respectively. The detector with 10 μm thickness showed a negligible gamma sensitivity of 4 cps, but higher electronic noise and reduced pulse heights. The detector with 25 μm thickness demonstrated the best performance with respect to electronic noise, thermal neutron response and gamma sensitivity.
Indirect Estimation of Radioactivity in Containerized Cargo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kenneth D.; Scherrer, Chad; Smith, Eric L.
Detecting illicit nuclear and radiological material in containerized cargo challenges the state of the art in detection systems. Current systems are being evaluated and new systems envisioned to address the need for high probability of detection to thwart potential threats, together with the extremely low nuisance and false alarm rates necessary to maintain the flow of commerce, given the enormous volume of commodities imported in shipping containers. Maintaining the flow of commerce also means that primary inspection must be rapid, requiring relatively indirect measurements of cargo from outside the containers. With increasing information content in such indirect measurements, it is natural to ask how the information might be combined to improve detection. Toward this end, we present an approach to estimating the isotopic activity of naturally occurring radioactive material in cargo grouped by commodity type, combining container manifest data with radiography and gamma spectroscopy aligned to location along the container. The heart of this approach is our statistical model of gamma counts within peak regions of interest, which captures the effects of background suppression, counting noise, convolution of neighboring cargo contributions, and down-scattered photons to provide physically constrained estimates of counts due to decay of specific radioisotopes in cargo alone. Coupled to that model, we use a mechanistic model of self-attenuated radiation flux to estimate the isotopic activity within cargo, segmented by location within each container, that produces those counts. We demonstrate our approach by applying it to a set of measurements taken at the Port of Seattle in 2006. This approach to synthesizing disparate available data streams and extracting cargo characteristics holds the potential to improve primary inspection using current detection capabilities and to enable simulation-based evaluation of new candidate detection systems.
Single photon counting linear mode avalanche photodiode technologies
NASA Astrophysics Data System (ADS)
Williams, George M.; Huntington, Andrew S.
2011-10-01
The false count rate of a single-photon-sensitive photoreceiver consisting of a high-gain, low-excess-noise linear-mode InGaAs avalanche photodiode (APD) and a high-bandwidth transimpedance amplifier (TIA) is fit to a statistical model. The peak height distribution of the APD's multiplied dark current is approximated by the weighted sum of McIntyre distributions, each characterizing dark current generated at a different location within the APD's junction. The peak height distribution approximated in this way is convolved with a Gaussian distribution representing the input-referred noise of the TIA to generate the statistical distribution of the uncorrelated sum. The cumulative distribution function (CDF) representing count probability as a function of detection threshold is computed, and the CDF model fit to empirical false count data. It is found that only k=0 McIntyre distributions fit the empirically measured CDF at high detection threshold, and that false count rate drops faster than photon count rate as detection threshold is raised. Once fit to empirical false count data, the model predicts the improvement of the false count rate to be expected from reductions in TIA noise and APD dark current. Improvement by at least three orders of magnitude is thought feasible with further manufacturing development and a capacitive-feedback TIA (CTIA).
Black, James; Gerdtz, Marie; Nicholson, Pat; Crellin, Dianne; Browning, Laura; Simpson, Julie; Bell, Lauren; Santamaria, Nick
2015-05-01
Respiratory rate is an important sign that is commonly either not recorded or recorded incorrectly. Mobile phone ownership is increasing even in resource-poor settings. Phone applications may improve the accuracy and ease of counting of respiratory rates. The study assessed the reliability and initial users' impressions of four mobile phone respiratory timer approaches, compared to a 60-second count by the same participants. Three mobile applications (applying four different counting approaches plus a standard 60-second count) were created using the Java Mobile Edition and tested on Nokia C1-01 phones. Apart from the 60-second timer application, the others included a counter based on the time for ten breaths, and three based on the time interval between breaths ('Once-per-Breath', in which the user presses for each breath and the application calculates the rate after 10 or 20 breaths, or after 60s). Nursing and physiotherapy students used the applications to count respiratory rates in a set of brief video recordings of children with different respiratory illnesses. Limits of agreement (compared to the same participant's standard 60-second count), intra-class correlation coefficients and standard errors of measurement were calculated to compare the reliability of the four approaches, and a usability questionnaire was completed by the participants. There was considerable variation in the counts, with large components of the variation related to the participants and the videos, as well as the methods. None of the methods was entirely reliable, with no limits of agreement better than -10 to +9 breaths/min. Some of the methods were superior to the others, with ICCs from 0.24 to 0.92. By ICC the Once-per-Breath 60-second count and the Once-per-Breath 20-breath count were the most consistent, better even than the 60-second count by the participants. The 10-breath approaches performed least well. Users' initial impressions were positive, with little difference between the applications found. This study provides evidence that applications running on simple phones can be used to count respiratory rates in children. The Once-per-Breath methods are the most reliable, outperforming the 60-second count. For children with raised respiratory rates the 20-breath version of the Once-per-Breath method is faster, so it is a more suitable option where health workers are under time pressure. Copyright © 2015 Elsevier Ltd. All rights reserved.
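Each counting approach reduces to simple arithmetic; a tiny sketch with invented tap timings (the actual applications also handle user interaction and time-outs):

```python
def rate_from_n_breaths(seconds_for_n, n_breaths):
    """Breaths per minute when the user times a fixed number of breaths."""
    return 60.0 * n_breaths / seconds_for_n

def rate_from_fixed_window(breaths_counted, window_seconds=60.0):
    """Breaths per minute when breaths are tallied over a fixed window."""
    return 60.0 * breaths_counted / window_seconds

print(rate_from_n_breaths(15.0, 10))     # 10-breath method: 40 breaths/min
print(rate_from_n_breaths(24.0, 20))     # Once-per-Breath, 20-breath variant: 50 breaths/min
print(rate_from_fixed_window(42, 60.0))  # standard 60-second count: 42 breaths/min
```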
Nilsen, Erlend B; Strand, Olav
2018-01-01
We developed a model for estimating demographic rates and population abundance based on multiple data sets revealing information about population age- and sex structure. Such models have previously been described in the literature as change-in-ratio models, but we extend the applicability of the models by i) using time series data allowing the full temporal dynamics to be modelled, ii) casting the model in an explicit hierarchical modelling framework, and iii) estimating parameters based on Bayesian inference. Based on sensitivity analyses we conclude that the approach developed here is able to obtain estimates of demographic rates with high precision whenever unbiased data on population structure are available. Our simulations revealed that this was true also when data on population abundance are not available or not included in the modelling framework. Nevertheless, when data on population structure are biased due to different observability of different age- and sex categories, this will affect estimates of all demographic rates. Estimates of population size are particularly sensitive to such biases, whereas demographic rates can be relatively precisely estimated even with biased observation data as long as the bias is not severe. We then use the models to estimate demographic rates and population abundance for two Norwegian reindeer (Rangifer tarandus) populations where age-sex data were available for all harvested animals, where population structure surveys were carried out in early summer (after calving) and late fall (after hunting season), and where population size is counted in winter. We found that demographic rates were similar regardless of whether we included population count data in the modelling, but that the estimated population size is affected by this decision. This suggests that monitoring programs that focus on population age- and sex structure will benefit from collecting additional data that allow estimation of observability for different age- and sex classes. In addition, our sensitivity analysis suggests that focusing monitoring towards changes in demographic rates might be more feasible than monitoring abundance in many situations where data on population age- and sex structure can be collected.
Cerebellar pathology in childhood-onset vs. adult-onset essential tremor.
Louis, Elan D; Kuo, Sheng-Han; Tate, William J; Kelly, Geoffrey C; Faust, Phyllis L
2017-10-17
Although the incidence of ET increases with advancing age, the disease may begin at any age, including childhood. The question arises as to whether childhood-onset ET cases manifest the same sets of pathological changes in the cerebellum as those whose onset is during adult life. We quantified a broad range of postmortem features (Purkinje cell [PC] counts, PC axonal torpedoes, a host of associated axonal changes [PC axonal recurrent collateral count, PC thickened axonal profile count, PC axonal branching count], heterotopic PCs, and basket cell rating) in 60 ET cases (11 childhood-onset and 49 adult-onset) and 30 controls. Compared to controls, childhood-onset ET cases had lower PC counts, higher torpedo counts, higher heterotopic PC counts, higher basket cell plexus rating, and marginally higher PC axonal recurrent collateral counts. The median PC thickened axonal profile count and median PC axonal branching count were two to five times higher in childhood-onset ET than controls, but the differences did not reach statistical significance. Childhood-onset and adult-onset ET had similar PC counts, torpedo counts, heterotopic PC counts, basket cell plexus rating, PC axonal recurrent collateral counts, PC thickened axonal profile count and PC axonal branching count. In conclusion, we found that childhood-onset and adult-onset ET shared similar pathological changes in the cerebellum. The data suggest that pathological changes we have observed in the cerebellum in ET are a part of the pathophysiological cascade of events in both forms of the disease and that both groups seem to reach the same pathological endpoints at a similar age of death. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
He, Xin; Links, Jonathan M.; Frey, Eric C.
2010-09-01
Quantum noise as well as anatomic and uptake variability in patient populations limits observer performance on a defect detection task in myocardial perfusion SPECT (MPS). The goal of this study was to investigate the relative importance of these two effects by varying acquisition time, which determines the count level, and assessing the change in performance on a myocardial perfusion (MP) defect detection task using both mathematical and human observers. We generated ten sets of projections of a simulated patient population with count levels ranging from 1/128 to around 15 times a typical clinical count level to simulate different levels of quantum noise. For the simulated population we modeled variations in patient, heart and defect size, heart orientation and shape, defect location, organ uptake ratio, etc. The projection data were reconstructed using the OS-EM algorithm with no compensation or with attenuation, detector response and scatter compensation (ADS). The images were then post-filtered and reoriented to generate short-axis slices. A channelized Hotelling observer (CHO) was applied to the short-axis images, and the area under the receiver operating characteristics (ROC) curve (AUC) was computed. For each noise level and reconstruction method, we optimized the number of iterations and cutoff frequencies of the Butterworth filter to maximize the AUC. Using the images obtained with the optimal iteration and cutoff frequency and ADS compensation, we performed human observer studies for four count levels to validate the CHO results. Both CHO and human observer studies demonstrated that observer performance was dependent on the relative magnitude of the quantum noise and the patient variation. When the count level was high, the patient variation dominated, and the AUC increased very slowly with changes in the count level for the same level of anatomic variability. When the count level was low, however, quantum noise dominated, and changes in the count level resulted in large changes in the AUC. This behavior agreed with a theoretical expression for the AUC as a function of quantum and anatomical noise levels. The results of this study demonstrate the importance of the tradeoff between anatomical and quantum noise in determining observer performance. For myocardial perfusion imaging, it indicates that, at current clinical count levels, there is some room to reduce acquisition time or injected activity without substantially degrading performance on myocardial perfusion defect detection.
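A minimal channelized Hotelling observer sketch on synthetic images, assuming Gaussian channels, white-noise backgrounds, and an invented defect; the study's SPECT images, channel set, and noise structure are of course different.

```python
import numpy as np

rng = np.random.default_rng(6)
npix, n_img = 32, 400

yy, xx = np.mgrid[:npix, :npix] - npix // 2
r2 = xx**2 + yy**2

# radially symmetric Gaussian channels (a simple, commonly used channel set)
channels = np.stack([np.exp(-r2 / (2 * s**2)).ravel() for s in (1.5, 3.0, 6.0, 12.0)], axis=1)

# synthetic images: white Gaussian background, plus a small Gaussian defect when present
signal = 0.5 * np.exp(-r2 / (2 * 2.0**2)).ravel()
absent = rng.normal(10.0, 2.0, (n_img, npix * npix))
present = rng.normal(10.0, 2.0, (n_img, npix * npix)) + signal

va, vp = absent @ channels, present @ channels    # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))           # pooled channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))   # Hotelling template in channel space
ta, tp = va @ w, vp @ w                           # observer test statistics

# AUC via the Mann-Whitney statistic
auc = (tp[:, None] > ta[None, :]).mean() + 0.5 * (tp[:, None] == ta[None, :]).mean()
print(f"CHO AUC = {auc:.3f}")
```

In the study, the quantum and anatomical noise levels enter through the images themselves; rerunning such an observer at different count levels is what produces the AUC-versus-count-level behavior described above.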
Point count length and detection of forest neotropical migrant birds
Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Zhang, Heyong; Guo, Jin; Zhao, Shuai; Wang, Tingfeng
2012-08-01
In this paper, we report a mathematical derivation of the probability density function (PDF) of the time interval between two successive photoelectrons of a laser heterodyne signal, and confirm the theoretical result by both numerical simulation and experiment. The PDF curve of the beat signal displays a series of fluctuations whose period and amplitude are determined by the beat frequency and the mixing efficiency, respectively. The beat frequency can therefore be derived from the frequency of these fluctuations once the PDF curve is measured. This frequency measurement method still works in conditions where the traditional Fast Fourier Transform (FFT) algorithm can hardly recover the correct peak of the beat frequency, for example when detecting an 80 MHz beat signal at a photon count rate of 8 Mcps (counts per second), which indicates an advantage of the PDF method.
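A sketch of the underlying idea (not the paper's derivation): photon arrivals form an inhomogeneous Poisson process whose intensity is modulated at the beat frequency, so the inter-arrival-time histogram ripples with the beat period. The rates here are scaled down to the kHz range so the simulation runs quickly, and the visibility, duration, and detrending step are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

f_beat = 10_000.0      # Hz (scaled-down stand-in for the MHz-range beat)
mean_rate = 1_000.0    # detected photons per second
visibility = 0.8       # mixing-efficiency-like modulation depth
duration = 200.0       # s

# inhomogeneous Poisson process via thinning
lam_max = mean_rate * (1.0 + visibility)
t_cand = np.sort(rng.uniform(0.0, duration, rng.poisson(lam_max * duration)))
p_keep = (1.0 + visibility * np.cos(2 * np.pi * f_beat * t_cand)) / (1.0 + visibility)
times = t_cand[rng.uniform(size=t_cand.size) < p_keep]

# inter-arrival histogram, divided by the plain exponential trend to expose the ripple
dt = np.diff(times)
lam_hat = 1.0 / dt.mean()
hist, edges = np.histogram(dt, bins=300, range=(0.0, 3.0e-3), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
ripple = hist / (lam_hat * np.exp(-lam_hat * centers)) - 1.0

# the ripple in the inter-arrival PDF oscillates at the beat frequency
spec = np.abs(np.fft.rfft(ripple - ripple.mean()))
freqs = np.fft.rfftfreq(ripple.size, d=centers[1] - centers[0])
peak = 1 + spec[1:].argmax()
print(f"PDF ripple peaks at ~ {freqs[peak]:.0f} Hz (beat frequency {f_beat:.0f} Hz)")
```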
Observation of the thunderstorm-related ground cosmic ray flux variations by ARGO-YBJ
NASA Astrophysics Data System (ADS)
Bartoli, B.; Bernardini, P.; Bi, X. J.; Cao, Z.; Catalanotti, S.; Chen, S. Z.; Chen, T. L.; Cui, S. W.; Dai, B. Z.; D'Amone, A.; Danzengluobu; De Mitri, I.; D'Ettorre Piazzoli, B.; Di Girolamo, T.; Di Sciascio, G.; Feng, C. F.; Feng, Zhaoyang; Feng, Zhenyong; Gao, W.; Gou, Q. B.; Guo, Y. Q.; He, H. H.; Hu, Haibing; Hu, Hongbo; Iacovacci, M.; Iuppa, R.; Jia, H. Y.; Labaciren; Li, H. J.; Liu, C.; Liu, J.; Liu, M. Y.; Lu, H.; Ma, L. L.; Ma, X. H.; Mancarella, G.; Mari, S. M.; Marsella, G.; Mastroianni, S.; Montini, P.; Ning, C. C.; Perrone, L.; Pistilli, P.; Salvini, P.; Santonico, R.; Shen, P. R.; Sheng, X. D.; Shi, F.; Surdo, A.; Tan, Y. H.; Vallania, P.; Vernetto, S.; Vigorito, C.; Wang, H.; Wu, C. Y.; Wu, H. R.; Xue, L.; Yang, Q. Y.; Yang, X. C.; Yao, Z. G.; Yuan, A. F.; Zha, M.; Zhang, H. M.; Zhang, L.; Zhang, X. Y.; Zhang, Y.; Zhao, J.; Zhaxiciren; Zhaxisangzhu; Zhou, X. X.; Zhu, F. R.; Zhu, Q. Q.; D'Alessandro, F.; ARGO-YBJ Collaboration
2018-02-01
A correlation between the secondary cosmic ray flux and the near-earth electric field intensity, measured during thunderstorms, has been found by analyzing the data of the ARGO-YBJ experiment, a full coverage air shower array located at the Yangbajing Cosmic Ray Laboratory (4300 m a. s. l., Tibet, China). The counting rates of showers with different particle multiplicities (m =1 , 2, 3, and ≥4 ) have been found to be strongly dependent upon the intensity and polarity of the electric field measured during the course of 15 thunderstorms. In negative electric fields (i.e., accelerating negative charges downwards), the counting rates increase with increasing electric field strength. In positive fields, the rates decrease with field intensity until a certain value of the field EFmin (whose value depends on the event multiplicity), above which the rates begin increasing. By using Monte Carlo simulations, we found that this peculiar behavior can be well described by the presence of an electric field in a layer of thickness of a few hundred meters in the atmosphere above the detector, which accelerates/decelerates the secondary shower particles of opposite charge, modifying the number of particles with energy exceeding the detector threshold. These results, for the first time to our knowledge, give a consistent explanation for the origin of the variation of the electron/positron flux observed for decades by high altitude cosmic ray detectors during thunderstorms.
Probing Jupiter's Radiation Environment with Juno-UVS
NASA Astrophysics Data System (ADS)
Kammer, J.; Gladstone, R.; Greathouse, T. K.; Hue, V.; Versteeg, M. H.; Davis, M. W.; Santos-Costa, D.; Becker, H. N.; Bolton, S. J.; Connerney, J. E. P.; Levin, S.
2017-12-01
While primarily designed to observe photon emission from the Jovian aurora, Juno's Ultraviolet Spectrograph (Juno-UVS) has also measured background count rates associated with penetrating high-energy radiation. These background counts are distinguishable from photon events, as they are generally spread evenly across the entire array of the Juno-UVS detector, and as the spacecraft spins, they set a baseline count rate higher than the sky background rate. During eight perijove passes, this background radiation signature has varied significantly on both short (spin-modulated) timescales and longer timescales (minutes to hours). We present comparisons of the Juno-UVS data across each of the eight perijove passes, with a focus on the count rate that can be clearly attributed to radiation effects rather than photon events. Once calibrated to determine the relationship between count rate and penetrating high-energy radiation (e.g., using existing GEANT models), these in situ measurements by Juno-UVS will provide additional constraints on radiation belt models close to the planet.
NASA Astrophysics Data System (ADS)
Béthermin, Matthieu; Wu, Hao-Yi; Lagache, Guilaine; Davidzon, Iary; Ponthieu, Nicolas; Cousin, Morgane; Wang, Lingyu; Doré, Olivier; Daddi, Emanuele; Lapi, Andrea
2017-11-01
Follow-up observations at high-angular resolution of bright submillimeter galaxies selected from deep extragalactic surveys have shown that the single-dish sources are composed of a blend of several galaxies. Consequently, number counts derived from low- and high-angular-resolution observations are in tension. This demonstrates the importance of resolution effects at these wavelengths and the need for realistic simulations to explore them. We built a new 2 deg2 simulation of the extragalactic sky from the far-infrared to the submillimeter. It is based on an updated version of the 2SFM (two star-formation modes) galaxy evolution model. Using global galaxy properties generated by this model, we used an abundance-matching technique to populate a dark-matter lightcone and thus simulate the clustering. We produced maps from this simulation and extracted the sources, and we show that the limited angular resolution of single-dish instruments has a strong impact on (sub)millimeter continuum observations. Taking into account these resolution effects, we reproduce a large set of observables, such as number counts, their evolution with redshift, and cosmic infrared background power spectra. Our simulation consistently describes the number counts from single-dish telescopes and interferometers. In particular, at 350 and 500 μm, we find that the number counts measured by Herschel between 5 and 50 mJy are biased towards high values by a factor of 2, and that the redshift distributions are biased towards low redshifts. We also show that the clustering has an important impact on the Herschel pixel histogram used to derive number counts from P(D) analysis. We find that the brightest galaxy in the beam of a 500 μm Herschel source contributes on average only 60% of the Herschel flux density, but that this number will rise to 95% for future millimeter surveys on 30 m-class telescopes (e.g., NIKA2 at IRAM). Finally, we show that the large number density of red Herschel sources found in observations but not in models might be an observational artifact caused by the combination of noise, resolution effects, and the steepness of color- and flux density distributions. Our simulation, called Simulated Infrared Dusty Extragalactic Sky (SIDES), is publicly available at http://cesam.lam.fr/sides.
Cammin, Jochen; Xu, Jennifer; Barber, William C.; Iwanczyk, Jan S.; Hartsough, Neal E.; Taguchi, Katsuyuki
2014-01-01
Purpose: Energy discriminating, photon-counting detectors (PCDs) are an emerging technology for computed tomography (CT) with various potential benefits for clinical CT. The photon energies measured by PCDs can be distorted due to the interactions of a photon with the detector and the interaction of multiple coincident photons. These effects result in distorted recorded x-ray spectra which may lead to artifacts in reconstructed CT images and inaccuracies in tissue identification. Model-based compensation techniques have the potential to account for the distortion effects. This approach requires only a small number of parameters and is applicable to a wide range of spectra and count rates, but it needs an accurate model of the spectral distortions occurring in PCDs. The purpose of this study was to develop a model of those spectral distortions and to evaluate the model using a PCD (model DXMCT-1; DxRay, Inc., Northridge, CA) and various x-ray spectra in a wide range of count rates. Methods: The authors hypothesize that the complex phenomena of spectral distortions can be modeled by: (1) separating them into count-rate independent factors that we call the spectral response effects (SRE), and count-rate dependent factors that we call the pulse pileup effects (PPE), (2) developing separate models for SRE and PPE, and (3) cascading the SRE and PPE models into a combined SRE+PPE model that describes PCD distortions at both low and high count rates. The SRE model describes the probability distribution of the recorded spectrum, with a photo peak and a continuum tail, given the incident photon energy. Model parameters were obtained from calibration measurements with three radioisotopes and then interpolated linearly for other energies. The PPE model used was developed in the authors’ previous work [K. Taguchi , “Modeling the performance of a photon counting x-ray detector for CT: Energy response and pulse pileup effects,” Med. Phys. 38(2), 1089–1102 (2011)]. The agreement between the x-ray spectra calculated by the cascaded SRE+PPE model and the measured spectra was evaluated for various levels of deadtime loss ratios (DLR) and incident spectral shapes, realized using different attenuators, in terms of the weighted coefficient of variation (COVW), i.e., the root mean square difference weighted by the statistical errors of the data and divided by the mean. Results: At low count rates, when DLR < 10%, the distorted spectra measured by the DXMCT-1 were in agreement with those calculated by SRE only, with COVW's less than 4%. At higher count rates, the measured spectra were also in agreement with the ones calculated by the cascaded SRE+PPE model; with PMMA as attenuator, COVW was 5.6% at a DLR of 22% and as small as 6.7% for a DLR as high as 55%. Conclusions: The x-ray spectra calculated by the proposed model agreed with the measured spectra over a wide range of count rates and spectral shapes. The SRE model predicted the distorted, recorded spectra with low count rates over various types and thicknesses of attenuators. The study also validated the hypothesis that the complex spectral distortions in a PCD can be adequately modeled by cascading the count-rate independent SRE and the count-rate dependent PPE. PMID:24694136
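A sketch of the cascade idea only: a count-rate-independent response matrix (a Gaussian photopeak plus a flat low-energy tail, standing in for the SRE) is applied to the incident spectrum, and a count-rate-dependent loss factor (here a simple paralyzable dead-time term, standing in for the PPE model) is applied afterwards. The matrix, spectrum, and dead time are invented, and real pulse pile-up would also distort the spectral shape, which this sketch ignores.

```python
import numpy as np

n_bins = 120
energies = np.linspace(20, 140, n_bins)          # keV

# toy incident spectrum: smooth, bremsstrahlung-like shape normalized to unit area
incident = np.exp(-(energies - 60.0) ** 2 / (2 * 25.0 ** 2))
incident /= incident.sum()

def sre_matrix(sigma=3.0, tail_fraction=0.2):
    """Toy response: Gaussian photopeak plus a flat tail below the true energy."""
    R = np.zeros((n_bins, n_bins))
    for j, e_true in enumerate(energies):
        peak = np.exp(-(energies - e_true) ** 2 / (2 * sigma ** 2))
        tail = (energies <= e_true).astype(float)
        R[:, j] = (1 - tail_fraction) * peak / peak.sum() + tail_fraction * tail / tail.sum()
    return R

R = sre_matrix()

def recorded_spectrum(incident_rate_cps, dead_time_s=100e-9):
    """Cascade: spectral response first, then a rate-dependent (paralyzable) loss."""
    distorted_shape = R @ incident                               # SRE: rate-independent
    live_fraction = np.exp(-incident_rate_cps * dead_time_s)     # PPE stand-in: rate-dependent
    return incident_rate_cps * live_fraction * distorted_shape   # recorded cps per energy bin

for rate in (1e5, 1e6, 5e6):
    rec = recorded_spectrum(rate)
    print(f"input {rate:9.0f} cps -> recorded {rec.sum():9.0f} cps "
          f"(deadtime loss {1 - rec.sum() / rate:5.1%})")
```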
Recurrence rate and magma effusion rate for the latest volcanism on Arsia Mons, Mars
NASA Astrophysics Data System (ADS)
Richardson, Jacob A.; Wilson, James A.; Connor, Charles B.; Bleacher, Jacob E.; Kiyosugi, Koji
2017-01-01
Magmatism and volcanism have evolved the Martian lithosphere, surface, and climate throughout the history of Mars. Constraining the rates of magma generation and timing of volcanism on the surface clarifies the ways in which magma and volcanic activity have shaped these Martian systems. The ages of lava flows on other planets are often estimated using impact crater counts, assuming that the number and size-distribution of impact craters per unit area reflect the time the lava flow has been on the surface and exposed to potential impacts. Here we show that impact crater age model uncertainty is reduced by adding stratigraphic information observed at locations where neighboring lavas abut each other, and demonstrate the significance of this reduction in age uncertainty for understanding the history of a volcanic field comprising 29 vents in the 110-km-diameter caldera of Arsia Mons, Mars. Each vent within this caldera produced lava flows several to tens of kilometers in length; these vents are likely among the youngest on Mars, since no impact craters in their lava flows are larger than 1 km in diameter. First, we modeled the age of each vent with impact crater counts performed on their corresponding lava flows and found very large age uncertainties for the ages of individual vents, often spanning the estimated age for the entire volcanic field. The age model derived from impact crater counts alone is broad and unimodal, with estimated peak activity in the field around 130 Ma. Next we applied our volcano event age model (VEAM), which uses a directed graph of stratigraphic relationships and random sampling of the impact crater age determinations to create alternative age models. Monte Carlo simulation was used to create 10,000 possible vent age sets. The recurrence rate of volcanism is calculated for each possible age set, and these rates are combined to calculate the median recurrence rate of all simulations. Applying this approach to the 29 volcanic vents, volcanism likely began around 200-300 Ma, then first peaked around 150 Ma, with an average production rate of 0.4 vents per Myr. The recurrence rate estimated including stratigraphic data is distinctly bimodal, with a second, lower peak in activity around 100 Ma. Volcanism then waned until the final vents were produced 10-90 Ma. Based on this model, volume flux is also bimodal; it reached a peak rate of 1-8 km³ Myr⁻¹ by 150 Ma and remained above half this rate until about 90 Ma, after which the volume flux diminished greatly. The onset of effusive volcanism from 200-150 Ma might be due to a transition of volcanic style away from explosive volcanism that emplaced tephra on the western flank of Arsia Mons, while the waning of volcanism after the 150 Ma peak might represent a larger-scale diminishing of volcanic activity at Arsia Mons related to the emplacement of flank apron lavas.
Recurrence Rate and Magma Effusion Rate for the Latest Volcanism on Arsia Mons, Mars
NASA Technical Reports Server (NTRS)
Richardson, Jacob A.; Wilson, James A.; Connor, Charles B.; Bleacher, Jacob E.; Kiyosugi, Koji
2016-01-01
Magmatism and volcanism have evolved the Martian lithosphere, surface, and climate throughout the history of Mars. Constraining the rates of magma generation and timing of volcanism on the surface clarifies the ways in which magma and volcanic activity have shaped these Martian systems. The ages of lava flows on other planets are often estimated using impact crater counts, assuming that the number and size-distribution of impact craters per unit area reflect the time the lava flow has been on the surface and exposed to potential impacts. Here we show that impact crater age model uncertainty is reduced by adding stratigraphic information observed at locations where neighboring lavas abut each other, and demonstrate the significance of this reduction in age uncertainty for understanding the history of a volcanic field comprising 29 vents in the 110-kilometer-diameter caldera of Arsia Mons, Mars. Each vent within this caldera produced lava flows several to tens of kilometers in length; these vents are likely among the youngest on Mars, since no impact craters in their lava flows are larger than 1 kilometer in diameter. First, we modeled the age of each vent with impact crater counts performed on their corresponding lava flows and found very large age uncertainties for the ages of individual vents, often spanning the estimated age for the entire volcanic field. The age model derived from impact crater counts alone is broad and unimodal, with estimated peak activity in the field around 130 Ma (megaannum, 1 million years). Next we applied our volcano event age model (VEAM), which uses a directed graph of stratigraphic relationships and random sampling of the impact crater age determinations to create alternative age models. Monte Carlo simulation was used to create 10,000 possible vent age sets. The recurrence rate of volcanism is calculated for each possible age set, and these rates are combined to calculate the median recurrence rate of all simulations. Applying this approach to the 29 volcanic vents, volcanism likely began around 200-300 Ma, then first peaked around 150 Ma, with an average production rate of 0.4 vents per Myr (million years). The recurrence rate estimated including stratigraphic data is distinctly bimodal, with a second, lower peak in activity around 100 Ma. Volcanism then waned until the final vents were produced 10-90 Ma. Based on this model, volume flux is also bimodal; it reached a peak rate of 1-8 cubic kilometers per million years by 150 Ma and remained above half this rate until about 90 Ma, after which the volume flux diminished greatly. The onset of effusive volcanism from 200-150 Ma might be due to a transition of volcanic style away from explosive volcanism that emplaced tephra on the western flank of Arsia Mons, while the waning of volcanism after the 150 Ma peak might represent a larger-scale diminishing of volcanic activity at Arsia Mons related to the emplacement of flank apron lavas.
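The VEAM resampling idea in the two abstracts above, drawing vent ages from their crater-count age distributions while honoring observed "older-than" relationships, can be illustrated with a compact rejection-sampling sketch. The age distributions, the toy stratigraphic graph, and the bin width below are all invented for illustration and are not the paper's inputs.

import numpy as np

rng = np.random.default_rng(0)

# Crater-count age models for four hypothetical vents: (mean, sigma) in Ma.
age_models = {"A": (150, 60), "B": (130, 70), "C": (110, 50), "D": (95, 40)}
# Observed stratigraphy: (older, younger) pairs where neighboring lavas abut.
older_than = [("A", "B"), ("B", "C")]

def sample_age_set(max_tries=1000):
    """Draw one vent age set consistent with the stratigraphic constraints."""
    for _ in range(max_tries):
        ages = {v: rng.normal(mu, sd) for v, (mu, sd) in age_models.items()}
        if all(ages[old] > ages[young] for old, young in older_than):
            return ages
    raise RuntimeError("constraints too tight for simple rejection sampling")

# Pool many plausible age sets and estimate the recurrence rate in 10 Myr bins.
bins = np.arange(0, 300, 10)
hist = np.zeros(len(bins) - 1)
n_sets = 10_000
for _ in range(n_sets):
    ages = np.array(list(sample_age_set().values()))
    hist += np.histogram(ages, bins=bins)[0]

rate = hist / n_sets / 10.0   # average vents per Myr in each bin
print(f"peak recurrence rate {rate.max():.3f} vents/Myr near {bins[rate.argmax()]:d} Ma")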
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
3D Hydrodynamic Simulation of Classical Novae Explosions
NASA Astrophysics Data System (ADS)
Kendrick, Coleman J.
2015-01-01
This project investigates the formation and lifecycle of classical novae and determines how parameters such as white dwarf mass, star mass, and separation affect the evolution of the rotating binary system. These parameters affect the accretion rate, frequency of the nova explosions, and light curves. Each particle in the simulation represents a volume of hydrogen gas and is initialized randomly in the outer shell of the companion star. The forces on each particle include gravity, centrifugal, Coriolis, friction, and Langevin forces. The friction and Langevin forces are used to model the viscosity and internal pressure of the gas. A velocity Verlet method with a one-second time step is used to compute velocities and positions of the particles. A new particle recycling method was developed, which was critical for computing an accurate and stable accretion rate and keeping the particle count reasonable. I used C++ and OpenCL to create my simulations and ran them on two Nvidia GTX580s. My simulations used up to 1 million particles and required up to 10 hours to complete. My simulation results for novae U Scorpii and DD Circinus are consistent with professional hydrodynamic simulations and observed experimental data (light curves and outburst frequencies). When the white dwarf mass is increased, the time between explosions decreases dramatically. My model was used to make the first prediction for the next outburst of nova DD Circinus. My simulations also show that the companion star blocks the expanding gas shell, leading to an asymmetrical expanding shell.
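Since the abstract specifies a velocity Verlet integrator with a one-second time step, a minimal sketch of that update is shown below. The force law here is a placeholder (point-mass gravity toward the accretor plus a linear drag standing in for the friction term), and every constant is invented; it is not the project's force model.

import numpy as np

G_M = 1.3e20      # gravitational parameter of the accreting star (m^3/s^2), illustrative
GAMMA = 1.0e-4    # linear drag coefficient standing in for the friction force (1/s)
DT = 1.0          # one-second time step, as in the abstract

def accel(pos, vel):
    """Placeholder acceleration: point-mass gravity toward the origin plus drag."""
    r = np.linalg.norm(pos)
    return -G_M * pos / r**3 - GAMMA * vel

def velocity_verlet_step(pos, vel):
    """One velocity Verlet update with a velocity-averaged acceleration."""
    a0 = accel(pos, vel)
    new_pos = pos + vel * DT + 0.5 * a0 * DT**2
    a1 = accel(new_pos, vel)              # note: drag is velocity dependent, so this
    new_vel = vel + 0.5 * (a0 + a1) * DT  # simple form treats it only approximately
    return new_pos, new_vel

pos = np.array([7.0e8, 0.0, 0.0])   # meters
vel = np.array([0.0, 4.0e5, 0.0])   # meters per second
for _ in range(3600):               # integrate one hour of simulated time
    pos, vel = velocity_verlet_step(pos, vel)
print(pos, vel)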
NASA Astrophysics Data System (ADS)
Bristow, Quentin
1990-03-01
The occurrence rates of pulse strings, or sequences of pulses with interarrival times less than the resolving time of the pulse-height analysis system used to acquire spectra, are derived from theoretical considerations. Logic circuits were devised to make experimental measurements of multiple pulse string occurrence rates in the output from a scintillation detector over a wide range of count rates. Markov process theory was used to predict state transition rates in the logic circuits, enabling the experimental data to be checked rigorously for conformity with those predicted for a Poisson distribution. No fundamental discrepancies were observed. Monte Carlo simulations, incorporating criteria for pulse pileup inherent in the operation of modern analog-to-digital converters, were used to generate pileup spectra due to coincidences between two pulses (first-order pileup) and three pulses (second-order pileup) for different semi-Gaussian pulse shapes. Coincidences between pulses in a single channel produced a basic probability density function spectrum. The use of a flat spectrum showed that first-order pileup distorts the spectrum into a linear ramp with a pileup tail. A correction algorithm was successfully applied to correct entire spectra (simulated and real) for first- and second-order pileups.
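The pileup mechanism examined in this study, coincidences between pulses whose interarrival times fall below the resolving time, can be reproduced with a few lines of Monte Carlo: draw exponential interarrival times for a Poisson pulse train and sum the amplitudes of pulses that arrive within the resolving time of a trigger. The sketch below deliberately treats piled-up pulses as simple amplitude sums; the semi-Gaussian pulse shapes and ADC-specific pileup criteria of the study are not modeled, and all rates are invented.

import numpy as np

rng = np.random.default_rng(1)

RATE = 2.0e5          # mean pulse rate (counts per second), illustrative
TAU = 2.0e-6          # resolving time of the analysis system (s), illustrative
N_PULSES = 500_000

# Poisson pulse train: exponential interarrival times, flat amplitude spectrum.
arrivals = np.cumsum(rng.exponential(1.0 / RATE, N_PULSES))
amplitudes = rng.uniform(0.2, 1.0, N_PULSES)

recorded = []
i = 0
while i < N_PULSES:
    total = amplitudes[i]
    j = i + 1
    while j < N_PULSES and arrivals[j] - arrivals[i] < TAU:
        total += amplitudes[j]      # crude pileup: coincident amplitudes simply add
        j += 1
    recorded.append(total)
    i = j

recorded = np.asarray(recorded)
print("input pulses:", N_PULSES, " recorded events:", recorded.size)
print("fraction of recorded events above the single-pulse maximum:",
      np.mean(recorded > 1.0))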
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
2011-01-01
Background It is unclear whether antiretroviral (ART) naive HIV-positive individuals with high CD4 counts have a raised mortality risk compared with the general population, but this is relevant for considering earlier initiation of antiretroviral therapy. Methods Pooling data from 23 European and North American cohorts, we calculated country-, age-, sex-, and year-standardised mortality ratios (SMRs), stratifying by risk group. Included patients had at least one pre-ART CD4 count above 350 cells/mm3. The association between CD4 count and death rate was evaluated using Poisson regression methods. Findings Of 40,830 patients contributing 80,682 person-years of follow up with CD4 count above 350 cells/mm3, 419 (1.0%) died. The SMRs (95% confidence interval) were 1.30 (1.06-1.58) in homosexual men, and 2.94 (2.28-3.73) and 9.37 (8.13-10.75) in the heterosexual and IDU risk groups respectively. CD4 count above 500 cells/mm3 was associated with a lower death rate than 350-499 cells/mm3: adjusted rate ratios (95% confidence intervals) for 500-699 cells/mm3 and above 700 cells/mm3 were 0.77 (0.61-0.95) and 0.66 (0.52-0.85) respectively. Interpretation In HIV-infected ART-naive patients with high CD4 counts, death rates were raised compared with the general population. In homosexual men this was modest, suggesting that a proportion of the increased risk in other groups is due to confounding by other factors. Even in this high CD4 count range, lower CD4 count was associated with raised mortality. PMID:20638118
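As a worked illustration of the standardized mortality ratio used above, the sketch below computes an SMR as observed deaths divided by expected deaths, where the expectation applies stratum-specific general-population death rates to the cohort's person-years, with an approximate Poisson-based 95% confidence interval. The strata, rates, and counts are invented for the example and are not the study's data.

import math

# Person-years of follow-up and observed deaths per age stratum (invented numbers).
cohort = {
    "30-39": {"person_years": 30_000, "deaths": 90},
    "40-49": {"person_years": 35_000, "deaths": 150},
    "50-59": {"person_years": 15_000, "deaths": 120},
}
# Matching general-population death rates per person-year (invented numbers).
reference_rate = {"30-39": 0.0015, "40-49": 0.0035, "50-59": 0.0080}

observed = sum(s["deaths"] for s in cohort.values())
expected = sum(s["person_years"] * reference_rate[k] for k, s in cohort.items())

smr = observed / expected
# Approximate 95% CI assuming the observed death count is Poisson distributed.
se_log = 1.0 / math.sqrt(observed)
lo, hi = smr * math.exp(-1.96 * se_log), smr * math.exp(1.96 * se_log)
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")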
A Prescription for List-Mode Data Processing Conventions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef
There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error-values associated with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we will present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.
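The distinction drawn above between processed time and wall-clock time can be made concrete with a small sketch that builds a multiplicity histogram from a list-mode pulse stream using non-overlapping gates of fixed width and accumulates the processed time explicitly. This is an illustrative convention only, deliberately simpler than shift-register electronics, and it is not the prescription defined in the report.

import numpy as np

def multiplicity_histogram(timestamps_s, gate_s=64e-6, max_mult=16):
    """Build a multiplicity histogram from sorted list-mode timestamps.

    Each triggering pulse opens one gate of width gate_s; the pulses falling
    inside the gate (excluding the trigger) define the multiplicity.  Returns
    the histogram and the processed time actually spent inside gates.
    """
    hist = np.zeros(max_mult + 1, dtype=np.int64)
    processed_time = 0.0
    i, n = 0, len(timestamps_s)
    while i < n:
        t0 = timestamps_s[i]
        j = i + 1
        while j < n and timestamps_s[j] - t0 < gate_s:
            j += 1
        hist[min(j - i - 1, max_mult)] += 1
        processed_time += gate_s
        i = j    # non-overlapping gates: skip pulses already used in this gate
    return hist, processed_time

# Example: a ~1 kcps Poisson pulse train observed for roughly 100 s.
rng = np.random.default_rng(2)
ts = np.cumsum(rng.exponential(1e-3, 100_000))
hist, tp = multiplicity_histogram(ts)
print("multiplicity histogram:", hist[:5], "...")
print(f"processed time tp = {tp:.2f} s (vs. wall-clock span {ts[-1] - ts[0]:.1f} s)")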
Study of Compton suppression for use in spent nuclear fuel assay
NASA Astrophysics Data System (ADS)
Bender, Sarah
The focus of this study has been to assess Compton suppressed gamma-ray detection systems for the multivariate analysis of spent nuclear fuel. This objective has been achieved using direct measurement of samples of irradiated fuel elements in two geometrical configurations with Compton suppression systems. In order to address the objective to quantify the number of additionally resolvable photopeaks, direct Compton suppressed spectroscopic measurements of spent nuclear fuel in two configurations were performed: as intact fuel elements and as dissolved feed solutions. These measurements directly assessed and quantified the differences in measured gamma-ray spectrum from the application of Compton suppression. Several irradiated fuel elements of varying cooling time from the Penn State Breazeale Reactor spent fuel inventory were measured using three Compton suppression systems that utilized different primary detectors: HPGe, LaBr3, and NaI(Tl). The application of Compton suppression using a LaBr3 primary detector to the measurement of the current core fuel element, which presented the highest count rate, allowed four additional spectral features to be resolved. In comparison, the HPGe-CSS was able to resolve eight additional photopeaks as compared to the standalone HPGe measurement. Measurements with the NaI(Tl) primary detector were unable to resolve any additional peaks, due to its relatively low resolution. Samples of Approved Test Material (ATM) commercial fuel elements were obtained from Pacific Northwest National Laboratory. The samples had been processed using the beginning stages of the PUREX method and represented the unseparated feed solution from a reprocessing facility. Compton suppressed measurements of the ATM fuel samples were recorded inside the guard detector annulus, to simulate the siphoning of small quantities from the main process stream for long dwell measurement periods. Photopeak losses were observed in the measurements of the dissolved ATM fuel samples because the spectra was recorded from the source in very close proximity to the detector and surrounded by the guard annulus, so the detection probability is very high. Though this configuration is optimal for a Compton suppression system for the measurement of low count rate samples, measurement of high count rate samples in the enclosed arrangement leads to sum peaks in both the suppressed and unsuppressed spectra and losses to photopeak counts in the suppressed spectra. No additional photopeaks were detected using Compton suppression with this geometry. A detector model was constructed that can accurately simulate a Compton suppressed spectral measurement of radiation from spent nuclear fuel using HPGe or LaBr3 detectors. This is the first detector model capable of such an accomplishment. The model uses the Geant4 toolkit coupled with the RadSrc application and it accepts spent fuel composition data in list form. The model has been validated using dissolved ATM fuel samples in the standard, enclosed geometry of the PSU HPGe-CSS. The model showed generally good agreement with both the unsuppressed and suppressed measured fuel sample spectra, however the simulation is more appropriate for the generation of gamma-ray spectra in the beam source configuration. Photopeak losses due to cascade decay emissions in the Compton suppressed spectra were not appropriately managed by the simulation. 
Compton suppression would be a beneficial addition to NDA process monitoring systems if oriented such that the gamma-ray photons are collimated to impinge the primary detector face as a beam. The analysis has shown that peak losses through accidental coincidences are minimal and the reduction in the Compton continuum allows additional peaks to be resolved. (Abstract shortened by UMI.).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
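In its simplest form, the Bayesian step described above evaluates a Poisson likelihood of the measured counts in each one-second bin against the counts predicted for a grid of candidate source positions and normalizes over the grid. The sketch below does exactly that for an isotropic detector on a straight flight line, assuming the source strength is known for brevity; the 1/r^2-plus-background response, the grid, and all rates are placeholders, and none of the limited-FOV or likelihood-ratio machinery of the paper is included.

import numpy as np

rng = np.random.default_rng(3)

ALT = 80.0                                    # flight altitude in meters, illustrative
track_x = np.linspace(-500.0, 500.0, 101)     # one sample per second along the track
det_xy = np.column_stack([track_x, np.zeros_like(track_x)])

def expected_counts(src_xy, strength, background=5.0):
    """Isotropic 1/r^2 response plus a constant background, per 1 s bin."""
    r2 = np.sum((det_xy - src_xy) ** 2, axis=1) + ALT**2
    return strength / r2 + background

def log_likelihood(src_xy, strength, measured):
    mu = expected_counts(np.asarray(src_xy, dtype=float), strength)
    # Poisson log-likelihood up to a constant; the log-factorial term cancels
    # when comparing candidate positions against the same measurement.
    return float(np.sum(measured * np.log(mu) - mu))

# Simulate one flyover of a true source, then scan a grid of test positions.
true_src, strength = np.array([120.0, 30.0]), 5.0e7
measured = rng.poisson(expected_counts(true_src, strength))

xs, ys = np.linspace(-400, 400, 81), np.linspace(-200, 200, 41)
loglike = np.array([[log_likelihood((x, y), strength, measured) for x in xs] for y in ys])
post = np.exp(loglike - loglike.max())
post /= post.sum()
iy, ix = np.unravel_index(post.argmax(), post.shape)
print("MAP source position estimate:", xs[ix], ys[iy])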
Investigation of ultra low-dose scans in the context of quantum-counting clinical CT
NASA Astrophysics Data System (ADS)
Weidinger, T.; Buzug, T. M.; Flohr, T.; Fung, G. S. K.; Kappler, S.; Stierstorfer, K.; Tsui, B. M. W.
2012-03-01
In clinical computed tomography (CT), images from patient examinations taken with conventional scanners exhibit noise characteristics governed by electronics noise, when scanning strongly attenuating obese patients or with an ultra-low X-ray dose. Unlike CT systems based on energy integrating detectors, a system with a quantum counting detector does not suffer from this drawback. Instead, the noise from the electronics mainly affects the spectral resolution of these detectors. Therefore, it does not contribute to the image noise in spectrally non-resolved CT images. This promises improved image quality due to image noise reduction in scans obtained from clinical CT examinations with lowest X-ray tube currents or obese patients. To quantify the benefits of quantum counting detectors in clinical CT we have carried out an extensive simulation study of the complete scanning and reconstruction process for both kinds of detectors. The simulation chain encompasses modeling of the X-ray source, beam attenuation in the patient, and calculation of the detector response. Moreover, in each case the subsequent image preprocessing and reconstruction is modeled as well. The simulation-based, theoretical evaluation is validated by experiments with a novel prototype quantum counting system and a Siemens Definition Flash scanner with a conventional energy integrating CT detector. We demonstrate and quantify the improvement from image noise reduction achievable with quantum counting techniques in CT examinations with ultra-low X-ray dose and strong attenuation.
Exploiting current-generation graphics hardware for synthetic-scene generation
NASA Astrophysics Data System (ADS)
Tanner, Michael A.; Keen, Wayne A.
2010-04-01
Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA Corporation GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA), and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yanqi; Zhang, Limin; Li, Jiao; Zhou, Zhongxing; Zhao, Huijuan; Gao, Feng
2016-04-01
We present a generalized strategy for direct reconstruction in pharmacokinetic diffuse fluorescence tomography (DFT) with CT-analogous scanning mode, which can accomplish one-step reconstruction of the indocyanine-green pharmacokinetic-rate images within in vivo small animals by incorporating the compartmental kinetic model into an adaptive extended Kalman filtering scheme and using an instantaneous sampling dataset. This scheme, compared with the established indirect and direct methods, eliminates the interim error of the DFT inversion and relaxes the expensive requirement of the instrument for obtaining highly time-resolved data-sets of complete 360 deg projections. The scheme is validated by two-dimensional simulations for the two-compartment model and pilot phantom experiments for the one-compartment model, suggesting that the proposed method can estimate the compartmental concentrations and the pharmacokinetic-rates simultaneously with a fair quantitative and localization accuracy, and is well suited to cost-effective and dense-sampling instrumentation based on the highly-sensitive photon counting technique.
Simulated fissioning of uranium and testing of the fission-track dating method
McGee, V.E.; Johnson, N.M.; Naeser, C.W.
1985-01-01
A computer program (FTD-SIM) faithfully simulates the fissioning of 238U with time and 235U with neutron dose. The simulation is based on first principles of physics where the fissioning of 238U with the flux of time is described by Ns = λf 238U t and the fissioning of 235U with the fluence of neutrons is described by Ni = σ 235U Φ. The Poisson law is used to set the stochastic variation of fissioning within the uranium population. The life history of a given crystal can thus be traced under an infinite variety of age and irradiation conditions. A single dating attempt or up to 500 dating attempts on a given crystal population can be simulated by specifying the age of the crystal population, the size and variation in the areas to be counted, the amount and distribution of uranium, the neutron dose to be used and its variation, and the desired ratio of 238U to 235U. A variety of probability distributions can be applied to uranium and counting-area. The Price and Walker age equation is used to estimate age. The output of FTD-SIM includes the tabulated results of each individual dating attempt (sample) on demand and/or the summary statistics and histograms for multiple dating attempts (samples) including the sampling age. An analysis of the results from FTD-SIM shows that: (1) The external detector method is intrinsically more precise than the population method. (2) For the external detector method a correlation between spontaneous track count, Ns, and induced track count, Ni, results when the population of grains has a stochastic uranium content and/or when the counting areas between grains are stochastic. For the population method no correlation can exist. (3) In the external detector method the sampling distribution of age is independent of the number of grains counted. In the population method the sampling distribution of age is highly dependent on the number of grains counted. (4) Grains with zero-track counts, either in Ns or Ni, are an integral part of fissioning theory and under certain circumstances must be included in any estimate of age. (5) In estimating standard error of age the standard error of Ns, Ni, and Φ must be accurately estimated and propagated through the age equation. Several statistical models are presently available to do so. © 1985.
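Finding (2) above, that the external detector method induces a correlation between Ns and Ni whenever uranium content varies from grain to grain, is easy to reproduce with a few lines of Poisson sampling. The per-grain uranium distribution and proportionality constants below are arbitrary, and the age equation itself is not evaluated.

import numpy as np

rng = np.random.default_rng(4)
n_grains = 500

# Per-grain uranium content (arbitrary units) drawn from a gamma distribution,
# so spontaneous and induced track counts scale with the same grain-specific factor.
uranium = rng.gamma(shape=2.0, scale=1.0, size=n_grains)

mean_ns_per_u = 8.0    # arbitrary proportionality for spontaneous tracks
mean_ni_per_u = 20.0   # arbitrary proportionality for induced tracks

ns = rng.poisson(mean_ns_per_u * uranium)   # external detector method: the same
ni = rng.poisson(mean_ni_per_u * uranium)   # grain supplies both counts

print(f"correlation between Ns and Ni across grains: r = {np.corrcoef(ns, ni)[0, 1]:.2f}")

# With constant uranium content the correlation disappears.
ns_u = rng.poisson(mean_ns_per_u * 2.0, n_grains)
ni_u = rng.poisson(mean_ni_per_u * 2.0, n_grains)
print(f"with constant uranium: r = {np.corrcoef(ns_u, ni_u)[0, 1]:.2f}")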
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.
A unified genetic association test robust to latent population structure for a count phenotype.
Song, Minsun
2018-06-04
Confounding caused by latent population structure in genome-wide association studies has been a big concern despite the success of genome-wide association studies at identifying genetic variants associated with complex diseases. In particular, because of the growing interest in association mapping using count phenotype data, it would be interesting to develop a testing framework for genetic associations that is immune to population structure when phenotype data consist of count measurements. Here, I propose a solution for testing associations between single nucleotide polymorphisms and a count phenotype in the presence of an arbitrary population structure. I consider a classical range of models for count phenotype data. Under these models, a unified test for genetic associations that protects against confounding was derived. An algorithm was developed to efficiently estimate the parameters that are required to fit the proposed model. I illustrate the proposed approach using simulation studies and an empirical study. Both simulated and real-data examples suggest that the proposed method successfully corrects population structure. Copyright © 2018 John Wiley & Sons, Ltd.
THE USE OF QUENCHING IN A LIQUID SCINTILLATION COUNTER FOR QUANTITATIVE ANALYSIS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foster, G.V.
1963-01-01
Quenching was used to quantitatively determine the amount of quenching agent present. A sealed promethium-147 source was prepared to be used for the count rate determinations. Two methods to determine the amount of quenching agent present in a sample were developed. One method related the count rate of a sample containing a quenching agent to the amount of quenching agent present. Calibration curves were plotted using both color and chemical quenchers. The quenching agents used were: F.D.C. Orange No. 2, F.D.C. Yellow No. 3, F.D.C. Yellow No. 4, Scarlet Red, acetone, benzaldehyde, and carbon tetrachloride. The color quenchers gave a linear relationship, while the chemical quenchers gave a non-linear relationship. Quantities of the color quenchers between about 0.008 mg and 0.100 mg can be determined with an error less than 5%. The calibration curves were found to be usable over a long period of time. The other method related the change in the ratio of the count rates in two voltage windows to the amount of quenching agent present. The quenchers mentioned above were used. Calibration curves were plotted for both the color and chemical quenchers. The relationships of ratio versus amount of quencher were non-linear in each case. It was shown that the reproducibility of the count rate and the ratio was independent of the amount of quencher present but was dependent on the count rate. At count rates above 10,000 counts per minute the reproducibility was better than 1%. (TCO)
Performance evaluation of the Ingenuity TF PET/CT scanner with a focus on high count-rate conditions
NASA Astrophysics Data System (ADS)
Kolthammer, Jeffrey A.; Su, Kuan-Hao; Grover, Anu; Narayanan, Manoj; Jordan, David W.; Muzic, Raymond F.
2014-07-01
This study evaluated the positron emission tomography (PET) imaging performance of the Ingenuity TF 128 PET/computed tomography (CT) scanner which has a PET component that was designed to support a wider radioactivity range than is possible with those of Gemini TF PET/CT and Ingenuity TF PET/MR. Spatial resolution, sensitivity, count rate characteristics and image quality were evaluated according to the NEMA NU 2-2007 standard and ACR phantom accreditation procedures; these were supplemented by additional measurements intended to characterize the system under conditions that would be encountered during quantitative cardiac imaging with 82Rb. Image quality was evaluated using a hot spheres phantom, and various contrast recovery and noise measurements were made from replicated images. Timing and energy resolution, dead time, and the linearity of the image activity concentration, were all measured over a wide range of count rates. Spatial resolution (4.8-5.1 mm FWHM), sensitivity (7.3 cps kBq-1), peak noise-equivalent count rate (124 kcps), and peak trues rate (365 kcps) were similar to those of the Gemini TF PET/CT. Contrast recovery was higher with a 2 mm, body-detail reconstruction than with a 4 mm, body reconstruction, although the precision was reduced. The noise equivalent count rate peak was broad (within 10% of peak from 241-609 MBq). The activity measured in phantom images was within 10% of the true activity for count rates up to those observed in 82Rb cardiac PET studies.
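Noise-equivalent count rate, one of the quantities characterized above, is a simple function of the trues, scatter, and randoms rates. The sketch below evaluates the commonly quoted form NECR = T^2 / (T + S + kR), with k = 1 or 2 depending on how randoms are estimated; the numerical rates are illustrative and are not the Ingenuity TF measurements.

def necr(trues_cps, scatter_cps, randoms_cps, k=1.0):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + k*R).

    k = 1 is often used with a noiseless randoms estimate and k = 2 with
    delayed-window randoms subtraction.
    """
    t, s, r = trues_cps, scatter_cps, randoms_cps
    return t * t / (t + s + k * r)

# Illustrative numbers only.
print(f"NECR = {necr(300e3, 150e3, 200e3, k=1.0) / 1e3:.0f} kcps")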
Schmitz, Christoph; Eastwood, Brian S.; Tappan, Susan J.; Glaser, Jack R.; Peterson, Daniel A.; Hof, Patrick R.
2014-01-01
Stereologic cell counting has had a major impact on the field of neuroscience. A major bottleneck in stereologic cell counting is that the user must manually decide whether or not each cell is counted according to three-dimensional (3D) stereologic counting rules by visual inspection within hundreds of microscopic fields-of-view per investigated brain or brain region. Reliance on visual inspection forces stereologic cell counting to be very labor-intensive and time-consuming, and is the main reason why biased, non-stereologic two-dimensional (2D) “cell counting” approaches have remained in widespread use. We present an evaluation of the performance of modern automated cell detection and segmentation algorithms as a potential alternative to the manual approach in stereologic cell counting. The image data used in this study were 3D microscopic images of thick brain tissue sections prepared with a variety of commonly used nuclear and cytoplasmic stains. The evaluation compared the numbers and locations of cells identified unambiguously and counted exhaustively by an expert observer with those found by three automated 3D cell detection algorithms: nuclei segmentation from the FARSIGHT toolkit, nuclei segmentation by 3D multiple level set methods, and the 3D object counter plug-in for ImageJ. Of these methods, FARSIGHT performed best, with true-positive detection rates between 38 and 99% and false-positive rates from 3.6 to 82%. The results demonstrate that the current automated methods suffer from lower detection rates and higher false-positive rates than are acceptable for obtaining valid estimates of cell numbers. Thus, at present, stereologic cell counting with manual decision for object inclusion according to unbiased stereologic counting rules remains the only adequate method for unbiased cell quantification in histologic tissue sections. PMID:24847213
Background Conditions for the October 29, 2003 Solar Flare by the AVS-F Apparatus Data
NASA Astrophysics Data System (ADS)
Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Lyapin, A. R.; Troitskaya, E. V.
The background model for the AVS-F apparatus onboard the CORONAS-F satellite for the October 29, 2003 X10-class solar flare is discussed in the presented work. This background model was developed for the AVS-F count rates in the low- and high-energy spectral ranges, both in individual channels and summarized. Count rates were approximated by high-order polynomials taking into account the mean count rate in the geomagnetic equatorial region at the different orbit parts and the Kp-index averaged over 5 bins in the time interval from -24 to -12 hours before the time of geomagnetic equator passing. The observed averaged count rates at the equator in the region of geomagnetic latitude ±5° and the estimated minimum count rate values are in agreement within statistical errors for all selected orbit parts used for background modeling. This model will be used to refine the estimated energy of the spectral features registered during the solar flare and for detailed analysis of their temporal profile behavior both in the corresponding energy bands and in the summarized energy range.
Extreme Ultraviolet Explorer observations of the magnetic cataclysmic variable RE 1938-461
NASA Technical Reports Server (NTRS)
Warren, John K.; Vallerga, John V.; Mauche, Christopher W.; Mukai, Koji; Siegmund, Oswald H. W.
1993-01-01
The magnetic cataclysmic variable RE 1938-461 was observed by the Extreme Ultraviolet Explorer (EUVE) Deep Survey instrument on 1992 July 8-9 during in-orbit calibration. It was detected in the Lexan/ boron (65-190 A) band, with a quiescent count rate of 0.0062 +/- 0.0017/s, and was not detected in the aluminum/carbon (160-360 A) band. The Lexan/boron count rate is lower than the corresponding ROSAT wide-field camera Lexan/boron count rate. This is consistent with the fact that the source was in a low state during an optical observation performed just after the EUVE observation, whereas it was in an optical high state during the ROSAT observation. The quiescent count rates are consistent with a virtual cessation of accretion. Two transient events lasting about 1 hr occurred during the Lexan/boron pointing, the second at a count rate of 0.050 +/- 0.006/s. This appears to be the first detection of an EUV transient during the low state of a magnetic cataclysmic variable. We propose two possible explanations for the transient events.
Crater Age and Hydrogen Content in Lunar Regolith from LEND Neutron Data
NASA Astrophysics Data System (ADS)
Sanin, Anton; Starr, Richard; Litvak, Maxim; Petro, Noah; Mitrofanov, Igor
2017-04-01
We are presenting an analysis of Lunar Exploration Neutron Detector (LEND) epithermal neutron count rates for a large set of mid-latitude craters. Epithermal neutron count rates for crater interiors measured by the LEND Sensor for Epithermal Neutrons (SETN) were compared to crater exteriors for 322 craters. An increase in relative count rate at about 9-sigma confidence level was found, consistent with a lower hydrogen content. A smaller subset of 31 craters, all located near three Copernican era craters, Jackson, Tycho, and Necho, also shows a significant increase in Optical Maturity parameter implying an immature regolith. The increase in SETN count rate for these craters is greater than the increase for the full set of craters by more than a factor of two.
Crater Age and Hydrogen Content in Lunar Regolith from LEND Neutron Data
NASA Technical Reports Server (NTRS)
Starr, Richard D.; Litvak, Maxim L.; Petro, Noah E.; Mitrofanov, Igor G.; Boynton, William V.; Chin, Gordon; Livengood, Timothy A.; McClanahan, Timothy P.; Sanin, Anton B.; Sagdeev, Roald Z.;
2017-01-01
Analysis of Lunar Exploration Neutron Detector (LEND) neutron count rates for a large set of mid-latitude craters provides evidence for lower hydrogen content in the crater interiors compared to typical highland values. Epithermal neutron count rates for crater interiors measured by the LEND Sensor for Epithermal Neutrons (SETN) were compared to crater exteriors for 301 craters and displayed an increase in mean count rate at the approx. 9-sigma confidence level, consistent with a lower hydrogen content. A smaller subset of 31 craters also shows a significant increase in Optical Maturity parameter implying an immature regolith. The increase in SETN count rate for these craters is greater than the increase for the full set of craters by more than a factor of two.
A cylindrical SPECT camera with de-centralized readout scheme
NASA Astrophysics Data System (ADS)
Habte, F.; Stenström, P.; Rillbert, A.; Bousselham, A.; Bohm, C.; Larsson, S. A.
2001-09-01
An optimized brain single photon emission computed tomograph (SPECT) camera is being designed at Stockholm University and Karolinska Hospital. The design goal is to achieve high sensitivity, high count rate, and high spatial resolution. The sensitivity is achieved by using a cylindrical crystal, which gives a closed geometry with large solid angles. A de-centralized readout scheme, where only a local environment around the light excitation is read out, supports high count rates. The high resolution is achieved by using an optimized crystal configuration. A 12 mm crystal plus 12 mm light guide combination gave an intrinsic spatial resolution better than 3.5 mm (140 keV) in a prototype system. Simulations show that a modified configuration can improve this value. A cylindrical configuration with a rotating collimator significantly simplifies the mechanical design of the gantry. The data acquisition and control system uses early digitization and subsequent digital signal processing to extract timing and amplitude information, and monitors the position of the collimator. The readout system consists of 12 or more modules, each based on programmable logic and a digital signal processor. The modules send data to a PC file server-reconstruction engine via a Firewire (IEEE-1394) network.
Yang, Songshan; Cranford, James A; Jester, Jennifer M; Li, Runze; Zucker, Robert A; Buu, Anne
2017-02-28
This study proposes a time-varying effect model for examining group differences in trajectories of zero-inflated count outcomes. The motivating example demonstrates that this zero-inflated Poisson model allows investigators to study group differences in different aspects of substance use (e.g., the probability of abstinence and the quantity of alcohol use) simultaneously. The simulation study shows that the accuracy of estimation of trajectory functions improves as the sample size increases; the accuracy under equal group sizes is only higher when the sample size is small (100). In terms of the performance of the hypothesis testing, the type I error rates are close to their corresponding significance levels under all settings. Furthermore, the power increases as the alternative hypothesis deviates more from the null hypothesis, and the rate of this increasing trend is higher when the sample size is larger. Moreover, the hypothesis test for the group difference in the zero component tends to be less powerful than the test for the group difference in the Poisson component. Copyright © 2016 John Wiley & Sons, Ltd.
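The zero-inflated Poisson outcome underlying this model is straightforward to simulate: a subject produces a structural zero (abstinence) with some probability and otherwise a Poisson count whose mean can differ by group and by time. The logistic and log-linear trajectories and every coefficient in the sketch below are invented for illustration and do not come from the paper.

import numpy as np

rng = np.random.default_rng(6)

def simulate_zip(n_subjects, n_times, group):
    """Zero-inflated Poisson trajectories for one group.

    p_zero(t): probability of a structural zero (abstinence) at time t.
    mu(t): Poisson mean for the count part (e.g., drinks) at time t.
    Group 1 gets a slightly lower abstinence probability and a higher mean.
    """
    t = np.linspace(0, 1, n_times)
    logit_p = 0.5 - 1.0 * t - 0.4 * group
    p_zero = 1.0 / (1.0 + np.exp(-logit_p))
    mu = np.exp(0.2 + 0.8 * t + 0.3 * group)
    structural_zero = rng.random((n_subjects, n_times)) < p_zero
    counts = rng.poisson(mu, size=(n_subjects, n_times))
    counts[structural_zero] = 0
    return counts

g0 = simulate_zip(200, 30, group=0)
g1 = simulate_zip(200, 30, group=1)
print("mean count, group 0 vs group 1:", g0.mean(), g1.mean())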
Modeling structured population dynamics using data from unmarked individuals
Grant, Evan H. Campbell; Zipkin, Elise; Thorson, James T.; See, Kevin; Lynch, Heather J.; Kanno, Yoichiro; Chandler, Richard; Letcher, Benjamin H.; Royle, J. Andrew
2014-01-01
The study of population dynamics requires unbiased, precise estimates of abundance and vital rates that account for the demographic structure inherent in all wildlife and plant populations. Traditionally, these estimates have only been available through approaches that rely on intensive mark–recapture data. We extended recently developed N-mixture models to demonstrate how demographic parameters and abundance can be estimated for structured populations using only stage-structured count data. Our modeling framework can be used to make reliable inferences on abundance as well as recruitment, immigration, stage-specific survival, and detection rates during sampling. We present a range of simulations to illustrate the data requirements, including the number of years and locations necessary for accurate and precise parameter estimates. We apply our modeling framework to a population of northern dusky salamanders (Desmognathus fuscus) in the mid-Atlantic region (USA) and find that the population is unexpectedly declining. Our approach represents a valuable advance in the estimation of population dynamics using multistate data from unmarked individuals and should additionally be useful in the development of integrated models that combine data from intensive (e.g., mark–recapture) and extensive (e.g., counts) data sources.
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
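A toy version of the kind of generative model described above, a negative binomial observation model with day-to-day autocorrelation in the underlying rate, can be sketched in a few lines. The AR(1) log-rate below is only a stand-in for the paper's fitted correlation structure, and every parameter value is invented.

import numpy as np

rng = np.random.default_rng(5)

def simulate_seizure_counts(n_days=365, base_rate=0.3, dispersion=1.5,
                            rho=0.5, sigma=0.4):
    """Simulate daily seizure counts.

    A latent AR(1) process on the log rate supplies day-to-day autocorrelation;
    counts are then drawn from a gamma-Poisson mixture, i.e., a negative binomial
    with the given dispersion.  All parameters are illustrative.
    """
    counts = np.empty(n_days, dtype=int)
    z = 0.0
    for day in range(n_days):
        z = rho * z + rng.normal(0.0, sigma * np.sqrt(1 - rho**2))
        mu = base_rate * np.exp(z)
        lam = rng.gamma(dispersion, mu / dispersion)  # NB as a gamma-Poisson mixture
        counts[day] = rng.poisson(lam)
    return counts

counts = simulate_seizure_counts()
print("total seizures:", counts.sum(), " max per day:", counts.max())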
Relationship of milking rate to somatic cell count.
Brown, C A; Rischette, S J; Schultz, L H
1986-03-01
Information on milking rate, monthly bucket somatic cell counts, mastitis treatment, and milk production was obtained from 284 lactations of Holstein cows separated into three lactation groups. Significant correlations between somatic cell count (linear score) and other parameters included production in lactation 1 (-.185), production in lactation 2 (-.267), and percent 2-min milk in lactation 2 (.251). Somatic cell count tended to increase with maximum milking rate in all lactations, but correlations were not statistically significant. Twenty-nine percent of cows with milking rate measurements were treated for clinical mastitis. Treated cows in each lactation group produced less milk than untreated cows. In the second and third lactation groups, treated cows had a shorter total milking time and a higher percent 2-min milk than untreated cows, but differences were not statistically significant. Overall, the data support the concept that faster milking cows tend to have higher cell counts and more mastitis treatments, particularly beyond first lactation. However, the magnitude of the relationship was small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huesemann, Michael; Dale, T.; Chavis, A.
Two innovative culturing systems, the LED-lighted and temperature-controlled 800 liter indoor raceways at Pacific Northwest National Laboratory (PNNL) and the Phenometrics environmental Photobioreactors™ (ePBRs), were evaluated in terms of their ability to accurately simulate the microalgae growth performance of outdoor cultures subjected to fluctuating sunlight and water temperature conditions. When repeating a 60-day outdoor pond culture experiment (batch and semi-continuous at two dilution rates) conducted in Arizona with the freshwater strain Chlorella sorokiniana DOE 1412 in these two indoor simulators, it was found that ash-free dry weight based biomass growth and productivity in the PNNL climate-simulation ponds was comparatively slightly higher (8–13%) but significantly lower (44%) in the ePBRs. The difference in biomass productivities between the indoor and outdoor ponds was not statistically significant. When the marine Picochlorum soloecismus was cultured in five replicate ePBRs at Los Alamos National Laboratory (LANL) and in duplicate indoor climate-simulation ponds at PNNL, using the same inoculum, medium, culture depth, and light and temperature scripts, the optical density based biomass productivity and the rate of increase in cell counts in the ePBRs were about 35% and 66% lower, respectively, than in the indoor ponds. Potential reasons for the divergence in growth performance in these pond simulators, relative to outdoor raceways, are discussed. In conclusion, the PNNL climate-simulation ponds provide reasonably reliable biomass productivity estimates for microalgae strains cultured in outdoor raceways under different climatic conditions.
Survival of microorganisms in smectite clays: Implications for Martian exobiology
NASA Astrophysics Data System (ADS)
Moll, Deborah M.; Vestal, J. Robie
1992-08-01
Manned exploration of Mars may result in the contamination of that planet with terrestrial microbes, a situation requiring assessment of the survival potential of possible contaminating organisms. In this study, the survival of Bacillus subtilis, Azotobacter chroococcum, and the enteric bacteriophage MS2 was examined in clays representing terrestrial (Wyoming type montmorillonite) or Martian (Fe 3+-montmorillonite) soils exposed to terrestrial and Martian environmental conditions of temperature and atmospheric pressure and composition, but not to UV flux or oxidizing conditions. Survival of bacteria was determined by standard plate counts and biochemical and physiological measurements over 112 days. Extractable lipid phosphate was used to measure microbial biomass, and the rate of 14C-acetate incorporation into microbial lipids was used to determine physiological activity. MS2 survival was assayed by plaque counts. Both bacterial types survived terrestrial or Martian conditions in Wyoming montmorillonite better than Martian conditions in Fe 3+-montmorillonite. Decreased survival may have been caused by the lower pH of the Fe 3+-montmorillonite compared to Wyoming montmorillonite. MS2 survived simulated Mars conditions better than the terrestrial environment, likely due to stabilization of the virus caused by the cold and dry conditions of the simulated Martian environment. The survival of MS2 in the simulated Martian environment is the first published indication that viruses may be able to survive in Martian type soils. This work may have implications for planetary protection for future Mars missions.
NASA Technical Reports Server (NTRS)
Schmahl, Edward J.; Kundu, Mukul R.
1998-01-01
We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July 2000. Section 1 shows a poster presentation "Image Reconstruction from HESSI Photon Lists" at the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.
VizieR Online Data Catalog: ChaMP. I. First X-ray source catalog (Kim+, 2004)
NASA Astrophysics Data System (ADS)
Kim, D.-W.; Cameron, R. A.; Drake, J. J.; Evans, N. R.; Freeman, P.; Gaetz, T. J.; Ghosh, H.; Green, P. J.; Harnden, F. R. Jr; Karovska, M.; Kashyap, V.; Maksym, P. W.; Ratzlaff, P. W.; Schlegel, E. M.; Silverman, J. D.; Tananbaum, H. D.; Vikhlinin, A. A.; Wilkes, B. J.; Grimes, J. P.
2004-01-01
The Chandra Multiwavelength Project (ChaMP) is a wide-area (~14 deg2) survey of serendipitous Chandra X-ray sources, aiming to establish fair statistical samples covering a wide range of characteristics (such as absorbed active galactic nuclei, high-z clusters of galaxies) at flux levels (fX ~ 10^-15 to 10^-14 erg/s/cm2) intermediate between the Chandra deep surveys and previous missions. We present the first ChaMP catalog, which consists of 991 near on-axis, bright X-ray sources obtained from the initial sample of 62 observations. The data have been uniformly reduced and analyzed with techniques specifically developed for the ChaMP and then validated by visual examination. To assess source reliability and positional uncertainty, we perform a series of simulations and also use Chandra data to complement the simulation study. The false source detection rate is found to be as good as or better than expected for a given limiting threshold. On the other hand, the chance of missing a real source is rather complex, depending on the source counts, off-axis distance (or PSF), and background rate. The positional error (95% confidence level) is usually less than 1" for a bright source, regardless of its off-axis distance, while it can be as large as 4" for a weak source (~20 counts) at a large off-axis distance (Doff-axis > 8'). We have also developed new methods to find spatially extended or temporally variable sources, and those sources are listed in the catalog. (5 data files).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sternal, O.; Heber, B.; Kopp, A.
The propagation of energetic charged particles in the heliospheric magnetic field is one of the fundamental problems in heliophysics. In particular, the structure of the heliospheric magnetic field remains an unsolved problem and is discussed as a controversial topic. The first successful analytic approach to the structure of the heliospheric magnetic field was the Parker field. However, the measurements of the Ulysses spacecraft at high latitudes revealed the possible need for refinements of the existing magnetic field model during solar minimum. Among other reasons, this led to the development of the Fisk field. This approach is highly debated and could not be ruled out with magnetic field measurements so far. A promising method to trace this magnetic field structure is to model the propagation of electrons in the energy range of a few MeV. Employing three-dimensional and time-dependent simulations of the propagation of energetic electrons, this work shows that the influence of a Fisk-type field on the particle transport in the heliosphere leads to characteristic variations of the electron intensities on the timescale of a solar rotation. For the first time it is shown that the Ulysses count rates of 2.5-7 MeV electrons contain the imprint of a Fisk-type heliospheric magnetic field structure. From a comparison of simulation results and the Ulysses count rates, realistic parameters for the Fisk theory are derived. Furthermore, these parameters are used to investigate the modeled relative amplitudes of protons and electrons, including the effects of drifts.
In situ gamma-spectrometry several years after deposition of radiocesium. II. Peak-to-valley method.
Gering, F; Hillmann, U; Jacob, P; Fehrenbacher, G
1998-12-01
A new method is introduced for deriving radiocesium soil contaminations and kerma rates in air from in situ gamma-ray spectrometric measurements. The approach makes use of additional information about gamma-ray attenuation given by the peak-to-valley ratio, which is the ratio of the count rates for primary and forward scattered photons. In situ measurements are evaluated by comparing the experimental data with the results of Monte Carlo simulations of photon transport and detector response. The influence of photons emitted by natural radionuclides on the calculation of the peak-to-valley ratio is carefully analysed. The new method has been applied to several post-Chernobyl measurements and the results agreed well with those of soil sampling.
Haley, Valerie B; DiRienzo, A Gregory; Lutterloh, Emily C; Stricof, Rachel L
2014-01-01
To assess the effect of multiple sources of bias on state- and hospital-specific National Healthcare Safety Network (NHSN) laboratory-identified Clostridium difficile infection (CDI) rates. Sensitivity analysis. A total of 124 New York hospitals in 2010. New York NHSN CDI events from audited hospitals were matched to New York hospital discharge billing records to obtain additional information on patient age, length of stay, and previous hospital discharges. "Corrected" hospital-onset (HO) CDI rates were calculated after (1) correcting inaccurate case reporting found during audits, (2) incorporating knowledge of laboratory results from outside hospitals, (3) excluding days when patients were not at risk from the denominator of the rates, and (4) adjusting for patient age. Data sets were simulated with each of these sources of bias reintroduced individually and combined. The simulated rates were compared with the corrected rates. Performance (ie, better, worse, or average compared with the state average) was categorized, and misclassification compared with the corrected data set was measured. Counting days patients were not at risk in the denominator reduced the state HO rate by 45% and resulted in 8% misclassification. Age adjustment and reporting errors also shifted rates (7% and 6% misclassification, respectively). Changing the NHSN protocol to require reporting of age-stratified patient-days and adjusting for patient-days at risk would improve comparability of rates across hospitals. Further research is needed to validate the risk-adjustment model before these data should be used as hospital performance measures.
Savu, Anamaria; Schopflocher, Donald; Scholnick, Barry; Kaul, Padma
2016-01-13
We examined the association between personal bankruptcy filing and acute myocardial infarction (AMI) rates in Canada. Between 2002 and 2009, aggregate and yearly bankruptcy and AMI rates were estimated for 1,155 forward sortation areas of Canada. Scatter plot and correlations were used to assess the association of the aggregate rates. Cross-lagged structural equation models were used to explore the longitudinal relationship between bankruptcy and AMI after adjustment for socio-economic factors. A cross-lagged structural equation model estimated that on average, an increase of 100 in bankruptcy filing count is associated with an increase of 1.5 (p = 0.02) in AMI count in the following year, and an increase of 100 in AMI count is associated with an increase of 7 (p < 0.01) in bankruptcy filing count. We found that regions with higher rates of AMI corresponded to those with higher levels of economic and financial stress, as indicated by personal bankruptcy rate, and vice-versa.
Taguchi, Katsuyuki; Polster, Christoph; Lee, Okkyun; Stierstorfer, Karl; Kappler, Steffen
2016-12-01
An x-ray photon interacts with photon counting detectors (PCDs) and generates an electron charge cloud or multiple clouds. The clouds (thus, the photon energy) may be split between two adjacent PCD pixels when the interaction occurs near pixel boundaries, producing a count at both of the pixels. This is called double-counting with charge sharing. (A photoelectric effect with K-shell fluorescence x-ray emission would result in double-counting as well). As a result, PCD data are spatially and energetically correlated, although the output of individual PCD pixels is Poisson distributed. Major problems include the lack of a detector noise model for the spatio-energetic cross talk and the lack of a computationally efficient simulation tool for generating correlated Poisson data. A Monte Carlo (MC) simulation can accurately simulate these phenomena and produce noisy data; however, it is not computationally efficient. In this study, the authors developed a new detector model and implemented it in an efficient software simulator that uses a Poisson random number generator to produce correlated noisy integer counts. The detector model takes the following effects into account: (1) detection efficiency; (2) incomplete charge collection and ballistic effect; (3) interaction with PCDs via photoelectric effect (with or without K-shell fluorescence x-ray emission, which may escape from the PCDs or be reabsorbed); and (4) electronic noise. The correlation was modeled by using two simplifying assumptions: energy conservation and mutual exclusiveness, where mutual exclusiveness means that no more than two pixels measure energy from one photon. The effect of model parameters has been studied and results were compared with MC simulations. The agreement with respect to the spectrum was evaluated using the reduced χ² statistic, a weighted sum of squared errors, χ²_red (≥1), where χ²_red = 1 indicates a perfect fit. The model produced spectra with flat field irradiation that qualitatively agree with previous studies. The spectra generated with different model and geometry parameters allowed for understanding the effect of the parameters on the spectrum and the correlation of data. The agreement between the model and MC data was very strong. The mean spectra with 90 keV and 140 kVp agreed exceptionally well: χ²_red values were 1.049 with 90 keV data and 1.007 with 140 kVp data. The degrees of cross talk (in terms of the relative increase from single pixel irradiation to flat field irradiation) were 22% with 90 keV and 19% with 140 kVp for MC simulations, while they were 21% and 17%, respectively, for the model. The covariance was in strong agreement qualitatively, although it was overestimated. The noisy data generation was very efficient, taking less than a CPU minute as opposed to CPU hours for MC simulators. The authors have developed a novel, computationally efficient PCD model that takes into account double-counting and the resulting spatio-energetic correlation between PCD pixels. MC simulations validated its accuracy.
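A toy sketch (not the authors' detector model) of how one incident photon can produce correlated counts in two adjacent pixels under the energy-conservation and mutual-exclusiveness assumptions; the double-count probability, the uniform energy split, and the threshold are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(n_incident_mean, p_double, threshold_keV, e_photon=90.0):
    """Toy charge-sharing model for two adjacent photon-counting pixels.

    Each incident photon (Poisson number per frame) either deposits its full
    energy in pixel A, or is split between pixels A and B (energy conserved,
    no more than two pixels involved).  A count is registered above threshold.
    """
    n = rng.poisson(n_incident_mean)
    counts = np.zeros(2, dtype=int)
    for _ in range(n):
        if rng.random() < p_double:                    # charge shared near a boundary
            frac = rng.uniform(0.1, 0.9)
            deposits = np.array([frac, 1.0 - frac]) * e_photon
        else:                                          # full deposition in pixel A
            deposits = np.array([e_photon, 0.0])
        counts += (deposits > threshold_keV).astype(int)
    return counts

# frames = np.array([simulate_counts(50, 0.2, 20.0) for _ in range(10000)])
# np.cov(frames.T) shows the positive covariance introduced by double-counting
```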
Kumar, J Vijay; Baghirath, P Venkat; Naishadham, P Parameswar; Suneetha, Sujai; Suneetha, Lavanya; Sreedevi, P
2015-01-01
To determine if long-term highly active antiretroviral therapy (HAART) alters salivary flow rate, and to compare the relationship of CD4 count with unstimulated and stimulated whole saliva. A cross-sectional study was performed on 150 individuals divided into three groups: Group I (50 human immunodeficiency virus (HIV)-seropositive patients not on HAART), Group II (50 HIV-infected subjects on HAART for less than 3 years, termed short-term HAART), and Group III (50 HIV-infected subjects on HAART for 3 years or more, termed long-term HAART). The spitting method proposed by Navazesh and Kumar was used for the measurement of unstimulated and stimulated salivary flow rates. Chi-square test and analysis of variance (ANOVA) were used for statistical analysis. The mean CD4 count was 424.78 ± 187.03, 497.82 ± 206.11 and 537.6 ± 264.00 in the respective groups. The majority of patients in all groups had a CD4 count between 401 and 600. Both unstimulated and stimulated whole salivary (UWS and SWS) flow rates in Group I were found to be significantly higher than in Group II (P < 0.05). The difference in unstimulated salivary flow rate between Group II and Group III subjects was also statistically significant (P < 0.05). ANOVA performed between CD4 count and unstimulated and stimulated whole saliva in each group demonstrated a statistically significant relationship in Group II (P < 0.05). No significant relationship was found between CD4 count and stimulated whole saliva in any group. The reduction in CD4 cell count was significantly associated with the salivary flow rates of HIV-infected individuals on long-term HAART.
Huesemann, Michael; Dale, T.; Chavis, A.; ...
2016-12-02
Two innovative culturing systems, the LED-lighted and temperature-controlled 800 liter indoor raceways at Pacific Northwest National Laboratory (PNNL) and the Phenometrics environmental Photobioreactors™ (ePBRs), were evaluated in terms of their ability to accurately simulate the microalgae growth performance of outdoor cultures subjected to fluctuating sunlight and water temperature conditions. When repeating a 60-day outdoor pond culture experiment (batch and semi-continuous at two dilution rates) conducted in Arizona with the freshwater strain Chlorella sorokiniana DOE 1412 in these two indoor simulators, it was found that ash-free dry weight based biomass growth and productivity was comparatively slightly higher (8–13%) in the PNNL climate-simulation ponds but significantly lower (44%) in the ePBRs. The difference in biomass productivities between the indoor and outdoor ponds was not statistically significant. When the marine Picochlorum soloecismus was cultured in five replicate ePBRs at Los Alamos National Laboratory (LANL) and in duplicate indoor climate-simulation ponds at PNNL, using the same inoculum, medium, culture depth, and light and temperature scripts, the optical density based biomass productivity and the rate of increase in cell counts in the ePBRs were about 35% and 66% lower, respectively, than in the indoor ponds. Potential reasons for the divergence in growth performance in these pond simulators, relative to outdoor raceways, are discussed. In conclusion, the PNNL climate-simulation ponds provide reasonably reliable biomass productivity estimates for microalgae strains cultured in outdoor raceways under different climatic conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shinohara, K., E-mail: shinohara.koji@jaea.go.jp; Ochiai, K.; Sukegawa, A.
In order to increase the count rate capability of a neutron detection system as a whole, we propose a multi-stage neutron detection system. Experiments to test the effectiveness of this concept were carried out at the Fusion Neutronics Source facility. Comparing four alignment configurations, it was found that the influence of an anterior stage on a posterior stage was negligible for the pulse height distribution. The two-stage system using a 25 mm thick scintillator had about 1.65 times the count rate capability of a single-detector system for d-D neutrons and about 1.8 times the count rate capability for d-T neutrons. The results suggested that the concept of a multi-stage detection system will work in practice.
Goktekin, Mehmet C; Yilmaz, Mustafa
2018-06-01
In this research, the aim was to compare hematological data for the differentiation of subarachnoid hemorrhage (SAH), migraine attack, and other headache syndromes (HS) during consultation in the emergency service. In this retrospective case-control study, hematological parameters (WBC, HgB, HCT, PLT, lymphocyte and neutrophil counts, and neutrophil/lymphocyte (NY/LY) ratios) of patients presenting to the emergency service with SAH or migraine, and of other patients presenting mainly with headache and having normal cranial CT, were analysed. Sixty migraine attack patients (F/M: 47/13), 57 SAH patients (F/M: 30/27), and 53 non-migraine patients with normal brain CT (F/M: 36/17) who presented to the emergency service with headache were included in our research. WBC, Hct, HgB, MCV, PLT, MPV, LY, and Neu counts, and NY/LY ratios were found to differentiate between SAH and migraine. WBC, PLT, MPV, LY, and Neu values were found to differentiate between SAH and HS patients. Only Hct, HgB, MCV, and NY/LY ratios were found to differ meaningfully between SAH and migraine patients, but these did not differ meaningfully between SAH and HS patients. In addition, in ROC analysis, an increase in WBC counts and NY/LY ratios and a decrease in MPV were found to be more specific for SAH. WBC, HgB, HCT, PLT, lymphocyte and Neu counts, and NY/LY ratios can help distinguish SAH from migraine. WBC, HgB, HCT, PLT, lymphocyte and Neu counts can indicate to the clinician a differentiation between SAH and other headache syndromes.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules for chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although there are numerous tools available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html.
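The toolbox described above is written in MATLAB; as a language-agnostic illustration of the stochastic side, the sketch below realises the CME for a single reversible isomerisation A <-> B using Gillespie's direct method (the reaction, rate constants, and initial counts are hypothetical). The deterministic counterpart is the ODE dA/dt = -k1*A + k2*B with A + B conserved.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_isomerization(a0, b0, k1, k2, t_end):
    """One stochastic realisation of A <-> B sampled with Gillespie's direct method."""
    t, a, b = 0.0, a0, b0
    times, counts = [t], [a]
    while t < t_end:
        rates = np.array([k1 * a, k2 * b])     # reaction propensities
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)      # waiting time to the next reaction
        if rng.random() < rates[0] / total:    # A -> B
            a, b = a - 1, b + 1
        else:                                  # B -> A
            a, b = a + 1, b - 1
        times.append(t)
        counts.append(a)
    return np.array(times), np.array(counts)

# times, a_counts = gillespie_isomerization(a0=100, b0=0, k1=1.0, k2=0.5, t_end=10.0)
```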
Accurate time delay technology in simulated test for high precision laser range finder
NASA Astrophysics Data System (ADS)
Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi
2015-10-01
With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, so the maintenance demand for LRFs is also rising. Following the guiding principle of representing spatial distance by time delay in simulated testing of pulsed range finders, the key to distance simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the counting frequency of the circuit. A high precision controllable delay circuit was designed by combining an internal delay circuit and an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the novel circuit delay method proposed in this paper was measured with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from +/-0.75 m to +/-0.15 m. The accuracy of the simulated distance is thus greatly improved in simulated testing of high precision pulsed range finders.
Multianode cylindrical proportional counter for high count rates
Hanson, J.A.; Kopp, M.K.
1980-05-23
A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/s. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.
Multianode cylindrical proportional counter for high count rates
Hanson, James A.; Kopp, Manfred K.
1981-01-01
A cylindrical, multiple-anode proportional counter is provided for counting of low-energy photons (<60 keV) at count rates of greater than 10^5 counts/s. A gas-filled proportional counter cylinder forming an outer cathode is provided with a central coaxially disposed inner cathode and a plurality of anode wires disposed in a cylindrical array in coaxial alignment with and between the inner and outer cathodes to form a virtual cylindrical anode coaxial with the inner and outer cathodes. The virtual cylindrical anode configuration improves the electron drift velocity by providing a more uniform field strength throughout the counter gas volume, thus decreasing the electron collection time following the detection of an ionizing event. This avoids pulse pile-up and coincidence losses at these high count rates. Conventional RC position encoding detection circuitry may be employed to extract the spatial information from the counter anodes.
Dark-count-less photon-counting x-ray computed tomography system using a YAP-MPPC detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Sato, Yuich; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2012-10-01
A highly sensitive X-ray computed tomography (CT) system is useful for decreasing the absorbed dose to patients, and a dark-count-less photon-counting CT system was developed. X-ray photons are detected using a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single-crystal scintillator and an MPPC (multipixel photon counter). Photocurrents are amplified by a high-speed current-voltage amplifier, and smooth event pulses from an integrator are sent to a high-speed comparator. Logical pulses are then produced by the comparator and counted by a counter card. Tomography is accomplished by repeated linear scans and rotations of the object, and projection curves of the object are obtained by the linear scan. The image contrast of the gadolinium medium fell slightly with increasing lower-level voltage (Vl) of the comparator. The dark count rate was 0 cps, and the count rate for the CT was approximately 250 kcps.
Montana Kids Count Data Book and County Profiles, 1994.
ERIC Educational Resources Information Center
Healthy Mothers, Healthy Babies--The Montana Coalition, Helena.
This Kids Count publication is the first to examine statewide trends in the well-being of Montana's children. The statistical portrait is based on 13 indicators of well-being: (1) low birthweight rate; (2) infant mortality rate; (3) child death rate; (4) teen violent death rate; (5) percent of public school enrollment in Chapter 1 programs; (6)…
A real-time phoneme counting algorithm and application for speech rate monitoring.
Aharonson, Vered; Aharonson, Eran; Raichlin-Levi, Katia; Sotzianu, Aviv; Amir, Ofer; Ovadia-Blechman, Zehava
2017-03-01
Adults who stutter can learn to control and improve their speech fluency by modifying their speaking rate. Existing speech therapy technologies can assist this practice by monitoring speaking rate and providing feedback to the patient, but cannot provide an accurate, quantitative measurement of speaking rate. Moreover, most technologies are too complex and costly to be used for home practice. We developed an algorithm and a smartphone application that monitor a patient's speaking rate in real time and provide user-friendly feedback to both patient and therapist. Our speaking rate computation is performed by a phoneme counting algorithm which implements spectral transition measure extraction to estimate phoneme boundaries. The algorithm is implemented in real time in a mobile application that presents its results in a user-friendly interface. The application incorporates two modes: one provides the patient with visual feedback of his/her speech rate for self-practice and another provides the speech therapist with recordings, speech rate analysis and tools to manage the patient's practice. The algorithm's phoneme counting accuracy was validated on ten healthy subjects who read a paragraph at slow, normal and fast paces, and was compared to manual counting of speech experts. Test-retest and intra-counter reliability were assessed. Preliminary results indicate differences of -4% to 11% between automatic and human phoneme counting. Differences were largest for slow speech. The application can thus provide reliable, user-friendly, real-time feedback for speaking rate control practice. Copyright © 2017 Elsevier Inc. All rights reserved.
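Once phoneme boundaries have been estimated, the rate computation and the comparison with manual counts reported above reduce to simple ratios; a minimal sketch (the function names are hypothetical, and the spectral-transition boundary detection step itself is not shown):

```python
def speaking_rate(num_phonemes, speech_duration_s):
    """Speaking rate in phonemes per second over a monitored window."""
    return num_phonemes / speech_duration_s

def relative_error(automatic_count, manual_count):
    """Percentage difference between automatic and expert manual phoneme counts."""
    return 100.0 * (automatic_count - manual_count) / manual_count

# e.g. relative_error(108, 100) -> 8.0, within the -4% to 11% range reported above
```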
Cho, H-M; Ding, H; Ziemer, B P; Molloi, S
2014-12-07
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with 2.7 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm² detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be at approximately 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
NASA Astrophysics Data System (ADS)
Cho, H.-M.; Ding, H.; Ziemer, BP; Molloi, S.
2014-12-01
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with 2.7 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm² detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be at approximately 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
Cho, H-M; Ding, H; Ziemer, BP; Molloi, S
2014-01-01
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using X-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for X-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with 2.7 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm² detection area. The angular dependence of X-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded X-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of X-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be at approximately 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of X-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic X-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the X-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory. PMID:25369288
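A rough sketch of one plausible way to compute a fluorescence-to-scatter ratio (FSR) from a spectrum recorded at a given detection angle; the energy-window definition below is an assumption for illustration, not necessarily the definition used in the study.

```python
import numpy as np

def fluorescence_to_scatter_ratio(energies_keV, counts, line_keV, window_keV=1.0):
    """FSR for one fluorescent line: counts near the line vs. counts elsewhere."""
    energies_keV = np.asarray(energies_keV)
    counts = np.asarray(counts)
    in_line = np.abs(energies_keV - line_keV) <= window_keV
    signal = counts[in_line].sum()
    scatter = counts[~in_line].sum()
    return signal / scatter

# Scanning this over spectra recorded at detection angles from 20 to 170 degrees
# would reproduce the angular-dependence study; the optimum near 120 degrees is
# the result reported above.
```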
HgCdTe APD-based linear-mode photon counting components and ladar receivers
NASA Astrophysics Data System (ADS)
Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.
2011-05-01
Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger-mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high-gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. Key metrics of photon counting technology are dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process, and readout operation, enabling a >10x reduction in dark count rate to ~10,000 cps and a >10^4x reduction in surface dark current, enabling long 10 ms integration times. Our analysis of key dark current contributors suggests that substantial further reduction in DCR to ~1 cps or less can be achieved by optimizing wavelength, operating voltage, and temperature.
Battaile, Brian C; Trites, Andrew W
2013-01-01
We propose a method to model the physiological link between somatic survival and reproductive output that reduces the number of parameters that need to be estimated by models designed to determine combinations of birth and death rates that produce historic counts of animal populations. We applied our Reproduction and Somatic Survival Linked (RSSL) method to the population counts of three species of North Pacific pinnipeds (harbor seals, Phoca vitulina richardii (Gray, 1864); northern fur seals, Callorhinus ursinus (L., 1758); and Steller sea lions, Eumetopias jubatus (Schreber, 1776))--and found our model outperformed traditional models when fitting vital rates to common types of limited datasets, such as those from counts of pups and adults. However, our model did not perform as well when these basic counts of animals were augmented with additional observations of ratios of juveniles to total non-pups. In this case, the failure of the ratios to improve model performance may indicate that the relationship between survival and reproduction is redefined or disassociated as populations change over time or that the ratio of juveniles to total non-pups is not a meaningful index of vital rates. Overall, our RSSL models show advantages to linking survival and reproduction within models to estimate the vital rates of pinnipeds and other species that have limited time-series of counts.
Hung, Te-Jui; Burrage, John; Bourke, Anita; Taylor, Donna
2017-08-24
Ultrasound- or stereotactic-guided hook-wire localisation has been the standard of care for the pre-surgical localisation of impalpable breast lesions, which account for approximately a third of all breast cancers. Radioguided occult lesion localisation using I-125 seeds (ROLLIS) is a relatively new technique for guiding surgical excision of impalpable breast lesions and is a promising alternative to the traditional hook-wire method. When combined with Tc-99m labelled colloid for sentinel node mapping in clinically indicated cases, there has been uncertainty regarding whether the downscatter of Tc-99m into the I-125 energy spectrum could adversely affect the intra-operative detection of the I-125 seed, especially for a peritumoral injection. To evaluate the percentage contribution of downscattered activity from Tc-99m into the I-125 energy spectrum in simulated intra-operative resections of an I-125 seed following different sentinel node injection techniques. Two scenarios were simulated using breast phantoms with lean chicken breast. The first scenario, with a 2 cm distance between the Tc-99m injection site and the I-125 seed, simulated a periareolar ipsiquadrant injection with the subdermal or intradermal technique. The second scenario simulated a peritumoral injection technique with the Tc-99m bolus and an I-125 seed at the same site. Count rates were acquired with a hand-held gamma probe, and the percentage contribution of downscattered Tc-99m gamma photons to the I-125 energy window was calculated. In scenarios one and two, downscattered Tc-99m activity contributed 0.5% and 33%, respectively, to the detected count rate in the I-125 energy window. In both scenarios, the I-125 seed was successfully localised and removed using the gamma probe. There is no significant contribution of downscattered activity associated with a peritumoral injection of Tc-99m to adversely affect the accurate intra-operative localisation of an I-125 seed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
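The reported percentages follow from a simple ratio of window count rates; below is a minimal sketch with hypothetical numbers (the measurement procedure itself is as described above, and the argument names are illustrative).

```python
def downscatter_contribution(counts_tc_only_cps, counts_i125_only_cps):
    """Percentage of the I-125 window count rate contributed by Tc-99m downscatter.

    counts_tc_only_cps   : I-125 window count rate with only the Tc-99m source present
    counts_i125_only_cps : I-125 window count rate with only the I-125 seed present
    """
    total = counts_tc_only_cps + counts_i125_only_cps
    return 100.0 * counts_tc_only_cps / total

# Hypothetical illustration: 5 cps of downscatter against 995 cps from the seed
# gives ~0.5% (periareolar-like geometry); 330 cps against 670 cps gives ~33%
# (peritumoral-like geometry), matching the scale of the values reported above.
```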
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance characteristics of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling, and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and has been successfully operated on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good accordance with the test data, validating the high simulation accuracy.
Senftle, F.E.; Macy, R.J.; Mikesell, J.L.
1979-01-01
The fast- and thermal-neutron fluence rates from a 3.7 μg 252Cf neutron source in a simulated borehole have been measured as a function of the source-to-detector distance using air, water, coal, iron ore-concrete mix, and dry sand as borehole media. Gamma-ray intensity measurements were made for specific spectral lines at low and high energies for the same range of source-to-detector distances in the iron ore-concrete mix and in coal. Integral gamma-ray counts across the entire spectrum were also made at each source-to-detector distance. From these data, the specific neutron-damage rate, and the critical count-rate criteria, we show that in an iron ore-concrete mix (low hydrogen concentration), 252Cf neutron sources of 2-40 μg are suitable. The source size required for optimum gamma-ray sensitivity depends on the energy of the gamma ray being measured. In a hydrogenous medium such as coal, similar measurements were made. The results show that sources from 2 to 20 μg are suitable to obtain the highest gamma-ray sensitivity, again depending on the energy of the gamma ray being measured. In a hydrogenous medium, significant improvement in sensitivity can be achieved by using faster electronics; in iron ore, it cannot. © 1979 North-Holland Publishing Co.
Schein, Stan; Ahmad, Kareem M
2006-11-01
A rod transmits absorption of a single photon by what appears to be a small reduction in the small number of quanta of neurotransmitter (Q(count)) that it releases within the integration period ( approximately 0.1 s) of a rod bipolar dendrite. Due to the quantal and stochastic nature of release, discrete distributions of Q(count) for darkness versus one isomerization of rhodopsin (R*) overlap. We suggested that release must be regular to narrow these distributions, reduce overlap, reduce the rate of false positives, and increase transmission efficiency (the fraction of R* events that are identified as light). Unsurprisingly, higher quantal release rates (Q(rates)) yield higher efficiencies. Focusing here on the effect of small changes in Q(rate), we find that a slightly higher Q(rate) yields greatly reduced efficiency, due to a necessarily fixed quantal-count threshold. To stabilize efficiency in the face of drift in Q(rate), the dendrite needs to regulate the biochemical realization of its quantal-count threshold with respect to its Q(count). These considerations reveal the mathematical role of calcium-based negative feedback and suggest a helpful role for spontaneous R*. In addition, to stabilize efficiency in the face of drift in degree of regularity, efficiency should be approximately 50%, similar to measurements.
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low flux limit. Formulas for the spatial cross-talk, the noise power spectrum, and the DQE of a photon-counting detector working at a given threshold are derived. Parameters are probabilities for event types such as a single count in the central pixel, a double count in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model by extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
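The paper derives the full frequency-dependent DQE from event-type probabilities; as a simplified, zero-frequency illustration only (not the paper's formula), double counting inflates the output variance faster than the mean and therefore lowers DQE. The detection efficiency and double-count fraction used below are hypothetical.

```python
def dqe_zero_frequency(detection_eff, double_count_frac):
    """Zero-frequency DQE of an idealised photon-counting pixel with double counting.

    detection_eff     : probability an incident photon is counted at all (epsilon)
    double_count_frac : fraction of detected photons counted twice (delta), e.g. due
                        to charge sharing with a neighbouring pixel

    For a compound-Poisson output, DQE(0) = mean^2 / second_moment per incident photon.
    """
    eps, delta = detection_eff, double_count_frac
    mean = eps * (1.0 + delta)                 # counts per incident photon
    second_moment = eps * (1.0 + 3.0 * delta)  # E[k^2] per incident photon
    return mean**2 / second_moment

# dqe_zero_frequency(0.9, 0.0)  -> 0.90  (no double counting)
# dqe_zero_frequency(0.9, 0.2)  -> 0.81  (double counting adds noise, not information)
```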
Simplified energy-balance model for pragmatic multi-dimensional device simulation
NASA Astrophysics Data System (ADS)
Chang, Duckhyun; Fossum, Jerry G.
1997-11-01
To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS [16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.
Three-dimensional passive sensing photon counting for object classification
NASA Astrophysics Data System (ADS)
Yeom, Seokwon; Javidi, Bahram; Watson, Edward
2007-04-01
In this keynote address, we discuss three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is presented for classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results have shown that a low level of photons is sufficient to classify out-of-plane rotated objects.
Frequency-resolved Monte Carlo.
López Carreño, Juan Camilo; Del Valle, Elena; Laussy, Fabrice P
2018-05-03
We adapt the Quantum Monte Carlo method to the cascaded formalism of quantum optics, allowing us to simulate the emission of photons of known energy. Statistical processing of the photon clicks thus collected agrees with the theory of frequency-resolved photon correlations, extending the range of applications based on correlations of photons of prescribed energy, in particular those of a photon-counting character. We apply the technique to autocorrelations of photon streams from a two-level system under coherent and incoherent pumping, including the Mollow triplet regime where we demonstrate the direct manifestation of leapfrog processes in producing an increased rate of two-photon emission events.
Calibration of a portable HPGe detector using MCNP code for the determination of 137Cs in soils.
Gutiérrez-Villanueva, J L; Martín-Martín, A; Peña, V; Iniguez, M P; de Celis, B; de la Fuente, R
2008-10-01
In situ gamma spectrometry provides a fast method to determine (137)Cs inventories in soils. To improve the accuracy of the estimates, one can use not only the information on the photopeak count rates but also on the peak to forward-scatter ratios. Before applying this procedure to field measurements, a calibration including several experimental simulations must be carried out in the laboratory. In this paper it is shown that Monte Carlo methods are a valuable tool to minimize the number of experimental measurements needed for the calibration.
Bolch, Wesley E.; Hurtado, Jorge L.; Lee, Choonsik; Manger, Ryan; Hertel, Nolan; Dickerson, William
2013-01-01
In June of 2006, the Radiation Studies Branch of the Centers for Disease Control and Prevention held a workshop to explore rapid methods of facilitating radiological triage of large numbers of potentially contaminated individuals following detonation of a radiological dispersal device. Two options were discussed. The first was the use of traditional gamma-cameras in nuclear medicine departments operated as make-shift whole-body counters. Guidance on this approach is currently available from the CDC. This approach is feasible if a manageable number of individuals were involved, transportation to the relevant hospitals was quickly provided, and the medical staff at each facility had been previously trained in this non-traditional use of their radiopharmaceutical imaging devices. If, however, substantially large numbers of individuals (100s to 1000s) needed radiological screening, other options must be given to first responders, first receivers, and health physicists providing medical management. In this study, the second option of the workshop was investigated – the use of commercially available portable survey meters (either NaI or GM based) for assessing potential ranges of effective dose (<50, 50–250, 250–500, and >500 mSv). Two hybrid computational phantoms were used to model an adult male and an adult female subject internally contaminated with either 241Am, 60Co, 137Cs, 131I, or 192Ir following an acute inhalation or ingestion intake. As a function of time following the exposure, the net count rates corresponding to committed effective doses of 50, 250, and 500 mSv were estimated via Monte Carlo radiation transport simulation for each of four different detector types, positions, and screening distances. Measured count rates can be compared to these values and an assignment of one of four possible effective dose ranges could be made. The method implicitly assumes that all external contamination has been removed prior to screening, and that the measurements be conducted in a low-background, and possibly mobile, facility positioned at the triage location. Net count rate data are provided in both tabular and graphical format within a series of eight handbooks available at the CDC website http://emergency.cdc.gov/radiation. PMID:22420020
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolch, W.E.; Hurtado, J.L.; Lee, C.
2012-01-01
In June 2006, the Radiation Studies Branch of the Centers for Disease Control and Prevention held a workshop to explore rapid methods of facilitating radiological triage of large numbers of potentially contaminated individuals following detonation of a radiological dispersal device. Two options were discussed. The first was the use of traditional gamma cameras in nuclear medicine departments operated as makeshift whole-body counters. Guidance on this approach is currently available from the CDC. This approach would be feasible if a manageable number of individuals were involved, transportation to the relevant hospitals was quickly provided, and the medical staff at each facility had been previously trained in this non-traditional use of their radiopharmaceutical imaging devices. If, however, substantially larger numbers of individuals (100s to 1,000s) needed radiological screening, other options must be given to first responders, first receivers, and health physicists providing medical management. In this study, the second option of the workshop was investigated by the use of commercially available portable survey meters (either NaI or GM based) for assessing potential ranges of effective dose (<50, 50–250, 250–500, and >500 mSv). Two hybrid computational phantoms were used to model an adult male and an adult female subject internally contaminated with 241Am, 60Co, 137Cs, 131I, or 192Ir following an acute inhalation or ingestion intake. As a function of time following the exposure, the net count rates corresponding to committed effective doses of 50, 250, and 500 mSv were estimated via Monte Carlo radiation transport simulation for each of four different detector types, positions, and screening distances. Measured net count rates can be compared to these values, and an assignment of one of four possible effective dose ranges could be made. The method implicitly assumes that all external contamination has been removed prior to screening and that the measurements be conducted in a low background, and possibly mobile, facility positioned at the triage location. Net count rate data are provided in both tabular and graphical format within a series of eight handbooks available at the CDC website (http://www.bt.cdc.gov/radiation/clinicians/evaluation).
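Operationally, the triage step amounts to comparing a measured net count rate against the handbook thresholds for the relevant nuclide, detector, geometry, and time since intake; a minimal sketch with placeholder threshold values (not handbook data):

```python
def dose_range(net_count_rate_cps, thresholds_cps):
    """Assign one of four committed-effective-dose ranges from a survey-meter reading.

    thresholds_cps : (r50, r250, r500), the net count rates corresponding to 50, 250
                     and 500 mSv for the specific nuclide, detector, position and time
                     since intake; values must come from the handbook tables described
                     above (the numbers in the example below are placeholders).
    """
    r50, r250, r500 = thresholds_cps
    if net_count_rate_cps < r50:
        return "<50 mSv"
    if net_count_rate_cps < r250:
        return "50-250 mSv"
    if net_count_rate_cps < r500:
        return "250-500 mSv"
    return ">500 mSv"

# dose_range(1200.0, thresholds_cps=(800.0, 3000.0, 5500.0))  # -> "50-250 mSv"
```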
NASA Astrophysics Data System (ADS)
Kessler, P.; Behnke, B.; Dombrowski, H.; Neumaier, S.
2017-11-01
For the upgrade of existing dosimetric early warning networks in Europe, spectrometric detectors based on CeBr3, LaBr3, SrI2, and CdZnTe are investigated as possible substitutes for the current detector generation, which is mainly based on gas-filled detectors. The additional information on the nuclide vector that can be derived from the spectra of γ-radiation is highly useful for an appropriate response in case of a nuclear or radiological accident. The measured γ-spectra are converted into ambient dose equivalent H*(10) using a method in which the spectrum is subdivided into multiple energy bands. For each band, the conversion coefficient from count rate to dose rate is determined. The derivation of these conversion coefficients is explained in this work. Both experimental and simulation approaches are investigated, using quasi-mono-energetic γ-sources and synthetic spectra from Monte Carlo simulations to determine the conversion coefficients for each detector type. Finally, the precision of the obtained characterization is checked by irradiating the detectors in different well-known photon fields with traceable dose rates.
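At readout time, the band method described above reduces to a weighted sum of band count rates; a minimal sketch in which the conversion coefficients and count rates are hypothetical placeholders rather than values from the characterisation:

```python
import numpy as np

def ambient_dose_rate(band_count_rates, band_conversion_coeffs):
    """H*(10) rate from a spectrum split into energy bands.

    band_count_rates       : count rate in each energy band (s^-1)
    band_conversion_coeffs : per-band coefficients from count rate to ambient dose
                             equivalent rate (e.g. nSv/h per s^-1), as derived in the
                             detector characterisation
    """
    return float(np.dot(band_count_rates, band_conversion_coeffs))

# ambient_dose_rate([120.0, 45.0, 8.0], [0.2, 0.6, 2.5])  # -> 71.0 (nSv/h, illustrative)
```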
NASA Astrophysics Data System (ADS)
Nocente, M.; Tardocchi, M.; Olariu, A.; Olariu, S.; Pereira, R. C.; Chugunov, I. N.; Fernandes, A.; Gin, D. B.; Grosso, G.; Kiptily, V. G.; Neto, A.; Shevelev, A. E.; Silva, M.; Sousa, J.; Gorini, G.
2013-04-01
High resolution γ-ray spectroscopy measurements at MHz counting rates were carried out at nuclear accelerators, combining a LaBr3(Ce) detector with dedicated hardware and software solutions based on digitization and off-line analysis. Spectra were measured at counting rates up to 4 MHz, with little or no degradation of the energy resolution, adopting a pile-up rejection algorithm. The reported results represent a step forward towards the final goal of high resolution γ-ray spectroscopy measurements on a burning plasma device.
Facilitated sequence counting and assembly by template mutagenesis
Levy, Dan; Wigler, Michael
2014-01-01
Presently, inferring the long-range structure of the DNA templates is limited by short read lengths. Accurate template counts suffer from distortions occurring during PCR amplification. We explore the utility of introducing random mutations in identical or nearly identical templates to create distinguishable patterns that are inherited during subsequent copying. We simulate the applications of this process under assumptions of error-free sequencing and perfect mapping, using cytosine deamination as a model for mutation. The simulations demonstrate that within readily achievable conditions of nucleotide conversion and sequence coverage, we can accurately count the number of otherwise identical molecules as well as connect variants separated by long spans of identical sequence. We discuss many potential applications, such as transcript profiling, isoform assembly, haplotype phasing, and de novo genome assembly. PMID:25313059
An analysis of the low-earth-orbit communications environment
NASA Astrophysics Data System (ADS)
Diersing, Robert Joseph
Advances in microprocessor technology and availability of launch opportunities have caused interest in low-earth-orbit satellite based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined--PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator is based on three major parts. First, is a characterization of the low-earth-orbit operating environment along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishing typical data capture rates for ground stations using computer-pointed directional antennas and fixed omni-directional antennas. Second, arrival rates for successful and unsuccessful file server connections are established along with transaction service times. Downlink traffic has been further characterized by measuring: frame and byte counts for all data-link layer traffic; 30-second interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.
Simulation of thalamic prosthetic vision: reading accuracy, speed, and acuity in sighted humans.
Vurro, Milena; Crowell, Anne Marie; Pezaris, John S
2014-01-01
The psychophysics of reading with artificial sight has received increasing attention as visual prostheses are becoming a real possibility to restore useful function to the blind through the coarse, pseudo-pixelized vision they generate. Studies to date have focused on simulating retinal and cortical prostheses; here we extend that work to report on thalamic designs. This study examined the reading performance of normally sighted human subjects using a simulation of three thalamic visual prostheses that varied in phosphene count, to help understand the level of functional ability afforded by thalamic designs in a task of daily living. Reading accuracy, reading speed, and reading acuity of 20 subjects were measured as a function of letter size, using a task based on the MNREAD chart. Results showed that fluid reading was feasible with appropriate combinations of letter size and phosphene count, and performance degraded smoothly as font size was decreased, with an approximate doubling of phosphene count resulting in an increase of 0.2 logMAR in acuity. Results here were consistent with previous results from our laboratory. Results were also consistent with those from the literature, despite using naive subjects who were not trained on the simulator, in contrast to other reports.
Kross, Ethan; Verduyn, Philippe; Boyer, Margaret; Drake, Brittany; Gainsburg, Izzy; Vickers, Brian; Ybarra, Oscar; Jonides, John
2018-04-05
Psychologists have long debated whether it is possible to assess how people subjectively feel without asking them. The recent proliferation of online social networks has added a fresh chapter to this discussion, with research now suggesting that it is possible to index people's subjective experience of emotion by simply counting the number of emotion words contained in their online social network posts. Whether the conclusions that emerge from this work are valid, however, rests on a critical assumption: that people's usage of emotion words in their posts accurately reflects how they feel. Although this assumption is widespread in psychological research, here we suggest that there are reasons to challenge it. We corroborate these assertions in 2 ways. First, using data from 4 experience-sampling studies of emotion in young adults, we show that people's reports of how they feel throughout the day neither predict, nor are predicted by, their use of emotion words on Facebook. Second, using simulations we show that although significant relationships emerge between the use of emotion words on Facebook and self-reported affect with increasingly large numbers of observations, the relationship between these variables was in the opposite of the theoretically expected direction 50% of the time (i.e., 3 of 6 models that we performed simulations on). In contrast to counting emotion words, we show that judges' ratings of the emotionality of participants' Facebook posts consistently predict how people feel across all analyses. These findings shed light on how to draw inferences about emotion using online social network data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior
ERIC Educational Resources Information Center
Liu, Hui; Powers, Daniel A.
2007-01-01
This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…
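For readers unfamiliar with the model class, below is a minimal sketch of the zero-inflated Poisson likelihood with scalar parameters; in the growth curve setting both parameters would be functions of time and covariates, and the function and variable names here are illustrative assumptions.

```python
import math
import numpy as np

def zip_loglik(counts, pi_zero, lam):
    """Log-likelihood of a zero-inflated Poisson model.

    pi_zero : probability of a structural zero (e.g. never-smokers)
    lam     : Poisson mean for the count-generating process
    """
    counts = np.asarray(counts)
    ll = 0.0
    for y in counts:
        if y == 0:
            p = pi_zero + (1.0 - pi_zero) * math.exp(-lam)
        else:
            p = (1.0 - pi_zero) * math.exp(-lam) * lam**y / math.factorial(int(y))
        ll += math.log(p)
    return ll

# Maximising zip_loglik over (pi_zero, lam) for each wave of count data captures the
# excess-zero structure that an ordinary Poisson model would miss.
```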
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favalli, Andrea; Iliev, Metodi; Ianakiev, Kiril
High-energy delayed γ-ray spectroscopy is a potential technique for directly assaying spent fuel assemblies and achieving the safeguards goal of quantifying nuclear material inventories for spent fuel handling, interim storage, reprocessing facilities, repository sites, and final disposal. Requirements for the γ-ray detection system, up to ~6 MeV, can be summarized as follows: high efficiency at high γ-ray energies, high energy resolution, good linearity between γ-ray energy and output signal amplitude, ability to operate at very high count rates, and ease of use in industrial environments such as nuclear facilities. High Purity Germanium (HPGe) detectors are the state of the art and provide excellent energy resolution but are limited in their count rate capability. Lanthanum bromide (LaBr3) scintillation detectors offer significantly higher count rate capabilities at lower energy resolution. Thus, LaBr3 detectors may be an effective alternative for nuclear spent-fuel applications, where count-rate capability is a requirement. This paper documents the measured performance of a 2" (length) × 2" (diameter) LaBr3 scintillation detector system, coupled to a negatively biased PMT and a tapered active high voltage divider, at count rates up to ~3 Mcps. An experimental methodology was developed that uses the average current from the PMT's anode and a dual-source method to characterize the detector system at specific very high count rate values. Delayed γ-ray spectra were acquired with the LaBr3 detector system at the Idaho Accelerator Center, Idaho State University, where samples of ~3 g of 235U were irradiated with moderated neutrons from a photo-neutron source. Results of the spectroscopy characterization and analysis of the delayed γ-ray spectra acquired indicate the possible use of LaBr3 scintillation detectors when high count rate capability may outweigh the lower energy resolution.
Favalli, Andrea; Iliev, Metodi; Ianakiev, Kiril; ...
2017-10-09
High-energy delayed γ-ray spectroscopy is a potential technique for directly assaying spent fuel assemblies and achieving the safeguards goal of quantifying nuclear material inventories for spent fuel handling, interim storage, reprocessing facilities, repository sites, and final disposal. Requirements for the γ-ray detection system, up to ~6 MeV, can be summarized as follows: high efficiency at high γ-ray energies, high energy resolution, good linearity between γ-ray energy and output signal amplitude, ability to operate at very high count rates, and ease of use in industrial environments such as nuclear facilities. High Purity Germanium (HPGe) detectors are the state of the art and provide excellent energy resolution but are limited in their count rate capability. Lanthanum bromide (LaBr3) scintillation detectors offer significantly higher count rate capabilities at lower energy resolution. Thus, LaBr3 detectors may be an effective alternative for nuclear spent-fuel applications, where count-rate capability is a requirement. This paper documents the measured performance of a 2" (length) × 2" (diameter) LaBr3 scintillation detector system, coupled to a negatively biased PMT and a tapered active high voltage divider, at count rates up to ~3 Mcps. An experimental methodology was developed that uses the average current from the PMT's anode and a dual-source method to characterize the detector system at specific very high count rate values. Delayed γ-ray spectra were acquired with the LaBr3 detector system at the Idaho Accelerator Center, Idaho State University, where samples of ~3 g of 235U were irradiated with moderated neutrons from a photo-neutron source. Results of the spectroscopy characterization and analysis of the delayed γ-ray spectra acquired indicate the possible use of LaBr3 scintillation detectors when high count rate capability may outweigh the lower energy resolution.
NASA Astrophysics Data System (ADS)
Favalli, Andrea; Iliev, Metodi; Ianakiev, Kiril; Hunt, Alan W.; Ludewigt, Bernhard
2018-01-01
High-energy delayed γ-ray spectroscopy is a potential technique for directly assaying spent fuel assemblies and achieving the safeguards goal of quantifying nuclear material inventories for spent fuel handling, interim storage, reprocessing facilities, repository sites, and final disposal. Requirements for the γ-ray detection system, up to ∼6 MeV, can be summarized as follows: high efficiency at high γ-ray energies, high energy resolution, good linearity between γ-ray energy and output signal amplitude, ability to operate at very high count rates, and ease of use in industrial environments such as nuclear facilities. High Purity Germanium Detectors (HPGe) are the state of the art and provide excellent energy resolution but are limited in their count rate capability. Lanthanum Bromide (LaBr3) scintillation detectors offer significantly higher count rate capabilities at lower energy resolution. Thus, LaBr3 detectors may be an effective alternative for nuclear spent-fuel applications, where count-rate capability is a requirement. This paper documents the measured performance of a 2" (length) × 2" (diameter) LaBr3 scintillation detector system, coupled to a negatively biased PMT and a tapered active high voltage divider, with count rates up to ∼3 Mcps. An experimental methodology was developed that uses the average current from the PMT's anode and a dual source method to characterize the detector system at specific very high count rate values. Delayed γ-ray spectra were acquired with the LaBr3 detector system at the Idaho Accelerator Center, Idaho State University, where samples of ∼3 g of 235U were irradiated with moderated neutrons from a photo-neutron source. Results of the spectroscopy characterization and analysis of the delayed γ-ray spectra acquired indicate the possible use of LaBr3 scintillation detectors when high count rate capability may outweigh the lower energy resolution.
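The dual-source characterization mentioned above is related to the textbook two-source dead-time method; the sketch below shows that generic method together with the non-paralysable correction n = m/(1 - m*tau), as an illustration only and not necessarily the exact procedure used by the authors (the numbers are hypothetical).

```python
def dead_time_two_source(m1, m2, m12):
    """First-order two-source estimate of a non-paralysable dead time (seconds).

    m1, m2 : measured count rates with each source alone (cps)
    m12    : measured count rate with both sources present (cps)
    Background is neglected; valid when m*tau << 1.
    """
    return (m1 + m2 - m12) / (m12**2 - m1**2 - m2**2)

def true_rate(measured_cps, tau_s):
    """Non-paralysable dead-time correction: n = m / (1 - m*tau)."""
    return measured_cps / (1.0 - measured_cps * tau_s)

# tau = dead_time_two_source(1.2e6, 1.1e6, 2.1e6)   # ~1e-7 s for these made-up rates
# true_rate(2.5e6, tau)                             # corrected rate in the Mcps regime
```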
Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.
Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard
2017-12-12
We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
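The core inputs of such methods, accumulated residence times and transition counts, can be illustrated with a deliberately simplified sketch for the unbiased case: the maximum-likelihood rate estimate is k_ij = C_ij / T_i, and state free energies follow from the stationary distribution of the resulting rate matrix. This is a toy illustration, not DHAMed itself (no biased windows, no detailed-balance constraints); all arrays are hypothetical.

```python
# Minimal, unbiased-case sketch: ML rate estimates from transition counts and
# residence times, then free energies from the stationary distribution.
# This is a toy illustration, not the DHAMed algorithm described in the paper.
import numpy as np

counts = np.array([[0, 40,  2],
                   [35, 0, 20],
                   [ 3, 25, 0]], dtype=float)   # hypothetical transition counts C[i, j]
times = np.array([8.0, 5.0, 6.0])               # hypothetical residence times T[i] (ns)

K = counts / times[:, None]                     # ML rate estimates k_ij = C_ij / T_i
np.fill_diagonal(K, -K.sum(axis=1))             # rows sum to zero (master equation)

# Stationary distribution: left null vector of K (pi @ K = 0), normalized.
w, v = np.linalg.eig(K.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = np.abs(pi) / np.abs(pi).sum()

kT = 1.0                                        # energies in units of kT
free_energy = -kT * np.log(pi)
print("stationary populations:", np.round(pi, 3))
print("free energies (kT):    ", np.round(free_energy - free_energy.min(), 3))
```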
Kids Count Data Book, 2003: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
O'Hare, William P.
This Kids Count data book examines national and statewide trends in the well being of the nation's children. Statistical portraits are based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide, and suicide; (5) teen birth rate; (6)…
KIDS COUNT Data Book, 2002: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
O'Hare, William P.
This KIDS COUNT data book examines national and statewide trends in the well being of the nations children. Statistical portraits are based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide, and suicide; (5) teen birth rate; (6)…
KIDS COUNT Data Book, 2001: State Profiles of Child Well-Being.
ERIC Educational Resources Information Center
Annie E. Casey Foundation, Baltimore, MD.
This Kids Count report examines national and statewide trends in the well-being of the nation's children. The statistical portrait is based on 10 indicators of well being: (1) percent of low birth weight babies; (2) infant mortality rate; (3) child death rate; (4) rate of teen deaths by accident, homicide and suicide; (5) teen birth rate; (6)…
Waveguide integrated low noise NbTiN nanowire single-photon detectors with milli-Hz dark count rate
Schuck, Carsten; Pernice, Wolfram H. P.; Tang, Hong X.
2013-01-01
Superconducting nanowire single-photon detectors are an ideal match for integrated quantum photonic circuits due to their high detection efficiency for telecom wavelength photons. Quantum optical technology also requires single-photon detection with low dark count rate and high timing accuracy. Here we present very low noise superconducting nanowire single-photon detectors based on NbTiN thin films patterned directly on top of Si3N4 waveguides. We systematically investigate a large variety of detector designs and characterize their detection noise performance. Milli-Hz dark count rates are demonstrated over the entire operating range of the nanowire detectors which also feature low timing jitter. The ultra-low dark count rate, in combination with the high detection efficiency inherent to our travelling wave detector geometry, gives rise to a measured noise equivalent power at the 10^-20 W/Hz^(1/2) level. PMID:23714696
NASA Astrophysics Data System (ADS)
Cooper, R. J.; Amman, M.; Vetter, K.
2018-04-01
High-resolution gamma-ray spectrometers are required for applications in nuclear safeguards, emergency response, and fundamental nuclear physics. To overcome one of the shortcomings of conventional High Purity Germanium (HPGe) detectors, we have developed a prototype device capable of achieving high event throughput and high energy resolution at very high count rates. This device, the design of which we have previously reported on, features a planar HPGe crystal with a reduced-capacitance strip electrode geometry. This design is intended to provide good energy resolution at the short shaping or digital filter times that are required for high rate operation and which are enabled by the fast charge collection afforded by the planar geometry crystal. In this work, we report on the initial performance of the system at count rates up to and including two million counts per second.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
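The metering principle in the last sentence reduces to a one-line relationship; the sketch below simply evaluates it for hypothetical component values, not those of the patented circuit.

```python
# Average output current of a diode-switched count-rate meter:
# I_avg = counting_rate * I_constant * pulse_width (linear while rate * width << 1).
# Component values below are hypothetical.

def meter_current(rate_cps, i_const_amps, pulse_width_s):
    return rate_cps * i_const_amps * pulse_width_s

for rate in (1, 1e2, 1e4, 1e5):                 # 1 cps to 100,000 cps, as in the abstract
    i_avg = meter_current(rate, i_const_amps=1e-3, pulse_width_s=2e-6)
    print(f"{rate:>9.0f} cps -> {i_avg*1e6:10.4f} uA")
```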
Evaluation of PeneloPET Simulations of Biograph PET/CT Scanners
NASA Astrophysics Data System (ADS)
Abushab, K. M.; Herraiz, J. L.; Vicente, E.; Cal-González, J.; España, S.; Vaquero, J. J.; Jakoby, B. W.; Udías, J. M.
2016-06-01
Monte Carlo (MC) simulations are widely used in positron emission tomography (PET) for optimizing detector design and acquisition protocols, and for evaluating corrections and reconstruction methods. PeneloPET is an MC code for PET simulations, based on PENELOPE, which considers detector geometry, acquisition electronics, materials, and source definitions. While PeneloPET has been successfully employed and validated with small animal PET scanners, it required a proper validation with clinical PET scanners including time-of-flight (TOF) information. For this purpose, we chose the family of Biograph PET/CT scanners: the Biograph True-Point (B-TP), Biograph True-Point with TrueV (B-TPTV) and the Biograph mCT. They have similar block detectors and electronics, but a different number of rings and configuration. Some effective parameters of the simulations, such as the dead-time and the size of the reflectors in the detectors, were adjusted to reproduce the sensitivity and noise equivalent count (NEC) rate of the B-TPTV scanner. These parameters were then used to make predictions of experimental results such as sensitivity, NEC rate, spatial resolution, and scatter fraction (SF) for all the Biograph scanners and some variations of them (energy windows and additional rings of detectors). Predictions agree with the measured values for the three scanners, within 7% (sensitivity and NEC rate) and 5% (SF). The resolution obtained for the B-TPTV is slightly better (10%) than the experimental values. In conclusion, we have shown that PeneloPET is suitable for simulating and investigating clinical systems with good accuracy and short computational time, though some effort in tuning a few parameters of the modeled scanners may be needed when the full details of the scanners studied are not available.
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, however future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources is examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
Material screening with HPGe counting station for PandaX experiment
NASA Astrophysics Data System (ADS)
Wang, X.; Chen, X.; Fu, C.; Ji, X.; Liu, X.; Mao, Y.; Wang, H.; Wang, S.; Xie, P.; Zhang, T.
2016-12-01
A gamma counting station based on a high-purity germanium (HPGe) detector was set up for the material screening of the PandaX dark matter experiments in the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min within the energy range of 20 to 2700 keV is achieved thanks to the well-designed passive shield. The sensitivities of the HPGe detector reach the mBq/kg level for isotopes such as K, U, and Th, and are even better for Co and Cs, resulting from the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark-matter experiment are presented. The upgrade plan for the counting station is also discussed.
Simulation of background from low-level tritium and radon emanation in the KATRIN spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leiber, B.; Collaboration: KATRIN Collaboration
The KArlsruhe TRItium Neutrino (KATRIN) experiment is a large-scale experiment for the model independent determination of the mass of electron anti-neutrinos with a sensitivity of 200 meV/c^2. It investigates the kinematics of electrons from tritium beta decay close to the endpoint of the energy spectrum at 18.6 keV. To achieve a good signal to background ratio at the endpoint, a low background rate below 10^-2 counts per second is required. The KATRIN setup thus consists of a high luminosity windowless gaseous tritium source (WGTS), a magnetic electron transport system with differential and cryogenic pumping for tritium retention, and electro-static retarding spectrometers (pre-spectrometer and main spectrometer) for energy analysis, followed by a segmented detector system for counting transmitted beta-electrons. A major source of background comes from magnetically trapped electrons in the main spectrometer (vacuum vessel: 1240 m^3, 10^-11 mbar) produced by nuclear decays in the magnetic flux tube of the spectrometer. Major contributions are expected from short-lived radon isotopes and tritium. Primary electrons, originating from these decays, can be trapped for hours, until having lost almost all their energy through inelastic scattering on residual gas particles. Depending on the initial energy of the primary electron, up to hundreds of low energetic secondary electrons can be produced. Leaving the spectrometer, these electrons will contribute to the background rate. This contribution describes results from simulations for the various background sources. Decays of 219Rn, emanating from the main vacuum pump, and tritium from the WGTS that reaches the spectrometers are expected to account for most of the background. As a result of the radon alpha decay, electrons are emitted through various processes, such as shake-off, internal conversion and Auger deexcitation. The corresponding simulations were done using the KASSIOPEIA framework, which has been developed for the KATRIN experiment for low-energy electron tracking, field calculation and detector simulation. The results of the simulations have been used to optimize the design parameters of the vacuum system with regard to radon emanation and tritium pumping, in order to reach the stringent requirements of the neutrino mass measurement.
Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.
Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S
2001-04-01
Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design coupled with conventional detector electronics and the traditional Anger-positioning algorithm hinder higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rates. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, and dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominate occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to further improve the imaging counting rates that we have shown by multiple times. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging and the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.
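The quoted non-pileup fraction (<20% at 2 million events per second) is consistent with simple Poisson statistics if each event must have no neighbour within an effective resolving window on either side; the window width used below (~0.4 µs) is an assumption chosen for illustration, not a value taken from the paper.

```python
# Fraction of events with no neighbour within +/- tau of their arrival,
# for Poisson arrivals at rate R: f = exp(-2 * R * tau).
# tau is an assumed effective resolving time, not a value from the paper.
import math

def pileup_free_fraction(rate_cps, tau_s):
    return math.exp(-2.0 * rate_cps * tau_s)

tau = 0.4e-6                                    # assumed ~0.4 us resolving window
for rate in (6e4, 5e5, 1e6, 2e6, 3.5e6):
    print(f"{rate:9.2e} cps: pileup-free fraction = {pileup_free_fraction(rate, tau):.2f}")
```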
Illinois Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Illinois' Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for Family…
Ma, Xiaosu; Chien, Jenny Y; Johnson, Jennal; Malone, James; Sinha, Vikram
2017-08-01
The purpose of this prospective, model-based simulation approach was to evaluate the impact of various rapid-acting mealtime insulin dose-titration algorithms on glycemic control (hemoglobin A1c [HbA1c]). Seven stepwise, glucose-driven insulin dose-titration algorithms were evaluated with a model-based simulation approach by using insulin lispro. Pre-meal blood glucose readings were used to adjust insulin lispro doses. Two control dosing algorithms were included for comparison: no insulin lispro (basal insulin+metformin only) or insulin lispro with fixed doses without titration. Of the seven dosing algorithms assessed, daily adjustment of insulin lispro dose, when glucose targets were met at pre-breakfast, pre-lunch, and pre-dinner, sequentially, demonstrated greater HbA1c reduction at 24 weeks, compared with the other dosing algorithms. Hypoglycemic rates were comparable among the dosing algorithms except for higher rates with the insulin lispro fixed-dose scenario (no titration), as expected. The inferior HbA1c response for the "basal plus metformin only" arm supports the additional glycemic benefit with prandial insulin lispro. Our model-based simulations support a simplified dosing algorithm that does not include carbohydrate counting, but that includes glucose targets for daily dose adjustment to maintain glycemic control with a low risk of hypoglycemia.
Assessment of fatigue life of remanufactured impeller based on FEA
NASA Astrophysics Data System (ADS)
Xu, Lei; Cao, Huajun; Liu, Hailong; Zhang, Yubo
2016-09-01
Predicting the fatigue life of remanufactured centrifugal compressor impellers is a critical problem. In this paper, the S-N curve data were obtained by combining experiments and theoretical deduction. The load spectrum was compiled with the rain-flow counting method, taking comprehensive account of the centrifugal force, residual stress, and aerodynamic loads in the repair region. A fatigue-life simulation model was built, and fatigue life was analyzed using the cumulative fatigue damage rule. Although incapable of providing a high-precision prediction, the simulation results were useful for analyzing the factors affecting fatigue life and the likely fatigue fracture areas. Results showed that the load amplitude greatly affects fatigue life, that the impeller should be protected from over-speed operation, and that the predicted fatigue life safely covers the next service cycle at the rated speed.
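As an illustration of the damage-accumulation step, the sketch below applies the Palmgren-Miner rule to a rain-flow-style load histogram using a generic Basquin S-N curve; all material constants and cycle counts are hypothetical placeholders, not values from the impeller study.

```python
# Palmgren-Miner cumulative damage from a (rain-flow style) load histogram.
# Basquin S-N curve: N(S) = (S / Sf) ** (1 / b), with hypothetical constants.

Sf, b = 900.0, -0.09          # hypothetical fatigue strength coefficient (MPa) and exponent

def cycles_to_failure(stress_amplitude_mpa):
    return (stress_amplitude_mpa / Sf) ** (1.0 / b)

# Hypothetical rain-flow histogram: (stress amplitude in MPa, cycles per service block)
histogram = [(120.0, 5.0e5), (180.0, 8.0e4), (250.0, 6.0e3), (320.0, 4.0e2)]

damage_per_block = sum(n / cycles_to_failure(s) for s, n in histogram)
print(f"damage per service block : {damage_per_block:.3e}")
print(f"blocks to failure (D = 1): {1.0 / damage_per_block:.1f}")
```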
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
Image Reconstruction for a Partially Collimated Whole Body PET Scanner.
Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E
2008-06-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.
Pitch Counts in Youth Baseball and Softball: A Historical Review.
Feeley, Brian T; Schisel, Jessica; Agel, Julie
2018-07-01
Pitching injuries are getting increased attention in the mass media. Many references are made to pitch counts and the role they play in injury prevention. The original purpose of regulating the pitch count in youth baseball was to reduce injury and fatigue to pitchers. This article reviews the history and development of the pitch count limit in baseball, the effect it has had on injury, and the evidence regarding injury rates on softball windmill pitching. Literature search through PubMed, mass media, and organizational Web sites through June 2015. Pitch count limits and rest recommendations were introduced in 1996 after a survey of 28 orthopedic surgeons and baseball coaches showed injuries to baseball pitchers' arms were believed to be from the number of pitches thrown. Follow-up research led to revised recommendations with more detailed guidelines in 2006. Since that time, data show a relationship between innings pitched and upper extremity injury, but pitch type has not clearly been shown to affect injury rates. Current surveys of coaches and players show that coaches, parents, and athletes often do not adhere to these guidelines. There are no pitch count guidelines currently available in softball. The increase in participation in youth baseball and softball with an emphasis on early sport specialization in youth sports activities suggests that there will continue to be a rise in injury rates to young throwers. The published pitch counts are likely to positively affect injury rates but must be adhered to by athletes, coaches, and parents.
NASA Astrophysics Data System (ADS)
Nishizawa, Yukiyasu; Sugita, Takeshi; Sanada, Yukihisa; Torii, Tatsuo
2015-04-01
Since 2011, MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan) has been conducting aerial monitoring to investigate the distribution of radioactive cesium dispersed into the atmosphere after the accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP), Tokyo Electric Power Company. Distribution maps of the air dose-rate at 1 m above the ground and the radioactive cesium deposition concentration on the ground are prepared using spectra obtained by aerial monitoring. The radioactive cesium deposition is derived from its dose rate, which is calculated by excluding the dose rate of the background radiation due to natural radionuclides from the air dose-rate at 1 m above the ground. The first step of the current method of calculating the dose rate due to natural radionuclides is to calculate, in areas where no radioactive cesium is detected, the ratio of the total count rate to the count rate in the energy region of 1,400 keV or higher (the BG-Index). Next, the air dose rate due to natural radionuclides in an area where radioactive cesium is distributed is calculated by multiplying the BG-Index by the integrated count rate of 1,400 keV or higher; subtracting this from the total air dose rate yields the contribution of radioactive cesium. In high dose-rate areas, however, the count rate of the 1,365-keV peak of Cs-134, though small, is included in the integrated count rate of 1,400 keV or higher, which could cause an overestimation of the air dose rate of natural radionuclides. We developed a method for accurately evaluating the distribution maps of the natural air dose-rate by excluding the effect of radioactive cesium, even in contaminated areas, and obtained an accurate map of the air dose-rate attributable to the radioactive cesium deposition on the ground. Furthermore, the natural dose-rate distribution throughout Japan has been obtained by this method.
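The BG-Index procedure can be written as a few lines of arithmetic. The count rates below are hypothetical, the conversion from count rate to dose rate is omitted, and the Cs-134 1,365-keV correction discussed in the abstract is not included in this minimal sketch.

```python
# Minimal BG-Index sketch (hypothetical count rates, no Cs-134 1,365-keV correction).

# Step 1: in a cesium-free reference area, ratio of the total count rate to the
#         count rate above 1,400 keV.
ref_total_cps = 520.0          # hypothetical
ref_above_1400_cps = 26.0      # hypothetical
bg_index = ref_total_cps / ref_above_1400_cps

# Step 2: in a contaminated area, the natural-background component of the total
#         count rate is estimated as BG-Index * (count rate above 1,400 keV);
#         the cesium component is the remainder.
area_total_cps = 2400.0        # hypothetical
area_above_1400_cps = 30.0     # hypothetical
natural_cps = bg_index * area_above_1400_cps
cesium_cps = area_total_cps - natural_cps
print(f"BG-Index = {bg_index:.1f}")
print(f"natural component: {natural_cps:.0f} cps, cesium component: {cesium_cps:.0f} cps")
```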
Radiation Discrimination in LiBaF3 Scintillator Using Digital Signal Processing Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aalseth, Craig E.; Bowyer, Sonya M.; Reeder, Paul L.
2002-11-01
The new scintillator material LiBaF3:Ce offers the possibility of measuring neutron or alpha count rates and energy spectra simultaneously while measuring gamma count rates and spectra using a single detector.
259 E Ohio, April 2014, Lindsay Light Radiological Survey
The count rates for the sidewalk ranged from 5,600 cpm to 16,800 cpm. There were two locations along Ontario St. with elevated count rates approaching the threshold limit correlating to 7.1 pCi/g total thorium.
Sato, T; Kataoka, R; Yasuda, H; Yashiro, S; Kuwabara, T; Shiota, D; Kubo, Y
2014-10-01
WASAVIES, a warning system for aviation exposure to solar energetic particles (SEPs), is under development through collaboration between several institutes in Japan and the USA. It is designed to deterministically forecast the SEP fluxes incident on the atmosphere within 6 h after flare onset using the latest space weather research. To immediately estimate aircrew doses from the obtained SEP fluxes, response functions for the particle fluxes generated by the incidence of monoenergetic protons on the atmosphere were developed by performing air shower simulations using the Particle and Heavy Ion Transport code system. The accuracy of the simulation was verified by calculating the increase in the count rate of a neutron monitor during a ground-level enhancement, combining the response function with the SEP fluxes measured by the PAMELA spectrometer. The response function will be implemented in WASAVIES and used to protect aircrews from additional SEP exposure. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
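The neutron-monitor verification amounts to folding a measured SEP proton spectrum with an energy-dependent response (yield) function. The energy grid, spectrum, and response values in the sketch below are hypothetical placeholders, not PHITS or PAMELA values.

```python
# Folding a primary proton spectrum with a response function to estimate a neutron
# monitor count-rate increase: dN/dt = integral( flux(E) * response(E) dE ).
# Energy grid, spectrum, and response values are hypothetical placeholders.
import numpy as np

energy_gev = np.array([0.5, 1.0, 2.0, 5.0, 10.0])              # proton kinetic energy (GeV)
sep_flux = np.array([4.0e3, 9.0e2, 1.5e2, 8.0, 0.4])           # protons / (m^2 s sr GeV)
response = np.array([1.0e-4, 8.0e-4, 3.0e-3, 9.0e-3, 1.6e-2])  # counts per (proton / m^2 sr)

count_rate_increase = np.trapz(sep_flux * response, energy_gev)
print(f"estimated count-rate increase: {count_rate_increase:.2f} counts/s")
```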
NASA Astrophysics Data System (ADS)
Mayer, D. P.; Kite, E. S.
2016-12-01
Sandblasting, aeolian infilling, and wind deflation all obliterate impact craters on Mars, complicating the use of crater counts for chronology, particularly on sedimentary rock surfaces. However, crater counts on sedimentary rocks can be exploited to constrain wind erosion rates. Relatively small, shallow craters are preferentially obliterated as a landscape undergoes erosion, so the size-frequency distribution of impact craters in a landscape undergoing steady exhumation will develop a shallower power-law slope than a simple production function. Estimating erosion rates is important for several reasons: (1) Wind erosion is a source of mass for the global dust cycle, so the global dust reservoir will disproportionately sample fast-eroding regions; (2) The pace and pattern of recent wind erosion is a sorely-needed constraint on models of the sculpting of Mars' sedimentary-rock mounds; (3) Near-surface complex organic matter on Mars is destroyed by radiation in <10^8 years, so high rates of surface exhumation are required for preservation of near-surface organic matter. We use crater counts from 18 HiRISE images over sedimentary rock deposits as the basis for estimating erosion rates. Each image was counted by ≥3 analysts and only features agreed on by ≥2 analysts were included in the erosion rate estimation. Erosion rates range from 0.1-0.2 μm/yr across all images. These rates represent an upper limit on surface erosion by landscape lowering. At the conference we will discuss the within and between-image variability of erosion rates and their implications for recent geological processes on Mars.
Dynamic time-correlated single-photon counting laser ranging
NASA Astrophysics Data System (ADS)
Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang
2018-03-01
We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improves the counting rate to more than 4×10^7 cps, which makes distance measurement possible even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot easily extract the signal when fast-moving targets are submerged in a strong background. We propose a dynamic TCSPC method for measuring fast-moving targets by varying the coincidence window in real time. In the experiment, we show that targets with a velocity of 5 km/s can be detected with this method at an echo rate of 20% and background counts of more than 1.2×10^7 cps.
NASA Astrophysics Data System (ADS)
Gao, Zhuo; Zhan, Weida; Sun, Quan; Hao, Ziqiang
2018-04-01
Differential multi-pulse position modulation (DMPPM) is a new type of modulation technology. It offers a fast transmission rate, high bandwidth utilization, and a high modulation rate. The study of DMPPM modulation has important scientific value and practical significance. Channel capacity is one of the important indexes for measuring the communication capability of a communication system, and studying the channel capacity of DMPPM without background noise is key to analyzing the characteristics of DMPPM. The DMPPM theoretical model is established, the symbol structure of DMPPM with a guard time slot is analyzed, and the channel capacity expression of DMPPM is derived. Simulation analysis was performed in MATLAB. The curves of unit channel capacity and capacity efficiency at different pulse and photon counting rates are analyzed. The results show that DMPPM is more advantageous than multi-pulse position modulation (MPPM) and is more suitable for future wireless optical communication systems.
A study of pile-up in integrated time-correlated single photon counting systems
NASA Astrophysics Data System (ADS)
Arlt, Jochen; Tyndall, David; Rae, Bruce R.; Li, David D.-U.; Richardson, Justin A.; Henderson, Robert K.
2013-10-01
Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single-photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput due to minimal detector transit (dead) time or timing channel dead time are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit through analytic formula and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up insensitive acquisition is attainable at greater than 10 times the pulse repetition rate providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
A study of pile-up in integrated time-correlated single photon counting systems.
Arlt, Jochen; Tyndall, David; Rae, Bruce R; Li, David D-U; Richardson, Justin A; Henderson, Robert K
2013-10-01
Recent demonstration of highly integrated, solid-state, time-correlated single photon counting (TCSPC) systems in CMOS technology is set to provide significant increases in performance over existing bulky, expensive hardware. Arrays of single-photon avalanche diode (SPAD) detectors, timing channels, and signal processing can be integrated on a single silicon chip with a degree of parallelism and computational speed that is unattainable by discrete photomultiplier tube and photon counting card solutions. New multi-channel, multi-detector TCSPC sensor architectures with greatly enhanced throughput due to minimal detector transit (dead) time or timing channel dead time are now feasible. In this paper, we study the potential for future integrated, solid-state TCSPC sensors to exceed the photon pile-up limit through analytic formula and simulation. The results are validated using a 10% fill factor SPAD array and an 8-channel, 52 ps resolution time-to-digital conversion architecture with embedded lifetime estimation. It is demonstrated that pile-up insensitive acquisition is attainable at greater than 10 times the pulse repetition rate providing over 60 dB of extended dynamic range to the TCSPC technique. Our results predict future CMOS TCSPC sensors capable of live-cell transient observations in confocal scanning microscopy, improved resolution of near-infrared optical tomography systems, and fluorescence lifetime activated cell sorting.
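The classical pile-up limit that such sensors aim to exceed can be quantified with Poisson statistics: if photon detections per excitation cycle are Poisson with mean mu and only the first photon per cycle is timed, the fraction of recorded events that actually contained more than one photon follows directly. This is the textbook estimate, not the sensor model of the paper; the repetition rate below is a hypothetical example.

```python
# Classical TCSPC pile-up estimate for Poisson-distributed detections per cycle.
# mu = mean detected photons per excitation pulse = detection_rate / repetition_rate.
import math

def recorded_event_rate(rep_rate_hz, mu):
    """Cycles per second with at least one photon (only the first photon is timed)."""
    return rep_rate_hz * (1.0 - math.exp(-mu))

def pileup_fraction(mu):
    """Fraction of recorded events that contained 2 or more photons."""
    p_ge1 = 1.0 - math.exp(-mu)
    p_eq1 = mu * math.exp(-mu)
    return (p_ge1 - p_eq1) / p_ge1

rep_rate = 20e6                       # 20 MHz pulsed laser (hypothetical)
for mu in (0.01, 0.05, 0.1, 0.5, 1.0):
    print(f"mu = {mu:4.2f}: recorded {recorded_event_rate(rep_rate, mu):.2e} cps, "
          f"pile-up fraction = {pileup_fraction(mu):.3f}")
```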
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
NASA Astrophysics Data System (ADS)
Wen, Xianfei; Enqvist, Andreas
2017-09-01
Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and substantially high pulse shape discrimination performance. A disadvantage of CLYC detectors is the long scintillation decay times, which causes pulse pile-up at moderate input count rate. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rate. The algorithms were first tested using low-rate data. They exhibit a pulse-shape discrimination performance comparable to that of the charge comparison method, at low rate. Then, they were evaluated at high count rate. Neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The algorithm developed using the triangular filter exhibits discrimination capability marginally higher than that of the trapezoidal filter based algorithm irrespective of low or high rate. The algorithms exhibit low computational complexity and are executable on an FPGA in real-time. They are also suitable for application to other radiation detectors whose pulses are piled-up at high rate owing to long scintillation decay times.
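A minimal sketch of the kind of shaping filter named above: a moving-sum trapezoidal shaper (rise time k samples, flat top m samples) applied to a digitized pulse, with a crude discrimination figure formed from the ratio of a short to a long shaping output. The synthetic pulse, filter lengths, and the ratio-based figure are illustrative assumptions; real CLYC processing also needs the decay deconvolution, calibration, and threshold logic that the paper describes.

```python
# Moving-sum trapezoidal shaper and a crude short/long shaping ratio for PSD.
# Synthetic pulse and parameters are illustrative only.
import numpy as np

def trapezoidal_filter(x, k, m):
    """y[n] = sum(x[n-k+1..n]) - sum(x[n-2k-m+1..n-k-m]); a moving-sum trapezoid
    with rise time k samples and flat top m samples (no decay deconvolution)."""
    c = np.cumsum(np.concatenate(([0.0], x)))
    win = lambda n: c[n] - c[np.maximum(n - k, 0)]          # sum of the last k samples
    n = np.arange(1, len(x) + 1)
    return win(n) - win(np.maximum(n - k - m, 0))

fs = 250e6                                                   # 250 MS/s digitizer (hypothetical)
t = np.arange(2000) / fs
pulse = np.exp(-t / 1.0e-6) - np.exp(-t / 5.0e-8)            # synthetic scintillation pulse
pulse += np.random.default_rng(0).normal(0, 0.01, t.size)    # noise

short = trapezoidal_filter(pulse, k=25, m=10)    # ~100 ns rise: fast light
long_ = trapezoidal_filter(pulse, k=250, m=50)   # ~1 us rise: total light
psd_ratio = short.max() / long_.max()
print(f"short/long shaping ratio (crude PSD figure): {psd_ratio:.3f}")
```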
Hammes, Jochen; Pietrzyk, Uwe; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang
2011-12-01
The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms "Adult Female" (AF, 16ml thyroid) and "Adult Male" (AM, 19ml thyroid) were used as source regions. Nodules of 1ml and 3ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user dependant bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1ml nodule) to 15.3 (AM, 3ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1ml nodules this effect is smaller: 9-11% (AF) respectively 7-8% (AM). For each phantom, the dependency of count density ratios upon uptake ratios can be modeled well by both linear and quadratic regression (quadratic: r(2)>0.99), yielding sets of parameters which in reverse allow the computation of uptake ratios (and thus dose) from count density ratios. A single regression model obtained by fitting the data of all simulations simultaneously did not provide satisfactory results except for GP, while underestimating the true uptake ratios in AF and overestimating them in AM. The scintigraphic count density ratios depend upon the uptake ratios between nodule and rest of thyroid, upon their volumes, and their respective position in a non-trivial way. Further investigations are required to derive a comprehensive rule to calculate the uptake or dose ratios based on post-therapeutic scintigraphy. Copyright © 2011. Published by Elsevier GmbH.
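The calibration between count-density ratio and uptake ratio described above can be inverted numerically once regression coefficients are available; for simplicity the sketch below fits the inverse relation (uptake as a quadratic function of the measured ratio) directly. The calibration points are invented placeholders that merely mimic the shape of such a curve, not the values reported for the GATE phantoms.

```python
# Fit uptake ratio as a quadratic function of the measured count-density ratio,
# then use the fit to read uptake ratios from new measurements.
# The calibration points below are invented placeholders.
import numpy as np

uptake_ratio = np.array([1, 2, 5, 10, 15, 20], dtype=float)       # preset in simulation
count_density_ratio = np.array([1.0, 1.8, 3.9, 7.2, 10.0, 12.6])  # "measured" from ROIs

# Quadratic regression: uptake = a * r**2 + b * r + c
a, b, c = np.polyfit(count_density_ratio, uptake_ratio, deg=2)

def uptake_from_ratio(r):
    return a * r**2 + b * r + c

for r in (2.5, 6.0, 11.0):
    print(f"count-density ratio {r:5.1f} -> estimated uptake ratio {uptake_from_ratio(r):5.1f}")
```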
NASA Astrophysics Data System (ADS)
Giocoli, Carlo; Moscardini, Lauro; Baldi, Marco; Meneghetti, Massimo; Metcalf, Robert B.
2018-05-01
In this paper, we study the statistical properties of weak lensing peaks in light-cones generated from cosmological simulations. In order to assess the prospects of such observable as a cosmological probe, we consider simulations that include interacting Dark Energy (hereafter DE) models with coupling term between DE and Dark Matter. Cosmological models that produce a larger population of massive clusters have more numerous high signal-to-noise peaks; among models with comparable numbers of clusters those with more concentrated haloes produce more peaks. The most extreme model under investigation shows a difference in peak counts of about 20% with respect to the reference ΛCDM model. We find that peak statistics can be used to distinguish a coupling DE model from a reference one with the same power spectrum normalisation. The differences in the expansion history and the growth rate of structure formation are reflected in their halo counts, non-linear scale features and, through them, in the properties of the lensing peaks. For a source redshift distribution consistent with the expectations of future space-based wide field surveys, we find that typically seventy percent of the cluster population contributes to weak-lensing peaks with signal-to-noise ratios larger than two, and that the fraction of clusters in peaks approaches one-hundred percent for haloes with redshift z ≤ 0.5. Our analysis demonstrates that peak statistics are an important tool for disentangling DE models by accurately tracing the structure formation processes as a function of the cosmic time.
355 E Riverwalk, February 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
230 E. Ontario, May 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,366 cpm.
Ianakiev, Kiril D [Los Alamos, NM; Hsue, Sin Tao [Santa Fe, NM; Browne, Michael C [Los Alamos, NM; Audia, Jeffrey M [Abiquiu, NM
2006-07-25
The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver processes the light pulse signals into current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.
500-MHz x-ray counting with a Si-APD and a fast-pulse processing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kishimoto, Shunji; Taniguchi, Takashi; Tanaka, Manobu
2010-06-23
We introduce a counting system of up to 500 MHz for synchrotron x-ray high-rate measurements. A silicon avalanche photodiode detector was used in the counting system. The fast-pulse circuit of the amplifier was designed with hybrid ICs in preparation for an ASIC system for a large-scale pixel array detector in the near future. The fast amplifier consists of two cascading emitter-followers using 10-GHz band transistors. A count rate of 3.25×10^8 s^-1 was achieved using the system for 8-keV x-rays. However, a baseline shift caused by the AC coupling in the amplifier prevented us from observing the maximum count rate of 4.49×10^8 s^-1, determined by the electron-bunch filling of the ring accelerator. We also report that an amplifier with a baseline restorer was tested in order to keep the baseline level at 0 V even at high input rates.
Computer Simulation of the Population Growth (Schizosaccharomyces Pombe) Experiment.
ERIC Educational Resources Information Center
Daley, Michael; Hillier, Douglas
1981-01-01
Describes a computer program (available from authors) developed to simulate "Growth of a Population (Yeast) Experiment." Students actively revise the counting techniques with realistically simulated haemocytometer or eye-piece grid and are reminded of the necessary dilution technique. Program can be modified to introduce such variables…
Modeling of displacement damage in silicon carbide detectors resulting from neutron irradiation
NASA Astrophysics Data System (ADS)
Khorsandi, Behrooz
There is considerable interest in developing a power monitor system for Generation IV reactors (for instance GT-MHR). A new type of semiconductor radiation detector is under development based on silicon carbide (SiC) technology for these reactors. SiC has been selected as the semiconductor material due to its superior thermal-electrical-neutronic properties. Compared to Si, SiC is a radiation hard material; however, like Si, the properties of SiC are changed by irradiation by a large fluence of energetic neutrons, as a consequence of displacement damage, and that irradiation decreases the life-time of detectors. Predictions of displacement damage and the concomitant radiation effects are important for deciding where the SiC detectors should be placed. The purpose of this dissertation is to develop computer simulation methods to estimate the number of various defects created in SiC detectors, because of neutron irradiation, and predict at what positions of a reactor, SiC detectors could monitor the neutron flux with high reliability. The simulation modeling includes several well-known---and commercial---codes (MCNP5, TRIM, MARLOWE and VASP), and two kinetic Monte Carlo codes written by the author (MCASIC and DCRSIC). My dissertation will highlight the displacement damage that may happen in SiC detectors located in available positions in the OSURR, GT-MHR and IRIS. As extra modeling output data, the count rates of SiC for the specified locations are calculated. A conclusion of this thesis is SiC detectors that are placed in the thermal neutron region of a graphite moderator-reflector reactor have a chance to survive at least one reactor refueling cycle, while their count rates are acceptably high.
Relationship between salivary flow rates and Candida counts in subjects with xerostomia.
Torres, Sandra R; Peixoto, Camila Bernardo; Caldas, Daniele Manhães; Silva, Eline Barboza; Akiti, Tiyomi; Nucci, Márcio; de Uzeda, Milton
2002-02-01
This study evaluated the relationship between salivary flow and Candida colony counts in the saliva of patients with xerostomia. Sialometry and Candida colony-forming unit (CFU) counts were taken from 112 subjects who reported xerostomia in a questionnaire. Chewing-stimulated whole saliva was collected and streaked in Candida plates and counted in 72 hours. Species identification was accomplished under standard methods. There was a significant inverse relationship between salivary flow and Candida CFU counts (P =.007) when subjects with high colony counts were analyzed (cutoff point of 400 or greater CFU/mL). In addition, the median sialometry of men was significantly greater than that of women (P =.003), even after controlling for confounding variables like underlying disease and medications. Sjögren's syndrome was associated with low salivary flow rate (P =.007). There was no relationship between the median Candida CFU counts and gender or age. There was a high frequency (28%) of mixed colonization. Candida albicans was the most frequent species, followed by C parapsilosis, C tropicalis, and C krusei. In subjects with high Candida CFU counts there was an inverse relationship between salivary flow and Candida CFU counts.
ERIC Educational Resources Information Center
Annie E. Casey Foundation, Baltimore, MD.
Data from the 50 United States are listed for 1997 from Kids Count in an effort to track state-by-state the status of children in the United States and to secure better futures for all children. Data include percent low birth weight babies; infant mortality rate; child death rate; rate of teen deaths by accident, homicide, and suicide; teen birth…
Palm Beach Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Palm Beach's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…
Miami-Dade Quality Counts: QRS Profile. The Child Care Quality Rating System (QRS) Assessment
ERIC Educational Resources Information Center
Child Trends, 2010
2010-01-01
This paper presents a profile of Miami-Dade's Quality Counts prepared as part of the Child Care Quality Rating System (QRS) Assessment Study. The profile consists of several sections and their corresponding descriptions including: (1) Program Information; (2) Rating Details; (3) Quality Indicators for Center-Based Programs; (4) Indicators for…
Richard L. Hutto; Sallie J. Hejl; Jeffrey F. Kelly; Sandra M. Pletschet
1995-01-01
We conducted a series of 275 paired (on- and off-road) point counts within 4 distinct vegetation cover types in northwestern Montana. Roadside counts generated a bird list that was essentially the same as the list generated from off-road counts within the same vegetation cover type. Species that were restricted to either on- or off-road counts were rare, suggesting...
Flow rate calibration to determine cell-derived microparticles and homogeneity of blood components.
Noulsri, Egarit; Lerdwana, Surada; Kittisares, Kulvara; Palasuwan, Attakorn; Palasuwan, Duangdao
2017-08-01
Cell-derived microparticles (MPs) are currently of great interest to screening transfusion donors and blood components. However, the current approach to counting MPs is not affordable for routine laboratory use due to its high cost. The current study aimed to investigate the potential use of flow-rate calibration for counting MPs in whole blood, packed red blood cells (PRBCs), and platelet concentrates (PCs). The accuracy of flow-rate calibration was investigated by comparing the platelet counts of an automated counter and a flow-rate calibrator. The concentration of MPs and their origins in whole blood (n=100), PRBCs (n=100), and PCs (n=92) were determined using a FACSCalibur. The MPs' fold-changes were calculated to assess the homogeneity of the blood components. Comparing the platelet counts conducted by automated counting and flow-rate calibration showed an r 2 of 0.6 (y=0.69x+97,620). The CVs of the within-run and between-run variations of flow-rate calibration were 8.2% and 12.1%, respectively. The Bland-Altman plot showed a mean bias of -31,142platelets/μl. MP enumeration revealed both the difference in MP levels and their origins in whole blood, PRBCs, and PCs. Screening the blood components demonstrated high heterogeneity of the MP levels in PCs when compared to whole blood and PRBCs. The results of the present study suggest the accuracy and precision of flow-rate calibration for enumerating MPs. This flow-rate approach is affordable for assessing the homogeneity of MPs in blood components in routine laboratory practice. Copyright © 2017 Elsevier Ltd. All rights reserved.
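The flow-rate approach replaces counting beads with a known acquisition volume: the analyzed volume is the calibrated flow rate multiplied by the acquisition time, and the MP concentration follows directly. The numbers below are hypothetical, and the single dilution-factor correction is a simplifying assumption.

```python
# Flow-rate calibration: concentration = gated events / (flow rate * time),
# corrected for sample dilution. All numbers are hypothetical.

def mp_per_ul(gated_events, flow_rate_ul_per_min, acquisition_min, dilution_factor=1.0):
    analyzed_volume_ul = flow_rate_ul_per_min * acquisition_min
    return gated_events * dilution_factor / analyzed_volume_ul

# Example: 45,000 MP-gated events in 2 min at a calibrated 12 uL/min, sample diluted 1:10.
conc = mp_per_ul(gated_events=45_000, flow_rate_ul_per_min=12.0,
                 acquisition_min=2.0, dilution_factor=10.0)
print(f"estimated concentration: {conc:,.0f} MPs/uL")
```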
A Motivational Technique for Business Math
ERIC Educational Resources Information Center
Voelker, Pamela
1977-01-01
The author suggests the use of simulation and role playing as a method of motivating students in business math. Examples of career-oriented business math simulation games are counting change, banking, payrolls, selling, and shopping. (MF)
371 E. Lower Wacker Drive, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
220 E. Illinois St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,500 cpm to 5,600 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
8-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,200 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.
429 E. Grand Ave, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
201-211 E. Grand Ave, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 3,900 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
230 N. Michigan Ave, April 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,400 cpm to 3,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,542 cpm.
36 W. Illinois St, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 7,029 cpm.
1-37 W. Hubbard, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,800 cpm to 5,000 cpm. No count rates were found at any time that exceeded the threshold limit of 7,389 cpm.
211 E. Ohio St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,300 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.
140-200 E. Grand Ave, February 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 2,400 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.
430 N. Michigan Ave, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 2,100 cpm. No count rates were found at any time that exceeded the threshold limit of 6,338 cpm.
401-599 N. Dearborn St., March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,700 cpm to 5,800 cpm. No count rates were found at any time that exceeded the threshold limit of 6,738 cpm.
Brooks, John P; Adeli, Ardeshir; Read, John J; McLaughlin, Michael R
2009-01-01
Runoff water following a rain event is one possible source of environmental contamination after a manure application. This greenhouse study used a rainfall simulator to determine bacterial-associated runoff from troughs of common bermudagrass [Cynodon dactylon (L.) Pers.] that were treated with P-based, N-based, and N plus lime rates of poultry (Gallus gallus) litter, recommended inorganic fertilizer, and control. Total heterotrophic plate count (HPC) bacteria, total and thermotolerant coliforms, enterococci, staphylococci, Clostridium perfringens, Salmonella, and Campylobacter, as well as antibiotic resistance profiles for the staphylococci and enterococci isolates were all monitored in runoff waters. Analysis following five rainfall events indicated that staphylococci, enterococci, and clostridia levels were related to manure application rate. Runoff release of staphylococci, enterococci, and C. perfringens were approximately 3 to 6 log10 greater in litter vs. control treatment. In addition, traditional indicators such as thermotolerant and total coliforms performed poorly as fecal indicators. Some isolated enterococci demonstrated increased antibiotic resistance to polymixin b and/or select aminoglyocosides, while many staphylococci were susceptible to most antimicrobials tested. Results indicated poultry litter application can lead to microbial runoff following simulated rain events. Future studies should focus on the use of staphylococci, enterococci, and C. perfringens as indicators.
Chi-squared and C statistic minimization for low count per bin data
NASA Astrophysics Data System (ADS)
Nousek, John A.; Shue, David R.
1989-07-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy
NASA Technical Reports Server (NTRS)
Nousek, John A.; Shue, David R.
1989-01-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
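As an illustration of the two statistics being compared above, the following minimal sketch (not the authors' code; the exponential spectral model, bin counts, and starting values are assumptions) minimizes both the chi-squared statistic and Cash's C statistic with scipy's Powell method on the same simulated low-count spectrum.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
channels = np.arange(50)
true_rate = 5.0 * np.exp(-channels / 20.0)   # assumed "true" decaying spectrum
counts = rng.poisson(true_rate)              # low-count Poisson data

def model(params):
    amp, scale = params
    return amp * np.exp(-channels / scale)

def chi2_stat(params):
    m = model(params)
    var = np.maximum(counts, 1.0)            # "data" variance estimate, a common choice
    return np.sum((counts - m) ** 2 / var)

def cash_c(params):
    m = np.clip(model(params), 1e-12, None)  # keep the log argument positive
    # Cash (1979) C statistic for Poisson data, model-independent terms dropped
    return 2.0 * np.sum(m - counts * np.log(m))

start = np.array([3.0, 10.0])
fit_chi2 = minimize(chi2_stat, start, method="Powell")
fit_cash = minimize(cash_c, start, method="Powell")
print("chi-squared fit:", fit_chi2.x)
print("C statistic fit:", fit_cash.x)
```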
Validity of Activity Monitor Step Detection Is Related to Movement Patterns.
Hickey, Amanda; John, Dinesh; Sasaki, Jeffer E; Mavilia, Marianna; Freedson, Patty
2016-02-01
There is a need to examine step-counting accuracy of activity monitors during different types of movements. The purpose of this study was to compare activity monitor and manually counted steps during treadmill and simulated free-living activities and to compare the activity monitor steps to the StepWatch (SW) in a natural setting. Fifteen participants performed laboratory-based treadmill (2.4, 4.8, 7.2 and 9.7 km/h) and simulated free-living activities (e.g., cleaning a room) while wearing an activPAL, Omron HJ720-ITC, Yamax Digi-Walker SW-200, 2 ActiGraph GT3Xs (1 in "low-frequency extension" [AGLFE] and 1 in "normal-frequency" mode), an ActiGraph 7164, and a SW. Participants also wore monitors for 1 day in their free-living environment. Linear mixed models identified differences between activity monitor steps and the criterion in the laboratory/free-living settings. Most monitors performed poorly during treadmill walking at 2.4 km/h. Cleaning a room had the largest errors of all simulated free-living activities. The accuracy was highest for forward/rhythmic movements for all monitors. In the free-living environment, the AGLFE had the largest discrepancy with the SW. This study highlights the need to verify step-counting accuracy of activity monitors with activities that include different movement types/directions. This is important to understand the origin of errors in step-counting during free-living conditions.
Norris, Laura C; Fornadel, Christen M; Hung, Wei-Chien; Pineda, Fernando J; Norris, Douglas E
2010-07-01
Anopheles arabiensis is a major vector of Plasmodium falciparum in southern Zambia. This study aimed to determine the rate of multiple human blood meals taken by An. arabiensis to more accurately estimate entomologic inoculation rates (EIRs). Mosquitoes were collected in four village areas over two seasons. DNA from human blood meals was extracted and amplified at four microsatellite loci. Using the three-allele method, which counts ≥3 alleles at any microsatellite locus as a multiple blood meal, we determined that the overall frequency of multiple blood meals was 18.9%, which was higher than rates reported for An. gambiae in Kenya and An. funestus in Tanzania. Computer simulations showed that the three-allele method underestimates the true multiple blood meal proportion by 3-5%. Although P. falciparum infection status was not shown to influence the frequency of multiple blood feeding, the high multiple feeding rate found in this study increased predicted malaria risk by increasing EIR.
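A minimal Monte Carlo sketch of the three-allele scoring rule follows (the allele count per locus, allele frequencies, and the true multiple-meal fraction are invented for illustration): each simulated meal draws one or two hosts, and a meal is scored as multiple if three or more distinct alleles appear at any locus, which slightly undercounts true two-host meals.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LOCI = 4             # microsatellite loci typed per blood meal
N_ALLELES = 8          # assumed alleles per locus, uniform frequencies
TRUE_MULTIPLE = 0.20   # assumed true fraction of meals taken from two hosts
N_MEALS = 20_000

def host_genotype():
    """Two alleles per locus for a single human host."""
    return rng.integers(0, N_ALLELES, size=(N_LOCI, 2))

scored_multiple = 0
for _ in range(N_MEALS):
    hosts = 2 if rng.random() < TRUE_MULTIPLE else 1
    alleles = np.concatenate([host_genotype() for _ in range(hosts)], axis=1)
    # three-allele rule: >= 3 distinct alleles at any locus => multiple blood meal
    if any(len(set(locus)) >= 3 for locus in alleles):
        scored_multiple += 1

print(f"true multiple-meal fraction  : {TRUE_MULTIPLE:.3f}")
print(f"three-allele method estimate : {scored_multiple / N_MEALS:.3f}")
```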
Point Count Length and Detection of Forest Neotropical Migrant Birds
Deanna K. Dawson; David R. Smith; Chandler S. Robbins
1995-01-01
Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences...
voom: precision weights unlock linear model analysis tools for RNA-seq read counts
2014-01-01
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods. PMID:24485249
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
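The core voom idea can be sketched in a few lines (this is a simplification, not the limma implementation; the toy counts, the lowess span, and the use of gene-wise standard deviations in place of model residuals are assumptions): compute log-CPM values, smooth sqrt(standard deviation) against mean log-count, and turn the predicted standard deviation of each observation into a precision weight.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
counts = rng.negative_binomial(n=5, p=0.1, size=(500, 6))   # toy genes x samples
lib_size = counts.sum(axis=0)

# log2 counts-per-million with the 0.5 offset used by voom
logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)

gene_mean = logcpm.mean(axis=1)
gene_sd = logcpm.std(axis=1, ddof=1)

# mean-variance trend: sqrt(standard deviation) versus average log-count
trend = lowess(np.sqrt(gene_sd), gene_mean, frac=0.5, return_sorted=True)

# predict sqrt(sd) for every observation, then weight = 1 / predicted variance
pred = np.interp(logcpm.ravel(), trend[:, 0], trend[:, 1]).reshape(logcpm.shape)
weights = 1.0 / np.maximum(pred, 1e-6) ** 4   # (sqrt(sd))^4 = variance
print("weight matrix:", weights.shape, "range:", weights.min(), "-", weights.max())
```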
A compact 7-cell Si-drift detector module for high-count rate X-ray spectroscopy.
Hansen, K; Reckleben, C; Diehl, I; Klär, H
2008-05-01
A new Si-drift detector module for fast X-ray spectroscopy experiments was developed and realized. The Peltier-cooled module comprises a sensor with 7 × 7 mm² active area, an integrated circuit for amplification, shaping and detection, storage, and derandomized readout of signal pulses in parallel, and amplifiers for line driving. The compactness and hexagonal shape of the module with a wrench size of 16 mm allow very short distances to the specimen and multi-module arrangements. The power dissipation is 186 mW. At a shaper peaking time of 190 ns and an integration time of 450 ns an electronic rms noise of ~11 electrons was achieved. When operated at 7 °C, FWHM line widths around 260 and 460 eV (Cu-Kα) were obtained at low rates and at sum-count rates of 1.7 MHz, respectively. The peak shift is below 1% for a broad range of count rates. At 1.7-MHz sum-count rate the throughput loss amounts to 30%.
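The throughput loss quoted above can be compared with textbook dead-time models. In the sketch below the effective dead time (about 210 ns) is an assumed value chosen so that the paralyzable model gives roughly 30% loss at 1.7 MHz; it is not a parameter taken from the paper.

```python
import numpy as np

tau = 210e-9                                   # assumed effective dead time per event (s)
n = np.array([0.5e6, 1.0e6, 1.7e6, 3.0e6])     # true input count rates (counts/s)

m_nonpar = n / (1.0 + n * tau)                 # non-paralyzable: m = n / (1 + n*tau)
m_par = n * np.exp(-n * tau)                   # paralyzable:     m = n * exp(-n*tau)

for ni, m1, m2 in zip(n, m_nonpar, m_par):
    print(f"input {ni/1e6:.1f} MHz: loss {100*(1 - m1/ni):4.1f}% (non-paralyzable), "
          f"{100*(1 - m2/ni):4.1f}% (paralyzable)")
```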
Acoustic Emission Parameters of Three Gorges Sandstone during Shear Failure
NASA Astrophysics Data System (ADS)
Xu, Jiang; Liu, Yixin; Peng, Shoujian
2016-12-01
In this paper, an experimental investigation of sandstone samples from the Three Gorges during shear failure was conducted using acoustic emission (AE) and direct shear tests. The AE count rate, cumulative AE count, AE energy, and amplitude of the sandstone samples were determined. Then, the relationships among the AE signals and shearing behaviors of the samples were analyzed in order to detect micro-crack initiation and propagation and reflect shear failure. The results indicated that both the shear strength and displacement exhibited a logarithmic relationship with the displacement rate at peak levels of stress. In addition, the various characteristics of the AE signals were apparent in various situations. The AE signals corresponded with the shear stress under different displacement rates. As the displacement rate increased, the amount of accumulative damage to each specimen decreased, while the AE energy peaked earlier and more significantly. The cumulative AE count primarily increased during the post-peak period. Furthermore, the AE count rate and amplitude exhibited two peaks during the peak shear stress period due to crack coalescence and rock bridge breakage. These isolated cracks later formed larger fractures and eventually caused ruptures.
NASA Technical Reports Server (NTRS)
Yonekura, Emmi; Hall, Timothy M.
2011-01-01
A new statistical model for western North Pacific Ocean tropical cyclone genesis and tracks is developed and applied to estimate regionally resolved tropical cyclone landfall rates along the coasts of the Asian mainland, Japan, and the Philippines. The model is constructed on International Best Track Archive for Climate Stewardship (IBTrACS) 1945-2007 historical data for the western North Pacific. The model is evaluated in several ways, including comparing the stochastic spread in simulated landfall rates with historic landfall rates. Although certain biases have been detected, overall the model performs well on the diagnostic tests, for example, reproducing well the geographic distribution of landfall rates. Western North Pacific cyclogenesis is influenced by El Niño-Southern Oscillation (ENSO). This dependence is incorporated in the model's genesis component to project the ENSO-genesis dependence onto landfall rates. There is a pronounced shift southeastward in cyclogenesis and a small but significant reduction in basinwide annual counts with increasing ENSO index value. On almost all regions of coast, landfall rates are significantly higher in a negative ENSO state (La Niña).
Novis, David A; Walsh, Molly; Wilkinson, David; St Louis, Mary; Ben-Ezra, Jonathon
2006-05-01
Automated laboratory hematology analyzers are capable of performing differential counts on peripheral blood smears with greater precision and more accurate detection of distributional and morphologic abnormalities than those performed by manual examinations of blood smears. Manual determinations of blood morphology and leukocyte differential counts are time-consuming, expensive, and may not always be necessary. The frequency with which hematology laboratory workers perform manual screens despite the availability of labor-saving features of automated analyzers is unknown. To determine the normative rates with which manual peripheral blood smears were performed in clinical laboratories, to examine laboratory practices associated with higher or lower manual review rates, and to measure the effects of manual smear review on the efficiency of generating complete blood count (CBC) determinations. From each of 3 traditional shifts per day, participants were asked to serially select 10 automated CBC specimens, and to indicate whether manual scans and/or reviews with complete differential counts were performed on blood smears prepared from those specimens. Sampling continued until a total of 60 peripheral smears were reviewed manually. For each specimen on which a manual review was performed, participants indicated the patient's age, hemoglobin value, white blood cell count, platelet count, and the primary reason why the manual review was performed. Participants also submitted data concerning their institutions' demographic profiles and their laboratories' staffing, work volume, and practices regarding CBC determinations. The rates of manual reviews and estimations of efficiency in performing CBC determinations were obtained from the data. A total of 263 hospitals and independent laboratories, predominantly located in the United States, participating in the College of American Pathologists Q-Probes Program. There were 95,141 CBC determinations examined in this study; participants reviewed 15,423 (16.2%) peripheral blood smears manually. In the median institution (50th percentile), manual reviews of peripheral smears were performed on 26.7% of specimens. Manual differential count review rates were inversely associated with the magnitude of platelet counts that were required by laboratory policy to trigger smear reviews and with the efficiency of generating CBC reports. Lower manual differential count review rates were associated with laboratory policies that allowed manual reviews solely on the basis of abnormal automated red cell parameters and that precluded performing repeat manual reviews within designated time intervals. The manual scan rate increased with the number of hospital beds. In more than one third (35.7%) of the peripheral smears reviewed manually, participants claimed to have learned additional information beyond what was available on automated hematology analyzer printouts alone. By adopting certain laboratory practices, it may be possible to reduce the rates of manual reviews of peripheral blood smears and increase the efficiency of generating CBC results.
Linear-log counting-rate meter uses transconductance characteristics of a silicon planar transistor
NASA Technical Reports Server (NTRS)
Eichholz, J. J.
1969-01-01
Counting rate meter compresses a wide range of data values, or decades of current. Silicon planar transistor, operating in the zero collector-base voltage mode, is used as a feedback element in an operational amplifier to obtain the log response.
The use of noise equivalent count rate and the NEMA phantom for PET image quality evaluation.
Yang, Xin; Peng, Hao
2015-03-01
PET image quality is directly associated with two important parameters among others: count-rate performance and image signal-to-noise ratio (SNR). The framework of noise equivalent count rate (NECR) was developed back in the 1990s and has been widely used since then to evaluate count-rate performance for PET systems. The concept of NECR is not entirely straightforward, however, and among the issues requiring clarification are its original definition, its relationship to image quality, and its consistency among different derivation methods. In particular, we try to answer whether a higher NECR measurement using a standard NEMA phantom actually corresponds to better imaging performance. The paper includes the following topics: 1) revisiting the original analytical model for NECR derivation; 2) validating three methods for NECR calculation based on the NEMA phantom/standard; and 3) studying the spatial dependence of NECR and quantitative relationship between NECR and image SNR. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
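For reference, the NECR figure of merit discussed above is conventionally written NECR = T^2 / (T + S + kR), with T, S, and R the true, scattered, and random coincidence rates and k = 1 or 2 depending on how randoms are corrected; the rates in this small sketch are invented.

```python
def necr(trues, scatter, randoms, k=2):
    """Noise equivalent count rate: NECR = T^2 / (T + S + k*R)."""
    return trues ** 2 / (trues + scatter + k * randoms)

# invented coincidence rates (counts/s) at one activity level
T, S, R = 120e3, 45e3, 80e3
print(f"NECR (k=2): {necr(T, S, R) / 1e3:.1f} kcps")
print(f"NECR (k=1): {necr(T, S, R, k=1) / 1e3:.1f} kcps")
```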
Reducing the Teen Death Rate. KIDS COUNT Indicator Brief
ERIC Educational Resources Information Center
Shore, Rima; Shore, Barbara
2009-01-01
Life continues to hold considerable risk for adolescents in the United States. In 2006, the teen death rate stood at 64 deaths per 100,000 teens (13,739 teens) (KIDS COUNT Data Center, 2009). Although it has declined by 4 percent since 2000, the rate of teen death in this country remains substantially higher than in many peer nations, based…
CASA-Mot technology: how results are affected by the frame rate and counting chamber.
Bompart, Daznia; García-Molina, Almudena; Valverde, Anthony; Caldeira, Carina; Yániz, Jesús; Núñez de Murga, Manuel; Soler, Carles
2018-04-04
For over 30 years, CASA-Mot technology has been used for kinematic analysis of sperm motility in different mammalian species, but insufficient attention has been paid to the technical limitations of commercial computer-aided sperm analysis (CASA) systems. Counting chamber type and frame rate are two of the most important aspects to be taken into account. Counting chambers can be disposable or reusable, with different depths. In human semen analysis, reusable chambers with a depth of 10 µm are the most frequently used, whereas for most farm animal species it is more common to use disposable chambers with a depth of 20 µm. The frame rate was previously limited by the hardware, although changes in the number of images collected could lead to significant variations in some kinematic parameters, mainly in curvilinear velocity (VCL). A frame rate of 60 frames s⁻¹ is widely considered to be the minimum necessary for satisfactory results. However, the frame rate is species specific and must be defined in each experimental condition. In conclusion, we show that the optimal combination of frame rate and counting chamber type and depth should be defined for each species and experimental condition in order to obtain reliable results.
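The frame-rate sensitivity of curvilinear velocity can be seen in a short sketch (the track below is synthetic, not CASA output): VCL is the summed point-to-point path length divided by elapsed time, so sub-sampling the same track at a lower frame rate smooths out the wobble and lowers VCL.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic sperm-head track sampled at 120 frames/s for 1 s: drift plus lateral wobble (um)
t = np.arange(120) / 120.0
x = 60.0 * t + 5.0 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.5, t.size)
y = 10.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 0.5, t.size)

def vcl(x, y, frame_rate):
    """Curvilinear velocity: total point-to-point path length / elapsed time (um/s)."""
    steps = np.hypot(np.diff(x), np.diff(y))
    elapsed = steps.size / frame_rate          # number of inter-frame intervals / fps
    return steps.sum() / elapsed

for sub in (1, 2, 4):                          # 120, 60 and 30 frames/s
    fps = 120 // sub
    print(f"{fps:3d} frames/s: VCL = {vcl(x[::sub], y[::sub], fps):.1f} um/s")
```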
High Resolution Modeling of Hurricanes in a Climate Context
NASA Astrophysics Data System (ADS)
Knutson, T. R.
2007-12-01
Modeling of tropical cyclone activity in a climate context initially focused on simulation of relatively weak tropical storm-like disturbances as resolved by coarse grid (200 km) global models. As computing power has increased, multi-year simulations with global models of grid spacing 20-30 km have become feasible. Increased resolution also allowed for simulation of storms of increasing intensity, and some global models generate storms of hurricane strength, depending on their resolution and other factors, although detailed hurricane structure is not simulated realistically. Results from some recent high resolution global model studies are reviewed. An alternative for hurricane simulation is regional downscaling. An early approach was to embed an operational (GFDL) hurricane prediction model within a global model solution, either for 5-day case studies of particular model storm cases, or for "idealized experiments" where an initial vortex is inserted into idealized environments derived from global model statistics. Using this approach, hurricanes up to category five intensity can be simulated, owing to the model's relatively high resolution (9 km grid) and refined physics. Variants on this approach have been used to provide modeling support for theoretical predictions that greenhouse warming will increase the maximum intensities of hurricanes. These modeling studies also simulate increased hurricane rainfall rates in a warmer climate. The studies do not address hurricane frequency issues, and vertical shear is neglected in the idealized studies. A recent development is the use of regional model dynamical downscaling for extended (e.g., season-length) integrations of hurricane activity. In a study for the Atlantic basin, a non-hydrostatic model with grid spacing of 18 km is run without convective parameterization, but with internal spectral nudging toward observed large-scale (basin wavenumbers 0-2) atmospheric conditions from reanalyses. Using this approach, our model reproduces the observed increase in Atlantic hurricane activity (numbers, Accumulated Cyclone Energy (ACE), Power Dissipation Index (PDI), etc.) over the period 1980-2006 fairly realistically, and also simulates ENSO-related interannual variations in hurricane counts. Annual simulated hurricane counts from a two-member ensemble correlate with observed counts at r=0.86. However, the model does not simulate hurricanes as intense as those observed, with minimum central pressures of 937 hPa (category 4) and maximum surface winds of 47 m/s (category 2) being the most intense simulated so far in these experiments. To explore possible impacts of future climate warming on Atlantic hurricane activity, we are re-running the 1980-2006 seasons, keeping the interannual to multidecadal variations unchanged, but altering the August-October mean climate according to changes simulated by an 18-member ensemble of AR4 climate models (years 2080-2099, A1B emission scenario). The warmer climate state features higher Atlantic SSTs, and also increased vertical wind shear across the Caribbean (Vecchi and Soden, GRL 2007). A key assumption of this approach is that the 18-model ensemble-mean climate change is the best available projection of future climate change in the Atlantic. Some of the 18 global models show little increase in wind shear, or even a decrease, and thus there will be considerable uncertainty associated with the hurricane frequency results, which will require further exploration.
Results from our simulations will be presented at the meeting.
Sensitivity of photon-counting based K-edge imaging in X-ray computed tomography.
Roessl, Ewald; Brendel, Bernhard; Engel, Klaus-Jürgen; Schlomka, Jens-Peter; Thran, Axel; Proksa, Roland
2011-09-01
The feasibility of K-edge imaging using energy-resolved, photon-counting transmission measurements in X-ray computed tomography (CT) has been demonstrated by simulations and experiments. The method is based on probing the discontinuities of the attenuation coefficient of heavy elements above and below the K-edge energy by using energy-sensitive, photon counting X-ray detectors. In this paper, we investigate the dependence of the sensitivity of K-edge imaging on the atomic number Z of the contrast material, on the object diameter D, on the spectral response of the X-ray detector and on the X-ray tube voltage. We assume a photon-counting detector equipped with six adjustable energy thresholds. Physical effects leading to a degradation of the energy resolution of the detector are taken into account using the concept of a spectral response function R(E,U) for which we assume four different models. As a validation of our analytical considerations and in order to investigate the influence of elliptically shaped phantoms, we provide CT simulations of an anthropomorphic Forbild-Abdomen phantom containing a gold-contrast agent. The dependence on the values of the energy thresholds is taken into account by optimizing the achievable signal-to-noise ratios (SNR) with respect to the threshold values. We find that for a given X-ray spectrum and object size the SNR in the heavy element's basis material image peaks for a certain atomic number Z. The dependence of the SNR in the high-Z basis-material image on the object diameter is the natural, exponential decrease with particularly deteriorating effects in the case where the attenuation from the object itself causes a total signal loss below the K-edge. The influence of the energy-response of the detector is very important. We observed that the optimal SNR values obtained with an ideal detector and with a CdTe pixel detector whose response, showing significant tailing, has been determined at a synchrotron differ by factors of about two to three. The potentially very important impact of scattered X-ray radiation and pulse pile-up occurring at high photon rates on the sensitivity of the technique is qualitatively discussed.
Material separation in x-ray CT with energy resolved photon-counting detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiaolan; Meier, Dirk; Taguchi, Katsuyuki
Purpose: The objective of the study was to demonstrate that, in x-ray computed tomography (CT), more than two types of materials can be effectively separated with the use of an energy resolved photon-counting detector and classification methodology. Specifically, this applies to the case when contrast agents that contain K-absorption edges in the energy range of interest are present in the object. This separation is enabled via the use of recently developed energy resolved photon-counting detectors with multiple thresholds, which allow simultaneous measurements of the x-ray attenuation at multiple energies. Methods: To demonstrate this capability, we performed simulations and physical experiments using a six-threshold energy resolved photon-counting detector. We imaged mouse-sized cylindrical phantoms filled with several soft-tissue-like and bone-like materials and with iodine-based and gadolinium-based contrast agents. The linear attenuation coefficients were reconstructed for each material in each energy window and were visualized as scatter plots between pairs of energy windows. For comparison, a dual-kVp CT was also simulated using the same phantom materials. In this case, the linear attenuation coefficients at the lower kVp were plotted against those at the higher kVp. Results: In both the simulations and the physical experiments, the contrast agents were easily separable from other soft-tissue-like and bone-like materials, thanks to the availability of the attenuation coefficient measurements at more than two energies provided by the energy resolved photon-counting detector. In the simulations, the amount of separation was observed to be proportional to the concentration of the contrast agents; however, this was not observed in the physical experiments due to limitations of the real detector system. We used the angle between pairs of attenuation coefficient vectors in either the 5-D space (for non-contrast-agent materials using energy resolved photon-counting acquisition) or a 2-D space (for contrast agents using energy resolved photon-counting acquisition and all materials using dual-kVp acquisition) as a measure of the degree of separation. Compared to dual-kVp techniques, an energy resolved detector provided a larger separation and the ability to separate different target materials using measurements acquired in different energy window pairs with a single x-ray exposure. Conclusions: We concluded that x-ray CT with an energy resolved photon-counting detector with more than two energy windows allows the separation of more than two types of materials, e.g., soft-tissue-like, bone-like, and one or more materials with K-edges in the energy range of interest. Separating material types using energy resolved photon-counting detectors has a number of advantages over dual-kVp CT in terms of the degree of separation and the number of materials that can be separated simultaneously.
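The separation measure used above, the angle between attenuation-coefficient vectors measured in several energy windows, reduces to a one-line calculation; the five-window attenuation values below are invented, not taken from the study.

```python
import numpy as np

def separation_angle(u, v):
    """Angle in degrees between two attenuation-coefficient vectors."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# invented linear attenuation coefficients (1/cm) in five energy windows
mu_soft_tissue = np.array([0.28, 0.24, 0.21, 0.19, 0.18])
mu_iodine_mix  = np.array([0.45, 0.61, 0.38, 0.30, 0.26])   # jump above the K-edge window

print(f"separation angle: {separation_angle(mu_soft_tissue, mu_iodine_mix):.1f} degrees")
```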
Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
NASA Astrophysics Data System (ADS)
Li, Qi; Tan, Jonathan C.; Christie, Duncan; Bisbas, Thomas G.; Wu, Benjamin
2018-05-01
We present a series of adaptive mesh refinement hydrodynamic simulations of flat rotation curve galactic gas disks, with a detailed treatment of the interstellar medium (ISM) physics of the atomic to molecular phase transition under the influence of diffuse far-ultraviolet (FUV) radiation fields and cosmic-ray backgrounds. We explore the effects of different FUV intensities, including a model with a radial gradient designed to mimic the Milky Way. The effects of cosmic rays, including radial gradients in their heating and ionization rates, are also explored. The final simulations in this series achieve 4 pc resolution across the ~20 kpc global disk diameter, with heating and cooling followed down to temperatures of ~10 K. The disks are evolved for 300 Myr, which is enough time for the ISM to achieve a quasi-statistical equilibrium. In particular, the mass fraction of molecular gas is stabilized by ~200 Myr. Additional global ISM properties are analyzed. Giant molecular clouds (GMCs) are also identified and the statistical properties of their populations are examined. GMCs are tracked as the disks evolve. GMC collisions, which may be a means of triggering star cluster formation, are counted and their rates are compared with analytic models. Relatively frequent GMC collision rates are seen in these simulations, and their implications for understanding GMC properties, including the driving of internal turbulence, are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rate up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is next confronted with the first experimental results. (authors)
MONALISA for stochastic simulations of Petri net models of biochemical systems.
Balazki, Pavel; Lindauer, Klaus; Einloft, Jens; Ackermann, Jörg; Koch, Ina
2015-07-10
The concept of Petri nets (PN) is widely used in systems biology and allows modeling of complex biochemical systems like metabolic systems, signal transduction pathways, and gene expression networks. In particular, PN allows the topological analysis based on structural properties, which is important and useful when quantitative (kinetic) data are incomplete or unknown. Knowing the kinetic parameters, the simulation of time evolution of such models can help to study the dynamic behavior of the underlying system. If the number of involved entities (molecules) is low, a stochastic simulation should be preferred against the classical deterministic approach of solving ordinary differential equations. The Stochastic Simulation Algorithm (SSA) is a common method for such simulations. The combination of the qualitative and semi-quantitative PN modeling and stochastic analysis techniques provides a valuable approach in the field of systems biology. Here, we describe the implementation of stochastic analysis in a PN environment. We extended MONALISA - an open-source software for creation, visualization and analysis of PN - by several stochastic simulation methods. The simulation module offers four simulation modes, among them the stochastic mode with constant firing rates and Gillespie's algorithm as exact and approximate versions. The simulator is operated by a user-friendly graphical interface and accepts input data such as concentrations and reaction rate constants that are common parameters in the biological context. The key features of the simulation module are visualization of simulation, interactive plotting, export of results into a text file, mathematical expressions for describing simulation parameters, and up to 500 parallel simulations of the same parameter sets. To illustrate the method we discuss a model for insulin receptor recycling as case study. We present a software that combines the modeling power of Petri nets with stochastic simulation of dynamic processes in a user-friendly environment supported by an intuitive graphical interface. The program offers a valuable alternative to modeling, using ordinary differential equations, especially when simulating single-cell experiments with low molecule counts. The ability to use mathematical expressions provides an additional flexibility in describing the simulation parameters. The open-source distribution allows further extensions by third-party developers. The software is cross-platform and is licensed under the Artistic License 2.0.
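As background for the exact stochastic mode mentioned above, here is a self-contained sketch of Gillespie's direct method for a toy reversible reaction (the species, rate constants, and stoichiometry are illustrative and not a MonaLisa model).

```python
import numpy as np

rng = np.random.default_rng(4)

# toy system: A -> B (k1), B -> A (k2); state = [A, B] molecule counts
k1, k2 = 0.8, 0.3
stoich = np.array([[-1, +1],     # reaction 1: A -> B
                   [+1, -1]])    # reaction 2: B -> A
state = np.array([100, 0])
t, t_end = 0.0, 10.0

times, traj = [t], [state.copy()]
while t < t_end:
    a = np.array([k1 * state[0], k2 * state[1]])   # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)                  # waiting time to next reaction
    r = rng.choice(len(a), p=a / a0)                # which reaction fires
    state = state + stoich[r]
    times.append(t)
    traj.append(state.copy())

print(f"{len(times)} events; final state A={state[0]}, B={state[1]}")
```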
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan
2015-07-15
Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
NASA Astrophysics Data System (ADS)
Enderlein, Joerg; Ruhlandt, Daja; Chithik, Anna; Ebrecht, René; Wouters, Fred S.; Gregor, Ingo
2016-02-01
Fluorescence lifetime microscopy has become an important method of bioimaging, allowing not only intensity and spectral but also lifetime information to be recorded across an image. One of the most widely used methods of FLIM is based on Time-Correlated Single Photon Counting (TCSPC). In TCSPC, one determines the fluorescence decay curve by exciting molecules with a periodic train of short laser pulses and then measuring the time delay between each exciting laser pulse and the first fluorescence photon recorded after it. An important technical detail of TCSPC measurements is the fact that the delay times between excitation laser pulses and resulting fluorescence photons are always measured between a laser pulse and the first fluorescence photon which is detected after that pulse. At high count rates, this leads to so-called pile-up: "early" photons eclipse long-delay photons, resulting in heavily skewed TCSPC histograms. To avoid pile-up, a rule of thumb is to perform TCSPC measurements at photon count rates which are at least a hundred times smaller than the laser-pulse excitation rate. The downside of this approach is that the fluorescence-photon count-rate is restricted to a value below one hundredth of the laser-pulse excitation-rate, reducing the overall speed with which a fluorescence signal can be measured. We present a new data evaluation method which provides pile-up corrected fluorescence decay estimates from TCSPC measurements at high count rates, and we demonstrate our method on FLIM of fluorescently labeled cells.
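One widely used way to undo the first-photon bias is Coates' correction; the sketch below applies it to a synthetic TCSPC histogram (the lifetime, repetition period, detection probability, and fitting choice are assumptions, and this is not the authors' estimator).

```python
import numpy as np

rng = np.random.default_rng(5)
tau = 2.5              # ns, assumed fluorescence lifetime
period = 25.0          # ns, laser repetition period
n_pulses = 100_000
mu = 0.5               # mean detected photons per pulse (high enough to cause pile-up)
bins = np.arange(0.0, period + 0.1, 0.1)

# simulate: Poisson photon number per pulse, exponential delays, only the first is recorded
first_photon = []
for k in rng.poisson(mu, n_pulses):
    if k > 0:
        first_photon.append(rng.exponential(tau, k).min())
hist, _ = np.histogram(first_photon, bins=bins)

# Coates correction: p_i = -ln(1 - n_i / (N - counts already recorded in earlier bins))
earlier = np.concatenate(([0], np.cumsum(hist)[:-1]))
p_corr = -np.log(1.0 - hist / (n_pulses - earlier))

centers = 0.5 * (bins[:-1] + bins[1:])
sel = (hist > 0) & (centers < 10.0)            # fit the early, well-populated part of the decay

def fitted_lifetime(y):
    slope = np.polyfit(centers[sel], np.log(y[sel]), 1)[0]
    return -1.0 / slope

print(f"apparent lifetime, raw piled-up histogram : {fitted_lifetime(hist.astype(float)):.2f} ns")
print(f"lifetime after Coates correction          : {fitted_lifetime(p_corr):.2f} ns (true {tau} ns)")
```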
Coalescence growth mechanism of ultrafine metal particles
NASA Astrophysics Data System (ADS)
Kasukabe, S.
1990-01-01
Ultrafine particles produced by a gas-evaporation technique show clear-cut crystal habits. The convection of an inert gas makes distinct growth zones in a metal smoke. The coalescence stages of hexagonal plates and multiply twinned particles are observed in the outer zone of a smoke. A model of the coalescence growth of particles with different crystal habits is proposed. Size distributions can be calculated by counting the ratio of the number of collisions by using the effective cross section of collisions and the existence probability of the volume of a particle. This simulation model makes clear the effect on the growth rate of coalescence growth derived from crystal habit.
200-300 N. Stetson, January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates throughout the grading ranged from 4,500 cpm to 8,000 cpm. No count rates were found at any time that exceeded the threshold limits of 17,246 cpm and 18,098 cpm.
0 - 36 W. Illinois St., January 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 3,700 cpm. No count rates were found at any time that exceeded the threshold limits of 6,738 cpm and 7,029 cpm.
400-449 N. State St, March 2017, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 1,600 cpm to 4,300 cpm. No count rates were found at any time that exceeded the threshold limits of 6,338 cpm and 7,038 cpm.
Database crime to crime match rate calculation.
Buckleton, John; Bright, Jo-Anne; Walsh, Simon J
2009-06-01
Guidance exists on how to count matches between samples in a crime sample database but we are unable to locate a definition of how to estimate a match rate. We propose a method that does not proceed from the match counting definition but which has a strong logic.
The effects of hypobaric hypoxia (50.6 kPa) on blood components in guinea-pigs.
Osada, H
1991-06-01
One hundred and five male (Hartley) guinea-pigs weighing 350-380 g and 30 splenectomized guinea-pigs were exposed to simulated hypobaric hypoxia of 50.6 kPa (equal to an altitude of 5486 m) for 14 days. The partial pressure of oxygen was set at half that at sea level. The white blood cell count increased significantly on day 3 of the simulated high altitude experiment but returned to normal on day 7, whereas the red blood cell count increased continuously. To study the effect of high altitude exposure on platelets, the platelet count in the splenectomized group was compared to that in a non-splenectomized group. Investigation of the resistance of red blood cell membranes to osmotic pressure under hypobaric conditions revealed a shift of the onset of haemolysis in the hyperosmotic direction. These findings may help to increase our understanding of the biochemical mechanisms of adaptation to hypobaric hypoxia.
NASA Astrophysics Data System (ADS)
Kumpová, I.; Vavřík, D.; Fíla, T.; Koudelka, P.; Jandejsek, I.; Jakůbek, J.; Kytýř, D.; Zlámal, P.; Vopálenský, M.; Gantar, A.
2016-02-01
To overcome certain limitations of contemporary materials used for bone tissue engineering, such as inflammatory response after implantation, a whole new class of materials based on polysaccharide compounds is being developed. Here, nanoparticulate bioactive glass reinforced gellan gum (GG-BAG) has recently been proposed for the production of bone scaffolds. This material offers promising biocompatibility properties, including bioactivity and biodegradability, with the possibility of producing scaffolds with directly controlled microgeometry. However, to utilize such a scaffold with application-optimized properties, large sets of complex numerical simulations using the real microgeometry of the material have to be carried out during the development process. Because the GG-BAG is a material with intrinsically very low attenuation to X-rays, its radiographical imaging, including tomographical scanning and reconstructions, with resolution required by numerical simulations might be a very challenging task. In this paper, we present a study on X-ray imaging of GG-BAG samples. High-resolution volumetric images of investigated specimens were generated on the basis of micro-CT measurements using a large area flat-panel detector and a large area photon-counting detector. The photon-counting detector was composed of a 10 × 1 matrix of Timepix edgeless silicon pixelated detectors with tiling based on overlaying rows (i.e. assembled so that no gap is present between individual rows of detectors). We compare the results from both detectors with the scanning electron microscopy on selected slices in transversal plane. It has been shown that the photon counting detector can provide approx. 3× better resolution of the details in low-attenuating materials than the integrating flat panel detectors. We demonstrate that employment of a large area photon counting detector is a good choice for imaging of low attenuating materials with the resolution sufficient for numerical simulations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 2 2011-10-01 2011-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 2 2013-10-01 2012-10-01 true Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 2 2012-10-01 2012-10-01 false Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 2 2014-10-01 2012-10-01 true Are there any limitations in counting job search and job readiness assistance toward the participation rates? 261.34 Section 261.34 Public Welfare... Work Activities and How Do They Count? § 261.34 Are there any limitations in counting job search and...
Silva, H G; Lopes, I
Heliospheric modulation of galactic cosmic rays links solar cycle activity with neutron monitor count rate on earth. A less direct relation holds between neutron monitor count rate and atmospheric electric field because different atmospheric processes, including fluctuations in the ionosphere, are involved. Although a full quantitative model is still lacking, this link is supported by solid statistical evidence. Thus, a connection between the solar cycle activity and atmospheric electric field is expected. To gain a deeper insight into these relations, sunspot area (NOAA, USA), neutron monitor count rate (Climax, Colorado, USA), and atmospheric electric field (Lisbon, Portugal) are presented here in a phase space representation. The period considered covers two solar cycles (21, 22) and extends from 1978 to 1990. Two solar maxima were observed in this dataset, one in 1979 and another in 1989, as well as one solar minimum in 1986. Two main observations of the present study were: (1) similar short-term topological features of the phase space representations of the three variables, (2) a long-term phase space radius synchronization between the solar cycle activity, neutron monitor count rate, and potential gradient (confirmed by absolute correlation values above ~0.8). Finally, the methodology proposed here can be used for obtaining the relations between other atmospheric parameters (e.g., solar radiation) and solar cycle activity.
NASA Technical Reports Server (NTRS)
Kessel, R. L.; Armstrong, T. P.; Nuber, R.; Bandle, J.
1985-01-01
Data were examined from two experiments aboard the Explorer 50 (IMP 8) spacecraft. The Johns Hopkins University/Applied Physics Laboratory Charged Particle Measurement Experiment (CPME) provides 10.12 second resolution ion and electron count rates as well as 5.5 minute or longer averages of the same, with data sampled in the ecliptic plane. The high time resolution of the data allows for an explicit, point by point, merging of the magnetic field and particle data and thus a close examination of the pre- and post-shock conditions and particle fluxes associated with large angle oblique shocks in the interplanetary field. A computer simulation has been developed wherein sample particle trajectories, taken from observed fluxes, are allowed to interact with a planar shock either forward or backward in time. One event, the 1974 Day 312 shock, is examined in detail.
Lopez, Ramon; Farber, Mark O.; Wong, Vincent; Lacey, Steven E.
2016-01-01
Objective We conducted an exposure chamber study in humans using a simulated clinical procedure lasing porcine tissue to demonstrate evidence of effects of exposure to laser generated particulate matter (LGPM). Methods We measured pre- and post-exposure changes in exhaled nitric oxide (eNO), spirometry, heart rate variability (HRV), and blood markers of inflammation in five volunteers. Results Changes in pre- and post-exposure measurements of eNO and spirometry were unremarkable. Neutrophil and lymphocyte counts increased and fibrinogen levels decreased in four of the five subjects. Measures of HRV showed decreases in the standard deviation of normal beat-to-beat intervals and in sequential five-minute intervals. Conclusion These data represent the first evidence of human physiologic response to LGPM exposure. Further exploration of coagulation effects and HRV is warranted. PMID:27465102
Interstitial ablation and imaging of soft tissue using miniaturized ultrasound arrays
NASA Astrophysics Data System (ADS)
Makin, Inder R. S.; Gallagher, Laura A.; Mast, T. Douglas; Runk, Megan M.; Faidi, Waseem; Barthe, Peter G.; Slayton, Michael H.
2004-05-01
A potential alternative to extracorporeal, noninvasive HIFU therapy is minimally invasive, interstitial ultrasound ablation that can be performed laparoscopically or percutaneously. Research in this area at Guided Therapy Systems and Ethicon Endo-Surgery has included development of miniaturized (~3 mm diameter) linear ultrasound arrays capable of high power for bulk tissue ablation as well as broad bandwidth for imaging. An integrated control system allows therapy planning and automated treatment guided by real-time interstitial B-scan imaging. Image quality, challenging because of limited probe dimensions and channel count, is aided by signal processing techniques that improve image definition and contrast. Simulations of ultrasonic heat deposition, bio-heat transfer, and tissue modification provide understanding and guidance for development of treatment strategies. Results from in vitro and in vivo ablation experiments, together with corresponding simulations, will be described. Using methods of rotational scanning, this approach is shown to be capable of clinically relevant ablation rates and volumes.
Molteni, Matteo; Weigel, Udo M; Remiro, Francisco; Durduran, Turgut; Ferri, Fabio
2014-11-17
We present a new hardware simulator (HS) for characterization, testing and benchmarking of digital correlators used in various optical correlation spectroscopy experiments where the photon statistics is Gaussian and the corresponding time correlation function can have any arbitrary shape. Starting from the HS developed in [Rev. Sci. Instrum. 74, 4273 (2003)], and using the same I/O board (PCI-6534 National Instrument) mounted on a modern PC (Intel Core i7-CPU, 3.07 GHz, 12 GB RAM), we have realized an instrument capable of delivering continuous streams of TTL pulses over two channels, with a time resolution of Δt = 50 ns, up to a maximum count rate of 〈I〉 ∼ 5 MHz. Pulse streams, typically detected in dynamic light scattering and diffuse correlation spectroscopy experiments, were generated and measured with a commercial hardware correlator obtaining measured correlation functions that match accurately the expected ones.
Bulk Genotyping of Biopsies Can Create Spurious Evidence for Heterogeneity in Mutation Content.
Kostadinov, Rumen; Maley, Carlo C; Kuhner, Mary K
2016-04-01
When multiple samples are taken from the neoplastic tissues of a single patient, it is natural to compare their mutation content. This is often done by bulk genotyping of whole biopsies, but the chance that a mutation will be detected in bulk genotyping depends on its local frequency in the sample. When the underlying mutation count per cell is equal, homogenous biopsies will have more high-frequency mutations, and thus more detectable mutations, than heterogeneous ones. Using simulations, we show that bulk genotyping of data simulated under a neutral model of somatic evolution generates strong spurious evidence for non-neutrality, because the pattern of tissue growth systematically generates differences in biopsy heterogeneity. Any experiment which compares mutation content across bulk-genotyped biopsies may therefore suggest mutation rate or selection intensity variation even when these forces are absent. We discuss computational and experimental approaches for resolving this problem.
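The detection-bias argument can be illustrated with a toy calculation (the detection limit, mutation burdens, and clone mixtures are invented): every clone carries the same number of private mutations, but a mutation is only called in bulk genotyping when its cell fraction in the biopsy clears the detection limit, so a clone-dominated biopsy yields more called mutations than an evenly mixed one.

```python
TRUNCAL_MUTS = 10            # mutations shared by every cell (always detectable)
MUTS_PER_CLONE = 30          # private mutations carried by each clone (equal burden per cell)
DETECTION_FRACTION = 0.30    # assumed bulk-genotyping detection limit (cell fraction)

def called_mutations(clone_fractions):
    """Count mutations whose cell fraction in the biopsy clears the detection limit."""
    private = sum(MUTS_PER_CLONE for f in clone_fractions if f >= DETECTION_FRACTION)
    return TRUNCAL_MUTS + private

homogeneous = [0.90, 0.05, 0.05]            # one dominant clone
heterogeneous = [0.25, 0.25, 0.25, 0.25]    # evenly mixed clones

print("called in homogeneous biopsy  :", called_mutations(homogeneous))    # 40
print("called in heterogeneous biopsy:", called_mutations(heterogeneous))  # 10
```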
Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.
2014-01-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010
Muir, Ryan D; Pogranichney, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J
2014-09-01
Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment.
Pommé, S
2012-09-01
A software package is presented to calculate the total counting efficiency for the decay of radionuclides in a well-type γ-ray detector. It is specifically applied to primary standardisation of activity by means of 4πγ-counting with a NaI(Tl) well-type scintillation detector. As an alternative to Monte Carlo simulations, the software combines good accuracy with superior speed and ease-of-use. It is also well suited to investigate uncertainties associated with the 4πγ-counting method for a variety of radionuclides and detector dimensions. In this paper, the underlying analytical models for the radioactive decay and subsequent counting efficiency of the emitted radiation in the detector are summarised. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zhang, Yun; Baheti, Saurabh; Sun, Zhifu
2018-05-01
High-throughput bisulfite methylation sequencing such as reduced representation bisulfite sequencing (RRBS), Agilent SureSelect Human Methyl-Seq (Methyl-seq) or whole-genome bisulfite sequencing is commonly used for base resolution methylome research. These data are represented either by the ratio of methylated cytosine versus total coverage at a CpG site or by the numbers of methylated and unmethylated cytosines. Multiple statistical methods can be used to detect differentially methylated CpGs (DMCs) between conditions, and these methods are often the basis for the next step of differentially methylated region identification. The ratio data have the flexibility of fitting many linear models, but the raw count data take coverage information into account. There is an array of options in each datatype for DMC detection; however, it is not clear which is the optimal statistical method. In this study, we systematically evaluated four statistical methods on methylation ratio data and four methods on count-based data and compared their performances with regard to type I error control, sensitivity and specificity of DMC detection, and computational resource demands using real RRBS data along with simulation. Our results show that the ratio-based tests are generally more conservative (less sensitive) than the count-based tests. However, some count-based methods have high false-positive rates and should be avoided. The beta-binomial model gives a good balance between sensitivity and specificity and is the preferred method. Selection of methods in different settings, signal versus noise and sample size estimation are also discussed.
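A minimal beta-binomial comparison for a single CpG site might look like the sketch below (the read counts are toy values, and the parameterization and likelihood-ratio construction are ours, not the paper's).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom, chi2

# toy data at one CpG: (methylated reads, total reads) per sample
group1 = [(18, 30), (22, 35), (15, 28)]
group2 = [(6, 25), (9, 33), (7, 27)]

def neg_loglik(params, data):
    """params = (logit of mean methylation, log of precision)."""
    p = 1.0 / (1.0 + np.exp(-params[0]))
    s = np.exp(params[1])
    a, b = p * s, (1.0 - p) * s
    return -sum(betabinom.logpmf(m, n, a, b) for m, n in data)

def max_loglik(data):
    res = minimize(neg_loglik, x0=[0.0, 1.0], args=(data,), method="Nelder-Mead")
    return -res.fun

ll_null = max_loglik(group1 + group2)              # one common methylation level
ll_alt = max_loglik(group1) + max_loglik(group2)   # separate level per group
lrt = 2.0 * (ll_alt - ll_null)
# two extra free parameters in the alternative model (mean and precision per group)
print(f"LRT = {lrt:.2f}, p = {chi2.sf(lrt, df=2):.4f}")
```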
Ahmed, Anwar E; Ali, Yosra Z; Al-Suliman, Ahmad M; Albagshi, Jafar M; Al Salamah, Majid; Elsayid, Mohieldin; Alanazi, Wala R; Ahmed, Rayan A; McClish, Donna K; Al-Jahdali, Hamdan
2017-01-01
High white blood cell (WBC) count is an indicator of sickle cell disease (SCD) severity; however, there are limited studies on WBC counts in Saudi Arabian patients with SCD. The aim of this study was to estimate the prevalence of abnormal leukocyte count (either low or high) and identify factors associated with high WBC counts in a sample of Saudi patients with SCD. A cross-sectional and retrospective chart review study was carried out on 290 SCD patients who were routinely treated at King Fahad Hospital in Hofuf, Saudi Arabia. An interview was conducted to assess clinical presentations, and we reviewed patient charts to collect data on blood test parameters for the previous 6 months. Almost half (131 [45.2%]) of the sample had abnormal leukocyte counts: low WBC counts 15 (5.2%) and high 116 (40%). High WBC counts were associated with shortness of breath (P = 0.022), tiredness (P = 0.039), swelling in hands/feet (P = 0.020), and back pain (P = 0.007). The mean hemoglobin was higher in patients with normal WBC counts (P = 0.024), while the mean hemoglobin S was high in patients with high WBC counts (P = 0.003). After adjustment for potential confounders, predictors of high WBC counts were male gender (adjusted odds ratio [aOR] = 3.63), cough (aOR = 2.18), low hemoglobin (aOR = 0.76), and low heart rate (aOR = 0.97). Abnormal leukocyte count was common: approximately five in ten Saudi SCD patients assessed in this sample. Male gender, cough, low hemoglobin, and low heart rate were associated with high WBC count. Strategies targeting high WBC count could prevent disease complication and thus could be beneficial for SCD patients.
Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela
2017-10-17
To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from simulated low-dose SPECT studies by shortening the acquisition time/projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times/projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scan, respectively) and reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish™ (Philips). Three sets of normal databases were used: (1) full-counts IRR; (2) half-counts IRR; and (3) full-counts traditional reconstruction algorithm database (TRAD). The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA analysis and Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD databases with respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) are confirmed compared to those of 50%-counts scans, independently of using the half-counts or the full-counts IRR databases. Astonish™ requires the adoption of its own specific normal databases in order to prevent very high overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases seems not to influence the quantification scores.
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution, and demonstrates that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
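Because the gamma mixture of Poissons is exactly the negative binomial, the comparison described above can be illustrated with a small maximum-likelihood fit; the counts below are hypothetical and the AIC comparison is a generic device, not the model selection used in the paper.

```python
# Sketch: compare Poisson and negative-binomial (gamma-mixed Poisson) fits
# to an overdispersed sample of event counts. The data are hypothetical.
import numpy as np
from scipy.stats import poisson, nbinom
from scipy.optimize import minimize

counts = np.array([0, 2, 1, 7, 0, 3, 12, 1, 0, 5, 2, 9, 0, 1, 4])

# Poisson: the MLE of the rate is the sample mean.
lam = counts.mean()
ll_pois = poisson.logpmf(counts, lam).sum()

# Negative binomial with size r and mean m: scipy uses p = r / (r + m).
def nb_negll(params):
    r, m = params
    return -nbinom.logpmf(counts, r, r / (r + m)).sum()

fit = minimize(nb_negll, x0=[1.0, counts.mean()],
               bounds=[(1e-6, None), (1e-6, None)])
ll_nb = -fit.fun

aic_pois = 2 * 1 - 2 * ll_pois     # one free parameter
aic_nb = 2 * 2 - 2 * ll_nb         # two free parameters
print(f"Poisson AIC = {aic_pois:.1f}, negative binomial AIC = {aic_nb:.1f}")
```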
Hantavirus pulmonary syndrome, United States, 1993-2009.
MacNeil, Adam; Ksiazek, Thomas G; Rollin, Pierre E
2011-07-01
Hantavirus pulmonary syndrome (HPS) is a severe respiratory illness identified in 1993. Since its identification, the Centers for Disease Control and Prevention has obtained standardized information about and maintained a registry of all laboratory-confirmed HPS cases in the United States. During 1993-2009, a total of 510 HPS cases were identified. Case counts have varied from 11 to 48 per year (case-fatality rate 35%). However, there were no trends suggesting increasing or decreasing case counts or fatality rates. Although cases were reported in 30 states, most cases occurred in the western half of the country; annual case counts varied most in the southwestern United States. Increased hematocrits, leukocyte counts, and creatinine levels were more common in HPS case-patients who died. HPS is a severe disease with a high case-fatality rate, and cases continue to occur. The greatest potential for high annual HPS incidence exists in the southwestern United States.
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Bybee, R. L.
1978-01-01
The paper describes a new type of continuous channel multiplier (CEM) fabricated from a low-resistance glass to produce a high-conductivity channel section and thereby obtain a high count-rate capability. The flat-cone cathode configuration of the CEM is specifically designed for the detection of astigmatic exit images from grazing-incidence spectrometers at the optimum angle of illumination for high detection efficiencies at XUV wavelengths. Typical operating voltages are in the range of 2500-2900 V, with stable counting plateau slopes in the range of 3-6% per 100-V increment. The modal gain at 2800 V was typically in the range of (50-80)×10^6. The modal gain falls off at count rates in excess of about 20,000 per second. The detection efficiency remains essentially constant to count rates in excess of 2 million per second. Higher detection efficiencies (better than 20%) are obtained by coating the CEM with MgF2. In life tests of coated CEMs, no measurable change in detection efficiency was observed up to a total accumulated signal of 2×10^11 counts.
ERIC Educational Resources Information Center
University of South Florida, Tampa. Florida Center for Children and Youth.
This Kids Count report investigates statewide trends in the well-being of Florida's children. The statistical report is based on 19 indicators of child well being: (1) low birth weight infants; (2) infant mortality rate; (3) child death rate; (4) births to single teens; (5) juvenile violent crime arrest rate; (6) percent graduating from high…
1-99 W. Hubbard St, May 2018, Lindsay Light Radiological Survey
Radiological Survey of Right-of-Way Utility Excavation. The count rates in the excavation ranged from 2,100 cpm to 4,200 cpm. No count rates were found at any time that exceeded the instrument-specific threshold limits of 7,366 and 6,415 cpm.
Is Parenting Child's Play? Kids Count in Missouri Report on Adolescent Pregnancy.
ERIC Educational Resources Information Center
Citizens for Missouri's Children, St. Louis.
This Kids Count report presents current information on adolescent pregnancy rates in Missouri. Part 1, "Overview of Adolescent Pregnancy in Missouri," discusses the changing pregnancy, abortion, and birth rates for 15- to 19-year-old adolescents, racial differences in pregnancy risk, regional differences suggesting a link between…
A prognostic pollen emissions model for climate models (PECM1.0)
NASA Astrophysics Data System (ADS)
Wozniak, Matthew C.; Steiner, Allison L.
2017-11-01
We develop a prognostic model called Pollen Emissions for Climate Models (PECM) for use within regional and global climate models to simulate pollen counts over the seasonal cycle based on geography, vegetation type, and meteorological parameters. Using modern surface pollen count data, empirical relationships between prior-year annual average temperature and pollen season start dates and end dates are developed for deciduous broadleaf trees (Acer, Alnus, Betula, Fraxinus, Morus, Platanus, Populus, Quercus, Ulmus), evergreen needleleaf trees (Cupressaceae, Pinaceae), grasses (Poaceae; C3, C4), and ragweed (Ambrosia). This regression model explains as much as 57 % of the variance in pollen phenological dates, and it is used to create a climate-flexible phenology that can be used to study the response of wind-driven pollen emissions to climate change. The emissions model is evaluated in the Regional Climate Model version 4 (RegCM4) over the continental United States by prescribing an emission potential from PECM and transporting pollen as aerosol tracers. We evaluate two different pollen emissions scenarios in the model using (1) a taxa-specific land cover database, phenology, and emission potential, and (2) a plant functional type (PFT) land cover, phenology, and emission potential. The simulated surface pollen concentrations for both simulations are evaluated against observed surface pollen counts in five climatic subregions. Given prescribed pollen emissions, the RegCM4 simulates observed concentrations within an order of magnitude, although the performance of the simulations in any subregion is strongly related to the land cover representation and the number of observation sites used to create the empirical phenological relationship. The taxa-based model provides a better representation of the phenology of tree-based pollen counts than the PFT-based model; however, we note that the PFT-based version provides a useful and climate-flexible emissions model for the general representation of the pollen phenology over the United States.
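A minimal sketch of the kind of empirical phenology regression described above, regressing season start date on prior-year mean temperature; the station values are hypothetical and the single-predictor linear form is a simplification of the PECM regressions.

```python
# Illustrative phenology regression: pollen season start (day of year) versus
# prior-year annual mean temperature. Values are hypothetical placeholders.
import numpy as np

temp_prior_year = np.array([8.2, 9.1, 10.4, 11.0, 12.3, 13.5])   # deg C
start_doy = np.array([110, 106, 99, 97, 90, 85])                  # day of year

slope, intercept = np.polyfit(temp_prior_year, start_doy, deg=1)
pred = intercept + slope * temp_prior_year
r2 = 1 - np.sum((start_doy - pred) ** 2) / np.sum((start_doy - start_doy.mean()) ** 2)
print(f"start_doy ~ {intercept:.1f} + {slope:.2f} * T_prior,  R^2 = {r2:.2f}")
```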
Blocking Losses With a Photon Counter
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Piazzolla, Sabino
2012-01-01
It was not known how to assess accurately losses in a communications link due to photodetector blocking, a phenomenon wherein a detector is rendered inactive for a short time after the detection of a photon. When used to detect a communications signal, blocking leads to losses relative to an ideal detector, which may be measured as a reduction in the communications rate for a given received signal power, or an increase in the signal power required to support the same communications rate. This work involved characterizing blocking losses for single detectors and arrays of detectors. Blocking may be mitigated by spreading the signal intensity over an array of detectors, reducing the count rate on any one detector. A simple approximation was made to the blocking loss as a function of the probability that a detector is unblocked at a given time, essentially treating the blocking probability as a scaling of the detection efficiency. An exact statistical characterization was derived for a single detector, and an approximation for multiple detectors. This allowed derivation of several accurate approximations to the loss. Methods were also derived to account for a rise time in recovery, and non-uniform illumination due to diffraction and atmospheric distortion of the phase front. It was assumed that the communications signal is intensity modulated and received by an array of photon-counting photodetectors. For the purpose of this analysis, it was assumed that the detectors are ideal, in that they produce a signal that allows one to reproduce the arrival times of electrons, produced either as photoelectrons or from dark noise, exactly. For single detectors, the performance of the maximum-likelihood (ML) receiver in blocking is illustrated, as well as a maximum-count (MC) receiver, that, when receiving a pulse-position-modulated (PPM) signal, selects the symbol corresponding to the slot with the largest electron count. Whereas the MC receiver saturates at high count rates, the ML receiver may not. The loss in capacity, symbol-error-rate (SER), and count-rate were numerically computed. It was shown that the capacity and symbol-error-rate losses track, whereas the count-rate loss does not generally reflect the SER or capacity loss, as the slot-statistics at the detector output are no longer Poisson. It is also shown that the MC receiver loss may be accurately predicted for dead times on the order of a slot.
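The blocking approximation described above, treating the unblocked probability as a scaling of detection efficiency, can be sketched with standard dead-time formulas; the dead time, incident rates, and array sizes below are assumptions for illustration only.

```python
# Sketch of a dead-time ("blocking") loss estimate: the probability that a
# detector is unblocked is treated as a scaling of the detection efficiency.
# Standard non-paralyzable and paralyzable dead-time formulas are used here;
# the dead time and incident rate are hypothetical, not the paper's values.
import numpy as np

tau = 50e-9        # detector dead time in seconds (assumed)
lam = 2e7          # incident photon rate on the focal plane, counts/s (assumed)

def unblocked_prob(rate, dead_time, paralyzable=False):
    """Probability that a detector is free at a random instant."""
    if paralyzable:
        return np.exp(-rate * dead_time)
    return 1.0 / (1.0 + rate * dead_time)

for n_det in (1, 4, 16):            # spread the same flux over an array of detectors
    per_det_rate = lam / n_det
    p = unblocked_prob(per_det_rate, tau)
    loss_db = -10.0 * np.log10(p)   # extra signal power needed to offset blocking
    print(f"{n_det:2d} detector(s): unblocked prob = {p:.3f}, loss = {loss_db:.2f} dB")
```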
Simulations of deep galaxy fields. 1: Monte Carlo simulations of optical and near-infrared counts
NASA Technical Reports Server (NTRS)
Chokshi, Arati; Lonsdale, Carol J.; Mazzei, Paola; De Zotti, Gianfranco
1994-01-01
Monte Carlo simulations of three-dimensional galaxy distributions are performed, following the 1988 prescription of Chokshi & Wright, to study the photometric properties of evolving galaxy populations in the optical and near-infrared bands to high redshifts. In this paper, the first of a series, we present our baseline model in which galaxy numbers are conserved, and in which no explicit 'starburst' population is included. We use the model in an attempt to simultaneously fit published blue and near-infrared photometric and spectroscopic observations of deep fields. We find that our baseline models, with a formation redshift z_f of 1000 and H_0 = 50, are able to reproduce the blue counts to b_j = 22, independent of the value of Omega_0, and also to provide a satisfactory fit to the observed blue-band redshift distributions, but for no value of Omega_0 do we achieve an acceptable fit to the fainter blue counts. In the K band, we fit the number counts to the limit of the present-day surveys only for an Omega_0 = 0 cosmology. We investigate the effect on the model fits of varying the cosmological parameters H_0, the formation redshift z_f, and the local luminosity function. Changing H_0 does not improve the fits to the observations. However, reducing the epoch of galaxy formation used in our simulations has a substantial effect. In particular, a model with z_f approximately equal to 5 in a low-Omega_0 universe improves the fit to the faintest photometric blue data without any need to invoke a new population of galaxies, substantial merging, or a significant starburst galaxy population. For an Omega_0 = 1 universe, however, reducing z_f is less successful at fitting the blue-band counts and has little effect at all at K. Varying the parameters of the local luminosity function can also have a significant effect. In particular, the steep low-end slope of the local luminosity function of Franceschini et al. allows an acceptable fit to the b_j <= 25 counts for Omega_0 = 1, but is incompatible with Omega_0 = 0.
A GATE evaluation of the sources of error in quantitative 90Y PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydhorst, Jared, E-mail: jared.strydhorst@gmail.
Purpose: Accurate reconstruction of the dose delivered by 90Y microspheres using a postembolization PET scan would permit the establishment of more accurate dose-response relationships for treatment of hepatocellular carcinoma with 90Y. However, the quality of the PET data obtained is compromised by several factors, including poor count statistics and a very high random fraction. This work uses Monte Carlo simulations to investigate what impact factors other than low count statistics have on the quantification of 90Y PET. Methods: PET acquisitions of two phantoms (a NEMA PET phantom and the NEMA IEC PET body phantom) containing either 90Y or 18F were simulated using GATE. Simulated projections were created with subsets of the simulation data, allowing the contributions of randoms, scatter, and LSO background to be independently evaluated. The simulated projections were reconstructed using the commercial software for the simulated scanner, and the quantitative accuracy of the reconstruction and the contrast recovery of the reconstructed images were evaluated. Results: The quantitative accuracy of the 90Y reconstructions was not strongly influenced by the high random fraction present in the projection data, and the activity concentration was recovered to within 5% of the known value. The contrast recovery measured for simulated 90Y data was slightly poorer than that for simulated 18F data with similar count statistics. However, the degradation was not strongly linked to any particular factor. Using a more restricted energy range to reduce the random fraction in the projections had no significant effect. Conclusions: Simulations of 90Y PET confirm that quantitative 90Y imaging is achievable with the same approach as that used for 18F, and that there is likely very little margin for improvement by attempting to model aspects unique to 90Y, such as the much higher random fraction or the presence of bremsstrahlung in the singles data.
Gerber, Brian D.; Kendall, William L.
2017-01-01
Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
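A minimal sketch contrasting raw annual counts with the 3-yr moving-average index discussed above; the simulated population trajectory and observation noise are hypothetical and stand in for neither the RMP data nor the HBTS model.

```python
# Contrast raw annual counts with a 3-yr moving-average index. The "true"
# trajectory and the observation noise model are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
true_pop = 20000 * 1.01 ** np.arange(31)                      # slow 1%/yr growth
counts = rng.poisson(true_pop * rng.lognormal(0, 0.15, 31))   # noisy fall counts

moving_avg = np.convolve(counts, np.ones(3) / 3, mode="valid")  # 3-yr smoother
raw_change = np.diff(counts) / counts[:-1]
smooth_change = np.diff(moving_avg) / moving_avg[:-1]
print("max apparent annual change in raw counts: %.0f%%" % (100 * raw_change.max()))
print("max annual change after smoothing: %.0f%%" % (100 * smooth_change.max()))
```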
Kim, Joo-Hwa; Lee, Ha-Baik; Kim, Seong-Won; Kang, Im-Joo; Kook, Myung-Hee; Kim, Bong-Seong; Park, Kang-Seo; Baek, Hey-Sung; Kim, Kyu-Rang; Choi, Young-Jean
2012-01-01
The prevalence of allergic diseases in children has increased for several decades. We evaluated the correlation between the pollen counts of weeds and their sensitization rates in Seoul, 1997-2009. Airborne particles carrying allergens were collected daily from 3 stations around Seoul. Skin prick tests to pollen were performed on children with allergic diseases. Ragweed pollen gradually increased between 1999 and 2005, decreased after 2005 and plateaued until 2009 (peak counts, 67 in 2003, 145 in 2005 and 83 grains/m3/day in 2007). Japanese hop pollen increased between 2002 and 2009 (peak counts, 212 in 2006 and 492 grains/m3/day in 2009). Sensitization rates to weed pollen, especially ragweed and Japanese hop, in children with allergic diseases increased annually (ragweed, 2.2% in 2000 and 2.8% in 2002; Japanese hop, 1.4% in 2000 and 1.9% in 2002). The age at sensitization to pollen has gradually decreased since 2000 (4 to 6 yr of age, 3.5% in 1997 and 6.2% in 2009; 7 to 9 yr of age, 4.2% in 1997 and 6.4% in 2009). In conclusion, sensitization rates for weed pollens have increased in Korean children in line with the increasing pollen counts of ragweed and Japanese hop. PMID:22468096
2016-01-01
The recovery of Bald Eagles (Haliaeetus leucocephalus), after DDT and other organochlorine insecticides were banned in the United States, can be regarded as one of the most iconic success stories resulting from the Endangered Species Act. Interest remains high in the recovery and growth of the Bald Eagle population. Counts at nesting sites and analyses of individuals fledged per season are commonly used to evaluate growth and recovery rates, but this is merely one snapshot that ignores survival rates as eagles grow to maturity. By analyzing indices from migration counts, we get a different snapshot that better reflects the survival of young birds. Different populations of Bald Eagles breed at different sites at different times of the year. Typical migration count analyses do not separate the populations. A separation of two distinct populations can be achieved at spring count sites by taking advantage of the tendency for northern summer-breeding birds to migrate north in spring earlier than southern winter-breeding birds, which disperse north later in spring. In this paper I analyze migratory indices at a spring site along Lake Ontario. The analysis shows that eagles considered to be primarily of the northern summer-breeding population showed an estimated growth rate of 5.3 ± 0.85% (SE) per year with 49% of eagles tallied in adult plumage, whereas the migrants considered to be primarily of the southern breeding population had an estimated growth rate of 14.0 ± 1.79% with only 22% in adult plumage. Together these results argue that the populations of southern-breeding Bald Eagles are growing at a substantially higher rate than northern-breeding eagles. These findings suggest that aggregate population indices for a species at migration counting sites can sometimes obscure important differences among separate populations at any given site and that separating counts by time period can be a useful way to check for differences among sub-populations. PMID:27231647
Selective photon counter for digital x-ray mammography tomosynthesis
NASA Astrophysics Data System (ADS)
Goldan, Amir H.; Karim, Karim S.; Rowlands, J. A.
2006-03-01
Photon counting is an emerging detection technique that is promising for mammography tomosynthesis imagers. In photon counting systems, the value of each image pixel is equal to the number of photons that interact with the detector. In this research, we introduce the design and implementation of a low-noise, novel selective photon counting pixel for digital mammography tomosynthesis in crystalline silicon CMOS (complementary metal oxide semiconductor) 0.18 micron technology. The design comprises a low-noise charge amplifier (CA), two low-offset-voltage comparators, a decision-making unit (DMU), a mode selector, and a pseudo-random counter. Theoretical calculations and simulation results of linearity, gain, and noise of the photon counting pixel are presented.
Hop limited epidemic-like information spreading in mobile social networks with selfish nodes
NASA Astrophysics Data System (ADS)
Wu, Yahui; Deng, Su; Huang, Hongbin
2013-07-01
Similar to epidemics, information can be transmitted directly among users in mobile social networks. Unlike epidemics, however, the spreading process can be controlled directly by adjusting the corresponding parameters (e.g., hop count). This paper proposes a theoretical model to evaluate the performance of an epidemic-like spreading algorithm in which the maximal hop count of the information is limited. In addition, our model can be used to evaluate the impact of users' selfish behavior. Simulations show the accuracy of our theoretical model. Numerical results show that the information hop count can have an important impact. In addition, the impact of selfish behavior is related to the information hop count.
ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.
2008-01-01
SUMMARY We considered a Bayesian analysis for the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during this period. We modelled the counts using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data was carried out using Markov chain Monte Carlo methods. Simulated Gibbs samples for the parameters of interest were obtained using the WinBUGS software. PMID:18346287
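A simpler maximum-likelihood analogue of the two-change-point analysis can be sketched as a grid search over piecewise-constant Poisson rates; the annual counts below are hypothetical, and the paper itself used a Bayesian non-homogeneous Poisson process fitted by Gibbs sampling in WinBUGS.

```python
# Grid-search ML sketch for two change points in annual case counts, with a
# piecewise-constant Poisson rate in each segment. Counts are hypothetical.
import numpy as np
from scipy.stats import poisson

counts = np.array([12, 14, 11, 25, 27, 30, 29, 28, 15, 13, 12, 11])  # per year

def segment_loglik(y):
    lam = max(y.mean(), 1e-9)           # MLE of the segment rate
    return poisson.logpmf(y, lam).sum()

best = (-np.inf, None)
n = len(counts)
for c1 in range(1, n - 1):
    for c2 in range(c1 + 1, n):
        ll = (segment_loglik(counts[:c1]) + segment_loglik(counts[c1:c2])
              + segment_loglik(counts[c2:]))
        if ll > best[0]:
            best = (ll, (c1, c2))

print("most likely change points (year indices):", best[1])
```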
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
A Framework for Validating Traffic Simulation Models at the Vehicle Trajectory Level
DOT National Transportation Integrated Search
2017-03-01
Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...
Zito, G.V.
1959-04-21
This patent relates to high voltage supply circuits adapted for providing operating voltages for Geiger-Mueller counter tubes, and is especially directed to an arrangement for maintaining uniform voltage under changing conditions of operation. In the usual power supply arrangement for counter tubes the counter voltage is taken from across the power supply output capacitor. If the count rate exceeds the current-delivering capacity of the capacitor, the capacitor voltage will drop, decreasing the counter voltage. The present invention provides a multivibrator which has its output voltage controlled by a signal proportional to the counting rate. As the counting rate increases beyond the current-delivering capacity of the capacitor, the rectified voltage output from the multivibrator is increased to maintain a uniform counter voltage.
Jordan, D; McEwen, S A; Lammerding, A M; McNab, W B; Wilson, J B
1999-06-29
A Monte Carlo simulation model was constructed for assessing the quantity of microbial hazards deposited on cattle carcasses under different pre-slaughter management regimens. The model permits comparison of industry-wide and abattoir-based mitigation strategies and is suitable for studying pathogens such as Escherichia coli O157:H7 and Salmonella spp. Simulations are based on a hierarchical model structure that mimics important aspects of the cattle population prior to slaughter. Stochastic inputs were included so that uncertainty about important input assumptions (such as prevalence of a human pathogen in the live cattle-population) would be reflected in model output. Control options were built into the model to assess the benefit of having prior knowledge of animal or herd-of-origin pathogen status (obtained from the use of a diagnostic test). Similarly, a facility was included for assessing the benefit of re-ordering the slaughter sequence based on the extent of external faecal contamination. Model outputs were designed to evaluate the performance of an abattoir in a 1-day period and included outcomes such as the proportion of carcasses contaminated with a pathogen, the daily mean and selected percentiles of pathogen counts per carcass, and the position of the first infected animal in the slaughter run. A measure of the time rate of introduction of pathogen into the abattoir was provided by assessing the median, 5th percentile, and 95th percentile cumulative pathogen counts at 10 equidistant points within the slaughter run. Outputs can be graphically displayed as frequency distributions, probability densities, cumulative distributions or x-y plots. The model shows promise as an inexpensive method for evaluating pathogen control strategies such as those forming part of a Hazard Analysis and Critical Control Point (HACCP) system.
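The hierarchical structure described above can be sketched with a simple nested Monte Carlo sampler; the distributions and parameter values below are illustrative assumptions, not those of the published model.

```python
# Illustrative hierarchical Monte Carlo: uncertain herd-level prevalence ->
# animal infection status -> pathogen count deposited per carcass. All
# distributions and numbers are hypothetical, not the published model's inputs.
import numpy as np

rng = np.random.default_rng(42)
n_days, herds_per_day, animals_per_herd = 1000, 10, 30

results = []
for _ in range(n_days):
    prevalence = rng.beta(2, 18, size=herds_per_day)          # prevalence uncertainty
    infected = rng.binomial(animals_per_herd, prevalence)     # infected animals per herd
    loads = [rng.lognormal(mean=4.0, sigma=1.5, size=k).sum() for k in infected]
    daily_total = sum(loads)                                  # pathogens entering abattoir
    prop_contaminated = infected.sum() / (herds_per_day * animals_per_herd)
    results.append((prop_contaminated, daily_total))

prop, totals = map(np.array, zip(*results))
print("median proportion of carcasses contaminated: %.3f" % np.median(prop))
print("5th-95th percentile of daily pathogen load: %.0f - %.0f"
      % tuple(np.percentile(totals, [5, 95])))
```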
NASA Astrophysics Data System (ADS)
Sarria, David; Lebrun, Francois; Blelly, Pierre-Louis; Chipaux, Remi; Laurent, Philippe; Sauvaud, Jean-Andre; Prech, Lubomir; Devoto, Pierre; Pailot, Damien; Baronick, Jean-Pierre; Lindsey-Clark, Miles
2017-07-01
With a launch expected in 2018, the TARANIS microsatellite is dedicated to the study of transient phenomena observed in association with thunderstorms. On board the spacecraft, XGRE and IDEE are two instruments dedicated to studying terrestrial gamma-ray flashes (TGFs) and associated terrestrial electron beams (TEBs). XGRE can detect electrons (energy range: 1 to 10 MeV) and X- and gamma-rays (energy range: 20 keV to 10 MeV) with a very high counting capability (about 10 million counts per second) and the ability to discriminate one type of particle from another. The IDEE instrument is focused on electrons in the 80 keV to 4 MeV energy range, with the ability to estimate their pitch angles. Monte Carlo simulations of the TARANIS instruments, using a preliminary model of the spacecraft, allow sensitive area estimates for both instruments. This leads to an averaged effective area of 425 cm2 for XGRE, used to detect X- and gamma-rays from TGFs, and the combination of XGRE and IDEE gives an average effective area of 255 cm2 which can be used to detect electrons/positrons from TEBs. We then compare these performances to RHESSI, AGILE and Fermi GBM, using data extracted from literature for the TGF case and with the help of Monte Carlo simulations of their mass models for the TEB case. Combining this data with the help of the MC-PEPTITA Monte Carlo simulations of TGF propagation in the atmosphere, we build a self-consistent model of the TGF and TEB detection rates of RHESSI, AGILE and Fermi. It can then be used to estimate that TARANIS should detect about 200 TGFs yr-1 and 25 TEBs yr-1.
Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N
2017-01-01
Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level.
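One plausible reading of the per-gene-after-global-scaling idea (UQ-pgQ2) is sketched below; the toy count matrix and the choice of per-gene factor are illustrative assumptions, not the published implementation.

```python
# Sketch of "per-sample upper-quartile scaling followed by per-gene scaling",
# one plausible reading of the UQ-pgQ2 idea. The toy count matrix and the
# per-gene factor (mean of UQ-scaled counts) are illustrative assumptions.
import numpy as np

counts = np.array([[  10,   12,  200,  180],    # genes x samples (hypothetical)
                   [1000, 1100,  950, 1020],
                   [   0,    1,    3,    2],
                   [ 300,  280,  500,  450]], dtype=float)

# Step 1: per-sample global scaling by the upper quartile of non-zero counts.
uq = np.array([np.percentile(c[c > 0], 75) for c in counts.T])
uq_scaled = counts / uq * uq.mean()

# Step 2: per-gene scaling so genes are compared on similar count levels.
gene_factor = uq_scaled.mean(axis=1, keepdims=True)
normalized = uq_scaled / np.maximum(gene_factor, 1e-9)

print(np.round(normalized, 2))
```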
Li, Xiaohong; Brock, Guy N.; Rouchka, Eric C.; Cooper, Nigel G. F.; Wu, Dongfeng; O’Toole, Timothy E.; Gill, Ryan S.; Eteleeb, Abdallah M.; O’Brien, Liz
2017-01-01
Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic Curve (AUC), a specificity rate > 85%, the detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with a control of the nominal FDR level. PMID:28459823
Extended range radiation dose-rate monitor
Valentine, Kenneth H.
1988-01-01
An extended range dose-rate monitor is provided which utilizes the pulse pileup phenomenon that occurs in conventional counting systems to alter the dynamic response of the system to extend the dose-rate counting range. The current pulses from a solid-state detector generated by radiation events are amplified and shaped prior to applying the pulses to the input of a comparator. The comparator generates one logic pulse for each input pulse which exceeds the comparator reference threshold. These pulses are integrated and applied to a meter calibrated to indicate the measured dose-rate in response to the integrator output. A portion of the output signal from the integrator is fed back to vary the comparator reference threshold in proportion to the output count rate to extend the sensitive dynamic detection range by delaying the asymptotic approach of the integrator output toward full scale as measured by the meter.
Huang, Hui-Ying; Tang, Yi-Ju; King, V An-Erl; Chou, Jen-Wei; Tsen, Jen-Horng
2015-03-01
The protective effects of encapsulation on the survival of Lactobacillus reuteri and the retention of the bacterium's probiotic properties under simulated gastrointestinal conditions were investigated. Viable counts and the remaining probiotic properties of calcium (Ca)-alginate encapsulated (A group), chitosan-Ca-alginate encapsulated (CA group), and unencapsulated, free L. reuteri (F group) were determined. Encapsulation improved the survival of L. reuteri subjected to simulated gastrointestinal conditions, with the greatest protective effect achieved in the CA group. The degree of cell membrane injury increased with increasing bile salt concentrations at constant pH, but the extent of injury was less in the encapsulated than in the free cells. Adherence rates were, in descending order: CA (0.524%)>A (0.360%)>F (0.275%). Lactobacillus reuteri cells retained their antagonistic activity toward Listeria monocytogenes even after incubation of the lactobacilli under simulated gastrointestinal conditions. Displacement of the pathogen by cells released from either of the encapsulation matrices was higher than that by free cells. The safety of L. reuteri was demonstrated in an in vitro invasion assay. Copyright© by the Spanish Society for Microbiology and Institute for Catalan Studies.
The Effects of Gamma and Proton Radiation Exposure on Hematopoietic Cell Counts in the Ferret Model
Sanzari, Jenine K.; Wan, X. Steven; Krigsfeld, Gabriel S.; Wroe, Andrew J.; Gridley, Daila S.; Kennedy, Ann R.
2014-01-01
Exposure to total-body radiation induces hematological changes, which can impair the immune response to wounds and infection. Here, the decreases in blood cell counts after acute radiation doses of γ-ray or proton radiation exposure, at the doses and dose-rates expected during a solar particle event (SPE), are reported in the ferret model system. Following the exposure to γ-ray or proton radiation, the ferret peripheral total white blood cell (WBC) and lymphocyte counts decreased whereas the neutrophil count increased within 3 hours. At 48 hours after irradiation, the WBC, neutrophil, and lymphocyte counts decreased in a dose-dependent manner but were not significantly affected by the radiation type (γ-rays versus protons) or dose rate (0.5 Gy/minute versus 0.5 Gy/hour). The loss of these blood cells could accompany and contribute to the physiological symptoms of the acute radiation syndrome (ARS). PMID:25356435
NASA Astrophysics Data System (ADS)
Mendes, Isabel; Proença, Isabel
2011-11-01
In this article, we apply count-data travel-cost methods to a truncated sample of visitors to estimate the Peneda-Gerês National Park (PGNP) average consumer surplus (CS) for each day of visit. The measurement of recreation demand is highly specific because it is calculated by the number of days of stay per visit. We therefore propose the application of altered truncated count-data models or truncated count-data models on grouped data to estimate a single, on-site individual recreation demand function, with the price (cost) of each recreation day per trip equal to the out-of-pocket and time costs of travel plus the out-of-pocket and time costs on site. We further check the sensitivity of coefficient estimations to alternative models and analyse the welfare measure precision by using the delta and simulation methods of Creel and Loomis. With simulated limits, CS is estimated to be €194 (range €116 to €448). This information is of use in the quest to improve government policy and PGNP management and conservation as well as to promote nature-based tourism. To our knowledge, this is the first attempt to measure the average recreation net benefits of each day of stay generated by a national park by using truncated altered and truncated grouped count-data travel-cost models based on observing the individual number of days of stay.
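A hedged sketch of a zero-truncated Poisson travel-cost model illustrates the on-site truncation and the textbook per-unit consumer surplus of -1/beta_cost from a semi-log count demand; the data are hypothetical and the model is a simplification of the altered and grouped specifications used in the article.

```python
# Zero-truncated Poisson travel-cost sketch: on-site samples never contain
# zero-day visitors, and with semi-log count demand the per-day consumer
# surplus is -1/beta_cost. Data are hypothetical; this is not the paper's model.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

days = np.array([1, 2, 1, 4, 3, 1, 2, 5, 1, 2])            # days of stay (>0 by design)
cost = np.array([60, 45, 80, 20, 30, 90, 50, 15, 70, 40])  # cost per day (euro)

def negll(beta):
    b0, b1 = beta
    lam = np.exp(np.clip(b0 + b1 * cost, -20, 20))          # guard against overflow
    # zero-truncated Poisson log-likelihood
    ll = days * np.log(lam) - lam - gammaln(days + 1) - np.log(1 - np.exp(-lam))
    return -ll.sum()

fit = minimize(negll, x0=[0.5, -0.01], method="Nelder-Mead")
b0, b1 = fit.x
print("cost coefficient: %.4f, consumer surplus per day: %.1f euro" % (b1, -1.0 / b1))
```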
Geometric Representation of Association between Categories
ERIC Educational Resources Information Center
Heiser, Willem J.
2004-01-01
Categories can be counted, rated, or ranked, but they cannot be measured. Likewise, persons or individuals can be counted, rated, or ranked, but they cannot be measured either. Nevertheless, psychology has realized early on that it can take an indirect road to measurement: What can be measured is the strength of association between categories in…
The Objectivity, Reliability, and Validity of the OSU Step Test for College Males
ERIC Educational Resources Information Center
Santa Maria, D. L.; And Others
1976-01-01
The O.S.U. Step Test was administered to 68 male university students to determine the objectivity of three methods of monitoring heart rate--the subject's count, the investigator's count, and ECG records--with results indicating that the investigator was significantly more accurate in heart rate determination than were the subjects. (MB)
NASA Astrophysics Data System (ADS)
Torii, T.; Sanada, Y.; Watanabe, A.
2017-12-01
In the vicinity of the tops of high mountains and in the coastal areas of the Sea of Japan in winter, the generation of high-energy photons lasting more than 100 seconds has been reported during the passage of thunderclouds. At the same time, 511 keV gamma rays are also detected. In addition, we launched a radiosonde equipped with gamma-ray detectors during thunderstorms and observed fluctuations in the gamma-ray count rate. As a result, we found that the gamma-ray count rate increases significantly near the top of the thundercloud. Therefore, in order to investigate fluctuations in the energy of the gamma rays, we developed a radiation detector for radiosondes capable of observing the low-energy gamma-ray spectrum and measured its fluctuations. We will describe the counting rate and spectral fluctuations of these radiosonde-borne gamma-ray detectors observed in the sky over Fukushima Prefecture, Japan.
29 CFR 778.318 - Productive and nonproductive hours of work.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Special Problems Effect of Failure to Count Or Pay for Certain Working Hours § 778.318 Productive and... Act; such nonproductive working hours must be counted and paid for. (b) Compensation payable for... which such nonproductive hours are properly counted as working time but no special hourly rate is...
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
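The decomposition problem can be illustrated with a small Poisson maximum-likelihood fit of non-negative component rates; the response matrix and counts below are hypothetical, and the fit is a generic sketch rather than the algorithm developed in the paper.

```python
# Sketch of the decomposition problem: observed counts are Poisson with mean
# A @ f, where A is a known response matrix and f holds the non-negative
# component rates to be estimated. A and the counts are hypothetical.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.80, 0.10],     # channels x components (instrument response)
              [0.50, 0.40],
              [0.20, 0.70],
              [0.05, 0.90]])
counts = np.array([41, 35, 30, 28])          # observed counts per channel

def neg_poisson_loglik(f):
    mu = A @ f
    mu = np.clip(mu, 1e-12, None)            # guard against log(0)
    return -(counts * np.log(mu) - mu).sum() # constant term dropped

fit = minimize(neg_poisson_loglik, x0=np.ones(A.shape[1]),
               bounds=[(0, None)] * A.shape[1])
print("estimated component rates:", np.round(fit.x, 2))
```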
NASA Technical Reports Server (NTRS)
Norris, J.
1983-01-01
Gamma-ray burst data obtained from the ISEE-3 Gamma Ray Burst Spectrometer and the Solar Maximum Mission's Hard X-ray Burst Spectrometer (HXRBS) were analyzed to yield information on burst temporal and spectral characteristics. A Monte Carlo approach was used to simulate the HXRBS response to candidate spectral models. At energies above about 100 keV, the spectra are well fit by exponential forms. At lower energies, 30 keV to 60 keV, depressions below the model continua are apparent in some bursts. The depressions are not instrumental or data-reduction artifacts. The event selection criterion of the ISEE-3 experiment is based on the time to accumulate a preset number of photons rather than the photon count per unit time and is consequently independent of event duration for a given burst intensity, unlike most conventional systems. As a result, a significantly greater percentage of fast, narrow events have been detected. The ratio of count rates from two ISEE-3 detectors indicates that bursts with durations of approximately one second have much softer spectra than longer bursts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarisien, M.; Plaisir, C.; Gobet, F.
2011-02-15
We present a stand-alone system to characterize the high-energy particles emitted in the interaction of ultrahigh intensity laser pulses with matter. According to the laser and target characteristics, electrons or protons are produced with energies higher than a few mega electron volts. Selected material samples can, therefore, be activated via nuclear reactions. A multidetector, named NATALIE, has been developed to count the β+ activity of these irradiated samples. The coincidence technique used, designed in an integrated system, results in very low background in the data, which is required for low activity measurements. It therefore provides good precision on the nuclear activation yields of the produced radionuclides. The system allows high counting rates and online correction of the dead time. It also provides, online, a quick control of the experiment. Geant4 simulations are used at different steps of the data analysis to deduce, from the measured activities, the energy and angular distributions of the laser-induced particle beams. Two applications are presented to illustrate the characterization of electrons and protons.
Hematologic responses to hypobaric hyperoxia.
NASA Technical Reports Server (NTRS)
Larkin, E. C.; Adams, J. D.; Williams, W. T.; Duncan, D. M.
1972-01-01
Study of the effects of hypoxia, activity, and G forces on human hematopoiesis in an attempt to elucidate these phenomena more precisely. Eight subjects were exposed to an atmosphere of 100% O2 at 258 mm Hg for 30 days, and thereafter immediately exposed to transverse G forces, simulating the Gemini flights' reentry profile. All subjects displayed a significant continuous decline in red cell mass during the exposure period, as measured by the carbon monoxide-dilution method. The Cr51 method also indicated a decline in red blood corpuscle mass. The decrease in red cell mass was due to suppression of erythropoiesis and to hemolysis. After exposure to hyperoxia, all subjects exhibited elevated plasma hemoglobin levels, decreased reticulocyte counts, and decreased red cell survivals. CO production rates and urine erythropoietin levels were unchanged. Two hours after termination of exposure to hyperoxia, all subjects exhibited increased reticulocyte counts which were sustained for longer than two weeks. The progressive decrease in red cell mass was promptly arrested on return to ground level atmospheres. Within 116 days after exposure to hyperoxia, the hematologic parameters of all eight subjects had returned to control levels.
ERIC Educational Resources Information Center
Liou, Pey-Yan
2009-01-01
The current study examines three regression models: OLS (ordinary least square) linear regression, Poisson regression, and negative binomial regression for analyzing count data. Simulation results show that the OLS regression model performed better than the others, since it did not produce more false statistically significant relationships than…
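A minimal comparison of the three regressions on simulated overdispersed counts, using statsmodels; the data-generating settings are illustrative and are not those of the cited study.

```python
# Compare OLS, Poisson and negative-binomial regression on simulated count
# data. The simulation settings here are illustrative, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
mu = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts with mean mu

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.NegativeBinomial(y, X).fit(disp=False)

for name, res in [("OLS", ols), ("Poisson", pois), ("NegBin", nb)]:
    print(f"{name:8s} slope = {res.params[1]:.3f}  p = {res.pvalues[1]:.3g}")
```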
A novel method of personnel cooling in an operating theatre environment.
Casha, Aaron R; Manché, Alexander; Camilleri, Liberato; Gauci, Marilyn; Grima, Joseph N; Borg, Michael A
2014-10-01
An optimized theatre environment, including personal temperature regulation, can help maintain concentration, extend work times and may improve surgical outcomes. However, devices, such as cooling vests, are bulky and may impair the surgeon's mobility. We describe the use of a low-cost, low-energy 'bladeless fan' as a personal cooling device. The safety profile of this device was investigated by testing air quality using 0.5- and 5-µm particle counts as well as airborne bacterial counts on an operating table simulating a wound in a thoracic operation in a busy theatre environment. Particle and bacterial counts were obtained with both an empty and full theatre, with and without the 'bladeless fan'. The use of the 'bladeless fan' within the operating theatre during the simulated operation led to a minor, not statistically significant, lowering of both the particle and bacterial counts. In conclusion, the 'bladeless fan' is a safe, effective, low-cost and low-energy consumption solution for personnel cooling in a theatre environment that maintains the clean room conditions of the operating theatre. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Honda, Kohei; Saito, Hidekazu; Fukui, Naoko; Ito, Eiko; Ishikawa, Kazuo
2013-09-01
The prevalence of Japanese cedar (JC) pollinosis in Japanese children is increasing. However, few studies have reported the relationship between pollen count levels and the prevalence of pollinosis. To evaluate the relationship between JC pollen count levels and the prevalence of pollinosis in children, we investigated the sensitization and development of symptoms for JC pollen in two areas of Akita in northeast Japan with contrasting levels of exposure to JC pollen. The study population consisted of 339 elementary school students (10-11 years of age) from the coastal and mountainous areas of Akita in 2005-2006. A questionnaire about symptoms of allergic rhinitis was filled out by the students' parents. A blood sample was taken to determine specific IgE antibodies against five common aeroallergens. The mean pollen count in the mountainous areas was two times higher than that in the coastal areas in 1996-2006. The prevalence rates of nasal allergy symptoms and sensitization for mites were almost the same in both areas. On the other hand, the rates of nasal allergy symptoms and sensitization for JC pollen were significantly higher in the mountainous areas than in the coastal areas. The rate of the development of symptoms among children sensitized for JC pollen was almost the same in both areas. These results suggest that pollen count levels may correlate with the rate of sensitization for JC pollinosis, but may not affect the rate of onset among sensitized children in northeast Japan.
Proof of Concept for the Trajectory-Level Validation Framework for Traffic Simulation Models
DOT National Transportation Integrated Search
2017-10-30
Based on current practices, traffic simulation models are calibrated and validated using macroscopic measures such as 15-minute averages of traffic counts or average point-to-point travel times. For an emerging number of applications, including conne...
Exploring the Dynamics of Cell Processes through Simulations of Fluorescence Microscopy Experiments
Angiolini, Juan; Plachta, Nicolas; Mocskos, Esteban; Levi, Valeria
2015-01-01
Fluorescence correlation spectroscopy (FCS) methods are powerful tools for unveiling the dynamical organization of cells. For simple cases, such as molecules passively moving in a homogeneous media, FCS analysis yields analytical functions that can be fitted to the experimental data to recover the phenomenological rate parameters. Unfortunately, many dynamical processes in cells do not follow these simple models, and in many instances it is not possible to obtain an analytical function through a theoretical analysis of a more complex model. In such cases, experimental analysis can be combined with Monte Carlo simulations to aid in interpretation of the data. In response to this need, we developed a method called FERNET (Fluorescence Emission Recipes and Numerical routines Toolkit) based on Monte Carlo simulations and the MCell-Blender platform, which was designed to treat the reaction-diffusion problem under realistic scenarios. This method enables us to set complex geometries of the simulation space, distribute molecules among different compartments, and define interspecies reactions with selected kinetic constants, diffusion coefficients, and species brightness. We apply this method to simulate single- and multiple-point FCS, photon-counting histogram analysis, raster image correlation spectroscopy, and two-color fluorescence cross-correlation spectroscopy. We believe that this new program could be very useful for predicting and understanding the output of fluorescence microscopy experiments. PMID:26039162
Weston, Bronson; Fogal, Benjamin; Cook, Daniel; Dhurjati, Prasad
2015-04-01
The number of cases diagnosed with Autism Spectrum Disorders is rising at an alarming rate, with the Centers for Disease Control estimating the 2014 incidence rate as 1 in 68. Recently, it has been hypothesized that gut bacteria may contribute to the development of autism. Specifically, the relative balances between the inflammatory microbes clostridia and desulfovibrio and the anti-inflammatory microbe bifidobacteria may become destabilized prior to autism development. The imbalance leads to a leaky gut, characterized by a more porous epithelial membrane resulting in microbial toxin release into the blood, which may contribute to brain inflammation and autism development. To test how changes in population dynamics of the gut microbiome may lead to the imbalanced microbial populations associated with autism patients, we constructed a novel agent-based model of clostridia, desulfovibrio, and bifidobacteria population interactions in the gut. The model demonstrates how changing physiological conditions in the gut can affect the population dynamics of the microbiome. Simulations using our agent-based model indicate that despite large perturbations to initial levels of bacteria, the populations robustly achieve a single steady state given similar gut conditions. These simulation results suggest that disturbances such as a prebiotic or antibiotic treatment may only transiently affect the gut microbiome. However, sustained prebiotic treatments may correct low population counts of bifidobacteria. Furthermore, our simulations suggest that the clostridia growth rate is a key determinant of the risk of autism development. Treatment of high-risk infants with supra-physiological levels of lysozymes may suppress the clostridia growth rate, resulting in a steep decrease in the clostridia population and therefore reduced risk of autism development. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hegenbart, L.; Na, Y. H.; Zhang, J. Y.; Urban, M.; Xu, X. George
2008-10-01
There are currently no physical phantoms available for calibrating in vivo counting devices that represent women with different breast sizes because such phantoms are difficult, time consuming and expensive to fabricate. In this work, a feasible alternative involving computational phantoms was explored. A series of new female voxel phantoms with different breast sizes were developed and ported into a Monte Carlo radiation transport code for performing virtual lung counting efficiency calibrations. The phantoms are based on the RPI adult female phantom, a boundary representation (BREP) model. They were created with novel deformation techniques and then voxelized for the Monte Carlo simulations. Eight models have been selected with cup sizes ranging from AA to G according to brassiere industry standards. Monte Carlo simulations of a lung counting system were performed with these phantoms to study the effect of breast size on lung counting efficiencies, which are needed to determine the activity of a radionuclide deposited in the lung and hence to estimate the resulting dose to the worker. Contamination scenarios involving three different radionuclides, namely Am-241, Cs-137 and Co-60, were considered. The results show that detector efficiencies considerably decrease with increasing breast size, especially for low energy photon emitting radionuclides. When the counting efficiencies of models with cup size AA were compared to those with cup size G, a difference of up to 50% was observed. The detector efficiencies for each radionuclide can be approximated by curve fitting in the total breast mass (polynomial of second order) or the cup size (power).
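The two fitting forms mentioned above (a second-order polynomial in total breast mass and a power law in cup size) can be sketched as follows; the efficiency values and masses are hypothetical placeholders, not the simulated results.

```python
# Illustration of the two fitting forms: counting efficiency as a second-order
# polynomial in total breast mass and as a power law in a cup-size index.
# All numerical values below are hypothetical placeholders.
import numpy as np

mass_g = np.array([260, 400, 560, 740, 950, 1180, 1440, 1720])   # cup AA..G (assumed)
cup_index = np.arange(1, 9)
efficiency = np.array([0.021, 0.019, 0.017, 0.0155, 0.014, 0.0128, 0.0117, 0.0105])

poly = np.polyfit(mass_g, efficiency, deg=2)                       # eff ~ a*m^2 + b*m + c
power = np.polyfit(np.log(cup_index), np.log(efficiency), deg=1)   # eff ~ A * cup^k

print("polynomial coefficients (a, b, c):", poly)
print("power-law exponent k = %.2f, prefactor A = %.4f"
      % (power[0], np.exp(power[1])))
```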
Neutral atom imaging at Mercury
NASA Astrophysics Data System (ADS)
Mura, A.; Orsini, S.; Milillo, A.; Di Lellis, A. M.; De Angelis, E.
2006-02-01
The feasibility of neutral atom detection and imaging in the Hermean environment is discussed in this study. In particular, we consider those energetic neutral atoms (ENA) whose emission is directly related to solar wind entrance into Mercury's magnetosphere. In fact, this environment is characterised by a weak magnetic field; thus, cusp regions are extremely large if compared to the Earth's ones, and intense proton fluxes are expected there. Our study includes a model of H + distribution in space, energy and pitch angle, simulated by means of a single-particle, Monte-Carlo simulation. Among processes that could generate neutral atom emission, we focus our attention on charge-exchange and ion sputtering, which, in principle, are able to produce directional ENA fluxes. Simulated neutral atom images are investigated in the frame of the neutral particle analyser-ion spectrometer (NPA-IS) SERENA experiment, proposed to fly on board the ESA mission BepiColombo/MPO. The ELENA (emitted low-energy neutral atoms) unit, which is part of this experiment, will be able to detect such fluxes; instrumental details and predicted count rates are given.
Li, Zan; Millan, Robyn M.; Hudson, Mary K.; ...
2014-12-23
Electromagnetic ion cyclotron (EMIC) waves were observed at multiple observatory locations for several hours on 17 January 2013. During the wave activity period, a duskside relativistic electron precipitation (REP) event was observed by one of the Balloon Array for Radiation belt Relativistic Electron Losses (BARREL) balloons and was magnetically mapped close to Geostationary Operational Environmental Satellite (GOES) 13. We simulate the relativistic electron pitch angle diffusion caused by gyroresonant interactions with EMIC waves using wave and particle data measured by multiple instruments on board GOES 13 and the Van Allen Probes. We show that the count rate, the energy distribution, and the time variation of the simulated precipitation all agree very well with the balloon observations, suggesting that EMIC wave scattering was likely the cause for the precipitation event. The event reported here is the first balloon REP event with closely conjugate EMIC wave observations, and our study employs the most detailed quantitative analysis on the link of EMIC waves with observed REP to date.
High Resolution Aerospace Applications using the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha
2005-01-01
This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
Ansari, Fereshteh; Pourjafar, Hadi; Jodat, Vahid; Sahebi, Javad; Ataei, Amir
2017-12-01
In this study, we examined a novel method of microencapsulation with calcium alginate-chitosan and Eudragit S100 nanoparticles for improving the viability of the probiotic bacteria Lactobacillus acidophilus and Lactobacillus rhamnosus. An extrusion technique was used for the microencapsulation process. The viability of the two probiotics in single-coated beads (with only chitosan), in double-coated beads (with chitosan and Eudragit nanoparticles), and as free cells (unencapsulated) was assessed in simulated gastric juice (pH 1.55, without pepsin) followed by incubation in simulated intestinal juice (pH 7.5, with 1% bile salt). For the single-coated beads, insufficient strength of the chitosan coat under simulated gastric conditions was presumably the main reason for the 4-log and 5-log reductions in the counts of L. acidophilus and L. rhamnosus, respectively. The results showed that forming the second coat (Eudragit nanoparticles) over the first coat (chitosan) increased the strength of the beads, and hence the viability of the bacteria, in comparison with the single-coated beads.
Dewji, S.; Hertel, N.; Ansari, A.
2017-01-01
The detonation of a radiological dispersion device may result in a situation where individuals inhale radioactive materials and require rapid assessment of internal contamination. The feasibility of using a 2×2-inch sodium-iodide detector to determine the committed effective dose to an individual following acute inhalation of gamma-emitting radionuclides was investigated. Experimental configurations of point sources with a polymethyl methacrylate slab phantom were used to validate Monte Carlo simulations. The validated detector model was used to simulate the responses for four detector positions on six different anthropomorphic phantoms. The nuclides examined included 241Am, 60Co, 137Cs, 131I and 192Ir. Biokinetic modelling was employed to determine the distributed activity in the body as a function of post-inhalation time. The simulation and biokinetic data were used to determine time-dependent count-rate values at optimal detector locations on the body for each radionuclide corresponding to a target committed effective dose (E50) value of 250 mSv. PMID:23436621
Preparing for ICESat-2: Simulated Geolocated Photon Data for Cryospheric Data Products
NASA Astrophysics Data System (ADS)
Harbeck, K.; Neumann, T.; Lee, J.; Hancock, D.; Brenner, A. C.; Markus, T.
2017-12-01
ICESat-2 will carry NASA's next-generation laser altimeter, ATLAS (Advanced Topographic Laser Altimeter System), which is designed to measure changes in ice sheet height, sea ice freeboard, and vegetation canopy height. There is a critical need for data that simulate what certain ICESat-2 science data products will "look like" post-launch in order to aid the data product development process. There are several sources for simulated photon-counting lidar data, including data from NASA's MABEL (Multiple Altimeter Beam Experimental Lidar) instrument, and M-ATLAS (MABEL data that has been scaled geometrically and radiometrically to be more similar to that expected from ATLAS). From these sources, we are able to develop simulated granules of the geolocated photon cloud product, also referred to as ATL03. These simulated ATL03 granules can be further processed into the upper-level data products that report ice sheet height, sea ice freeboard, and vegetation canopy height. For ice sheet height (ATL06) and sea ice height (ATL07) simulations, both MABEL and M-ATLAS data products are used. M-ATLAS data use ATLAS engineering design cases for signal and background noise rates over certain surface types, and also provide large vertical windows of data for more accurate calculations of atmospheric background rates. MABEL data give a more accurate representation of background noise rates over areas of water (i.e., melt ponds, crevasses or sea ice leads) versus land or solid ice. Through a variety of data manipulation procedures, we provide a product that mimics the appearance and parameter characterization of ATL03 data granules. There are three primary goals for generating this simulated ATL03 dataset: (1) to allow end users to become familiar with using the large photon cloud datasets that will be the primary science data product from ICESat-2, (2) to ensure that ATL03 data can flow seamlessly through upper-level science data product algorithms, and (3) to ensure parameter traceability through ATL03 and upper-level data products. We will present a summary of how simulated data products are generated, the cryospheric data product applications for this simulated data (specifically ice sheet height and sea ice freeboard), and where these simulated datasets are available to the ICESat-2 data user community.
Final Technical Report- Radiation Hard Tight Pitch GaInP SPAD Arrays for High Energy Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmon, Eric S.
The specialized photodetectors used in high energy physics experiments often need to remain extremely sensitive for years despite radiation induced damage caused by the constant bombardment of high energy particles. To solve this problem, LightSpin Technologies, Inc. in collaboration with Prof. Bradley Cox and the University of Virginia is developing radiation-hard GaInP photodetectors which are projected to be extraordinarily radiation hard, theoretically capable of withstanding a 100,000-fold higher radiation dose than silicon. In this Phase I SBIR project, LightSpin investigated the performance and radiation hardness of fifth generation GaInP SPAD arrays. These fifth generation devices used a new planar processing approach that enables very tight pitch arrays to be produced. High performance devices with SPAD pitches of 11, 15, and 25 μm were successfully demonstrated, which greatly increased the dynamic range and maximum count rate of the devices. High maximum count rates are critical when considering radiation hardness, since radiation damage causes a proportional increase in the dark count rate, causing SPAD arrays with low maximum count rates (large SPAD pitches) to fail. These GaInP SPAD array Photomultiplier Chips™ were irradiated with protons, electrons, and neutrons. Initial irradiation results were disappointing, with the post-irradiation devices exhibiting excessively high dark currents. The degradation was traced to surface leakage currents that were largely eliminated through the use of trenches etched around the exterior of the Photomultiplier Chip™ (not between SPAD elements). A second round of irradiations on Photomultiplier Chips™ with trenches proved substantially more successful, with post-irradiation dark currents remaining relatively low, though dark count rates were observed to increase at the highest doses. Preliminary analysis of the post-irradiation devices is promising: many of the irradiated Photomultiplier Chips™ still exhibit good gain characteristics after 1E12/cm2–1E13/cm2 doses and have apparent dark count rates that are lower than the apparent dark count rates published for irradiation of silicon SPAD arrays (silicon photomultipliers or SiPMs). Some post-irradiation results are still pending because the samples are still too radioactive to be shipped back from the irradiation facility for post-irradiation testing.
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which themselves have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.
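For readers unfamiliar with dead time correction of count rates, the snippet below shows the textbook non-paralyzable single-rate correction. It is only a conceptual illustration of what "correcting for dead time" means; it is not the DCF algorithm, which handles correlated multiplet (doubles through pents) rates.

    def nonparalyzable_true_rate(measured_rate, dead_time):
        """Textbook non-paralyzable correction: n = m / (1 - m * tau).

        measured_rate: observed counts per second (m)
        dead_time: detector/electronics dead time in seconds (tau)
        """
        loss_fraction = measured_rate * dead_time
        if loss_fraction >= 1.0:
            raise ValueError("measured rate saturates this dead-time model")
        return measured_rate / (1.0 - loss_fraction)

    # Example: 50 kcps measured with 2 microseconds dead time -> ~55.6 kcps true rate.
    print(nonparalyzable_true_rate(50_000.0, 2e-6))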
PET Performance Evaluation of an MR-Compatible PET Insert
Wu, Yibao; Catana, Ciprian; Farrell, Richard; Dokhale, Purushottam A.; Shah, Kanai S.; Qi, Jinyi; Cherry, Simon R.
2010-01-01
A magnetic resonance (MR) compatible positron emission tomography (PET) insert has been developed in our laboratory for simultaneous small animal PET/MR imaging. This system is based on lutetium oxyorthosilicate (LSO) scintillator arrays with position-sensitive avalanche photodiode (PSAPD) photodetectors. The PET performance of this insert has been measured. The average reconstructed image spatial resolution was 1.51 mm. The sensitivity at the center of the field of view (CFOV) was 0.35%, which is comparable to the simulation predictions of 0.40%. The average photopeak energy resolution was 25%. The scatter fraction inside the MRI scanner with a line source was 12% (with a mouse-sized phantom and standard 35 mm Bruker 1H RF coil), 7% (with RF coil only) and 5% (without phantom or RF coil) for an energy window of 350–650 keV. The front-end electronics had a dead time of 390 ns, and a trigger extension dead time of 7.32 μs that degraded counting rate performance for injected doses above ~0.75 mCi (28 MBq). The peak noise-equivalent count rate (NECR) of 1.27 kcps was achieved at 290 μCi (10.7 MBq). The system showed good imaging performance inside a 7-T animal MRI system; however improvements in data acquisition electronics and reduction of the coincidence timing window are needed to realize improved NECR performance. PMID:21072320
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
Scan statistics with local vote for target detection in distributed system
NASA Astrophysics Data System (ADS)
Luo, Junhai; Wu, Qi
2017-12-01
Target detection occupies a pivotal position in distributed systems. Scan statistics, as one of the most efficient detection methods, has been applied to a variety of anomaly detection problems and significantly improves the probability of detection. However, scan statistics cannot achieve the expected performance when the noise intensity is strong or the signal emitted by the target is weak. The local vote algorithm can also achieve a higher target detection rate. After the local vote, the counting rule is always adopted for decision fusion. The counting rule does not use the information about the contiguity of sensors but takes all sensors' data into consideration, which makes the result undesirable. In this paper, we propose a scan statistics with local vote (SSLV) method. This method combines scan statistics with the local vote decision. Before scan statistics, each sensor executes a local vote decision according to its own data and that of its neighbors. By combining the advantages of both, our method can obtain a higher detection rate in low signal-to-noise-ratio environments than scan statistics alone. After the local vote decision, the distribution of sensors which have detected the target becomes more intensive. To make full use of the local vote decision, we introduce a variable step parameter for the SSLV. It significantly shortens the scan period, especially when the target is absent. Analysis and simulations are presented to demonstrate the performance of our method.
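A minimal one-dimensional sketch of the SSLV idea is given below: each sensor's binary decision is first replaced by a local (at-least-half) vote over its neighbourhood, and a sliding-window scan statistic is then computed over the fused decisions. The sensor layout, false alarm rate, window size, and target placement are invented for illustration.

    import numpy as np

    def local_vote(decisions, radius=1):
        """Replace each binary decision with the at-least-half vote of its neighbourhood."""
        n = len(decisions)
        voted = np.zeros(n, dtype=int)
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            window = decisions[lo:hi]
            voted[i] = int(window.sum() * 2 >= len(window))
        return voted

    def scan_statistic(decisions, window=5):
        """Maximum number of positive decisions in any sliding window."""
        counts = np.convolve(decisions, np.ones(window, dtype=int), mode="valid")
        return int(counts.max())

    rng = np.random.default_rng(0)
    decisions = (rng.random(100) < 0.1).astype(int)                  # false alarms
    decisions[40:50] |= (rng.random(10) < 0.6).astype(int)           # weak target near sensors 40-49

    voted = local_vote(decisions, radius=2)
    print("scan statistic without local vote:", scan_statistic(decisions))
    print("scan statistic with local vote:   ", scan_statistic(voted))

The local vote concentrates the positive decisions around the true target region, so the scan statistic there stands out more clearly against the noise-only background.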
NASA Technical Reports Server (NTRS)
Peille, Phillip; Ceballos, Maria Teresa; Cobo, Beatriz; Wilms, Joern; Bandler, Simon; Smith, Stephen J.; Dauser, Thomas; Brand, Thorsten; Den Haretog, Roland; de Plaa, Jelle;
2016-01-01
The X-ray Integral Field Unit (X-IFU) microcalorimeter, on board Athena, with its focal plane comprising 3840 Transition Edge Sensors (TESs) operating at 90 mK, will provide unprecedented spectral-imaging capability in the 0.2-12 keV energy range. It will rely on the on-board digital processing of current pulses induced by the heat deposited in the TES absorber, so as to recover the energy of each individual event. Assessing the capabilities of the pulse reconstruction is required to understand the overall scientific performance of the X-IFU, notably in terms of energy resolution degradation with both increasing energies and count rates. Using synthetic data streams generated by the X-IFU End-to-End simulator, we present here a comprehensive benchmark of various pulse reconstruction techniques, ranging from standard optimal filtering to more advanced algorithms based on noise covariance matrices. Besides deriving the spectral resolution achieved by the different algorithms, a first assessment of the computing power and ground calibration needs is presented. Overall, all methods show similar performance, with the reconstruction based on noise covariance matrices showing the best improvement with respect to the standard optimal filtering technique. Due to prohibitive calibration needs, this method might, however, not be applicable to the X-IFU, and the best compromise currently appears to be the so-called resistance space analysis, which also features very promising high count rate capabilities.
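The standard optimal filtering baseline mentioned above can be summarized in a few lines. The sketch below estimates a single pulse amplitude as A = (s^T C^-1 d) / (s^T C^-1 s) for a known template s and noise covariance C; the template shape, noise level, and record length are invented, and this is not the X-IFU on-board implementation.

    import numpy as np

    def optimal_filter_amplitude(data, template, noise_cov):
        """Best-fit pulse amplitude under Gaussian noise with covariance C."""
        c_inv_s = np.linalg.solve(noise_cov, template)   # C^-1 s
        return float(data @ c_inv_s) / float(template @ c_inv_s)

    # Toy example with white noise (C = sigma^2 I) and a two-exponential pulse template.
    n = 256
    t = np.arange(n)
    template = np.exp(-t / 40.0) - np.exp(-t / 5.0)      # arbitrary normalized shape
    sigma = 0.05
    noise_cov = sigma**2 * np.eye(n)

    rng = np.random.default_rng(1)
    true_amplitude = 3.0
    data = true_amplitude * template + rng.normal(0.0, sigma, n)

    print(optimal_filter_amplitude(data, template, noise_cov))  # close to 3.0

The covariance-matrix-based methods benchmarked in the paper generalize this estimate to non-stationary noise and overlapping pulses, at the cost of the much larger calibration effort noted above.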
Toward Scalable Boson Sampling with Photon Loss
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Wei; Jiang, Xiao; He, Y.-M.; Li, Y.-H.; Ding, X.; Chen, M.-C.; Qin, J.; Peng, C.-Z.; Schneider, C.; Kamp, M.; Zhang, W.-J.; Li, H.; You, L.-X.; Wang, Z.; Dowling, J. P.; Höfling, S.; Lu, Chao-Yang; Pan, Jian-Wei
2018-06-01
Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is the scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16 ×16 mode ultralow-loss photonic circuit, and we detect three-, four- and fivefold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187, 13.6, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than the standard boson sampling, respectively. Our experiment shows an approach to significantly enhance the sampling rate of multiphoton boson sampling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisk, William J.; Sullivan, Douglas
This pilot scale study evaluated the counting accuracy of two people counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people counting systems using pre-planned movements of occupants through doorways and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites and where the demanding situations that led to counting errors were rare.
Nagasawa, Yuya; Kiku, Yoshio; Sugawara, Kazue; Tanabe, Fuyuko; Hayashi, Tomohito
2018-01-01
The exfoliation rate of mammary epithelial cells (MECs) in milk is affected by physiological, breeding and environmental factors. Little is known about the relationship between MEC exfoliation into milk and the mammary-infected Staphylococcus aureus (S. aureus) load in bovine mastitis caused by S. aureus. The aim of this study was to investigate the relationship between S. aureus load and the proportion of MEC exfoliation in milk using five substantial bovine mastitis models. The 64 milk samples randomly extracted from udders at 3-21 days after S. aureus infusion varied widely in S. aureus counts and somatic cell counts. No significant correlation was found between the S. aureus counts and somatic cell counts (r = 0.338). In contrast, a significant correlation was noted between S. aureus counts and the proportion of cytokeratin-positive cells in the milk from the infused udders (r = 0.734, P < 0.01). In conclusion, the increasing MEC exfoliation rate in milk from mastitis udders caused by S. aureus may contribute to reduced milk yield. © 2017 Japanese Society of Animal Science.
Addressing immunization registry population inflation in adolescent immunization rates.
Robison, Steve G
2015-01-01
While U.S. adolescent immunization rates are available annually at national and state levels, finding pockets of need may require county or sub-county information. Immunization information systems (IISs) are one tool for assessing local immunization rates. However, the presence of IIS records dating back to early childhood and challenges in capturing mobility out of IIS areas typically leads to denominator inflation. We examined the feasibility of weighting adolescent immunization records by length of time since last report to produce more accurate county adolescent counts and immunization rates. We compared weighted and unweighted adolescent denominators from the Oregon ALERT IIS, along with county-level Census Bureau estimates, with school enrollment counts from Oregon's annual review of seventh-grade school immunization compliance for public and private schools. Adolescent immunization rates calculated using weighted data, for the state as a whole, were also checked against comparable National Immunization Survey (NIS) rates. Weighting individual records by the length of time since last activity substantially improved the fit of IIS data to county populations for adolescents. A nonlinear logarithmic (ogive) weight produced the best fit to the school count data of all examined estimates. Overall, the ogive weighted results matched NIS adolescent rates for Oregon. The problem of mobility-inflated counts of teenagers can be addressed by weighting individual records based on time since last immunization. Well-populated IISs can rely on their own data to produce adolescent immunization rates and find pockets of need.
Borges, E; Setti, A S; Braga, D P A F; Figueira, R C S; Iaconelli, A
2016-09-01
The objective of this study was to compare (i) the intracytoplasmic sperm injection outcomes among groups with different total motile sperm count ranges, (ii) the intracytoplasmic sperm injection outcomes between groups with normal and abnormal total motile sperm count, and (iii) the predictive values of WHO 2010 cut-off values and pre-wash total motile sperm count for the intracytoplasmic sperm injection outcomes, in couples with male infertility. This study included data from 518 patients undergoing their first intracytoplasmic sperm injection cycle as a result of male infertility. Couples were divided into five groups according to their total motile sperm count: Group I, total motile sperm count <1 × 10(6) ; group II, total motile sperm count 1-5 × 10(6) ; group III, total motile sperm count 5-10 × 10(6) ; group IV, total motile sperm count 10-20 × 10(6) ; and group V, total motile sperm count >20 × 10(6) (which was considered a normal total motile sperm count value). Then, couples were grouped into an abnormal and normal total motile sperm count group. The groups were compared regarding intracytoplasmic sperm injection outcomes. The predictive values of WHO 2010 cut-off values and total motile sperm count for the intracytoplasmic sperm injection outcomes were also investigated. The fertilization rate was lower in total motile sperm count group I compared to total motile sperm count group V (72.5 ± 17.6 vs. 84.9 ± 14.4, p = 0.011). The normal total motile sperm count group had a higher fertilization rate (84.9 ± 14.4 vs. 81.1 ± 15.8, p = 0.016) and lower miscarriage rate (17.9% vs. 29.5%, p = 0.041) compared to the abnormal total motile sperm count group. The total motile sperm count was the only parameter that demonstrated a predictive value for the formation of high-quality embryos on D2 (OR: 1.18, p = 0.013), formation of high-quality embryos on D3 (OR: 1.12, p = 0.037), formation of blastocysts on D5 (OR: 1.16, p = 0.011), blastocyst expansion grade on D5 (OR: 1.27, p = 0.042), and the odds of miscarriage (OR: 0.52, p < 0.045). The total motile sperm count has a greater predictive value than the WHO 2010 cut-off values for laboratory results and pregnancy outcomes in couples undergoing intracytoplasmic sperm injection as a result of male infertility. © 2016 American Society of Andrology and European Academy of Andrology.
Humeniuk, Stephan; Büchler, Hans Peter
2017-12-08
We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such a full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
NASA Astrophysics Data System (ADS)
Singh, R.; Olson, M. S.
2011-12-01
Low permeability regions sandwiched between high permeability regions, such as clay lenses, are difficult to treat using conventional treatment methods. Trace concentrations of contaminants such as non-aqueous phase liquids (NAPLs) remain trapped in these regions and over time diffuse out into the surrounding water, thereby acting as a long term source of groundwater contamination. Bacterial chemotaxis (directed migration toward a contaminant source) may be helpful in enhancing bioremediation of such contaminated sites. This study is focused on simulating a two-dimensional dual-permeability groundwater contamination scenario using microfluidic devices and evaluating transverse chemotactic migration of bacteria from high to low permeability regions. A novel bi-layer polydimethylsiloxane (PDMS) microfluidic device was fabricated using photolithography and soft lithography techniques to simulate contamination of a dual-permeability region due to leakage from an underground storage tank into a low permeability region. This device consists of a porous channel through which a bacterial suspension (Escherichia coli HCB33) is flowed and another channel for injecting contaminant/chemo-attractant (DL-aspartic acid) into the porous channel. The pore arrangement in the porous channel contains a 2-D low permeability region surrounded by high permeability regions on both sides. Experiments were performed under chemotactic and non-chemotactic (replacing attractant with buffer solution in the non-porous channel) conditions. Images were captured in transverse pore throats at cross-sections 4.9, 9.8, and 19.6 mm downstream from the attractant injection point and bacteria were enumerated in the middle of each pore throat. Bacterial chemotaxis was quantified in terms of the change in relative bacterial counts in each pore throat at cross-sections 9.8 and 19.6 mm with respect to counts at the cross-section at 4.9 mm. Under non-chemotactic conditions, the relative bacterial count was observed to decrease at the 9.8 mm and 19.6 mm cross-sections in low permeability regions due to dilution with the injectate from the non-porous channel (Figure 1). However, relative bacterial counts increased in the low permeability region at both downstream cross-sections under chemotactic conditions. A large increase in relative bacterial count in the pore throats just outside the low permeability region was also observed at both cross-sections (Figure 1). The bacterial chemotactic response was observed to decrease linearly with increasing Darcy velocity, and at a flow rate of 0.220 mm/s the chemotactic effect was offset by the advective flow in the porous channel.
Simulation of neutron production using MCNPX+MCUNED.
Erhard, M; Sauvan, P; Nolte, R
2014-10-01
In standard MCNPX, the production of neutrons by ions cannot be modelled efficiently. The MCUNED patch applied to MCNPX 2.7.0 allows the production of neutrons by light ions to be modelled down to energies of a few kiloelectron volts. This is crucial for the simulation of neutron reference fields. The influence of target properties, such as the diffusion of reactive isotopes into the target backing or the effect of energy and angular straggling, can be studied efficiently. In this work, MCNPX/MCUNED calculations are compared with results obtained with the TARGET code for simulating neutron production. Furthermore, MCUNED incorporates more effective variance reduction techniques and a coincidence counting tally. This allows the simulation of a TCAP experiment being developed at PTB. In this experiment, 14.7-MeV neutrons will be produced by the reaction T(d,n)(4)He. The neutron fluence is determined by counting alpha particles, independently of the reaction cross section. © The Author 2013. Published by Oxford University Press. All rights reserved.
Making Hawai'i's Kids Count. Issue Paper Number 3.
ERIC Educational Resources Information Center
Hawaii Univ., Manoa. Center on the Family.
This issue paper from Hawai'i Kids Count addresses the issue of teen pregnancy and birth rates. The paper notes that teen pregnancy and birth rates are declining both nationally and in Hawaii and describes key risk factors associated with having a baby before age 20: (1) early school failure; (2) early behavioral problems; (3) family dysfunction;…
Arkansas Kids Count Data Book 1995: A Portrait of Arkansas' Children.
ERIC Educational Resources Information Center
Arkansas Advocates for Children and Families, Little Rock.
This Kids Count report is the third to examine the well-being of Arkansas' children and the first to provide trend information. The statistical report is based on 10 core indicators of well-being: (1) unemployment rate and per capita personal income; (2) federal and state assistance program participation rates; (3) percent of high school students…
Pneumocystis jirovecii colonisation in HIV-positive and HIV-negative subjects in Cameroon.
Riebold, D; Enoh, D O; Kinge, T N; Akam, W; Bumah, M K; Russow, K; Klammt, S; Loebermann, M; Fritzsche, C; Eyong, J E; Eppel, G; Kundt, G; Hemmer, C J; Reisinger, E C
2014-06-01
To determine the prevalence of Pneumocystis pneumonia (PCP), a major opportunistic infection in AIDS patients in Europe and the USA, in Cameroon. Induced sputum samples from 237 patients without pulmonary symptoms (126 HIV-positive and 111 HIV-negative outpatients) treated at a regional hospital in Cameroon were examined for the prevalence of Pneumocystis jirovecii by specific nested polymerase chain reaction (nPCR) and staining methods. CD4 counts and the history of antiretroviral therapy of the subjects were obtained through the ESOPE database system. Seventy-five of 237 study participants (31.6%) were colonised with Pneumocystis, but none showed active PCP. The Pneumocystis colonisation rate in HIV-positive subjects was more than double that of HIV-negative subjects (42.9% vs. 18.9%, P < 0.001). In the HIV-positive group, the colonisation rate corresponds to the reduction in the CD4 lymphocyte counts. Subjects with CD4 counts >500 cells/μl were colonised at a rate of 20.0%, subjects with CD4 counts between 200 and 500 cells/μl of 42.5%, and subjects with CD4 counts <200 cells/μl of 57.1%. Colonisation with Pneumocystis in Cameroon seems to be comparable to rates found in Western Europe. Prophylactic and therapeutic measures against Pneumocystis should be taken into account in HIV care in western Africa. © 2014 John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsell, Alison Victoria; Swinhoe, Martyn Thomas; Henzl, Vladimir
2014-09-22
Four helium-3 (3He) detector/preamplifier packages (¾”/KM200, DDSI/PDT-A111, DDA/PDT-A111, and DDA/PDT10A) were experimentally tested to determine the deadtime effects at different DT neutron generator output settings. At very high count rates, the ¾”/KM200 package performed best. At high count rates, the ¾”/KM200 and the DDSI/PDT-A111 packages performed very well, with the DDSI/PDT-A111 operating with slightly higher efficiency. All of the packages performed similarly at mid to low count rates. Proposed improvements include using a fast recovery LANL-made dual channel preamplifier, testing smaller diameter 3He tubes, and further investigating quench gases.
Clustering method for counting passengers getting in a bus with single camera
NASA Astrophysics Data System (ADS)
Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying
2010-03-01
Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system are as follows. First, it uses a novel feature-point-tracking- and online-clustering-based passenger counting framework, which performs much better than background-modeling- and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment are captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
Real-Time Counting People in Crowded Areas by Using Local Empirical Templates and Density Ratios
NASA Astrophysics Data System (ADS)
Hung, Dao-Huu; Hsu, Gee-Sern; Chung, Sheng-Luen; Saito, Hideo
In this paper, a fast and automated method of counting pedestrians in crowded areas is proposed along with three contributions. We first propose Local Empirical Templates (LET), which outline the foregrounds typically produced by single pedestrians in a scene. LET are extracted by clustering foregrounds of single pedestrians with similar features in silhouettes. This process is done automatically for unknown scenes. Second, comparing the size of the group foreground produced by a group of pedestrians with that of the appropriate LET captured in the same image patch yields the density ratio. Because of the local scale normalization between sizes, the density ratio appears to have a bound closely related to the number of pedestrians who induce the group foreground. Finally, to extract the bounds of density ratios for groups of different numbers of pedestrians, we propose a simulation based on 3D human models in which camera viewpoints and pedestrians' proximity are easily manipulated. We collect hundreds of typical occluded-people patterns with distinct degrees of human proximity and under a variety of camera viewpoints. Distributions of density ratios with respect to the number of pedestrians are built from the computed density ratios of these patterns in order to extract density ratio bounds. The simulation is performed in the offline learning phase to extract the bounds from the distributions, which are used to count pedestrians in online settings. We find that the bounds appear to be invariant to camera viewpoints and humans' proximity. The performance of our proposed method is evaluated with our collected videos and PETS 2009's datasets. For our collected videos with a resolution of 320×240, our method runs in real time with good accuracy and a frame rate of around 30 fps, and consumes a small amount of computing resources. For PETS 2009's datasets, our proposed method achieves results competitive with other methods tested on the same datasets [1], [2].
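Once the density-ratio bounds have been learned offline, the counting step itself reduces to a table lookup. The sketch below assumes hypothetical bounds and pixel areas; in the method described above, the real bounds come from the 3D-human-model simulation.

    # Hypothetical density-ratio bounds per pedestrian count, e.g. learned offline
    # from simulation (values invented for illustration).
    RATIO_BOUNDS = [(0.0, 1.3, 1), (1.3, 2.4, 2), (2.4, 3.4, 3), (3.4, 4.3, 4)]

    def count_from_density_ratio(group_area_px, let_area_px):
        """Map the group-foreground / local-template area ratio to a pedestrian count."""
        ratio = group_area_px / let_area_px
        for low, high, count in RATIO_BOUNDS:
            if low <= ratio < high:
                return count
        return None  # ratio outside the calibrated range

    print(count_from_density_ratio(group_area_px=5200, let_area_px=2300))  # ratio ~2.26 -> 2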
Single photon detection in a waveguide-coupled Ge-on-Si lateral avalanche photodiode.
Martinez, Nicholas J D; Gehl, Michael; Derose, Christopher T; Starbuck, Andrew L; Pomerene, Andrew T; Lentine, Anthony L; Trotter, Douglas C; Davids, Paul S
2017-07-10
We examine gated-Geiger mode operation of an integrated waveguide-coupled Ge-on-Si lateral avalanche photodiode (APD) and demonstrate single photon detection at low dark count for this mode of operation. Our integrated waveguide-coupled APD is fabricated using a selective epitaxial Ge-on-Si growth process resulting in a separate absorption and charge multiplication (SACM) design compatible with our silicon photonics platform. Single photon detection efficiency and dark count rate are measured as a function of temperature in order to understand and optimize performance characteristics in this device. We report a single photon detection efficiency of 5.27% at 1310 nm and a dark count rate of 534 kHz at 80 K for a Ge-on-Si single photon avalanche diode. This dark count rate is the lowest reported for a Ge-on-Si single photon detector in this range of temperatures, while the detection efficiency remains competitive. A jitter of 105 ps was measured for this device.
Chaffee, Bruce W; Lander, Michael J; Christen, Catherine; Redic, Kimberly A
2018-01-01
Purpose The primary aim was to determine if dispensing of cyclophosphamide tablets resulted in accumulated residue on pharmacy counting tools during a simulated outpatient dispensing process. Secondary objectives included determining if cyclophosphamide contamination exceeded a defined threshold level of 1 ng/cm2 and if a larger number of prescriptions dispensed resulted in increased contamination. Methods Mock prescriptions of 40 cyclophosphamide 50 mg tablets were counted on clean trays in three scenarios using a simulated outpatient pharmacy after assaying five cleaned trays as controls. The three scenarios consisted of five simulated dispensings of one, three, or six prescriptions dispensed per scenario. Wipe samples of trays and spatulas were collected and assayed for all trays, including the five clean trays used as controls. Contamination was defined as an assayed cyclophosphamide level at or above 0.001 ng/cm2, and levels above 1 ng/cm2 were considered sufficient to cause risk of human uptake. Mean contamination for each scenario was calculated and compared using one-way analysis of variance. P-values of < 0.05 implied significance. Results Mean cyclophosphamide contamination on trays used to count one, three, and six cyclophosphamide prescriptions was 0.51 ± 0.10 (p = 0.0003), 1.02 ± 0.10 (p < 0.0001), and 1.82 ± 0.10 ng/cm2 (p < 0.0001), respectively. Control trays did not show detectable cyclophosphamide contamination. Increasing the number of prescriptions dispensed from 1 to 3, 1 to 6, and 3 to 6 counts increased contamination by 0.51 ± 0.15 (p = 0.0140), 1.31 ± 0.15 (p < 0.0001), and 0.80 ± 0.15 ng/cm2 (p = 0.0004), respectively. Conclusion Dispensing one or more prescriptions of 40 cyclophosphamide 50 mg tablets contaminates pharmacy counting tools, and an increased number of prescriptions dispensed correlates with an increased level of contamination. Counting out three or more prescriptions leads to trays having contamination that surpasses the threshold at which worker exposure may be increased. Pharmacies should consider devoting a separate tray to cyclophosphamide tablets, as cross-contamination could occur with other drugs and the efficacy of decontamination methods is unclear. Employee exposure could be minimized with the use of personal protective equipment, environmental controls, and cleaning trays between uses. Future investigation should assess the extent of drug powder dispersion, the effects of various cleaning methods, and the potential extent of contamination with different oral cytotoxic drugs.
New methods to detect particle velocity and mass flux in arc-heated ablation/erosion facilities
NASA Technical Reports Server (NTRS)
Brayton, D. B.; Bomar, B. W.; Seibel, B. L.; Elrod, P. D.
1980-01-01
Arc-heated flow facilities with injected particles are used to simulate the erosive and ablative/erosive environments encountered by spacecraft re-entry through fog, clouds, thermo-nuclear explosions, etc. Two newly developed particle diagnostic techniques used to calibrate these facilities are discussed. One technique measures particle velocity and is based on the detection of thermal radiation and/or chemiluminescence from the hot seed particles in a model ablation/erosion facility. The second technique measures a local particle rate, which is proportional to local particle mass flux, in a dust erosion facility by photodetecting and counting the interruptions of a focused laser beam by individual particles.
Spatially-Aware Temporal Anomaly Mapping of Gamma Spectra
NASA Astrophysics Data System (ADS)
Reinhart, Alex; Athey, Alex; Biegalski, Steven
2014-06-01
For security, environmental, and regulatory purposes it is useful to continuously monitor wide areas for unexpected changes in radioactivity. We report on a temporal anomaly detection algorithm which uses mobile detectors to build a spatial map of background spectra, allowing sensitive detection of any anomalies through many days or months of monitoring. We adapt previously-developed anomaly detection methods, which compare spectral shape rather than count rate, to function with limited background data, allowing sensitive detection of small changes in spectral shape from day to day. To demonstrate this technique we collected daily observations over the period of six weeks on a 0.33 square mile research campus and performed source injection simulations.
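As a rough illustration of comparing spectral shape rather than count rate, the sketch below rescales a background spectrum from the spatial map to the total counts of a new observation and computes a Pearson chi-square score. The channel values are invented, and this is a simplified stand-in for, not a reproduction of, the authors' anomaly detection algorithm.

    import numpy as np

    def shape_anomaly_score(observed, background):
        """Chi-square between an observed spectrum and a background spectrum rescaled
        to the same total counts, so only the spectral shape is compared."""
        observed = np.asarray(observed, dtype=float)
        expected = np.asarray(background, dtype=float)
        expected = expected * observed.sum() / expected.sum()
        mask = expected > 0
        return float(((observed[mask] - expected[mask]) ** 2 / expected[mask]).sum())

    background = np.array([500, 400, 300, 200, 100, 50], dtype=float)  # map entry for this location
    normal = np.array([260, 190, 160, 95, 52, 27])                     # same shape, lower count rate
    anomalous = np.array([250, 190, 150, 200, 50, 25])                 # excess in one energy channel

    print(shape_anomaly_score(normal, background))      # small score
    print(shape_anomaly_score(anomalous, background))   # large score

Because the comparison is made against the location-matched background spectrum accumulated over previous passes, a source that changes the shape of the spectrum can be flagged even when the overall count rate barely changes.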
Cazzaniga, C; Sundén, E Andersson; Binda, F; Croci, G; Ericsson, G; Giacomelli, L; Gorini, G; Griesmayer, E; Grosso, G; Kaveney, G; Nocente, M; Perelli Cippo, E; Rebai, M; Syme, B; Tardocchi, M
2014-04-01
First simultaneous measurements of deuterium-deuterium (DD) and deuterium-tritium neutrons from deuterium plasmas using a Single crystal Diamond Detector are presented in this paper. The measurements were performed at JET with a dedicated electronic chain that combined high count rate capabilities and high energy resolution. The deposited energy spectrum from DD neutrons was successfully reproduced by means of Monte Carlo calculations of the detector response function and simulations of neutron emission from the plasma, including background contributions. The reported results are of relevance for the development of compact neutron detectors with spectroscopy capabilities for installation in camera systems of present and future high power fusion experiments.
Low photon-count tip-tilt sensor
NASA Astrophysics Data System (ADS)
Saathof, Rudolf; Schitter, Georg
2016-07-01
Due to the low photon count from dark areas of the universe, the signal strength of a tip-tilt sensor is low, limiting the sky coverage of reliable tip-tilt measurements. This paper presents the low photon-count tip-tilt (LPC-TT) sensor, which potentially achieves improved signal strength. Its optical design spatially samples and integrates the scene. This increases the probability that several individual sources coincide on a detector segment. Laboratory experiments show the feasibility of spatial sampling and integration and the ability to measure tilt angles. Simulations show an improvement in SNR of 10 dB compared to conventional tip-tilt sensors.
Adaptation of gastrointestinal nematode parasites to host genotype: single locus simulation models
2013-01-01
Background Breeding livestock for improved resistance to disease is an increasingly important selection goal. However, the risk of pathogens adapting to livestock bred for improved disease resistance is difficult to quantify. Here, we explore the possibility of gastrointestinal worms adapting to sheep bred for low faecal worm egg count using computer simulation. Our model assumes sheep and worm genotypes interact at a single locus, such that the effect of an A allele in sheep is dependent on worm genotype, and the B allele in worms is favourable for parasitizing the A allele sheep but may increase mortality on pasture. We describe the requirements for adaptation and test if worm adaptation (1) is slowed by non-genetic features of worm infections and (2) can occur with little observable change in faecal worm egg count. Results Adaptation in worms was found to be primarily influenced by overall worm fitness, viz. the balance between the advantage of the B allele during the parasitic stage in sheep and its disadvantage on pasture. Genetic variation at the interacting locus in worms could be from de novo or segregating mutations, but de novo mutations are rare and segregating mutations are likely constrained to have (near) neutral effects on worm fitness. Most other aspects of the worm infection we modelled did not affect the outcomes. However, the host-controlled mechanism to reduce faecal worm egg count by lowering worm fecundity reduced the selection pressure on worms to adapt compared to other mechanisms, such as increasing worm mortality. Temporal changes in worm egg count were unreliable for detecting adaptation, despite the steady environment assumed in the simulations. Conclusions Adaptation of worms to sheep selected for low faecal worm egg count requires an allele segregating in worms that is favourable in animals with improved resistance but less favourable in other animals. Obtaining alleles with this specific property seems unlikely. With support from experimental data, we conclude that selection for low faecal worm egg count should be stable over a short time frame (e.g. 20 years). We are further exploring model outcomes with multiple loci and comparing outcomes to other control strategies. PMID:23714384
Low cost digital electronics for isotope analysis with microcalorimeters - final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Hennig
2006-09-11
The overall goal of the Phase I research was to demonstrate that the digital readout electronics and filter algorithms developed by XIA for use with HPGe detectors can be adapted to high precision, cryogenic gamma detectors (microcalorimeters) and not only match the current state of the art in terms of energy resolution, but do so at a significantly reduced cost. This would make it economically feasible to instrument large arrays of microcalorimeters and would also allow automation of the setup, calibration and operation of large numbers of channels through software. We expected, and have demonstrated, that this approach would further allow much higher count rates than the optimum filter algorithms currently used. In particular, in measurements with a microcalorimeter at LLNL, the adapted Pixie-16 spectrometer achieved an energy resolution of 0.062%, significantly better than the targeted resolution of 0.1% in the Phase I proposal and easily matching resolutions obtained with LLNL readout electronics and optimum filtering (0.066%). The theoretical maximum output count rate for the filter settings used to achieve this resolution is about 120cps. If the filter is adjusted for maximum throughput with an energy resolution of 0.1% or better, rates of 260cps are possible. This is 20-50 times higher than the maximum count rates of about 5cps with optimum filters for this detector. While microcalorimeter measurements were limited to count rates of ~1.3cps due to the strength of available sources, pulser measurements demonstrated that measured energy resolutions were independent of counting rate up to output counting rates well in excess of 200cps. We also developed a preliminary hardware design of a spectrometer module, consisting of a digital processing core and several input options that can be implemented on daughter boards. Depending upon the daughter board, the total parts cost per channel ranged between $12 and $27, resulting in projected product prices of $80 to $160 per channel. This demonstrates that a price of $100 per channel is economically very feasible for large microcalorimeter arrays.
A COMPARISON OF GALAXY COUNTING TECHNIQUES IN SPECTROSCOPICALLY UNDERSAMPLED REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specian, Mike A.; Szalay, Alex S., E-mail: mspecia1@jhu.edu, E-mail: szalay@jhu.edu
2016-11-01
Accurate measures of galactic overdensities are invaluable for precision cosmology. Obtaining these measurements is complicated when members of one’s galaxy sample lack radial depths, most commonly derived via spectroscopic redshifts. In this paper, we utilize the Sloan Digital Sky Survey’s Main Galaxy Sample to compare seven methods of counting galaxies in cells when many of those galaxies lack redshifts. These methods fall into three categories: assigning galaxies discrete redshifts, scaling the numbers counted using regions’ spectroscopic completeness properties, and employing probabilistic techniques. We split spectroscopically undersampled regions into three types—those inside the spectroscopic footprint, those outside but adjacent to it, and those distant from it. Through Monte Carlo simulations, we demonstrate that the preferred counting techniques are a function of region type, cell size, and redshift. We conclude by reporting optimal counting strategies under a variety of conditions.
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Tsuji, Brian T; Harigaya, Yoriko; Lesse, Alan J; Forrest, Alan; Ngo, Dung
2013-01-01
AFN-1252, a potent enoyl-ACP reductase (FabI) inhibitor, is under development for the treatment of Staphylococcus aureus infections. The activity of AFN-1252 against two isolates of S. aureus, MSSA 26213 and MRSA S186, was studied in an in vitro pharmacodynamic model simulating AFN-1252 pharmacokinetics in man. Reductions in bacterial viable count over the first 6 hours were generally 1–2 logs and maximal reductions in viable count were generally achieved at fAUC/MIC ratios of 100–200. Maximum reductions in viable count against MSSA 29213 and MRSA S186 were approximately 4 logs, achieved by 450 mg q12h (fAUC/MIC = 1875) dosing at 28 hours. Staphylococcal resistance to AFN-1252 did not develop throughout the 48-hour experiments. As multidrug resistance continues to increase, these studies support the continued investigation of AFN-1252 as a targeted therapeutic for staphylococcal infections. PMID:23433442
SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W; Kappadath, S
Purpose: To compare projection-based versus global corrections that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3GBq 99mTc) with ∼10% deadtime loss containing the 37mm (uptake 3), 28 and 22mm (uptake 6) spheres were acquired using a 2 detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated on the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses in 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods showed agreement within the statistical noise. The count-loss recoveries in the two methods also agree within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT deadtime loss correction while keeping the study duration reasonable.
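A schematic version of the two corrections compared in this work is sketched below. The fractional losses, projection data, and the exact form of the global scale factor (written here as 1/(1 - mean loss)) are illustrative assumptions, not the values or implementation used in the study.

    import numpy as np

    def projection_based_correction(projections, loss_fractions):
        """Scale each projection by 1 / (1 - its own fractional deadtime loss)."""
        return [p / (1.0 - f) for p, f in zip(projections, loss_fractions)]

    def global_correction_factor(sampled_loss_fractions):
        """Single scale factor from a sparse sample of projections (e.g. 5 per detector),
        applied to the reconstructed image instead of to each projection."""
        mean_loss = float(np.mean(sampled_loss_fractions))
        return 1.0 / (1.0 - mean_loss)

    # Toy example: 8 projections with roughly 10% losses.
    rng = np.random.default_rng(2)
    projections = [rng.poisson(1000, size=(4, 4)) for _ in range(8)]
    losses = rng.uniform(0.08, 0.12, size=8)

    corrected = projection_based_correction(projections, losses)
    print("global correction factor:", global_correction_factor(losses[:5]))

When the losses vary only mildly from projection to projection, as in the ~10% clinical case above, the two approaches recover nearly the same total counts, which is consistent with the agreement reported in the Results.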
Garrison, M W; Anderson, D E; Campbell, D M; Carroll, K C; Malone, C L; Anderson, J D; Hollis, R J; Pfaller, M A
1996-01-01
Emergence of Stenotrophomonas maltophilia as a nosocomial pathogen is becoming increasingly apparent. Pleiotropic resistance characterizes S. maltophilia. Furthermore, a slow growth rate and an increased mutation rate generate discordance between in vitro susceptibility testing and clinical outcome. Despite initial susceptibility, drug-resistant strains of S. maltophilia are often recovered from patients receiving beta-lactams, quinolones, or aminoglycosides. Given the disparity among various in vitro susceptibility methods, this study incorporated a unique pharmacodynamic model to more accurately characterize the bacterial time-kill curves and mutation rates of four clinical isolates of S. maltophilia following exposure to simulated multidose regimens of ceftazidime, ciprofloxacin, gentamicin, and ticarcillin-clavulanate. Time-kill data demonstrated regrowth of S. maltophilia with all four agents. With the exception of ticarcillin-clavulanate, viable bacterial counts at the end of 24 h exceeded the starting inoculum. Ciprofloxacin reduced bacterial counts by less than 1.0 log before rapid bacterial regrowth. Resistant mutant strains, identical to their parent strain by pulsed-field gel electrophoresis, were observed following exposure to each class of antibiotic. Mutant strains also had distinct susceptibility patterns. These data are consistent with previous reports suggesting that S. maltophilia, despite susceptibility data implying that the organism is sensitive, quickly develops multiple forms of resistance against several classes of antimicrobial agents. Standard in vitro susceptibility methods are not completely reliable for detecting resistant S. maltophilia strains, and their results should therefore be interpreted with caution. In vivo studies are needed to determine optimal therapy against S. maltophilia infections. PMID:9124855
Upton, Richard G.
1978-01-01
A digital scale converter is provided for binary coded decimal (BCD) conversion. The converter may be programmed to convert a BCD value on a first scale to the equivalent value on a second scale according to a known ratio. The value to be converted is loaded into a first BCD counter and counted down to zero while a second BCD counter registers counts from zero or from an offset value, depending upon the conversion. Programmable rate multipliers are used to generate pulses to the counters at rates selected for the proper conversion ratio. The value present in the second counter at the moment the first counter reaches zero is the equivalent value on the second scale. This value may be read out and displayed on a conventional seven-segment digital display.
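A software analogue of the counter-based conversion scheme may make the sequence of operations easier to follow; the function below is a hypothetical sketch in which an accumulator stands in for the programmable rate multiplier, and the pounds-to-kilograms ratio is purely illustrative.

```python
def convert_scale(value, num, den, offset=0):
    """Software analogue of the counter-based scale converter described above.

    The value to be converted is loaded into a down-counter; while it is
    counted down to zero, a second counter accumulates pulses at the
    programmed rate num/den (an accumulator plays the role of the
    programmable rate multiplier).  The content of the second counter when
    the first reaches zero is the converted value.
    """
    counter1 = value
    counter2 = offset          # second counter may start at an offset
    acc = 0                    # rate-multiplier accumulator
    while counter1 > 0:
        counter1 -= 1          # one count-down pulse to the first counter
        acc += num             # rate multiplier advances by num/den per pulse
        while acc >= den:
            acc -= den
            counter2 += 1      # pulse delivered to the second counter
    return counter2

# Example: pounds -> whole kilograms with ratio 454/1000 (illustrative only)
print(convert_scale(2000, 454, 1000))   # -> 908
```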
Integrating count and detection–nondetection data to model population dynamics
Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell
2017-01-01
There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.
Gauran, Iris Ivy M; Park, Junyong; Lim, Johan; Park, DoHwan; Zylstra, John; Peterson, Thomas; Kann, Maricel; Spouge, John L
2017-09-22
In recent mutation studies, analyses based on protein domain positions are gaining popularity over gene-centric approaches since the latter have limitations in considering the functional context that the position of the mutation provides. This presents a large-scale simultaneous inference problem, with hundreds of hypothesis tests to consider at the same time. This article aims to select significant mutation counts while controlling a given level of Type I error via False Discovery Rate (FDR) procedures. One main assumption is that the mutation counts follow a zero-inflated model in order to account for both the true zeros in the count model and the excess zeros. The class of models considered is the Zero-inflated Generalized Poisson (ZIGP) distribution. Furthermore, we assume that there exists a cut-off value such that counts smaller than this value are generated from the null distribution. We present several data-dependent methods to determine the cut-off value. We also consider a two-stage procedure based on a screening process, whereby mutation counts exceeding a certain value are considered significant. Simulated and protein domain data sets are used to illustrate this procedure, with the empirical null estimated using a mixture of discrete distributions. Overall, while maintaining control of the FDR, the proposed two-stage testing procedure has superior empirical power. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of the International Biometric Society. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
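The sketch below illustrates the general flavor of the two-stage idea on synthetic counts, substituting an ordinary zero-inflated Poisson for the ZIGP null for brevity and using Benjamini-Hochberg for the FDR step; all numbers, cut-offs, and function names are illustrative assumptions rather than the authors' procedure.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Hypothetical mutation counts per domain position: mostly null (zero-inflated
# Poisson) plus a few hot spots with elevated counts.
null = np.where(rng.uniform(size=950) < 0.6, 0, rng.poisson(1.0, 950))
signal = rng.poisson(8.0, 50)
counts = np.concatenate([null, signal])

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson (a simpler stand-in
    for the ZIGP used in the paper)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))    # zero-inflation probability
    lam = np.exp(params[1])
    p0 = pi + (1 - pi) * np.exp(-lam)
    ll = np.where(y == 0, np.log(p0),
                  np.log(1 - pi) + stats.poisson.logpmf(y, lam))
    return -ll.sum()

# Stage 1 (screening): fit the empirical null using only counts below a cut-off
cutoff = 5
fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(counts[counts < cutoff],))
pi_hat = 1.0 / (1.0 + np.exp(-fit.x[0]))
lam_hat = np.exp(fit.x[1])

# Stage 2: upper-tail p-values under the fitted null, then Benjamini-Hochberg at 5% FDR
pvals = np.where(counts == 0, 1.0,
                 (1 - pi_hat) * stats.poisson.sf(counts - 1, lam_hat))
order = np.argsort(pvals)
m = len(pvals)
passed = pvals[order] <= 0.05 * np.arange(1, m + 1) / m
n_disc = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
print("discoveries:", n_disc, "(true signals simulated: 50)")
```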
Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials
Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda
2013-01-01
Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
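A resampling-style power calculation of the kind described can be sketched as follows; the relapse rates, overdispersion value, and the nonparametric test used here are illustrative placeholders, not the fitted cohort parameters or the published analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_nb(n_per_arm, rate_ctrl, effect, theta, years=2.0, n_sim=2000, alpha=0.05):
    """Empirical power for a two-arm trial with negative binomial relapse counts.

    rate_ctrl : annualized relapse rate in the control arm
    effect    : fractional reduction in the treated arm (e.g. 0.5 = 50%)
    theta     : overdispersion parameter (var = mu + mu^2 / theta)
    The test used here is a simple Mann-Whitney U on per-patient counts,
    chosen for robustness; the published analysis may differ.
    """
    hits = 0
    for _ in range(n_sim):
        mu_c = rate_ctrl * years
        mu_t = rate_ctrl * (1.0 - effect) * years
        # numpy's negative_binomial uses (n, p); convert from (mu, theta)
        y_c = rng.negative_binomial(theta, theta / (theta + mu_c), n_per_arm)
        y_t = rng.negative_binomial(theta, theta / (theta + mu_t), n_per_arm)
        if stats.mannwhitneyu(y_c, y_t, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_sim

# Illustrative values only (not the fitted cohort parameters)
for n in (45, 70, 105):
    print(n, "patients/arm -> power",
          round(power_nb(n, rate_ctrl=0.6, effect=0.5, theta=1.0), 2))
```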
Characterizations of double pulsing in neutron multiplicity and coincidence counting systems
Koehler, Katrina E.; Henzl, Vladimir; Croft, Stephen; ...
2016-06-29
Passive neutron coincidence/multiplicity counters are subject to non-ideal behavior, such as double pulsing and dead time. It has been shown in the past that double pulsing exhibits a distinct signature in a Rossi-alpha distribution, which is not readily noticed using traditional Multiplicity Shift Register analysis. However, it has been assumed that the use of a pre-delay in shift register analysis removes any effects of double pulsing. Here, we use high-fidelity simulations accompanied by experimental measurements to study the effects of double pulsing on multiplicity rates. By exploiting the information from the double pulsing signature peak observable in the Rossi-alpha distribution, the double pulsing fraction can be determined. Algebraic correction factors for the multiplicity rates in terms of the double pulsing fraction have been developed. We also discuss the role of these corrections across a range of scenarios.
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
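One convenient way to generate interspike intervals obeying a variance-to-mean power law is a gamma renewal process whose shape parameter is chosen per firing rate, as in the hedged sketch below; this illustrates the relationship itself and is not necessarily the authors' model, and the scale factor and exponent are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
phi, alpha = 0.5, 1.5          # scale factor and exponent of var = phi * mean**alpha

def isi_sample(mean_isi, n):
    """Gamma-distributed ISIs with Var = phi * mean**alpha (one convenient choice)."""
    var = phi * mean_isi ** alpha
    shape = mean_isi ** 2 / var            # gamma: var = mean^2 / shape
    scale = mean_isi / shape
    return rng.gamma(shape, scale, size=n)

for mean_isi in (0.02, 0.05, 0.2):         # seconds
    isis = isi_sample(mean_isi, 200_000)
    print(f"mean={isis.mean():.4f}  empirical var={isis.var():.6f}  "
          f"target var={phi * mean_isi**alpha:.6f}")
```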
Association of Mycoplasma pneumoniae infection with increased risk of asthma in children.
Yin, Sha-Sha; Ma, Feng-Lian; Gao, Xing
2017-05-01
The present study was conducted to investigate the relationship between Mycoplasma pneumoniae (MP) infection and the risk of asthma among children by measuring the MP immunoglobulin M (MP-IgM)-positive rate and the eosinophil (EOS) count. A total of 139 asthmatic children were enrolled as the case group and assigned into three groups: Group A (aged <3 years, n=42), group B (aged 3-8 years, n=45) and group C (aged >8 years, n=52). Additionally, 115 healthy children were enrolled in the control group. Enzyme-linked immunosorbent assay was used to measure the MP-IgM-positive rate. The EOS count was determined in the case and control groups using a hemocytometer analyzer. A meta-analysis was performed using the Comprehensive Meta-Analysis version 2.0 software. The MP-IgM-positive rate and the EOS count in the case group were significantly higher than those in the control group (both P<0.001). Furthermore, the asthmatic children in group C had a higher MP-IgM-positive rate and EOS count than those in groups A and B, respectively (all P<0.05). Differences between groups A and B were not statistically significant (all P>0.05). The meta-analysis further confirmed that asthmatic children had a higher MP-IgM-positive rate compared with the healthy controls (P<0.001). Age-stratified analysis revealed that the MP-IgM-positive rate in asthmatic children aged ≥8 and <8 years was significantly higher than that in the healthy controls (P=0.003 and P<0.001, respectively). Asthmatic children had a higher MP-IgM-positive rate and EOS count compared with controls, suggesting that MP infection may be closely associated with the risk of asthma. Additionally, the MP-IgM-positive rate may serve as an important biological marker for predicting the development of asthma.
Fast maximum likelihood estimation of mutation rates using a birth-death process.
Wu, Xiaowei; Zhu, Hongxiao
2015-02-07
Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inference about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
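For context, the conventional likelihood that the abstract describes as slow can be written with the Ma-Sandri-Sarkar recursion for the Luria-Delbrück mutant-count distribution; the sketch below (not the authors' MLE-BD) shows why the cost grows with the largest observed mutant count, and the mutant counts used are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """Luria-Delbrück mutant-count pmf via the Ma-Sandri-Sarkar recursion
    (the conventional, recursive likelihood referred to in the abstract)."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        j = np.arange(n)
        p[n] = (m / n) * np.sum(p[j] / (n - j + 1))
    return p

def mle_m(counts):
    """Maximum likelihood estimate of the expected number of mutations m."""
    n_max = int(max(counts))
    def neg_loglik(m):
        p = ld_pmf(m, n_max)
        return -np.sum(np.log(p[np.asarray(counts)] + 1e-300))
    return minimize_scalar(neg_loglik, bounds=(1e-3, 50), method="bounded").x

# Illustrative mutant counts from parallel cultures (hypothetical data)
counts = [0, 1, 0, 3, 2, 0, 0, 7, 1, 0, 25, 2, 0, 1, 4]
print("MLE of m:", round(mle_m(counts), 3))
```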
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive detection. Compared with commonly adopted detection methods based on statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were generated with an event-sequence generator based on Monte Carlo sampling, in order to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance of this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Signal to noise ratio of energy selective x-ray photon counting systems with pileup.
Alvarez, Robert E
2014-11-01
To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, "Near optimal energy selective x-ray imaging system performance with simple detectors," Med. Phys. 37, 822-841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramér-Rao lower bound (CRLB) for larger counts is tested. The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number of photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated.
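A minimal Monte Carlo in the spirit of the idealized model (nonparalyzable response, delta-function pulses, energies of photons arriving within the dead time summed into the recorded pulse) is sketched below; the rate, dead time, and flat input spectrum are arbitrary illustration values, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def record_with_pileup(arrival_times, energies, tau):
    """Nonparalyzable counting with delta-function pulses: photons arriving
    within tau of the last *recorded* event are lost and their energies are
    added to that recorded pulse (idealized pileup model)."""
    rec_times, rec_energies = [], []
    t_last = -np.inf
    for t, e in zip(arrival_times, energies):
        if t - t_last >= tau:
            rec_times.append(t)
            rec_energies.append(e)
            t_last = t
        else:
            rec_energies[-1] += e          # piled-up energy
    return np.array(rec_times), np.array(rec_energies)

true_rate, tau, T = 1e6, 5e-7, 1.0           # photons/s, dead time (s), exposure (s)
n_true = rng.poisson(true_rate * T)
times = np.sort(rng.uniform(0, T, n_true))
energies = rng.uniform(20, 70, n_true)       # keV, flat spectrum just for illustration

rec_t, rec_e = record_with_pileup(times, energies, tau)
print("eta = n*tau:", true_rate * tau)
print("recorded rate (MC):       %.3e" % (len(rec_t) / T))
print("recorded rate (analytic): %.3e" % (true_rate / (1 + true_rate * tau)))
```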
James F. Lynch
1995-01-01
Effects of count duration, time-of-day, and aural stimuli were studied in a series of unlimited-radius point counts conducted during winter in Quintana Roo, Mexico. The rate at which new species were detected was approximately three times higher during the first 5 minutes of each 15-minute count than in the final 5 minutes. The number of individuals and species...
Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates
NASA Astrophysics Data System (ADS)
Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael
2018-03-01
Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, J; Slaughter, D; Norman, E
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that the background count rate could vary substantially, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
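For comparison with the non-iterative successive-integration approach proposed in the paper, the sketch below fits the same kind of bi-exponential pulse model by ordinary iterative least squares; the model form, sampling grid, and parameter values are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a, tau_rise, tau_decay):
    """Bi-exponential pulse model (pulse assumed to start at t = 0)."""
    return a * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

rng = np.random.default_rng(4)
t = np.arange(0.0, 2000.0, 10.0)                      # ns, assumed sampling grid
truth = (100.0, 30.0, 400.0)                          # a, tau_rise, tau_decay (illustrative)
y = biexp(t, *truth) + rng.normal(0.0, 2.0, t.size)   # noisy single-event pulse

popt, _ = curve_fit(biexp, t, y, p0=(80.0, 20.0, 300.0))
print("fitted a, tau_rise, tau_decay:", np.round(popt, 1))
```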
Fast radio burst event rate counts - I. Interpreting the observations
NASA Astrophysics Data System (ADS)
Macquart, J.-P.; Ekers, R. D.
2018-02-01
The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
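The maximum-likelihood step for a power-law source-count index above a completeness fluence has a simple closed form, sketched below under the assumption of a pure power law N(>F) ∝ F^α truncated at F_min; the fluences generated here are illustrative, not the Parkes sample.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_powerlaw(alpha, f_min, n):
    """Draw fluences whose cumulative distribution is N(>F) ∝ F**alpha (alpha < 0)."""
    u = rng.uniform(size=n)
    return f_min * u ** (1.0 / alpha)

def mle_alpha(fluences, f_min):
    """ML estimate of the cumulative source-count index above completeness f_min:
    alpha_hat = -N / sum(ln(F_i / f_min)), with ~1-sigma error |alpha_hat| / sqrt(N)."""
    f = np.asarray(fluences)
    f = f[f >= f_min]
    alpha_hat = -len(f) / np.sum(np.log(f / f_min))
    return alpha_hat, abs(alpha_hat) / np.sqrt(len(f))

# Illustrative: a handful of events above a 2 Jy ms completeness fluence
fluences = sample_powerlaw(alpha=-2.6, f_min=2.0, n=12)
a, da = mle_alpha(fluences, f_min=2.0)
print(f"alpha = {a:.2f} +/- {da:.2f}")
```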
NASA Technical Reports Server (NTRS)
Vallerga, J. V.; Vanderspek, R. K.; Ricker, G. R.
1983-01-01
To establish the expected sensitivity of a new hard X-ray telescope design, described by Ricker et al., an experiment was conducted to measure the background counting rate at balloon altitudes (40 km) of mercuric iodide, a room temperature solid state X-ray detector. The prototype detector consisted of two thin mercuric iodide (HgI2) detectors surrounded by a large bismuth germanate scintillator operated in anticoincidence. The bismuth germanate shield vetoed most of the background counting rate induced by atmospheric gamma-rays, neutrons and cosmic rays. A balloon-borne gondola containing a prototype detector assembly was designed, constructed and flown twice in the spring of 1982 from Palestine, TX. The second flight of this instrument established a differential background counting rate of (4.2 ± 0.7) × 10^-5 counts s^-1 cm^-2 keV^-1 over the energy range of 40-80 keV. This measurement was within 50 percent of the predicted value. The measured rate is about 5 times lower than previously achieved in shielded NaI/CsI or Ge systems operating in the same energy range.
The Money/Counting Kit. The Prospectus Series, Paper No. 6.
ERIC Educational Resources Information Center
Musumeci, Judith
The Money/Counting Kit for Handicapped Children and Youth frees the teacher from lessons in money and counting concepts and enables a student to learn at his own rate with immediate feedback from activity cards, name cards, thermoformed coin cards (optional), and self-instructional booklets. The activity cards, which may be used individually or…
Correcting for particle counting bias error in turbulent flow
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Baratuci, W.
1985-01-01
An ideal seeding device would generate particles that exactly follow the flow, but even then a major source of error remains: a particle counting bias wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation of laser anemometer measurements in a turbulent flow was constructed. That simulator and the results of its use are discussed.
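A minimal version of such a simulation, assuming Gaussian velocity fluctuations and a detection probability proportional to |velocity|, is sketched below; the inverse-velocity weighting shown is one of the classical proposed corrections (McLaughlin-Tiederman style), not an endorsement of any particular scheme, and the flow parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

u_mean, u_rms, n = 10.0, 3.0, 200_000        # mean flow and turbulence level (assumed)
u = rng.normal(u_mean, u_rms, n)             # fluid velocity sampled at random times

# Counting bias: the chance that a particle transits the probe volume during a
# short interval is proportional to |u|, so fast realizations are over-sampled.
p_detect = np.abs(u) / np.abs(u).max()
measured = u[rng.uniform(size=n) < p_detect]

naive_mean = measured.mean()
# One classical correction (McLaughlin-Tiederman style): weight each sample by
# 1/|u| to undo the velocity-proportional sampling rate.
w = 1.0 / np.abs(measured)
corrected_mean = np.sum(w * measured) / np.sum(w)

print(f"true mean {u_mean:.2f}  biased {naive_mean:.2f}  corrected {corrected_mean:.2f}")
```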
Achieving a high mode count in the exact electromagnetic simulation of diffractive optical elements.
Junker, André; Brenner, Karl-Heinz
2018-03-01
The application of rigorous optical simulation algorithms, both in the modal and in the time domain, is known to be limited to the nano-optical scale due to severe computing time and memory constraints. This is true even for today's high-performance computers. To address this problem, we develop the fast rigorous iterative method (FRIM), an algorithm based on an iterative approach which, under certain conditions, allows even large-size problems to be solved approximation-free. We achieve this in the case of a modal representation by avoiding the computationally complex eigenmode decomposition. Thereby, the numerical cost is reduced from O(N^3) to O(N log N), enabling the simulation of structures such as certain diffractive optical elements with a significantly higher mode count than presently possible. Apart from speed, another major advantage of the iterative FRIM over standard modal methods is the possibility to trade runtime against accuracy.