Sample records for background corrected count

  1. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique also greatly reduced or eliminated the background artifacts and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise, which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
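    The record above describes removing low-rank background structure from two-dimensional detector data. As a rough illustration of the underlying idea only (not the authors' implementation, which selects background components from blank or low-signal regions), the sketch below truncates the leading singular component of a synthetic time-by-wavelength matrix; the single-component assumption and toy data are purely illustrative.

    ```python
    import numpy as np

    def svd_background_correction(data, n_background=1):
        """Zero the leading singular component(s) of a 2D data matrix
        (e.g., retention time x wavelength) assumed to carry background drift.
        Illustrative sketch only, not the published SVD-BC algorithm."""
        U, s, Vt = np.linalg.svd(data, full_matrices=False)
        s = s.copy()
        s[:n_background] = 0.0          # drop the assumed background component(s)
        return U @ np.diag(s) @ Vt

    # Toy chromatogram: rank-1 drift plus one Gaussian peak
    t = np.linspace(0.0, 1.0, 200)
    drift = np.outer(0.5 + 0.3 * t, np.ones(50))
    peak = np.outer(np.exp(-(t - 0.5) ** 2 / 0.002), np.linspace(1.0, 0.2, 50))
    corrected = svd_background_correction(drift + peak, n_background=1)
    ```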

  2. Compton suppression gamma-counting: The effect of count rate

    USGS Publications Warehouse

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhanced the signal-to-background ratios for gamma photopeaks which are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or the variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates. © 1984.

  3. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; the first-order effects cancel in averaging, but the second-order effects do not. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  4. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter and attenuation correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath, for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in air under scatter- and attenuation-free conditions, which provided ground-truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) on the two projection images was calculated as T = √(C1·C2·e^(μt))·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four definitions of F were investigated: standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM". Error in T was calculated as the percentage difference with respect to the in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size, and the sensitivity of quantitative accuracy to ROI size was investigated. We also developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to within 10% of in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter correction and our volumetric GM attenuation correction yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
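    The conjugate-view geometric-mean correction quoted in this record reduces to a one-line formula. The sketch below simply evaluates it; the counts, attenuation coefficient, and background factor are assumed example values, not numbers from the study.

    ```python
    import math

    def gm_true_counts(c1, c2, mu_water_cm, detector_sep_cm, f_background=1.0):
        """Conjugate-view (geometric-mean) attenuation correction:
        T = sqrt(C1 * C2 * exp(mu * t)) * F
        c1, c2          : ROI counts on the two opposing detectors
        mu_water_cm     : linear attenuation coefficient of water (1/cm)
        detector_sep_cm : detector separation t (cm)
        f_background    : background-activity factor F (method-dependent)"""
        return math.sqrt(c1 * c2 * math.exp(mu_water_cm * detector_sep_cm)) * f_background

    # Illustrative numbers only (not from the paper)
    print(gm_true_counts(c1=5200, c2=4800, mu_water_cm=0.153, detector_sep_cm=7.0))
    ```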

  5. General relativistic corrections in density-shear correlations

    NASA Astrophysics Data System (ADS)

    Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena

    2018-06-01

    We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that, already in the redshift range covered by the DES first-year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.

  6. Electrets used in measuring rocket exhaust effluents from the space shuttle's solid rocket booster during static test firing, DM-3

    NASA Technical Reports Server (NTRS)

    Susko, M.

    1979-01-01

    The purpose of this experimental research was to compare Marshall Space Flight Center's electrets with Thiokol's fixed-flow air samplers during the Space Shuttle Solid Rocket Booster Demonstration Model-3 static test firing on October 19, 1978. The measurement of rocket exhaust effluents by Thiokol's samplers and MSFC's electrets indicated that the firing of the Solid Rocket Booster had no significant effect on the quality of the air sampled. The highest measurement by Thiokol's samplers was obtained at Plant 3 (site 11), approximately 8 km at a 113-degree heading from the static test stand. At sites 11, 12, and 5, Thiokol's fixed-flow air samplers measured 0.0048, 0.00016, and 0.00012 mg/m3 of Cl. Alongside the fixed-flow measurements, the electret counts from X-ray spectroscopy were 685, 894, and 719 counts. After background corrections, the counts were 334, 543, and 368, for an average of 415 counts. An additional electret, E20, the only measurement device at a site approximately 20 km northeast of the test site where no power was available, recorded 901 counts; after background correction, the count was 550. Again, these data indicate there was no measurement of significant rocket exhaust effluents at the test site.

  7. Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H.

    2015-12-01

    This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: 240Pu eff mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.

  8. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha-emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimates are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.

  9. Aging and Visual Counting

    PubMed Central

    Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.

    2010-01-01

    Background: Much previous work on how normal aging affects visual enumeration has focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a "single glance", without the confounding influence of eye movements. Methodology/Principal Findings: We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which performance reaches ≈63% correct on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest group (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, the average counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, reflecting that the threshold loss can be primarily attributed to increased random errors. We further expanded this finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance: Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149

  10. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high-purity germanium (HPGe) detector equipped with a low background shielding system, and has been evaluated numerically using summing peaks. It is found that the FEP efficiencies of 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factor (TSCF) corrections into account. Counts of summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve within an accuracy of 3%.

  11. WDR-PK-AK-018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollister, R

    2009-08-26

    Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.

  12. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  13. A method for the in vivo measurement of americium-241 at long times post-exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neton, J.W.

    1988-01-01

    This study investigated an improved method for the quantitative measurement, calibration and calculation of 241Am organ burdens in humans. The techniques developed correct for cross-talk, or count-rate contributions from surrounding and adjacent organ burdens, and assure the proper assignment of activity to the lungs, liver and skeleton. In order to predict the net count-rates for the measurement geometries of the skull, liver and lung, a background prediction method was developed. This method utilizes data obtained from the measurement of a group of control subjects. Based on these data, a linear prediction equation was developed for each measurement geometry. In order to correct for the cross-contributions among the various deposition loci, a series of surrogate human phantom structures were measured. The results of measurements of 241Am depositions in six exposure cases have been evaluated using these new techniques and indicate that lung burden estimates could be in error by as much as 100 percent when corrections are not made for contributions to the count-rate from other organs.

  14. Learning to count begins in infancy: evidence from 18 month olds' visual preferences.

    PubMed

    Slaughter, Virginia; Itakura, Shoji; Kutsuki, Aya; Siegal, Michael

    2011-10-07

    We used a preferential looking paradigm to evaluate infants' preferences for correct versus incorrect counting. Infants viewed a video depicting six fish. In the correct counting sequence, a hand pointed to each fish in turn, accompanied by verbal counting up to six. In the incorrect counting sequence, the hand moved between two of the six fish while there was still verbal counting to six, thereby violating the one-to-one correspondence principle of correct counting. Experiment 1 showed that Australian 18 month olds, but not 15 month olds, significantly preferred to watch the correct counting sequence. In experiment 2, Australian infants' preference for correct counting disappeared when the count words were replaced by beeps or by Japanese count words. In experiment 3, Japanese 18 month olds significantly preferred the correct counting video only when counting was in Japanese. These results show that infants start to acquire the abstract principles governing correct counting prior to producing any counting behaviour.

  15. Learning to count begins in infancy: evidence from 18 month olds' visual preferences

    PubMed Central

    Slaughter, Virginia; Itakura, Shoji; Kutsuki, Aya; Siegal, Michael

    2011-01-01

    We used a preferential looking paradigm to evaluate infants' preferences for correct versus incorrect counting. Infants viewed a video depicting six fish. In the correct counting sequence, a hand pointed to each fish in turn, accompanied by verbal counting up to six. In the incorrect counting sequence, the hand moved between two of the six fish while there was still verbal counting to six, thereby violating the one-to-one correspondence principle of correct counting. Experiment 1 showed that Australian 18 month olds, but not 15 month olds, significantly preferred to watch the correct counting sequence. In experiment 2, Australian infants' preference for correct counting disappeared when the count words were replaced by beeps or by Japanese count words. In experiment 3, Japanese 18 month olds significantly preferred the correct counting video only when counting was in Japanese. These results show that infants start to acquire the abstract principles governing correct counting prior to producing any counting behaviour. PMID:21325331

  16. Techniques for the correction of topographical effects in scanning Auger electron microscopy

    NASA Technical Reports Server (NTRS)

    Prutton, M.; Larson, L. A.; Poppa, H.

    1983-01-01

    A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The latter method gives the best compensation but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.

  17. Two-sample discrimination of Poisson means

    NASA Technical Reports Server (NTRS)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
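    The partition model described here reduces to a conditional binomial test: given N = A + B total counts, A is binomial with success probability f under the null hypothesis. The record uses lookup tables; the fragment below shows the same test evaluated directly, where the use of SciPy and the example counts and exposures are assumptions for illustration.

    ```python
    from scipy.stats import binomtest

    def poisson_two_sample_pvalue(count_a, count_b, exposure_a, exposure_b):
        """Two-sided test that counts A and B share a common rate, conditioning
        on the total N = A + B: under the null, A ~ Binomial(N, f) with
        f = exposure_a / (exposure_a + exposure_b)."""
        f = exposure_a / (exposure_a + exposure_b)
        return binomtest(count_a, n=count_a + count_b, p=f).pvalue

    # e.g., 12 counts in a 2 h on-source exposure vs 30 counts in 10 h off-source
    print(poisson_two_sample_pvalue(12, 30, 2.0, 10.0))
    ```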

  18. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counting (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The setting was the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of the cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that the double-count correction factors for the nucleus count unit (0.91) and the nucleolus count unit (0.92) matched the published factors. We discovered that nuclei, and therefore spiral ganglion cells, were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
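    Applying the recommended factors is a one-step calculation; the helper below is a hypothetical illustration using the values quoted in the abstract, with made-up raw profile counts.

    ```python
    def corrected_ganglion_cell_count(raw_profile_count, correction_factor):
        """Estimated cell number = raw profile count x correction factor.
        Per the abstract: 0.91 for nucleus count units, 0.98 for nucleolus
        count units, assuming 20-micrometer sections."""
        return raw_profile_count * correction_factor

    print(corrected_ganglion_cell_count(1000, 0.91))  # nucleus count units -> 910
    print(corrected_ganglion_cell_count(1000, 0.98))  # nucleolus count units -> 980
    ```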

  19. Probability & Perception: The Representativeness Heuristic in Action

    ERIC Educational Resources Information Center

    Lu, Yun; Vasko, Francis J.; Drummond, Trevor J.; Vasko, Lisa E.

    2014-01-01

    If the prospective students of probability lack a background in mathematical proofs, hands-on classroom activities may work well to help them to learn to analyze problems correctly. For example, students may physically roll a die twice to count and compare the frequency of the sequences. Tools such as graphing calculators or Microsoft Excel®…

  20. CONCH: A Visual Basic program for interactive processing of ion-microprobe analytical data

    NASA Astrophysics Data System (ADS)

    Nelson, David R.

    2006-11-01

    A Visual Basic program for flexible, interactive processing of ion-microprobe data acquired for quantitative trace element, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni and U-Th-Pb geochronology applications is described. Default but editable run-tables enable software identification of the secondary ion species analyzed and characterization of the standard used. Counts obtained for each species may be displayed in plots against analysis time and edited interactively. Count outliers can be automatically identified via a set of editable count-rejection criteria and displayed for assessment. Standard analyses are distinguished from Unknowns by matching of the analysis label with a string specified in the Set-up dialog, and processed separately. A generalized routine writes background-corrected count rates, ratios and uncertainties, plus weighted means and uncertainties for Standards and Unknowns, to a spreadsheet that may be saved as a text-delimited file. Specialized routines process trace-element concentration, 26Al-26Mg, 53Mn-53Cr, 60Fe-60Ni, and Th-U disequilibrium analysis types, and U-Th-Pb isotopic data obtained for zircon, titanite, perovskite, monazite, xenotime and baddeleyite. Correction to measured Pb-isotopic, Pb/U and Pb/Th ratios for the presence of common Pb may be made using measured 204Pb counts, or the 207Pb or 208Pb counts following subtraction from these of the radiogenic component. Common-Pb corrections may be made automatically, using a (user-specified) common-Pb isotopic composition appropriate for that on the sample surface, or for that incorporated within the mineral at the time of its crystallization, depending on whether the 204Pb count rate determined for the Unknown is substantially higher than the average 204Pb count rate for all session standards. Pb/U inter-element fractionation corrections are determined using an interactive log_e-log_e plot of common-Pb corrected 206Pb/238U ratios against any nominated fractionation-sensitive species pair (commonly 238U16O+/238U+) for session standards. Also displayed with this plot are calculated Pb/U and Pb/Th calibration line regression slopes, y-intercepts, calibration uncertainties, standard 204Pb- and 208Pb-corrected 207Pb/206Pb dates and other parameters useful for assessment of the calibration-line data. Calibrated data for Unknowns may be automatically grouped according to calculated date and displayed in color on interactive Wetherill Concordia, Tera-Wasserburg Concordia, Linearized Gaussian ("Probability Paper") and Gaussian-summation probability density diagrams.
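    The "background-corrected count rates ... plus weighted means and uncertainties" step described above boils down to standard Poisson error propagation followed by an inverse-variance average. The sketch below is a generic version of that arithmetic, not the CONCH code itself; function names and inputs are illustrative.

    ```python
    import numpy as np

    def background_corrected_rate(peak_counts, peak_time, bkg_counts, bkg_time):
        """Background-corrected count rate and its 1-sigma Poisson uncertainty:
        r = Np/tp - Nb/tb,  sigma_r = sqrt(Np/tp**2 + Nb/tb**2)."""
        rate = peak_counts / peak_time - bkg_counts / bkg_time
        sigma = np.sqrt(peak_counts / peak_time**2 + bkg_counts / bkg_time**2)
        return rate, sigma

    def weighted_mean(values, sigmas):
        """Inverse-variance weighted mean and its uncertainty."""
        values = np.asarray(values, dtype=float)
        w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
        return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

    r1 = background_corrected_rate(1200, 10.0, 90, 10.0)   # example counts and times
    r2 = background_corrected_rate(1150, 10.0, 85, 10.0)
    print(weighted_mean([r1[0], r2[0]], [r1[1], r2[1]]))
    ```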

  1. Measuring 226Ra/228Ra in Oceanic Lavas by MC-ICPMS

    NASA Astrophysics Data System (ADS)

    Standish, J. J.; Sims, K.; Ball, L.; Blusztajn, J.

    2007-12-01

    238U-230Th-226Ra disequilibrium in volcanic rocks provides an important and unique tool to evaluate timescales of recent magmatic processes. Determination of 230Th-226Ra disequilibria requires measurement of U and Th isotopes and concentrations as well as measurement of 226Ra. While measurement of U and Th by ICPMS is now well established, few published studies documenting 226Ra measurement via ICPMS exist. Using 228Ra as an isotope spike we have investigated two ion-counting methods: a 'peak-hopping' routine, where 226Ra and 228Ra are measured in sequence on the central discrete dynode ETP secondary electron multiplier (SEM), and simultaneous measurement of 226Ra and 228Ra on two multiple ion-counter system (MICS) channeltron-type detectors mounted on the low end of the collector block. Here we present 226Ra measurement by isotope dilution using the Thermo Fisher NEPTUNE MC-ICPMS. Analysis of external rock standards TML and AThO along with mid-ocean ridge basalt (MORB) and ocean island basalt (OIB) samples shows three issues that need to be considered when making precise and accurate Ra measurements: 1) mass bias, 2) background, and 3) relative efficiencies of the detectors when measuring in MICS mode. Due to the absence of an established 226Ra/228Ra standard, we have used U reference material NBL-112A to monitor mass bias. Although Ball et al. (in press) have shown that U does not serve as an adequate proxy for Th (and thus not likely for Ra either), measurements of rock standards TML and AThO are repeatedly in equilibrium within the uncertainty of the measurements (where total uncertainty includes propagation of the uncertainty in the 226Ra standard used for calibrating the 228Ra spike). For this application, U is an adequate proxy for Ra mass bias at the 1% uncertainty level. The more important issue is the background correction. Because of the extensive chemistry required to separate and purify Ra (typically at the fg/g level in volcanic rocks), we observe large ambient backgrounds using both ion-counting techniques, which can significantly influence the measured 226Ra/228Ra ratio. Ra off-peak backgrounds need to be measured explicitly and quantitatively corrected. One advantage of using a 'peak-hopping' routine on the central SEM is the optional use of the high abundance sensitivity lens or repelling potential quadrupole (RPQ). This lens virtually eliminates the ambient background and significantly enhances the signal-to-noise ratio with only a small decrease in Ra ion transmission. Even with the diminished background levels observed using 'peak-hopping' on the SEM with the RPQ, accurate measurement of Ra isotopes requires off-peak background measurement. Finally, when using MICS it is important to account for the relative efficiency of the detectors. Multiple ion counting is, in principle, preferable to 'peak-hopping' because more time is spent counting each individual isotope. However, our results illustrate that proper calibration of detector yields requires dynamic switching of 226Ra between the two ion counters. This negates the inherent advantage of multiple ion counting. Therefore, when considering mass bias, background correction, and detector gain calibration, we conclude that 'peak-hopping' on the central SEM with the RPQ abundance filter is the preferred technique for 226Ra/228Ra isotopic measurement on the Neptune MC-ICPMS.

  2. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
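    The source-count distribution quoted above is a broken power law in flux, and the blazar contribution to the background follows from integrating S·dN/dS above the analysis sensitivity. The sketch below performs that integral with the published indexes and break flux, but the normalization and flux limits are placeholders, so the number it prints is illustrative only.

    ```python
    from scipy.integrate import quad

    def dnds(s, k, s_break, idx_above=2.09, idx_below=1.07):
        """Broken power-law differential source counts dN/dS.
        Indexes from the abstract; normalization k is a placeholder."""
        idx = idx_above if s >= s_break else idx_below
        return k * (s / s_break) ** (-idx)

    def integrated_flux(s_min, s_max, k, s_break):
        """Total flux from sources with s_min < S < s_max: integral of S*dN/dS dS,
        split at the break for numerical accuracy."""
        lo, _ = quad(lambda s: s * dnds(s, k, s_break), s_min, s_break)
        hi, _ = quad(lambda s: s * dnds(s, k, s_break), s_break, s_max)
        return lo + hi

    # Illustrative call: break at 3.5e-11 ph cm^-2 s^-1, arbitrary normalization
    print(integrated_flux(s_min=7.5e-12, s_max=1e-8, k=1.0e9, s_break=3.5e-11))
    ```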

  3. Statistical inference of static analysis rules

    NASA Technical Reports Server (NTRS)

    Engler, Dawson Richards (Inventor)

    2009-01-01

    Various apparatus and methods are disclosed for identifying errors in program code. Respective numbers of observances of at least one correctness rule by different code instances that relate to the at least one correctness rule are counted in the program code. Each code instance has an associated counted number of observances of the correctness rule by the code instance. Also counted are respective numbers of violations of the correctness rule by different code instances that relate to the correctness rule. Each code instance has an associated counted number of violations of the correctness rule by the code instance. A respective likelihood of the validity is determined for each code instance as a function of the counted number of observances and counted number of violations. The likelihood of validity indicates a relative likelihood that a related code instance is required to observe the correctness rule. The violations may be output in order of the likelihood of validity of a violated correctness rule.

  4. A bismuth activation counter for high sensitivity pulsed 14 MeV neutrons

    NASA Astrophysics Data System (ADS)

    Burns, E. J. T.; Thacher, P. D.; Hassig, G. J.; Decker, R. D.; Romero, J. A.; Barrett, K. P.

    2011-08-01

    We have built a fast neutron bismuth activation counter that measures activation counts from pulsed 14-MeV neutron generators for incident neutron fluences between 30 and 300 neutrons/cm2 at 15.2 cm (6 in.). The activation counter consists of a large bismuth germanate (BGO) detector surrounded by a bismuth metal shield in front of and concentric with the cylindrical detector housing. The 14 MeV neutrons activate the 2.6-millisecond (ms) isomer in the shield and the detector by the reaction 209Bi (n,2nγ) 208mBi. The use of millisecond isomers and activation counting times minimizes the background from other activated materials and the environment. In addition to activation, the bismuth metal shields against other outside radiation sources. We have tested the bismuth activation counter, simultaneously, with two data acquisition systems (DASs) and both give similar results. The two-dimensional (2D) DAS and three dimensional (3D) DAS both consist of pulse height analysis (PHA) systems that can be used to discriminate against gamma radiations below 300 keV photon energy, so that the detector can be used strictly as a counter. If the counting time is restricted to less than 25 ms after the neutron pulse, there are less than 10 counts of background for single pulse operation in all our operational environments tested so far. High-fluence neutron generator operations are restricted by large dead times and pulse height saturation. When we operate our 3D DAS PHA system in list mode acquisition (LIST), real-time corrections to dead time or live time can be made on the scale of 1 ms time windows or dwell times. The live time correction is consistent with nonparalyzable models for dead time of 1.0±0.2 μs for our 3D DAS and 1.5±0.3 μs for our 2D DAS dominated by our fixed time width analog to digital converters (ADCs). With the same solid angle, we have shown that the bismuth activation counter has a factor of 4 increase in sensitivity over our lead activation counter, because of higher counts and negligible backgrounds.
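    The dead times quoted in this record are interpreted with a nonparalyzable model. A minimal sketch of that correction is shown below, using the 1.0 µs figure reported for the 3D DAS; the measured rate is an assumed example value.

    ```python
    def deadtime_correct_nonparalyzable(measured_rate_cps, tau_s):
        """Nonparalyzable (non-extending) dead time model:
        true rate n = m / (1 - m * tau), where m is the measured rate."""
        m_tau = measured_rate_cps * tau_s
        if m_tau >= 1.0:
            raise ValueError("measured rate saturates the nonparalyzable model")
        return measured_rate_cps / (1.0 - m_tau)

    # e.g., 50 kcps measured with tau = 1.0 us (value quoted for the 3D DAS)
    print(deadtime_correct_nonparalyzable(50_000, 1.0e-6))
    ```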

  5. Tidal radii of the globular clusters M 5, M 12, M 13, M 15, M 53, NGC 5053 and NGC 5466 from automated star counts.

    NASA Astrophysics Data System (ADS)

    Lehmann, I.; Scholz, R.-D.

    1997-04-01

    We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special attention was given to reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles from different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962), we derived the following structural parameters: tidal radius r_t, core radius r_c and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13 and M 15 we found an indication of a tidal tail around these objects (cf. Grillmair et al. 1995).
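    Fitting the King (1962) law to a background-subtracted radial density profile is a small nonlinear least-squares problem. The sketch below does this with SciPy under made-up count data standing in for the plate measurements; it is not the authors' fitting code.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def king_profile(r, k, r_c, r_t):
        """Empirical King (1962) surface-density law:
        f(r) = k * (1/sqrt(1+(r/r_c)^2) - 1/sqrt(1+(r_t/r_c)^2))^2 for r < r_t."""
        term = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2) - 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
        return k * np.clip(term, 0.0, None) ** 2

    # Background-subtracted star counts (r in arcmin, density in stars/arcmin^2);
    # values are illustrative only.
    r = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 25.0])
    density = np.array([210.0, 180.0, 120.0, 55.0, 15.0, 3.0, 0.5])
    (k_fit, r_core, r_tidal), _ = curve_fit(king_profile, r, density, p0=[250.0, 1.0, 30.0])
    print(r_core, r_tidal)
    ```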

  6. Programmable calculator software for computation of the plasma binding of ligands.

    PubMed

    Conner, D P; Rocci, M L; Larijani, G E

    1986-01-01

    The computation of the extent of plasma binding of a ligand to plasma constituents using radiolabeled ligand and equilibrium dialysis is complex and tedious. A computer program for the HP-41C Handheld Computer Series (Hewlett-Packard) was developed to perform these calculations. The first segment of the program constructs a standard curve for quench correction of post-dialysis plasma and buffer samples, using either external standard ratio (ESR) or sample channels ratio (SCR) techniques. The remainder of the program uses the counts per minute, SCR or ESR, and post-dialysis volume of paired plasma and buffer samples generated from the dialysis procedure to compute the extent of binding after correction for background radiation, counting efficiency, and intradialytic shifts of fluid between plasma and buffer compartments during dialysis. This program greatly simplifies the analysis of equilibrium dialysis data and has been employed in the analysis of dexamethasone binding in normal and uremic sera.
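    As a rough illustration of the kind of arithmetic the program automates, the fragment below computes percent binding from paired post-dialysis counts after background and efficiency corrections. It is a simplified sketch that ignores the intradialytic fluid-shift correction described in the record, and all numbers are hypothetical.

    ```python
    def percent_bound(plasma_cpm, buffer_cpm, background_cpm, plasma_eff, buffer_eff):
        """Percent of ligand bound to plasma constituents at equilibrium.
        Each channel is corrected for background and counting efficiency; the
        buffer side is taken to represent free (unbound) ligand only."""
        plasma_dpm = (plasma_cpm - background_cpm) / plasma_eff
        buffer_dpm = (buffer_cpm - background_cpm) / buffer_eff
        return 100.0 * (plasma_dpm - buffer_dpm) / plasma_dpm

    print(percent_bound(9500, 500, 30, plasma_eff=0.92, buffer_eff=0.95))
    ```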

  7. Increasing the Efficiency of Electron Microprobe Measurements of Minor and Trace Elements in Rutile

    NASA Astrophysics Data System (ADS)

    Neill, O. K.; Mattinson, C. G.; Donovan, J.; Hernández Uribe, D.; Sains, A.

    2016-12-01

    Minor and trace element contents of rutile, an accessory mineral found in numerous lithologic settings, have many applications for interpreting earth systems. While these applications vary widely, they share a need for precise and accurate elemental measurements. The electron microprobe can be used to measure rutile compositions, although long X-ray counting times are necessary to achieve acceptable precision. Continuum ("background") intensity can be estimated using the iterative Mean Atomic Number (MAN) method of Donovan and Tingle (1996), obviating the need for direct off-peak background measurements and reducing counting times by half. For this study, several natural and synthetic rutiles were measured by electron microprobe. Data were collected once but reduced twice, using off-peak and MAN background corrections, allowing direct comparison of the two methods without the influence of other variables (counting time, analyte homogeneity, beam current, calibration standards, etc.). These measurements show that, if a "blank" correction (Donovan et al., 2011, 2016) is used, minor and trace elements of interest can be measured in rutile using the MAN background method in half the time of traditional off-peak measurements, without sacrificing accuracy or precision (Figure 1). This method has already been applied to Zr-in-rutile thermometry of ultra-high pressure metamorphic rocks from the North Qaidam terrane in northwest China. Finally, secondary fluorescence of adjacent phases by continuum X-rays can lead to artificially elevated concentrations; for example, when measuring Zr, care should be taken to avoid analytical spots within 100 microns of zircon or baddeleyite crystals. References: 1) J.J. Donovan and T.N. Tingle (1996) J. Microscopy, 2(1), 1-7; 2) J.J. Donovan, H.A. Lowers, and B.G. Rusk (2011) Am. Mineral., 96, 274-282; 3) J.J. Donovan, J.W. Singer and J.T. Armstrong (2016) Am. Mineral., 101, 1839-1853; 4) G.L. Lovizotto et al. (2009) Chem. Geol., 261, 346-369.

  8. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  9. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  10. Which button will I press? Preference for correctly ordered counting sequences in 18-month-olds.

    PubMed

    Ip, Martin Ho Kwan; Imuta, Kana; Slaughter, Virginia

    2018-04-16

    Correct counting respects the stable order principle whereby the count terms are recited in a fixed order every time. The 4 experiments reported here tested whether precounting infants recognize and prefer correct stable-ordered counting. The authors introduced a novel preference paradigm in which infants could freely press two buttons to activate videos of counting events. In the "correct" counting video, number words were always recited in the canonical order ("1, 2, 3, 4, 5, 6"). The "incorrect" counting video was identical except that the number words were recited in a random order (e.g., "5, 3, 1, 6, 4, 2"). In Experiment 1, 18-month-olds (n = 21), but not 15-month-olds (n = 24), significantly preferred to press the button that activated correct counting events. Experiment 2 revealed that English-learning 18-month-olds' (n = 21) preference for stable-ordered counting disappeared when the counting was done in Japanese. By contrast, Experiment 3 showed that multilingual 18-month-olds (n = 24) preferred correct stable-ordered counting in an unfamiliar foreign language. In Experiment 4, multilingual 18-month-olds (N = 21) showed no preference for stable-ordered alphabet sequences, ruling out some alternative explanations for the Experiment 3 results. Overall these findings are consistent with the idea that implicit recognition of the stable order principle of counting is acquired by 18 months of age, and that learning more than one language may accelerate infants' understanding of abstract counting principles. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Color quench correction for low level Cherenkov counting.

    PubMed

    Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B

    2009-05-01

    The Cherenkov counting efficiency varies strongly with color quenching, thus correction curves must be used to obtain correct results. The external 152Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench indicative parameter based on a spectrum area ratio. A color quench correction curve for aqueous samples containing 90Sr/90Y was prepared. The main advantage of this method over the common spectral indicators is its usefulness also for low level Cherenkov counting.
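    Once a quench correction curve exists, applying it is just an interpolation of counting efficiency against the quench indicative parameter. The sketch below shows that step with entirely hypothetical calibration pairs (the published curve and its values are not reproduced here).

    ```python
    import numpy as np

    # Hypothetical calibration pairs (quench indicative parameter, counting efficiency)
    # built from color-quenched 90Sr/90Y standards; values are illustrative only.
    qip_cal = np.array([0.60, 0.70, 0.80, 0.90, 1.00])
    eff_cal = np.array([0.25, 0.33, 0.40, 0.46, 0.50])

    def cherenkov_activity_bq(net_cpm, sample_qip):
        """Interpolate counting efficiency from the quench correction curve and
        convert a background-subtracted count rate (cpm) to activity (Bq)."""
        efficiency = np.interp(sample_qip, qip_cal, eff_cal)
        return net_cpm / 60.0 / efficiency

    print(cherenkov_activity_bq(net_cpm=1200.0, sample_qip=0.85))
    ```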

  12. Automated Image Analysis for the Detection of Benthic Crustaceans and Bacterial Mat Coverage Using the VENUS Undersea Cabled Network

    PubMed Central

    Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S. Kim; Menesatti, Paolo

    2011-01-01

    The development and deployment of sensors for undersea cabled observatories are presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MATLAB 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of multivariate morphometric statistics (i.e., Partial Least Squares Discriminant Analysis; PLSDA) on mean RGB (RGBv) values for each object and Fourier Descriptor (RGBv+FD) matrices, plus SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified and lower misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent Coverage and Fractal Dimension. A constant Region of Interest (ROI) was defined and background extraction by a Gaussian Blurring Filter was performed. Image subtraction within the ROI was followed by the sum of the RGB channel matrices. Percent Coverage was calculated on the resulting image. Fractal Dimension was estimated using the box-counting method. The images were then resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated Percent Coverage and Fractal Dimension estimates, the former showed an overestimation tendency for both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background corrections is a required preliminary step for the efficient recognition of animals and bacterial mat patches. PMID:22346657
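    The box-counting estimate of fractal dimension mentioned above has a compact generic implementation: count occupied boxes at successively halved box sizes and fit the log-log slope. The sketch below is a standard version of that method applied to a toy binary mask, not the MATLAB protocol used in the study.

    ```python
    import numpy as np

    def box_counting_dimension(mask):
        """Estimate the fractal dimension of a square binary coverage mask whose
        side length is a power of 2, via the box-counting method."""
        size = mask.shape[0]
        box, sizes, counts = size, [], []
        while box >= 1:
            occupied = 0
            for i in range(0, size, box):
                for j in range(0, size, box):
                    if mask[i:i + box, j:j + box].any():
                        occupied += 1
            sizes.append(box)
            counts.append(occupied)
            box //= 2
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # Toy mask: a filled quarter of a 128x128 image -> dimension close to 2
    mask = np.zeros((128, 128), dtype=bool)
    mask[:64, :64] = True
    print(box_counting_dimension(mask))
    ```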

  13. Whole body counter calibration using Monte Carlo modeling with an array of phantom sizes based on national anthropometric reference data

    NASA Astrophysics Data System (ADS)

    Shypailo, R. J.; Ellis, K. J.

    2011-05-01

    During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.

  14. Central Stars of Planetary Nebulae in the LMC

    NASA Technical Reports Server (NTRS)

    Bianchi, Luciana

    2004-01-01

    In FUSE Cycle 2 program B001 we studied Central Stars of Planetary Nebulae (CSPN) in the Large Magellanic Cloud. All FUSE observations have been successfully completed and have been reduced, analyzed and published. The analysis and the results are summarized below. The FUSE data were reduced using the latest available version of the FUSE calibration pipeline (CALFUSE v2.2.2). The flux of these LMC post-AGB objects is at the threshold of FUSE's sensitivity, and thus special care in the background subtraction was needed during the reduction. Because of their faintness, the targets required many orbit-long exposures, each of which typically had low (target) count rates. Each calibrated, extracted sequence was checked for unacceptable count-rate variations (a sign of detector drift), misplaced extraction windows, and other anomalies. All the good calibrated exposures were combined using FUSE pipeline routines. The default FUSE pipeline attempts to model the background measured off-target and subtracts it from the target spectrum. We found that, for these faint objects, the background appeared to be over-estimated by this method, particularly at shorter wavelengths (i.e., < 1000 A). We therefore tried two other reductions. In the first method, subtraction of the measured background is turned off and the background is taken to be the model scattered light scaled by the exposure time. In the second, the first few steps of the pipeline were run on the individual exposures (correcting for effects unique to each exposure, such as Doppler shift and grating motions). Then the photon lists from the individual exposures were combined, and the remaining steps of the pipeline were run on the combined file. Thus, more total counts for both the target and background allowed for a better extraction.

  15. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have themselves only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  16. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have themselves only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  17. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which have themselves only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  18. Image segregation in strabismic amblyopia.

    PubMed

    Levi, Dennis M

    2007-06-01

    Humans with naturally occurring amblyopia show deficits thought to involve mechanisms downstream of V1. These include excessive crowding, abnormal global image processing, abnormal spatial sampling and symmetry detection, and undercounting. Several recent studies suggest that humans with naturally occurring amblyopia show deficits in global image segregation. The current experiments were designed to study figure-ground segregation in amblyopic observers with documented deficits in crowding, symmetry detection, spatial sampling and counting, using similar stimuli. Observers had to discriminate the orientation of a figure (an "E"-like pattern made up of 17 horizontal Gabor patches) embedded in a 7x7 array of Gabor patches. When the 32 "background" patches are vertical, the "E" pops out due to segregation by orientation, and performance is perfect; however, if the background patches are all, or mostly, horizontal, the "E" is camouflaged, and performance is random. Using a method of constant stimuli, we varied the number of "background" patches that were vertical and measured the probability of correct discrimination of the global orientation of the E (up/down/left/right). Surprisingly, amblyopes who showed strong crowding and deficits in symmetry detection and counting performed normally, or very nearly so, in this segregation task. I therefore conclude that these deficits are not a consequence of abnormal segregation of figure from background.

  19. ISO deep far-infrared survey in the Lockman Hole

    NASA Astrophysics Data System (ADS)

    Kawara, K.; Sato, Y.; Matsuhara, H.; Taniguchi, Y.; Okuda, H.; Sofue, Y.; Matsumoto, T.; Wakamatsu, K.; Cowie, L. L.; Joseph, R. D.; Sanders, D. B.

    1999-03-01

    Two 44 arcmin x 44 arcmin fields in the Lockman Hole were mapped at 95 and 175 μm using ISOPHOT. A simple program code combined with PIA works well to correct for the drift in the detector responsivity. The number density of 175 μm sources is 3 - 10 times higher than expected from the no-evolution model. The source counts at 95 and 175 μm are consistent with the cosmic infrared background.

  20. PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon

    2018-02-01

    One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector without the requirement of an external CT scanner or PET scanner with transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used. The coil attenuation correction currently used is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR+ PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.

  1. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo-based scatter correction, including accurately simulated CDR modelling, is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
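
    For reference, the triple energy window (TEW) estimate compared above can be written in a few lines: the scatter under the photopeak window is approximated by a trapezoid spanned by two narrow flanking windows. The sketch below is a generic TEW implementation with illustrative window widths and counts, not the reconstruction code used in the study.

      # Generic triple-energy-window (TEW) scatter estimate; counts and window
      # widths (keV) are illustrative.
      def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
          """Trapezoidal estimate of scatter counts under the photopeak window."""
          return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

      def tew_corrected_counts(c_peak, c_lower, c_upper, w_lower, w_upper, w_peak):
          scatter = tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak)
          return max(c_peak - scatter, 0.0)

      # example: I-131 364 keV photopeak (20% window) flanked by 6 keV windows
      primary = tew_corrected_counts(c_peak=12500, c_lower=900, c_upper=400,
                                     w_lower=6.0, w_upper=6.0, w_peak=72.8)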

  2. The 2-24 μm source counts from the AKARI North Ecliptic Pole survey

    NASA Astrophysics Data System (ADS)

    Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.

    2014-11-01

    We present herein galaxy number counts in the nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from the NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg2, respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and the difference between observed and intrinsic magnitudes were corrected by Monte Carlo simulation. Stellar counts were subtracted by using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit: 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with the Euclidean universe, while on the faint side, the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the waveband.

  3. Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.

    PubMed

    Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min

    2018-03-01

    High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pile-up pulse effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is evolved by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to unit impulse pulse shape which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
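
    The fast/slow-channel idea can be illustrated with the textbook non-paralyzable dead-time model: the fast channel supplies a measured rate and an evaluated dead time, and the resulting loss factor rescales the slow-channel spectrum. This is a generic sketch under that assumption, not the authors' algorithm, and all names and numbers are illustrative.

      # Generic counting-loss correction with a non-paralyzable dead-time model.
      import numpy as np

      def true_rate_nonparalyzable(measured_rate, dead_time_s):
          """n = m / (1 - m * tau): recover the incoming rate from the measured rate."""
          return measured_rate / (1.0 - measured_rate * dead_time_s)

      def correct_slow_spectrum(slow_spectrum, fast_measured_rate, fast_dead_time_s):
          n_true = true_rate_nonparalyzable(fast_measured_rate, fast_dead_time_s)
          loss_factor = n_true / fast_measured_rate      # > 1 at high rates
          return np.asarray(slow_spectrum, dtype=float) * loss_factor

      # example: 150 kcps measured in the fast channel with a 0.5 us dead time
      corrected = correct_slow_spectrum(np.ones(1024) * 3.0, 1.5e5, 0.5e-6)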

  4. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.

  5. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when √n_i instead of √n̄_i is used to approximate the uncertainties, σ_i, in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (≈1.2 keV for HEAO 3) energy channels of a Ge spectrometer, where the expected number of counts obtained per scan may be very low. Such an analysis system is discussed and compared to the method previously used.
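
    The point about weighting can be reproduced numerically: fitting the same linear model by exact Poisson maximum likelihood avoids the bias that appears when the weights are taken from √n_i. The sketch below uses synthetic data and a generic optimizer; it is an illustration of the statistical argument, not the HEAO 3 analysis system.

      # Poisson ML fit of a linear model mu = A @ theta versus naive weighted LSQ.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      A = np.column_stack([np.ones(200), np.linspace(0.0, 1.0, 200)])  # 2-component model
      theta_true = np.array([2.0, 3.0])                                # low counts per bin
      n = rng.poisson(A @ theta_true)                                  # observed counts

      def neg_log_likelihood(theta):
          mu = A @ theta
          if np.any(mu <= 0):
              return np.inf
          return np.sum(mu - n * np.log(mu))       # Poisson NLL up to a constant

      theta_ml = minimize(neg_log_likelihood, x0=np.ones(2), method="Nelder-Mead").x

      # naive weighted LSQ with sigma_i ~ sqrt(n_i): biased when counts are small
      w = 1.0 / np.maximum(n, 1)
      theta_wlsq = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * n))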

  6. VizieR Online Data Catalog: Tidal radii of 7 globular clusters (Lehmann+ 1997)

    NASA Astrophysics Data System (ADS)

    Lehmann, I.; Scholz, R.-D.

    1998-02-01

    We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special attention was paid to reliable background subtraction and the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962AJ.....67..471K) we derived the following structural parameters: tidal radius rt, core radius rc and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13 and M 15 we found an indication of a tidal tail around these objects (cf. Grillmair et al., 1995AJ....109.2553G). (1 data file).

  7. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    PubMed Central

    2014-01-01

    Background: Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods: Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by an eye tracker. Results: Patients exhibited a significantly lower correct rate for theme identification and fewer fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions: Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  8. Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.

    PubMed

    Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger

    2016-09-01

    Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with (11)C-acetate (PET) and by measuring the arterial input function using (99m)TcO4 (-) (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected (99m)Tc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. Finally, the system was successfully used in animal studies. A fully operational microfluidic blood-counting system for preclinical pharmacokinetic studies was developed. Microfluidics enabled reliable and high-efficiency measurement of the blood concentration of most common PET and SPECT radiotracers with high temporal resolution in small blood volume. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  9. Apparatus and method for temperature correction and expanded count rate of inorganic scintillation detectors

    DOEpatents

    Ianakiev, Kiril D [Los Alamos, NM; Hsue, Sin Tao [Santa Fe, NM; Browne, Michael C [Los Alamos, NM; Audia, Jeffrey M [Abiquiu, NM

    2006-07-25

    The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver processes the light pulse signals to current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.

  10. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global corrections that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) with projections that were individually corrected for deadtime losses; and (2) with the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses in 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods showed agreement within the statistical noise. The count-loss recoveries of the two methods also agree within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction, based on sparse sampling of projection losses, allows for accurate SPECT deadtime loss correction while keeping the study duration reasonable.

  11. A software package to improve image quality and isolation of objects of interest for quantitative stereology studies of rat hepatocarcinogenesis.

    PubMed

    Xu, Yihua; Pitot, Henry C

    2006-03-01

    In the studies of quantitative stereology of rat hepatocarcinogenesis, we have used image analysis technology (automatic particle analysis) to obtain data such as liver tissue area, size and location of altered hepatic focal lesions (AHF), and nuclei counts. These data are then used for three-dimensional estimation of AHF occurrence and nuclear labeling index analysis. These are important parameters for quantitative studies of carcinogenesis, for screening and classifying carcinogens, and for risk estimation. To take such measurements, structures or cells of interest should be separated from the other components based on the difference of color and density. Common background problems seen on the captured sample image such as uneven light illumination or color shading can cause severe problems in the measurement. Two application programs (BK_Correction and Pixel_Separator) have been developed to solve these problems. With BK_Correction, common background problems such as incorrect color temperature setting, color shading, and uneven light illumination background, can be corrected. With Pixel_Separator different types of objects can be separated from each other in relation to their color, such as seen with different colors in immunohistochemically stained slides. The resultant images of such objects separated from other components are then ready for particle analysis. Objects that have the same darkness but different colors can be accurately differentiated in a grayscale image analysis system after application of these programs.
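
    A flat-field style correction conveys the kind of operation BK_Correction performs on uneven illumination; the sketch below is a generic stand-in (estimate the slowly varying background with a large Gaussian blur and normalize it out per channel), not the actual program.

      # Generic illumination-background correction for a captured sample image.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_uneven_illumination(image, sigma=100.0):
          """image: float array (H, W) or (H, W, 3), values scaled to [0, 1]."""
          img = np.asarray(image, dtype=float)
          if img.ndim == 2:
              img = img[..., None]
          corrected = np.empty_like(img)
          for c in range(img.shape[-1]):
              background = gaussian_filter(img[..., c], sigma=sigma)
              background = np.maximum(background, 1e-6)          # avoid division by zero
              flattened = img[..., c] / background               # remove shading/illumination
              corrected[..., c] = flattened * background.mean()  # restore overall level
          return np.squeeze(corrected)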

  12. Background characterization of an ultra-low background liquid scintillation counter

    DOE PAGES

    Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.; ...

    2017-01-26

    The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment to approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Initial tests of the ULB LSC show promising results for ultra-low background detection with liquid scintillation counting.

  13. {sup 90}Y-PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: {sup 90}Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with {sup 90}Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging {sup 90}Y and to compare experimental results with clinical data from two types of commercially available {sup 90}Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph microcomputed tomography (mCT) (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of {sup 90}Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a {sup 90}Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, data from 32 patients (8 treated with Therasphere{sup ®} and 24 with SIR-Spheres{sup ®}) were retrospectively reconstructed and activity in the whole field of view and the liver was compared to theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of {sup 90}Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was found to be limited as long as ordinary Poisson ordered subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view is in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.

  14. Background compensation for a radiation level monitor

    DOEpatents

    Keefe, D.J.

    1975-12-01

    Background compensation in a device such as a hand and foot monitor is provided by digital means using a scaler. With no radiation level test initiated, a scaler is down-counted from zero according to the background measured. With a radiation level test initiated, the scaler is up-counted from the previous down-count position according to the radiation emitted from the monitored object, and an alarm is generated if, with the scaler having crossed zero in the positive-going direction, a particular number is exceeded in a specific time period after initiation of the test. If the test is initiated while the scaler is down-counting, the background count from the previous down-count stored in a memory is used as the initial starting point for the up-count.

  15. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can be used to substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
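
    A basic Richardson-Lucy deconvolution with a measured PSF can be run with scikit-image, as sketched below. Note that this generic routine has no damping parameter λ and assumes scatter correction has already been applied to the image; it illustrates the principle rather than reproducing the study's implementation.

      # Richardson-Lucy deconvolution of a planar image with a measured PSF.
      import numpy as np
      from skimage import restoration

      def deconvolve_planar_image(image, measured_psf, iterations=6):
          """image: scatter-corrected planar counts; measured_psf: 2-D PSF array."""
          psf = measured_psf / measured_psf.sum()          # normalize the PSF
          scale = image.max() if image.max() > 0 else 1.0
          deconv = restoration.richardson_lucy(image / scale, psf,
                                               num_iter=iterations, clip=False)
          return deconv * scale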

  16. Background characterization of an ultra-low background liquid scintillation counter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.

    The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment to approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Initial tests of the ULB LSC show promising results for ultra-low background detection with liquid scintillation counting.

  17. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608

  18. Guide for SDEC Set up

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bibby, R; Guthrie, E

    2009-01-30

    The instrument has four collection vials that must be filled with ethylene glycol before operation. Each of the four vials should be labeled 1 through 4 and the empty weights recorded. Fill each vial with 80 mL of ethylene glycol and record the weight again. In order for the instrument to operate properly, the collection vials should always have less than 160 mL of total liquid in them. After completing a sample run, remove the collection vials, use a transfer pipette to remove any liquid that might still be on the air paddler, wipe off any condensation from the exterior of the collection vial and record the weight. From the instrument, record the ending volume and the time of operation. The solution mixed in the scintillation vial will be 2 ml of a 95% to 50% ethylene glycol to water mixture. To determine the efficiency of counting at all of these concentrations, a series of vials should be set up that consist of 18 ml of Ultima Gold LLT cocktail mixed with standard, regular deionized water and ethylene glycol. The efficiency curve should be counted in the 'Low Level' count mode with the Luminescence Correction ON and the Color Quench Correction ON. Once the tSIE values are determined, chart the cpm against the tSIE numbers and find the best fit for the data. The resulting equation is to be used to convert tSIE values from the collection vials to efficiency. To determine the background cpm value of the ethylene glycol, count a 2 ml sample of ethylene glycol with 18 ml of Ultima Gold for 100 minutes. To determine the total activity of the sample, take two 2 ml aliquots of sample from the first vial and place them in separate scintillation vials. Record the weight of each aliquot. Determine the percentage of total sample each aliquot represents by dividing the aliquot weight by the total solution weight from the vial. Also, determine the percentage of ethylene glycol in the sample by dividing the initial solution weight by the final solution weight and multiplying by 100. Add 18 ml of Ultima Gold to each vial and proceed to count for 100 minutes in a 'Low Level' count mode. Before performing a calculation on the dpm value of each aliquot, a subtraction should be made for the background count rate of the ethylene glycol. Based on the background cpm, multiply the background cpm value by the percentage of ethylene glycol in the collection vial. Once the background value is subtracted, calculate the dpm value of the sample based on the tSIE conversion to efficiency. This will produce a dpm value. To convert this to a total activity of the sample, divide the aliquot dpm value by the decimal percentage of total sample the aliquot represents. This gives the total activity of the sample solution. Take the average of both aliquots as a final result. To convert the total activity from the solution in vial one to activity in air, an empirical formula is used to convert activity/gram from vial one to total activity introduced into the system. After calculating the final result for the vial, divide the total by the mass of the sample in vial one. This gives dpm/g (labeled C{sub m}). To convert this to the tritium concentration in air, use C = (128.59 * C{sub m} + 10.837)/V, where C = tritium concentration in air (dpm/m{sup 3}), C{sub m} = measured tritium concentration from vial 1 (dpm/g), and V = volume of air sampled through the instrument (m{sup 3}). C is the final value of the tritium concentration in air.
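
    The vial-1 arithmetic above can be collected into a single routine, sketched below. The efficiency value standing in for the tSIE-to-efficiency curve and all example numbers are hypothetical; the constants 128.59 and 10.837 are the ones quoted in the empirical formula.

      # Sketch of the tritium-in-air calculation for vial 1.
      def tritium_in_air_dpm_per_m3(aliquot_cpm, efficiency, aliquot_fraction,
                                    glycol_fraction, glycol_bkg_cpm,
                                    vial_mass_g, air_volume_m3):
          """aliquot_fraction: aliquot weight / total solution weight in vial 1.
          glycol_fraction: initial glycol weight / final solution weight.
          glycol_bkg_cpm: background count rate of a 2 ml ethylene glycol sample."""
          net_cpm = aliquot_cpm - glycol_bkg_cpm * glycol_fraction  # background subtraction
          aliquot_dpm = net_cpm / efficiency                        # cpm -> dpm
          vial_dpm = aliquot_dpm / aliquot_fraction                 # scale to the whole vial
          c_m = vial_dpm / vial_mass_g                              # dpm per gram (Cm)
          return (128.59 * c_m + 10.837) / air_volume_m3            # dpm per m^3 of air

      # example with hypothetical numbers (average the result over both aliquots)
      c_air = tritium_in_air_dpm_per_m3(aliquot_cpm=35.2, efficiency=0.22,
                                        aliquot_fraction=0.024, glycol_fraction=0.93,
                                        glycol_bkg_cpm=2.1, vial_mass_g=85.0,
                                        air_volume_m3=1.2)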

  19. Double counting in the density functional plus dynamical mean-field theory of transition metal oxides

    NASA Astrophysics Data System (ADS)

    Dang, Hung

    2015-03-01

    Recently, the combination of density functional theory (DFT) and dynamical mean-field theory (DMFT) has become a widely-used beyond-mean-field approach for strongly correlated materials. However, because the correlation is treated not only in DMFT but also, to some extent, in DFT, a problem arises: the correlation is counted twice in the DFT+DMFT framework. The correction for this problem is still not well understood. To gain more understanding of this "double counting" problem, I provide a detailed study of the metal-insulator transition in transition metal oxides in the subspace of oxygen p and transition metal correlated d orbitals using DFT+DMFT. I will show that fully charge self-consistent DFT+DMFT calculations with the standard "fully-localized limit" (FLL) double counting correction fail to correctly predict materials such as LaTiO3, LaVO3, YTiO3 and SrMnO3 as insulators. Investigations over a wide range of the p-d splitting, the d occupancy, the lattice structure and the double counting correction itself will be presented to understand the reason behind this failure. I will also show that if the double counting correction is chosen to reproduce a p-d splitting consistent with experimental data, the DFT+DMFT approach can still give reasonable results in comparison with experiments.

  20. Poisson mixture model for measurements using counting.

    PubMed

    Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz

    2010-03-01

    Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
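
    A much-simplified version of the alarm-point idea is sketched below: the decision level is the smallest gross count whose probability under the background-only distribution stays below the desired false-positive rate. Here the background is treated as plain Poisson with a known mean; the paper's lognormal normalization, shared uncertainty, and Bayesian background estimate are not reproduced.

      # Gross-count alarm point for a plain Poisson background (simplified).
      from scipy.stats import poisson

      def gross_count_alarm_point(background_mean, false_positive_rate=0.05):
          """Smallest integer k with P(N >= k | background_mean) <= false_positive_rate."""
          k = 0
          while poisson.sf(k - 1, background_mean) > false_positive_rate:
              k += 1
          return k

      alarm = gross_count_alarm_point(background_mean=12.4, false_positive_rate=0.01)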

  1. The relationship between tree growth patterns and likelihood of mortality: A study of two tree species in the Sierra Nevada

    USGS Publications Warehouse

    Das, A.J.; Battles, J.J.; Stephenson, N.L.; van Mantgem, P.J.

    2007-01-01

    We examined mortality of Abies concolor (Gord. & Glend.) Lindl. (white fir) and Pinus lambertiana Dougl. (sugar pine) by developing logistic models using three growth indices obtained from tree rings: average growth, growth trend, and count of abrupt growth declines. For P. lambertiana, models with average growth, growth trend, and count of abrupt declines improved overall prediction (78.6% dead trees correctly classified, 83.7% live trees correctly classified) compared with a model with average recent growth alone (69.6% dead trees correctly classified, 67.3% live trees correctly classified). For A. concolor, counts of abrupt declines and longer time intervals improved overall classification (trees with DBH ≥20 cm: 78.9% dead trees correctly classified and 76.7% live trees correctly classified vs. 64.9% dead trees correctly classified and 77.9% live trees correctly classified; trees with DBH <20 cm: 71.6% dead trees correctly classified and 71.0% live trees correctly classified vs. 67.2% dead trees correctly classified and 66.7% live trees correctly classified). In general, count of abrupt declines improved live-tree classification. External validation of A. concolor models showed that they functioned well at stands not used in model development, and the development of size-specific models demonstrated important differences in mortality risk between understory and canopy trees. Population-level mortality-risk models were developed for A. concolor and generated realistic mortality rates at two sites. Our results support the contention that a more comprehensive use of the growth record yields a more robust assessment of mortality risk. © 2007 NRC.
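
    The modelling approach lends itself to a compact sketch: a logistic model of tree death driven by average growth, growth trend, and the count of abrupt declines. The data below are synthetic and the coefficients are not the authors'; the sketch only illustrates the model structure.

      # Logistic mortality model from three tree-ring growth indices (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)
      n = 300
      X = np.column_stack([
          rng.gamma(2.0, 0.5, n),        # average ring-width growth (mm/yr)
          rng.normal(0.0, 0.05, n),      # growth trend (slope of recent growth)
          rng.poisson(1.0, n),           # count of abrupt growth declines
      ])
      # synthetic outcome: low growth, negative trend, many declines -> higher risk
      logit = -1.0 - 2.0 * X[:, 0] - 10.0 * X[:, 1] + 0.8 * X[:, 2]
      died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      model = LogisticRegression().fit(X, died)
      pct_correct = 100.0 * model.score(X, died)     # crude in-sample classification rate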

  2. Bunch mode specific rate corrections for PILATUS3 detectors

    DOE PAGES

    Trueb, P.; Dejoie, C.; Kobas, M.; ...

    2015-04-09

    PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.

  3. Advances toward fully automated in vivo assessment of oral epithelial dysplasia by nuclear endomicroscopy-A pilot study.

    PubMed

    Liese, Jan; Winter, Karsten; Glass, Änne; Bertolini, Julia; Kämmerer, Peer Wolfgang; Frerich, Bernhard; Schiefke, Ingolf; Remmerbach, Torsten W

    2017-11-01

    Uncertainties in detection of oral epithelial dysplasia (OED) frequently result from sampling error, especially in inflammatory oral lesions. Endomicroscopy allows non-invasive, "en face" imaging of the upper oral epithelium, but parameters of OED are unknown. Mucosal nuclei were imaged in 34 toluidine blue-stained oral lesions with a commercial endomicroscope. Histopathological diagnosis placed four biopsies in the "dys-/neoplastic," 23 in the "inflammatory," and seven in the "others" disease groups. The strength of different assessment strategies of nuclear scoring, nuclear count, and automated nuclear analysis was measured by the area under the ROC curve (AUC) for identifying the histopathological "dys-/neoplastic" group. Nuclear objects from automated image analysis were visually corrected. Best-performing parameters of nuclear-to-image ratios were the count of large nuclei (AUC=0.986) and the 6-nearest neighborhood relation (AUC=0.896), and the best parameters of nuclear polymorphism were the count of atypical nuclei (AUC=0.996) and the compactness of nuclei (AUC=0.922). Excluding low-grade OED, nuclear scoring and count reached 100% sensitivity and 98% specificity for detection of dys-/neoplastic lesions. In automated analysis, combination of parameters enhanced diagnostic strength. Sensitivity of 100% and specificity of 87% were seen for distances of 6-nearest neighbors and aspect ratios even in uncorrected objects. Correction improved measures of nuclear polymorphism only. The hue of background color was stronger than nuclear density (AUC=0.779 vs 0.687) at detecting the dys-/neoplastic group, indicating that the macroscopic aspect is biased. Nuclear-to-image ratios are applicable for automated optical in vivo diagnostics for oral potentially malignant disorders. Nuclear endomicroscopy may promote non-invasive, early detection of dys-/neoplastic lesions by reducing sampling error.

  4. Monitoring of left ventricular ejection fraction with a miniature, nonimaging nuclear detector: accuracy and reliability over time with special reference to blood labeling.

    PubMed

    Lindhardt, T B; Hesse, B; Gadsbøll, N

    1997-01-01

    The purpose of this study was to determine the accuracy of determinations of left ventricular ejection fraction (LVEF) by a nonimaging miniature nuclear detector system (Cardioscint) and to evaluate the feasibility of long-term LVEF monitoring in patients admitted to the coronary care unit, with special reference to the blood-labeling technique. Cardioscint LVEF values were compared with measurements of LVEF by conventional gamma camera radionuclide ventriculography in 33 patients with a wide range of LVEF values. In 21 of the 33 patients, long-term monitoring was carried out for 1 to 4 hours (mean 186 minutes), with three different kits: one for in vivo and two for in vitro red blood cell labeling. The stability of the labeling was assessed by determination of the activity of blood samples taken during the first 24 hours after blood labeling. The agreement between Cardioscint LVEF and gamma camera LVEF was good with automatic background correction (r = 0.82; regression equation y = 1.04x + 3.88) but poor with manual background correction (r = 0.50; y = 0.88x - 0.55). The agreement was highest in patients without wall motion abnormalities. The long-term monitoring showed no difference between morning and afternoon Cardioscint LVEF values. Short-lasting fluctuations in LVEFs greater than 10 EF units were observed in the majority of the patients. After 24 hours, the mean reduction in the physical decay-corrected count rate of the blood samples was most pronounced for the two in vitro blood-labeling kits (57% +/- 9% and 41% +/- 3%) and less for the in vivo blood-labeling kit (32% +/- 26%). This "biologic decay" had a marked influence on the Cardioscint monitoring results, demanding frequent background correction. A fairly accurate estimate of LVEF can be obtained with the nonimaging Cardioscint system, and continuous bedside LVEF monitoring can proceed for hours with little inconvenience to the patients. Instability of the red blood cell labeling during long-term monitoring necessitates frequent background correction.

  5. Dead time corrections for inbeam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si performed with HPGe detectors at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution whose single parameter is the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely circumvents the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
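
    The statistical reformulation can be illustrated with the exponential interval distribution: for Poisson arrivals with rate λ and an extending (paralyzable) dead time τ, the probability that an event is followed by a gap longer than τ is exp(-λτ), so the measured rate is m = λ exp(-λτ). The sketch below inverts this by fixed-point iteration; it is a generic model, not the correction procedure or the Pommé-based method of the paper.

      # Dead-time correction from Poisson/exponential interval statistics.
      import math

      def true_rate_paralyzable(measured_rate, dead_time_s, iterations=50):
          """Solve m = lam * exp(-lam * tau) for lam (smaller root)."""
          lam = measured_rate
          for _ in range(iterations):
              lam = measured_rate * math.exp(lam * dead_time_s)
          return lam

      def dead_time_correction_factor(measured_rate, dead_time_s):
          return true_rate_paralyzable(measured_rate, dead_time_s) / measured_rate

      # example: 30 kcps measured with a 2 us extending dead time
      factor = dead_time_correction_factor(3.0e4, 2.0e-6)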

  6. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm "WAVDETECT," part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or "Mexican Hat," wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low count regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
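
    The following is a minimal, hypothetical sketch of the core idea in this record (correlating a binned counts image with a Mexican Hat wavelet and thresholding the coefficients); it is not the WAVDETECT implementation, and the toy image, kernel scale, and the simple 5-sigma cut on the coefficient scatter are assumptions made only for illustration (WAVDETECT instead uses probability sampling distributions derived from exposure-corrected background maps).

      import numpy as np
      from scipy.signal import fftconvolve

      def marr_kernel(sigma, half_size):
          """2-D Mexican Hat (Marr) wavelet sampled on a (2*half_size+1)^2 grid."""
          y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
          r2 = (x**2 + y**2) / (2.0 * sigma**2)
          return (1.0 - r2) * np.exp(-r2)

      # Toy image: flat Poisson background plus one point-like source (assumed).
      rng = np.random.default_rng(0)
      image = rng.poisson(2.0, size=(128, 128)).astype(float)
      image[60:63, 70:73] += 30.0

      kernel = marr_kernel(sigma=2.0, half_size=8)
      # The kernel is symmetric, so convolution equals correlation here.
      coeffs = fftconvolve(image, kernel, mode="same")

      # Crude significance cut based on the scatter of the coefficients.
      threshold = 5.0 * coeffs.std()
      source_pixels = np.argwhere(coeffs > threshold)
      print(f"{len(source_pixels)} candidate source pixels")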

  7. Quantitative basis for component factors of gas flow proportional counting efficiencies

    NASA Astrophysics Data System (ADS)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results by the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% of the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining chemical yield critical to the measurement process; correcting a bias found in the MCNP normalization of beta spectra histograms; clarifying the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.

  8. Automatic vehicle counting system for traffic monitoring

    NASA Astrophysics Data System (ADS)

    Crouzil, Alain; Khoudour, Louahdi; Valiere, Paul; Truong Cong, Dung Nghy

    2016-09-01

    The article presents a vision-based system for road vehicle counting and classification. The system is able to achieve counting with very good accuracy even in difficult scenarios linked to occlusions and/or the presence of shadows. The principle of the system is to use cameras already installed in road networks without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of the trials carried out, including peak periods and various vehicle types, leads to an increase of occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been developed and tested. Furthermore, the method developed in this work is capable of managing shadows with high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (>98%) under different environmental situations, thus performing better than conventional inductive loop detectors.
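
    A minimal sketch of a per-pixel adaptive Gaussian background model of the kind mentioned above is given below; it is not the authors' segmentation algorithm, and the learning rate, threshold, initial variance, and synthetic frames are assumptions for illustration only.

      import numpy as np

      class AdaptiveGaussianBackground:
          def __init__(self, first_frame, alpha=0.01, k=2.5):
              self.mean = first_frame.astype(float)
              self.var = np.full_like(self.mean, 15.0**2)  # assumed initial variance
              self.alpha = alpha    # learning rate (assumed)
              self.k = k            # detection threshold in standard deviations (assumed)

          def apply(self, frame):
              frame = frame.astype(float)
              diff = frame - self.mean
              foreground = diff**2 > (self.k**2) * self.var
              # Update the model only where the pixel is judged to be background,
              # so moving objects are not absorbed into the background model.
              bg = ~foreground
              self.mean[bg] += self.alpha * diff[bg]
              self.var[bg] += self.alpha * (diff[bg]**2 - self.var[bg])
              return foreground

      # Synthetic frames: flat background with a bright "vehicle" appearing later.
      rng = np.random.default_rng(1)
      frames = rng.normal(100, 5, size=(50, 120, 160))
      frames[30:, 40:60, 50:80] += 60.0
      model = AdaptiveGaussianBackground(frames[0])
      mask = None
      for f in frames[1:]:
          mask = model.apply(f)
      print("foreground pixels in last frame:", int(mask.sum()))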

  9. Correcting X-ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, K.; Hughes, J. P.; Kellogg, E. M.; Nguyen, D.; Zombek, M.; Joy, M.; Kolodziejezak, J.

    1993-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies, one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.

  10. Correcting x ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, Kathy; Hughes, John P.; Kellogg, Edwin M.; Nguyen, D.; Zombeck, M.; Joy, M.; Kolodziejezak, J.

    1992-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies, one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror, and convolving with the detector response.

  11. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  12. SCATHA-Analysis System

    DTIC Science & Technology

    1981-01-31

    geometric factor. For the low energy FSA detectors, the background counts must be subtracted from the measured (actual) counts before the geometric factor... and high energy) each provide a background measurement. The background counts for the low energy ESA (LE ESA) were subtracted from the other four LE... perpendicular to the spacecraft +X reference spin axis and 189.66° around from the +Z axis (with this angle measured from the +Z axis in the direction

  13. Photon counting, censor corrections, and lifetime imaging for improved detection in two-photon microscopy

    PubMed Central

    Driscoll, Jonathan D.; Shih, Andy Y.; Iyengar, Satish; Field, Jeffrey J.; White, G. Allen; Squier, Jeffrey A.; Cauwenberghs, Gert

    2011-01-01

    We present a high-speed photon counter for use with two-photon microscopy. Counting pulses of photocurrent, as opposed to analog integration, maximizes the signal-to-noise ratio so long as the uncertainty in the count does not exceed the gain-noise of the photodetector. Our system extends this improvement through an estimate of the count that corrects for the censored period after detection of an emission event. The same system can be rapidly reconfigured in software for fluorescence lifetime imaging, which we illustrate by distinguishing between two spectrally similar fluorophores in an in vivo model of microstroke. PMID:21471395

  14. The edge artifact in the point-spread function-based PET reconstruction at different sphere-to-background ratios of radioactivity.

    PubMed

    Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki

    2016-02-01

    The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with inner diameters of 37, 28, 22, 17, 13 and 10 mm. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses were performed by a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ % counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images at low SBR. These edge artifacts were more clearly observed as the SBR increased. The overestimation of the CRC was observed in 13 mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ % counts increased with an increasing SBR. The Δ % counts increased to 91 % in the 10-mm sphere at the SBR of 16. The edge artifacts in the PET images reconstructed using the PSF algorithm increased with an increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of the spheres and could result in overestimation.

  15. Impact of high 131I-activities on quantitative 124I-PET

    NASA Astrophysics Data System (ADS)

    Braad, P. E. N.; Hansen, S. B.; Høilund-Carlsen, P. F.

    2015-07-01

    Peri-therapeutic 124I-PET/CT is of interest as guidance for radioiodine therapy. Unfortunately, image quality is complicated by dead time effects and increased random coincidence rates from high 131I-activities. A series of phantom experiments with clinically relevant 124I/131I-activities was performed on a clinical PET/CT-system. Noise equivalent count rate (NECR) curves and quantitation accuracy were determined from repeated scans performed over several weeks on a decaying NEMA NU-2 1994 cylinder phantom initially filled with 25 MBq 124I and 1250 MBq 131I. Six spherical inserts with diameters 10-37 mm were filled with 124I (0.45 MBq ml-1) and 131I (22 MBq ml-1) and placed inside the background of the NEMA/IEC torso phantom. Contrast recovery, background variability and the accuracy of scatter and attenuation corrections were assessed at sphere-to-background activity ratios of 20, 10 and 5. Results were compared to pure 124I-acquisitions. The quality of 124I-PET images in the presence of high 131I-activities was good and image quantification unaffected except at very high count rates. Quantitation accuracy and contrast recovery were uninfluenced at 131I-activities below 1000 MBq, whereas image noise was slightly increased. The NECR peaked at 550 MBq of 131I, where it was 2.8 times lower than without 131I in the phantom. Quantitative peri-therapeutic 124I-PET is feasible.
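
    For reference, the standard noise-equivalent count rate figure of merit used in PET performance assessments can be sketched as follows; the trues/scatter/randoms rates in the example are made-up numbers, and whether the randoms term is weighted by 1 or 2 depends on the randoms-estimation method assumed.

      # Minimal sketch of NECR = T^2 / (T + S + k*R), with k = 1 (smoothed
      # randoms estimate) or k = 2 (delayed-window randoms); values assumed.
      def necr(trues, scatter, randoms, k=1.0):
          total = trues + scatter + k * randoms
          return trues**2 / total if total > 0 else 0.0

      print(f"NECR: {necr(trues=25_000, scatter=12_000, randoms=30_000):.0f} counts/s")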

  16. A high dynamic range pulse counting detection system for mass spectrometry.

    PubMed

    Collings, Bruce A; Dima, Martian D; Ivosev, Gordana; Zhong, Feng

    2014-01-30

    A high dynamic range pulse counting system has been developed that demonstrates an ability to operate at up to 2e8 counts per second (cps) on a triple quadrupole mass spectrometer. Previous pulse counting detection systems have typically been limited to about 1e7 cps at the upper end of the system's dynamic range. Modifications to the detection electronics and dead time correction algorithm are described in this paper. A high gain transimpedance amplifier is employed that allows a multi-channel electron multiplier to be operated at a significantly lower bias potential than in previous pulse counting systems. The system utilises a high-energy conversion dynode, a multi-channel electron multiplier, a high gain transimpedance amplifier, non-paralysing detection electronics and a modified dead time correction algorithm. Modification of the dead time correction algorithm is necessary due to a characteristic of the pulse counting electronics. A pulse counting detection system with the capability to count at ion arrival rates of up to 2e8 cps is described. This is shown to provide a linear dynamic range of nearly five orders of magnitude for a sample of alprazolam with concentrations ranging from 0.0006970 ng/mL to 3333 ng/mL while monitoring the m/z 309.1 → m/z 205.2 transition. This represents an upward extension of the detector's linear dynamic range of about two orders of magnitude. A new high dynamic range pulse counting system has been developed demonstrating the ability to operate at up to 2e8 cps on a triple quadrupole mass spectrometer. This provides an upward extension of the detector's linear dynamic range by about two orders of magnitude over previous pulse counting systems. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Study of activation of metal samples from LDEF-1 and Spacelab-2

    NASA Technical Reports Server (NTRS)

    Laird, C. E.

    1991-01-01

    The activation of metal samples and other material orbited onboard the Long Duration Exposure Facility (LDEF) and Spacelab-2 were studied. Measurements of the radioactivities of spacecraft materials were made, and corrections for self-absorption and efficiency were calculated. Activation cross sections for specific metal samples were updated while cross sections for other materials were tabulated from the scientific literature. Activation cross sections for 200 MeV neutrons were experimentally determined. Linear absorption coefficients, half lives, branching ratios and other pertinent technical data needed for LDEF sample analyses were tabulated. The status of the sample counting at low background facilities at national laboratories is reported.

  18. Dynamic pulse difference circuit

    DOEpatents

    Erickson, Gerald L.

    1978-01-01

    A digital electronic circuit of especial use for subtracting background activity pulses in gamma spectrometry comprises an up-down counter connected to count up with signal-channel pulses and to count down with background-channel pulses. A detector responsive to the count position of the up-down counter provides a signal when the up-down counter has completed one scaling sequence cycle of counts in the up direction. In an alternate embodiment, a detector responsive to the count position of the up-down counter provides a signal upon overflow of the counter.
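
    A minimal sketch of the up/down counting logic described in this patent abstract is shown below (software only, not the hardware circuit); the modulus and the numbers of signal and background pulses are assumed values for illustration.

      class UpDownCounter:
          """Signal-channel pulses count up, background-channel pulses count down."""

          def __init__(self, modulus=2**16):
              self.count = 0
              self.modulus = modulus            # length of one scaling sequence cycle (assumed)

          def signal_pulse(self):
              self.count += 1
              if self.count == self.modulus:    # one full scaling cycle in the up direction
                  self.count = 0
                  return True                   # analogous to the detector output signal
              return False

          def background_pulse(self):
              self.count -= 1

      counter = UpDownCounter(modulus=100)
      cycles = sum(counter.signal_pulse() for _ in range(450))   # 450 signal-channel pulses
      for _ in range(30):                                        # 30 background-channel pulses
          counter.background_pulse()
      print("completed cycles:", cycles, "residual (background-subtracted) count:", counter.count)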

  19. Probing Jupiter's Radiation Environment with Juno-UVS

    NASA Astrophysics Data System (ADS)

    Kammer, J.; Gladstone, R.; Greathouse, T. K.; Hue, V.; Versteeg, M. H.; Davis, M. W.; Santos-Costa, D.; Becker, H. N.; Bolton, S. J.; Connerney, J. E. P.; Levin, S.

    2017-12-01

    While primarily designed to observe photon emission from the Jovian aurora, Juno's Ultraviolet Spectrograph (Juno-UVS) has also measured background count rates associated with penetrating high-energy radiation. These background counts are distinguishable from photon events, as they are generally spread evenly across the entire array of the Juno-UVS detector, and as the spacecraft spins, they set a baseline count rate higher than the sky background rate. During eight perijove passes, this background radiation signature has varied significantly on both short (spin-modulated) timescales and longer timescales (minutes to hours). We present comparisons of the Juno-UVS data across each of the eight perijove passes, with a focus on the count rate that can be clearly attributed to radiation effects rather than photon events. Once calibrated to determine the relationship between count rate and penetrating high-energy radiation (e.g., using existing GEANT models), these in situ measurements by Juno-UVS will provide additional constraints on radiation belt models close to the planet.

  20. Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grosso, Gabriele; Moon, Hyowon; Lienhard, Benjamin

    Two-dimensional van der Waals materials have emerged as promising platforms for solid-state quantum information processing devices with unusual potential for heterogeneous assembly. Recently, bright and photostable single photon emitters were reported from atomic defects in layered hexagonal boron nitride (hBN), but controlling inhomogeneous spectral distribution and reducing multi-photon emission presented open challenges. Here, we demonstrate that strain control allows spectral tunability of hBN single photon emitters over 6 meV, and material processing sharply improves the single photon purity. We observe high single photon count rates exceeding 7 × 10⁶ counts per second at saturation, after correcting for uncorrelated photon background. Furthermore, these emitters are stable to material transfer to other substrates. High-purity and photostable single photon emission at room temperature, together with spectral tunability and transferability, opens the door to scalable integration of high-quality quantum emitters in photonic quantum technologies.

  1. Tunable and high-purity room temperature single-photon emission from atomic defects in hexagonal boron nitride

    DOE PAGES

    Grosso, Gabriele; Moon, Hyowon; Lienhard, Benjamin; ...

    2017-09-26

    Two-dimensional van der Waals materials have emerged as promising platforms for solid-state quantum information processing devices with unusual potential for heterogeneous assembly. Recently, bright and photostable single photon emitters were reported from atomic defects in layered hexagonal boron nitride (hBN), but controlling inhomogeneous spectral distribution and reducing multi-photon emission presented open challenges. Here, we demonstrate that strain control allows spectral tunability of hBN single photon emitters over 6 meV, and material processing sharply improves the single photon purity. We observe high single photon count rates exceeding 7 × 10⁶ counts per second at saturation, after correcting for uncorrelated photon background. Furthermore, these emitters are stable to material transfer to other substrates. High-purity and photostable single photon emission at room temperature, together with spectral tunability and transferability, opens the door to scalable integration of high-quality quantum emitters in photonic quantum technologies.

  2. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to the limited spatial resolution, the partial volume effect has been a major degrading factor on quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in blood pool, the former followed by MTC, and our newly proposed methods, which perform the MTC method iteratively, where the mean values in all regions are estimated and updated by the MTC-corrected images each time in the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods in both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide superior performance to other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, T; Graham, C L; Sundsmo, T

    This procedure provides instructions for the calibration and use of the Canberra iSolo Low Background Alpha/Beta Counting System (iSolo) that is used for counting air filters and swipe samples. This detector is capable of providing radioisotope identification (e.g., it can discriminate between radon daughters and plutonium). This procedure includes step-by-step instructions for: (1) Performing periodic or daily 'Background' and 'Efficiency QC' checks; (2) Setting-up the iSolo for counting swipes and air filters; (3) Counting swipes and air filters for alpha and beta activity; and (4) Annual calibration.

  4. Significance of Maternal and Cord Blood Nucleated Red Blood Cell Count in Pregnancies Complicated by Preeclampsia

    PubMed Central

    Misha, Mehak; Rai, Lavanya

    2014-01-01

    Objectives. To evaluate the effect of preeclampsia on the cord blood and maternal NRBC count and to correlate NRBC count and neonatal outcome in preeclampsia and control groups. Study Design. This is a prospective case control observational study. Patients and Methods. Maternal and cord blood NRBC counts were studied in 50 preeclamptic women and 50 healthy pregnant women. Using an automated cell counter, the total leucocyte count was obtained, and a peripheral smear was prepared to obtain the NRBC count. Corrected WBC count and NRBC count/100 leucocytes in maternal venous blood and in cord blood were compared between the 2 groups. Results. No significant differences were found in corrected WBC count in maternal and cord blood in cases and controls. Significant differences were found in mean cord blood NRBC count in preeclampsia and control groups (40.0 ± 85.1 and 5.9 ± 6.3, P = 0.006). The mean maternal NRBC count in the two groups was 2.4 ± 9.0 and 0.8 ± 1.5, respectively (P = 0.214). A cord blood NRBC count cut-off value ≤13 could rule out adverse neonatal outcome with a sensitivity of 63% and specificity of 89%. Conclusion. Cord blood NRBC are significantly raised in preeclampsia. Neonates with elevated cord blood NRBC counts are more likely to have IUGR, low birth weight, neonatal ICU admission, respiratory distress syndrome, and assisted ventilation. Below a count of 13/100 leucocytes, an adverse neonatal outcome is considerably less likely. PMID:24734183

  5. Mapping the acquisition of the number word sequence in the first year of school

    NASA Astrophysics Data System (ADS)

    Gould, Peter

    2017-03-01

    Learning to count and to produce the correct sequence of number words in English is not a simple process. In NSW government schools taking part in Early Action for Success, over 800 students in each of the first 3 years of school were assessed every 5 weeks over the school year to determine the highest correct oral count they could produce. Rather than displaying a steady increase in the accurate sequence of the number words produced, the kindergarten data reported here identified clear, substantial hurdles in the acquisition of the counting sequence. The large-scale, longitudinal data also provided evidence of learning to count through the teens being facilitated by the semi-regular structure of the number words in English. Instead of occurring as hurdles to starting the next counting sequence, number words corresponding to some multiples of ten (10, 20 and 100) acted as if they were rest points. These rest points appear to be artefacts of how the counting sequence is acquired.

  6. High-rate dead-time corrections in a general purpose digital pulse processing system

    PubMed Central

    Abbene, Leonardo; Gerardi, Gaetano

    2015-01-01

    Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work, the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate, high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula for a series arrangement of two types of dead-time is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error on ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
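
    For reference, the classic throughput relations for the two standard dead-time models (nonparalyzable and paralyzable) can be sketched as follows; the dead-time value is an assumption, and this is textbook material rather than the DPP system's own correction procedure.

      import numpy as np

      tau = 1.0e-6                          # dead time per event [s] (assumed)
      icr = np.logspace(3, 7, 200)          # input counting rates [counts/s]

      # Recorded (output) rates for the two classic models.
      ocr_nonparalyzable = icr / (1.0 + icr * tau)
      ocr_paralyzable = icr * np.exp(-icr * tau)

      peak = icr[np.argmax(ocr_paralyzable)]
      print(f"paralyzable throughput peaks near {peak:.3g} counts/s (expected ~1/tau = {1/tau:.3g})")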

  7. Characterization of spectrometric photon-counting X-ray detectors at different pitches

    NASA Astrophysics Data System (ADS)

    Jurdit, M.; Brambilla, A.; Moulin, V.; Ouvrier-Buffet, P.; Radisson, P.; Verger, L.

    2017-09-01

    There is growing interest in energy-sensitive photon-counting detectors for high-flux X-ray imaging. Their potential applications include medical imaging, non-destructive testing and security. Innovative detectors of this type will need to count individual photons and sort them into selected energy bins, at several million counts per second and per mm². Cd(Zn)Te detector-grade materials with a thickness of 1.5 to 3 mm and pitches from 800 μm down to 200 μm were assembled onto interposer boards. These devices were tested using in-house-developed full-digital fast readout electronics. The 16-channel demonstrators, with 256 energy bins, were experimentally characterized by determining spectral resolution, count rate, and charge sharing, which becomes challenging at low pitch. Charge sharing correction was found to efficiently correct X-ray spectra up to 40 × 10⁶ incident photons s⁻¹ mm⁻².

  8. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
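
    A minimal sketch of the multiplicative count corrections commonly applied to CRP data (barometric pressure, atmospheric water vapour, and incoming neutron intensity) is shown below; the coefficient values and reference conditions are typical literature values and assumptions, not the CosmOz network's calibrated parameters.

      import numpy as np

      def correct_neutron_counts(n_raw, pressure, ref_pressure, vapour, ref_vapour,
                                 incoming, ref_incoming, beta=0.0077, a=0.0054):
          """Return corrected counts; beta [1/hPa] and a [m^3/g] are typical values (assumed)."""
          f_pressure = np.exp(beta * (pressure - ref_pressure))   # barometric correction
          f_vapour = 1.0 + a * (vapour - ref_vapour)              # absolute humidity [g/m^3]
          f_incoming = ref_incoming / incoming                    # incoming-intensity scaling
          return n_raw * f_pressure * f_vapour * f_incoming

      # Example with made-up station and reference values.
      n_corr = correct_neutron_counts(n_raw=1450.0, pressure=1003.0, ref_pressure=1013.25,
                                      vapour=12.0, ref_vapour=0.0,
                                      incoming=152.0, ref_incoming=150.0)
      print(f"corrected count rate: {n_corr:.1f}")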

  9. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in bio-surveillance systems. PMID:19149886
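
    A minimal sketch of a scalar recursive-least-squares predictor applied to a synthetic daily count series is given below; it uses one-step-ahead prediction for simplicity (the paper uses seven-day-ahead forecasts and a day-of-week transformation), and the model order, forgetting factor, and data are assumptions for illustration only.

      import numpy as np

      def rls_one_step_predictions(y, order=7, lam=0.98, delta=100.0):
          """Predict y[t] from the previous `order` values with an RLS filter."""
          w = np.zeros(order)
          P = np.eye(order) * delta               # large initial inverse-correlation matrix
          preds = np.full(len(y), np.nan)
          for t in range(order, len(y)):
              x = y[t - order:t][::-1]            # regressor: the last `order` counts
              preds[t] = w @ x
              err = y[t] - preds[t]
              k = P @ x / (lam + x @ P @ x)       # gain vector
              w = w + k * err                     # coefficient update
              P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update with forgetting
          return preds

      rng = np.random.default_rng(3)
      days = np.arange(400)
      counts = rng.poisson(50 + 10 * np.sin(2 * np.pi * days / 7))  # weekly pattern (synthetic)
      pred = rls_one_step_predictions(counts.astype(float))
      residuals = counts - pred
      print("residual std over last 100 days:", round(float(np.nanstd(residuals[-100:])), 2))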

  10. The JET neutron time-of-flight spectrometer

    NASA Astrophysics Data System (ADS)

    Elevant, T.; Aronsson, D.; van Belle, P.; Grosshoeg, G.; Hoek, M.; Olsson, M.; Sadler, G.

    1991-08-01

    An instrument for measuring neutron energy spectra over the interval 1 to 20 MeV has been developed and tested. It is based on time-of-flight measurements between correlated events in two spatially separated sets of plastic scintillators. This instrument has been installed at the Joint European Torus (JET). We describe here the required operating conditions, performance tests and results of three years of operation during which neutron energy spectra in the 2-3 MeV range from D(d,n)³He reactions in JET were studied. Some technical details are given and the results from Monte Carlo and analytical model calculations of the spectrometer energy resolution and response function are presented. The efficiency of the system is ≈1 × 10⁻² cm², counted at the position of the first detector. Together with the geometry conditions at JET this yields 6 × 10² counts per 10¹⁵ neutrons emitted. The energy resolution is in the interval from 125 to 133 keV (FWHM) depending on conditions and is known to an accuracy of ±5 keV. Correction for the inevitable random background is dealt with in detail and a reduction procedure valid for fast variations in neutron count-rates is provided. Plasma ion temperatures deduced from the neutron spectra agree within statistical limits with the results from other diagnostic techniques in use at JET. Stable behaviour up to useful count-rates of 3 × 10³ counts/s has been obtained, making possible the study of neutron spectra on the short time-scales typical of fusion plasmas.

  11. YALINA-booster subcritical assembly pulsed-neutron experiments: detector dead time and spatial corrections.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Gohar, Y.; Nuclear Engineering Division

    In almost every detector counting system, a minimal dead time is required to record two successive events as two separate pulses. Due to the random nature of neutron interactions in the subcritical assembly, there is always some probability that a true neutron event will not be recorded because it occurs too close to the preceding event. These losses may become rather severe for counting systems with high counting rates, and should be corrected before any utilization of the experimental data. This report examines the dead time effects for the pulsed neutron experiments of the YALINA-Booster subcritical assembly. The nonparalyzable model is utilized to correct the experimental data for dead time. Overall, the reactivity values are increased by 0.19$ and 0.32$ after the spatial corrections for the YALINA-Booster 36% and 21% configurations, respectively. The differences of the reactivities obtained with He-3 long or short detectors at the same detector channel diminish after the dead time corrections of the experimental data for the 36% YALINA-Booster configuration. In addition, better agreements between reactivities obtained from different experimental data sets are also observed after the dead time corrections for the 21% YALINA-Booster configuration.

  12. Background Conditions for the October 29, 2003 Solar Flare by the AVS-F Apparatus Data

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Lyapin, A. R.; Troitskaya, E. V.

    The background model for the AVS-F apparatus onboard the CORONAS-F satellite for the October 29, 2003 X10-class solar flare is discussed in the present work. This background model was developed for the AVS-F count rates in the low- and high-energy spectral ranges, both in individual channels and summed. Count rates were approximated by high-order polynomials, taking into account the mean count rate in the geomagnetic equatorial region at different parts of the orbits and the Kp-index averaged over 5 bins in the time interval from -24 to -12 hours before the time of geomagnetic equator passing. The observed average count rates at the equator, in the region of geomagnetic latitude ±5°, and the estimated minimum count rate values agree within statistical errors for all orbit parts selected for background modeling. This model will be used to refine the estimated energies of the spectral features registered during the solar flare and for detailed analysis of their temporal profiles, both in the corresponding energy bands and in the summed energy range.
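
    A minimal sketch of fitting a high-order polynomial background to a count-rate series and subtracting it, in the spirit of the approach described above, is shown below; the synthetic data, the quiet-interval selection, and the polynomial order are assumptions, not the AVS-F background model itself.

      import numpy as np

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 1.0, 500)                       # orbit phase, arbitrary units
      background = 200 + 80 * t - 120 * t**2 + 60 * t**3   # slowly varying background (synthetic)
      flare = 150 * np.exp(-0.5 * ((t - 0.6) / 0.01)**2)   # short transient "event" (synthetic)
      counts = rng.poisson(background + flare)

      # Fit the background only on intervals away from the event (assumed known here).
      quiet = (t < 0.55) | (t > 0.65)
      coeffs = np.polyfit(t[quiet], counts[quiet], deg=6)
      model = np.polyval(coeffs, t)

      net = counts - model                                 # background-subtracted counts
      print("peak net counts near the event:", int(net[(t > 0.55) & (t < 0.65)].max()))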

  13. Contribution to the G 0 violation of parity experience: calculation and simulation of radiative corrections and the background noise study; Contribution a l'experience G0 de violation de la parite : calcul et simulation des corrections radiatives et etude du bruit de fond (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guler, Hayg

    2003-12-17

    In the framework of quantum chromodynamics, the nucleon is made of three valence quarks surrounded by a sea of gluons and quark-antiquark pairs. Only the lightest quarks (u, d and s) contribute significantly to the nucleon properties. In G0 we use the parity-violating property of the weak interaction in order to determine separately the contributions of the three types of quarks to the nucleon form factors. The experiment, which takes place at the Thomas Jefferson laboratory (USA), aims at measuring the parity violation asymmetry in electron-proton scattering. Several measurements at different squared momentum transfers of the exchanged photon and at different kinematics (forward angle, when the proton is detected, and backward angle, when the electron is detected) will permit the separate determination of the strange quark electric and magnetic contributions to the nucleon form factors. To extract an asymmetry with small errors, it is necessary to correct for all the beam parameters and to have high enough counting rates in the detectors. A special electronics system was developed to process the information coming from the 16 scintillator pairs of each of the 8 sectors of the G0 spectrometer. A complete calculation of the radiative corrections has been done, and Monte Carlo simulations with the GEANT program have been used to determine the shape of the experimental spectra, including the inelastic background. This work will allow a comparison between experimental data and theoretical calculations based on the Standard Model.

  14. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  15. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. The background correction simulation experiment indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR), exceeding polynomial fitting, Lorentz fitting and the model-free method after background correction. These background correction methods all acquire larger SBR values than that obtained before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method still acquires a large SBR value, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods yield improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776, whereas the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
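
    A minimal sketch of a spline-interpolated baseline estimate for a synthetic LIBS-like spectrum is given below; the anchor points are chosen naively as per-window minima, which is an assumption for illustration and not the selection rule used in the study.

      import numpy as np
      from scipy.interpolate import CubicSpline

      rng = np.random.default_rng(5)
      wl = np.linspace(300.0, 800.0, 2000)                         # wavelength [nm]
      baseline_true = 50 + 0.05 * (wl - 300) + 20 * np.sin(wl / 120.0)
      peaks = 300 * np.exp(-0.5 * ((wl - 520) / 0.6)**2) \
            + 180 * np.exp(-0.5 * ((wl - 610) / 0.8)**2)
      spectrum = baseline_true + peaks + rng.normal(0, 3, wl.size)

      # Pick one baseline anchor per window (the local minimum), then interpolate
      # a cubic spline through the anchors and subtract it from the spectrum.
      win = 50
      seg = spectrum[: (spectrum.size // win) * win].reshape(-1, win)
      idx = seg.argmin(axis=1) + np.arange(seg.shape[0]) * win
      baseline_est = CubicSpline(wl[idx], spectrum[idx])(wl)
      corrected = spectrum - baseline_est

      rms_err = np.sqrt(np.mean((baseline_est - baseline_true) ** 2))
      print(f"baseline RMS error: {rms_err:.2f} counts; peak after correction: {corrected.max():.1f}")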

  16. Micro-Pulse Lidar Signals: Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Welton, Ellsworth J.; Campbell, James R.; Starr, David OC. (Technical Monitor)

    2002-01-01

    Micro-pulse lidar (MPL) systems are small, autonomous, eye-safe lidars used for continuous observations of the vertical distribution of cloud and aerosol layers. Since the construction of the first MPL in 1993, procedures have been developed to correct for various instrument effects present in MPL signals. The primary instrument effects include afterpulse (laser-detector cross-talk) and overlap (poor near-range, less than 6 km, focusing). The accurate correction of both afterpulse and overlap effects is required to study both clouds and aerosols. Furthermore, the outgoing energy of the laser pulses and the statistical uncertainty of the MPL detector must also be correctly determined in order to assess the accuracy of MPL observations. The uncertainties associated with the afterpulse, overlap, pulse energy, detector noise, and all remaining quantities affecting measured MPL signals, are determined in this study. The uncertainties are propagated through the entire MPL correction process to give a net uncertainty on the final corrected MPL signal. The results show that in the near range, the overlap uncertainty dominates. At altitudes above the overlap region, the dominant source of uncertainty is caused by uncertainty in the pulse energy. However, if the laser energy is low, then during mid-day, high solar background levels can significantly reduce the signal-to-noise of the detector. In such a case, the statistical uncertainty of the detector count rate becomes dominant at altitudes above the overlap region.

  17. The effect of chewing-gum on dose rate of salivary gland in differentiated thyroid carcinoma patients treated with radioiodine.

    PubMed

    Haghighatafshar, Mahdi; Nowshad, Reza; Etemadi, Zahra; Ghaedian, Tahereh

    2018-04-26

    Although different methods have been suggested for reducing salivary gland radiation after radioiodine administration, an effective preventive or therapeutic measure is still debated. To the best of our knowledge, this is the second study that aimed to evaluate the effect of chewing-gum as a sialagogue on the radioiodine content of the salivary glands and on radioiodine-induced symptoms of salivary gland dysfunction. Twenty-two patients who were referred for radioiodine therapy were randomized into chewing-gum (group A) and control (group B) groups. Anterior and posterior planar images including both head and neck were obtained 2, 6, 12, 24 and 48 hours after the administration of radioiodine in all patients, and round regions of interest (ROI) were drawn for both left and right parotid glands with a rectangular ROI in the region of the cerebrum as the background. All patients were followed once, 6 months after radioiodine administration, via a phone call for subjective evaluation of symptoms related to salivary gland damage. There was no significant difference between the two groups regarding the mean age, gender and initial iodine activity. The geometric mean of the background-corrected count per administered dose and acquisition time was calculated for the bilateral parotid glands. This normalized parotid count showed a significant reduction in net parotid count in both groups during the first 48 hours after the radioiodine administration. However, no significant difference was found between the groups according to the amount and pattern of dose reduction in this time period. This study revealed that chewing-gum had no significant effect on the radioiodine content of parotid glands during the first 48 hours after radioiodine administration. Also, no significant difference was found in the incidence of relevant symptoms after 6 months comparing both groups.
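
    A minimal sketch of the background-corrected, conjugate-view geometric-mean count normalization mentioned above is shown below; the ROI counts, background counts, acquisition time, and administered activity are made-up numbers, and the exact normalization used in the study may differ.

      import math

      def normalized_parotid_count(ant_counts, post_counts, ant_bkg, post_bkg,
                                   acq_time_s, administered_mbq):
          """Background-corrected conjugate-view geometric mean per second per MBq."""
          ant_net = max(ant_counts - ant_bkg, 0.0)
          post_net = max(post_counts - post_bkg, 0.0)
          geo_mean = math.sqrt(ant_net * post_net)           # anterior/posterior geometric mean
          return geo_mean / (acq_time_s * administered_mbq)

      value = normalized_parotid_count(ant_counts=18500, post_counts=16200,
                                       ant_bkg=5200, post_bkg=4800,
                                       acq_time_s=300, administered_mbq=3700)
      print(f"normalized count: {value:.4f} counts per second per MBq")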

  18. Measuring Patient Adherence to Malaria Treatment: A Comparison of Results from Self-Report and a Customised Electronic Monitoring Device

    PubMed Central

    Bruxvoort, Katia; Festo, Charles; Cairns, Matthew; Kalolella, Admirabilis; Mayaya, Frank; Kachur, S. Patrick; Schellenberg, David; Goodman, Catherine

    2015-01-01

    Background Self-report is the most common and feasible method for assessing patient adherence to medication, but can be prone to recall bias and social desirability bias. Most studies assessing adherence to artemisinin-based combination therapies (ACTs) have relied on self-report. In this study, we use a novel customised electronic monitoring device—termed smart blister packs—to examine the validity of self-reported adherence to artemether-lumefantrine (AL) in southern Tanzania. Methods Smart blister packs were designed to look identical to locally available AL blister packs and to record the date and time each tablet was removed from packaging. Patients obtaining AL at randomly selected health facilities and drug stores were followed up at home three days later and interviewed about each dose of AL taken. Blister packs were requested for pill count and extraction of smart blister pack data. Results Data on adherence from both self-report verified by pill count and smart blister packs were available for 696 of 1,204 patients. There was no difference between methods in the proportion of patients assessed to have completed treatment (64% and 67%, respectively). However, the percentage taking the correct number of pills for each dose at the correct times (timely completion) was higher by self-report than smart blister packs (37% vs. 24%; p<0.0001). By smart blister packs, 64% of patients completing treatment did not take the correct number of pills per dose or did not take each dose at the correct time interval. Conclusion Smart blister packs resulted in lower estimates of timely completion of AL and may be less prone to recall and social desirability bias. They may be useful when data on patterns of adherence are desirable to evaluate treatment outcomes. Improved methods of collecting self-reported data are needed to minimise bias and maximise comparability between studies. PMID:26214848

  19. Application of chlorine-36 technique in determining the age of modern groundwater in the Al-Zulfi province, Saudi Arabia.

    PubMed

    Challan, Mohsen B

    2016-06-01

    The present study aims to estimate the residence time of groundwater based on bomb-produced (36)Cl. (36)Cl/Cl ratios in the water samples are determined by inductively coupled plasma mass spectrometry and liquid scintillation counting. (36)Cl/Cl ratios in the groundwater were estimated to be 1.0-2.0 × 10(-12). Estimates of residence time were obtained by comparing the measured bomb-derived (36)Cl concentrations in groundwater with the background reference. Dating based on a (36)Cl bomb pulse may be more reliable and sensitive for groundwater recharged before 1975, back as far as the mid-1950s. The above (36)Cl background concentration was deduced by determining the background-corrected Dye-3 ice core data from the frozen Arctic data, according to the estimated total (36)Cl resources. The residence time of 7.81 × 10(4) y is obtained from extrapolated groundwater flow velocity. (36)Cl concentration in groundwater does not reflect the input of bomb pulse (36)Cl, and it belongs to the era before 1950.

  20. Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates

    NASA Astrophysics Data System (ADS)

    Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael

    2018-03-01

    Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.
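
    For context, the classic single-stop pile-up correction used in conventional TCSPC (inverting the Poisson probability of detecting at least one photon per excitation cycle) can be sketched as follows; this is textbook material and not the detector-pulse pile-up model proposed in this record, and the rates in the example are assumed.

      import math

      def mean_photons_per_pulse(detected_counts, excitation_pulses):
          """Invert P(detect >= 1 photon per cycle) = 1 - exp(-lambda) for Poisson lambda."""
          p = detected_counts / excitation_pulses
          if not 0.0 <= p < 1.0:
              raise ValueError("detection probability must be in [0, 1)")
          return -math.log(1.0 - p)

      # Example: 40 MHz excitation for 1 s, 8e6 detected photons (assumed numbers).
      lam = mean_photons_per_pulse(8.0e6, 40.0e6)
      corrected_rate = lam * 40.0e6      # photons/s that actually reached the detector
      print(f"measured 8.0e6 cps -> pile-up-corrected {corrected_rate:.3g} cps")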

  1. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We compare the detection performance of this algorithm to the W2 method recently implemented at CDC. The modified RLS method gives consistently better sensitivity at multiple background alert rates, and we recommend that it should be considered for routine application in bio-surveillance systems.

  2. Manners of Speaking: Linguistic Capital and the Rhetoric of Correctness in Late-Nineteenth-Century America

    ERIC Educational Resources Information Center

    Herring, William Rodney, Jr.

    2009-01-01

    A number of arguments appeared in the late-nineteenth-century United States about "correctness" in language, arguments for and against enforcing a standard of correctness and arguments about what should count as correct in language. Insofar as knowledge about and facility with "correct" linguistic usage could affect one's standing in the social…

  3. The effect of blood cell count on coronary flow in patients with coronary slow flow phenomenon

    PubMed Central

    Soylu, Korhan; Gulel, Okan; Yucel, Huriye; Yuksel, Serkan; Aksan, Gokhan; Soylu, Ayşegül İdil; Demircan, Sabri; Yılmaz, Özcan; Sahin, Mahmut

    2014-01-01

    Background and Objective: The coronary slow flow phenomenon (CSFP) is a coronary artery disease with a benign course, but its pathological mechanisms are not yet fully understood. The purpose of this controlled study was to investigate the cellular content of blood in patients diagnosed with CSFP and its relationship with coronary flow rates. Methods: Selective coronary angiographies of 3368 patients were analyzed to assess Thrombolysis in Myocardial Infarction (TIMI) frame count (TFC) values. Seventy-eight of them had CSFP, and their demographic and laboratory findings were compared with 61 patients with normal coronary flow. Results: Patients’ demographic characteristics were similar in both groups. Mean corrected TFC (cTFC) values were significantly elevated in CSFP patients (p<0.001). Furthermore, hematocrit and hemoglobin values, and eosinophil and basophil counts of the CSFP patients were significantly elevated compared to the values obtained in the control group (p=0.005, p=0.047, p=0.001 and p=0.002, respectively). The increase observed in hematocrit and eosinophil levels showed significant correlations with increased TFC values (r=0.288 and r=0.217, respectively). Conclusion: Significant changes have been observed in the cellular composition of blood in patients diagnosed with CSFP as compared to patients with normal coronary blood flow. The increases in hematocrit levels and in the eosinophil and basophil counts may have direct or indirect effects on the rate of coronary blood flow. PMID:25225502

  4. Self-Monitoring and Verbal Feedback to Reduce Stereotypic Body Rocking in a Congenitally Blind Adult.

    ERIC Educational Resources Information Center

    McAdam, David B.; And Others

    1993-01-01

    A self-management approach (utilizing self-counting of behaviors, corrective verbal feedback, and contingent verbal praise) was effectively used to reduce stereotypical body rocking in a congenitally blind young adult. Positive results were maintained, with replacement of overt counting with covert counting and immediate with delayed feedback as…

  5. Combinatorial Tasks and Outcome Listing: Examining Productive Listing among Undergraduate Students

    ERIC Educational Resources Information Center

    Lockwood, Elise; Gibson, Bryan R.

    2016-01-01

    Although counting problems are easy to state and provide rich, accessible problem-solving situations, there is much evidence that students struggle with solving counting problems correctly. With combinatorics (and the study of counting problems) becoming increasingly prevalent in K-12 and undergraduate curricula, there is a need for researchers to…

  6. Application of the backward extrapolation method to pulsed neutron sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamo, Alberto; Gohar, Yousry

    We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain from the measured detector counts both the dead-time value and the real detector counts.
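
    For context, the "simple analytical formulas" that apply when the count rate is constant are the standard non-paralyzable and paralyzable dead-time corrections. The sketch below (illustrative dead-time value; not the backward extrapolation method itself) shows both.

```python
import numpy as np
from scipy.optimize import brentq

def true_rate_nonparalyzable(measured_rate, tau):
    """Non-paralyzable dead time: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)."""
    return measured_rate / (1.0 - measured_rate * tau)

def true_rate_paralyzable(measured_rate, tau):
    """Paralyzable dead time: m = n * exp(-n*tau); solve numerically for n."""
    f = lambda n: n * np.exp(-n * tau) - measured_rate
    # the physical root lies between the measured rate and 1/tau
    return brentq(f, measured_rate, 1.0 / tau)

tau = 2e-6                                   # 2 microsecond dead time (illustrative)
print(true_rate_nonparalyzable(5e4, tau))    # ~5.6e4 counts/s
print(true_rate_paralyzable(5e4, tau))       # ~5.6e4 counts/s
```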

  7. Application of the backward extrapolation method to pulsed neutron sources

    DOE PAGES

    Talamo, Alberto; Gohar, Yousry

    2017-09-23

    We report that particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. from a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method makes it possible to obtain from the measured detector counts both the dead-time value and the real detector counts.

  8. Effect of Non-Alignment/Alignment of Attenuation Map Without/With Emission Motion Correction in Cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Dey, Joyoni; Segars, W. Paul; Pretorius, P. Hendrik; King, Michael A.

    2015-08-01

    Purpose: We investigate the differences, without and with respiratory motion correction, in apparent imaging-agent localization in reconstructed emission images when the attenuation maps used for attenuation correction (from CT) are misaligned with the patient anatomy during emission imaging due to differences in respiratory state. Methods: We investigated the use of attenuation maps acquired at different states of a 2 cm amplitude respiratory cycle (end-expiration, end-inspiration, the center map, the average transmission map, and a large breath-hold beyond the range of respiration during emission imaging) to correct for attenuation in MLEM reconstruction for several anatomical variants of the NCAT phantom, with and without non-rigid motion between the heart and sub-diaphragmatic regions (such as the liver and kidneys). We tested these cases with and without emission motion correction and with and without attenuation map alignment. Results: For the NCAT default male anatomy, the false count reduction due to breathing was largely removed by emission motion correction in the large majority of cases. Exceptions (for the default male) were the cases using the large-breath-hold end-inspiration map (TI_EXT), the end-expiration (TE) map, and, to a smaller extent, the end-inspiration map (TI). However, rigidly moving the attenuation maps to align the heart region reduced the remaining count-reduction artifacts. For the female anatomy, count reduction remained after motion correction with rigid map alignment because of breast soft-tissue misalignment. Quantitatively, after the transmission (rigid) alignment correction, the polar-map 17-segment RMS error with respect to the reference (motion-less case) was reduced by 46.5% on average for the extreme breath-hold case. The reductions were 40.8% for the end-expiration map and 31.9% for the end-inspiration map on average, comparable to the semi-ideal case in which each state uses its own attenuation map for correction. Conclusions: First, even rigid emission motion correction that aligns the heart region to the attenuation map helps, on average, to reduce the count-reduction artifacts. Second, within the limits of the study (e.g. rigid correction), when there is lung tissue inferior to the heart, as with the NCAT phantom employed here, end-expiration (TE) maps might best be avoided because they may create more artifacts than end-inspiration (TI) maps.

  9. Influence of the partial volume correction method on 18F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM

    PubMed Central

    Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian

    2014-01-01

    Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For OSEM, image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-fluorodeoxyglucose dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation GTM PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in CMRGlc estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters. PMID:24052021

  10. Pixel-based CTE Correction of ACS/WFC: New Constraints from Short Darks

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; ACS Team

    2012-01-01

    The original Anderson & Bedin (2010) pixel-based correction for imperfect charge-transfer efficiency (CTE) in HST's ACS was based on a study of Warm Pixels (WPs) in a series of 1000 s dark exposures. WPs with more than about 25 electrons were sufficiently isolated in these images that we could examine and model their trails. However, WPs with fewer electrons than this were more plentiful and suffered from significant crowding. To remedy this, we have taken a series of shorter dark exposures: 30 s, 100 s, and 339 s. These supplemental exposures have two benefits. The first is that in the shorter exposures, 10-electron WPs are more sparse and their trails can be measured in isolation. The second benefit is that we can now get a handle on the absolute CTE losses, since the long-dark exposures can be used to accurately predict how many counts the WPs in the short-dark exposures should see. Any missing counts are a reflection of imperfect CTE. This new absolute handle on the CTE losses allows us to probe CTE even for very low charge packets. We find that CTE losses reach a nearly pathological level for charge packets with fewer than 20 electrons. Most ACS observations have backgrounds that are higher than this, so this does not have a large impact on science. Nevertheless, understanding CTE losses at all charge-packet levels is still important, as biases and darks often have low backgrounds. We note that these WP-based approaches to understanding CTE losses could be used in laboratory studies as well. At present, many laboratory studies focus on Iron-55 sources, which all produce 1620 electrons. Astronomical sources of interest are often fainter than this. By varying the dark exposure time, a wide diversity of WP intensities can be generated and cross-checked.

  11. Initial Characterization of Unequal-Length, Low-Background Proportional Counters for Absolute Gas-Counting Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Emily K.; Aalseth, Craig E.; Bonicalzi, Ricco

    Characterization of two sets of custom unequal-length proportional counters is underway at Pacific Northwest National Laboratory (PNNL). These detectors will be used in measurements to determine the absolute activity concentration of gaseous radionuclides (e.g., 37Ar). A set of three detectors has been fabricated based on previous PNNL ultra-low-background proportional counter (ULBPC) designs and now operates in PNNL’s shallow underground counting laboratory. A second set of four counters has also been fabricated using clean assembly of OFHC copper components for use in an above-ground counting laboratory. Characterization of both sets of detectors is underway with measurements of background rates, gas gain, energy resolution, and shielding considerations. These results will be presented along with uncertainty estimates of future absolute gas counting measurements.

  12. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    NASA Astrophysics Data System (ADS)

    Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C]UCB-J dataset. Direct reconstruction of dynamic brain PET with event-by-event motion correction is achievable and dramatically more robust to noise in VT images than the indirect method.

  13. NATALIE: A 32 detector integrated acquisition system to characterize laser produced energetic particles with nuclear techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tarisien, M.; Plaisir, C.; Gobet, F.

    2011-02-15

    We present a stand-alone system to characterize the high-energy particles emitted in the interaction of ultrahigh intensity laser pulses with matter. According to the laser and target characteristics, electrons or protons are produced with energies higher than a few mega-electron-volts. Selected material samples can, therefore, be activated via nuclear reactions. A multidetector, named NATALIE, has been developed to count the β+ activity of these irradiated samples. The coincidence technique used, designed in an integrated system, results in very low background in the data, which is required for low activity measurements. It therefore allows good precision in the nuclear activation yields of the produced radionuclides. The system allows high counting rates and online correction of the dead time. It also provides, online, a quick control of the experiment. Geant4 simulations are used at different steps of the data analysis to deduce, from the measured activities, the energy and angular distributions of the laser-induced particle beams. Two applications are presented to illustrate the characterization of electrons and protons.

  14. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

    The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear-quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
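
    A minimal sketch of the two ingredients named in the abstract, the linear-quadratic mean-variance relation and per-sequence bias correction factors, is given below. The counts, the moment estimate of the overdispersion, and the bias factors are all illustrative assumptions, not the GAMLSS fit or the empirical factors from the paper.

```python
import numpy as np

# Replicate counts for three small RNAs across four runs (illustrative numbers)
counts = np.array([[120,  180,  95,  160],    # miRNA A
                   [ 15,   22,  18,   12],    # miRNA B
                   [980, 1250, 890, 1100]])   # miRNA C

mu = counts.mean(axis=1)
var = counts.var(axis=1, ddof=1)

# Linear-quadratic mean-variance relation: var ~ mu + phi * mu**2
phi = np.maximum(var - mu, 0).sum() / (mu ** 2).sum()   # crude moment estimate

# Applying empirical per-sequence ligase-bias correction factors:
bias_factors = np.array([0.6, 1.8, 1.1])    # hypothetical calibration values
corrected = counts / bias_factors[:, None]
print(f"overdispersion phi ~ {phi:.3f}")
print(corrected.mean(axis=1))
```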

  15. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected based on the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to the initial-state radiation is corrected by deconvolution. Finally, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, the BHLUMI Bhabha event generator and the Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in the bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with a precision of 1.4 per mille in the upper 5% of the energy, and 2.7 per mille in the range between 80 and 90% of the nominal center-of-mass energy.

  16. Material screening with HPGe counting station for PandaX experiment

    NASA Astrophysics Data System (ADS)

    Wang, X.; Chen, X.; Fu, C.; Ji, X.; Liu, X.; Mao, Y.; Wang, H.; Wang, S.; Xie, P.; Zhang, T.

    2016-12-01

    A gamma counting station based on a high-purity germanium (HPGe) detector was set up for the material screening of the PandaX dark matter experiments in the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min within the energy range of 20 to 2700 keV is achieved thanks to the well-designed passive shield. The sensitivity of the HPGe detector reaches the mBq/kg level for isotopes such as K, U, and Th, and is even better for Co and Cs, as a result of the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark-matter experiment are presented. The upgrade plan for the counting station is also discussed.

  17. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klumpp, John

    We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage of this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution.
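
    A much-simplified sketch of the "learning mode / detection mode" idea follows: accumulate an empirical background count distribution, then flag new measurements whose counts would be improbable under that distribution. The class, thresholds, and synthetic data are illustrative; the multi-channel, multi-detector Bayesian machinery described in the abstract is not reproduced.

```python
import numpy as np

class EmpiricalBackgroundDetector:
    """Learn an empirical background count distribution, then flag outliers."""

    def __init__(self, false_alarm_prob=1e-3):
        self.false_alarm_prob = false_alarm_prob
        self.background = np.array([], dtype=float)

    def learn(self, counts):
        """Learning mode: accumulate background measurements."""
        self.background = np.concatenate([self.background, np.asarray(counts, float)])

    def detect(self, new_count):
        """Detection mode: empirical p-value of seeing >= new_count under background."""
        p_value = (self.background >= new_count).mean()
        return p_value < self.false_alarm_prob, p_value

rng = np.random.default_rng(1)
det = EmpiricalBackgroundDetector()
det.learn(rng.poisson(40, size=100_000))     # background ~40 counts per interval
print(det.detect(75))                        # a 75-count interval is flagged
```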

  18. Dynamic time-correlated single-photon counting laser ranging

    NASA Astrophysics Data System (ADS)

    Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang

    2018-03-01

    We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improves the counting rate to more than 4×10⁷ cps, which makes distance measurement possible even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot easily extract the signal when fast-moving targets are submerged in a strong background. We propose a dynamic TCSPC method for measuring fast-moving targets by varying the coincidence window in real time. In the experiment, we show that targets with a velocity of 5 km/s can be detected with this method, with an echo rate of 20% and background counts of more than 1.2×10⁷ cps.

  19. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. A MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measured 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and their poor robustness against emission-transmission inconsistency. A novel random correction method is proposed that succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidence radial profile. The removal of the spurious extrahepatic counts by TOF reconstruction was explained theoretically by better robustness against emission-transmission inconsistency. A novel random correction method was proposed to overcome the issue in non-TOF PET. Further studies are needed to assess the robustness of the novel random correction method.

  20. ΛCDM Cosmology for Astronomers

    NASA Astrophysics Data System (ADS)

    Condon, J. J.; Matthews, A. M.

    2018-07-01

    The homogeneous, isotropic, and flat ΛCDM universe favored by observations of the cosmic microwave background can be described using only Euclidean geometry, locally correct Newtonian mechanics, and the basic postulates of special and general relativity. We present simple derivations of the most useful equations connecting astronomical observables (redshift, flux density, angular diameter, brightness, local space density, ...) with the corresponding intrinsic properties of distant sources (lookback time, distance, spectral luminosity, linear size, specific intensity, source counts, ...). We also present an analytic equation for lookback time that is accurate within 0.1% for all redshifts z. The exact equation for comoving distance is an elliptic integral that must be evaluated numerically, but we found a simple approximation with errors <0.2% for all redshifts up to z ≈ 50.
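
    For reference, the quantities the abstract approximates analytically can be evaluated by direct numerical integration in a flat ΛCDM model. The sketch below (illustrative parameter values) computes the comoving distance and lookback time integrals.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.8            # km/s/Mpc (illustrative)
Om, Ol = 0.31, 0.69  # flat Lambda-CDM densities (illustrative)
c = 299792.458       # km/s
H0_inv_gyr = 977.8 / H0   # 1/H0 in Gyr (977.8 ~ Mpc*s/km expressed in Gyr)

def E(z):
    return np.sqrt(Om * (1 + z) ** 3 + Ol)

def comoving_distance(z):
    """D_C = (c/H0) * integral_0^z dz'/E(z')   [Mpc]"""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return c / H0 * integral

def lookback_time(z):
    """t_L = (1/H0) * integral_0^z dz' / ((1+z') E(z'))   [Gyr]"""
    integral, _ = quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), 0.0, z)
    return H0_inv_gyr * integral

print(comoving_distance(1.0), lookback_time(1.0))  # ~3400 Mpc, ~8 Gyr
```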

  1. Half-life of 51Mn

    NASA Astrophysics Data System (ADS)

    Graves, Stephen A.; Ellison, Paul A.; Valdovinos, Hector F.; Barnhart, Todd E.; Nickles, Robert J.; Engle, Jonathan W.

    2017-07-01

    The half-life of 51Mn was measured by serial gamma spectrometry of the 511-keV annihilation photon following decay by β+ emission. Data were collected every 100 seconds for 100,000-230,000 seconds within each measurement (n = 4). The 511-keV incidence rate was calculated from the 511-keV spectral peak area and count duration, corrected for detector dead time and radioactive decay. Least-squares regression analysis was used to determine the half-life of 51Mn while accounting for the presence of background contaminants, notably 55Co. The result was 45.59 ± 0.07 min, which is the highest precision measurement to date and disagrees with the current Nuclear Data Sheets value by over 6σ.
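
    A sketch of this kind of analysis is shown below: a least-squares fit of a two-component decay curve (51Mn plus a longer-lived 55Co contaminant) with a constant background to 511-keV rate data. The synthetic data, binning, and starting values are illustrative assumptions, not the measurement itself.

```python
import numpy as np
from scipy.optimize import curve_fit

LN2 = np.log(2.0)

def decay_model(t, a_mn, half_mn, a_co, half_co, bkg):
    """Two exponential decay components plus a constant background (counts/s)."""
    return (a_mn * np.exp(-LN2 * t / half_mn)
            + a_co * np.exp(-LN2 * t / half_co)
            + bkg)

# Synthetic data: 51Mn (~45.6 min) plus a weak 55Co contaminant (~17.5 h)
rng = np.random.default_rng(2)
t = np.arange(0, 100_000, 100.0)                      # seconds
truth = decay_model(t, 500.0, 45.6 * 60, 20.0, 17.53 * 3600, 1.0)
data = rng.poisson(truth * 100.0) / 100.0             # counts in 100 s bins -> rate

popt, pcov = curve_fit(decay_model, t, data,
                       p0=[400, 40 * 60, 10, 15 * 3600, 0.5])
print(f"fitted half-life: {popt[1] / 60.0:.2f} min")
```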

  2. Fission cross section of 230Th and 232Th relative to 235U

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meadows, J. W.

    1979-01-01

    The fission cross sections of 230Th and 232Th were measured relative to 235U from near threshold to near 10 MeV. The weights of the thorium samples were determined by isotopic dilution. The weight of the uranium deposit was based on specific activity measurements of a 234U-235U mixture and low-geometry alpha counting. Corrections were made for thermal background, loss of fragments in the deposits, neutron scattering in the detector assembly, sample geometry, sample composition, and the spectrum of the neutron source. Generally the systematic errors were approx. 1%. The combined systematic and statistical errors were typically 1.5%. 17 references.

  3. WE-AB-204-10: Evaluation of a Novel Dedicated Breast PET System (Mammi-PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Z; Swanson, T; O’Connor, M

    2015-06-15

    Purpose: To evaluate the performance characteristics of a novel dedicated breast PET system (Mammi-PET, Oncovision). The system has 2 detector rings giving an axial/transaxial field of view of 8/17 cm. Each ring consists of 12 monolithic LYSO modules coupled to PSPMTs. Methods: Uniformity, sensitivity, energy and spatial resolution were measured according to NEMA standards. Count rate performance was investigated using a source of F-18 (1384 uCi) decayed over 5 half-lives. A prototype PET phantom was imaged for 20 min to evaluate image quality, recovery coefficients and partial volume effects. Under an IRB-approved protocol, 11 patients who had just undergone whole body PET/CT exams were imaged prone with the breast pendulant at 5–10 minutes/breast. Image quality was assessed with and without scatter/attenuation correction and using different reconstruction algorithms. Results: Integral/differential uniformity were 9.8%/6.0% respectively. System sensitivity was 2.3% on axis, 2.2% and 2.8% at 3.8 cm and 7.8 cm off-axis. Mean energy resolution of all modules was 23.3%. Spatial resolution (FWHM) was 1.82 mm and 2.90 mm on axis and 5.8 cm off axis. Three cylinders (14 mm diameter) in the PET phantom were filled with activity concentration ratios of 4:1, 3:1, and 2:1 relative to the background. Measured cylinder to background ratios were 2.6, 1.8 and 1.5 (without corrections) and 3.6, 2.3 and 1.5 (with attenuation/scatter correction). Five cylinders (14, 10, 6, 4 and 2 mm diameter) each with an activity ratio of 4:1 were measured and showed recovery coefficients of 1, 0.66, 0.45, 0.18 and 0.18 (without corrections), and 1, 0.53, 0.30, 0.13 and 0 (with attenuation/scatter correction). Optimal phantom image quality was obtained with the 3D MLEM algorithm, >20 iterations, and without attenuation/scatter correction. Conclusion: The MAMMI system demonstrated good performance characteristics. Further work is needed to determine the optimal reconstruction parameters for qualitative and quantitative applications.

  4. The number counts and infrared backgrounds from infrared-bright galaxies

    NASA Technical Reports Server (NTRS)

    Hacking, P. B.; Soifer, B. T.

    1991-01-01

    Extragalactic number counts and diffuse backgrounds at 25, 60, and 100 microns are predicted using new luminosity functions and improved spectral-energy distribution density functions derived from IRAS observations of nearby galaxies. Galaxies at redshifts z less than 3 that are like those in the local universe should produce a minimum diffuse background of 0.0085, 0.038, and 0.13 MJy/sr at 25, 60, and 100 microns, respectively. Models with significant luminosity evolution predict backgrounds about a factor of 4 greater than this minimum.

  5. Breast tissue decomposition with spectral distortion correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee

    2014-01-01

    Purpose: To investigate the feasibility of an accurate measurement of the water, lipid, and protein composition of breast tissue using photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In parallel, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with the dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953

  6. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    USGS Publications Warehouse

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1–100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36–42%, and associated standard errors increased 38–55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
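
    The weight-adjustment idea can be sketched as follows: each observed group count is expanded by the modeled probability of detecting the group and by the roughly 78% counting bias reported in the study. The logistic coefficients and example numbers below are hypothetical placeholders, not the fitted survey model.

```python
import numpy as np

def detection_probability(group_size, forested, beta=(-0.5, 0.02, -0.8)):
    """Hypothetical logistic model: P(detect) from group size and cover type."""
    b0, b_size, b_forest = beta
    logit = b0 + b_size * group_size + b_forest * forested
    return 1.0 / (1.0 + np.exp(-logit))

COUNT_BIAS = 0.78   # observers counted ~78% of decoys present (from the study)

def corrected_abundance(observed_counts, group_sizes, forested_flags):
    """Expand each observed count by detection probability and count bias."""
    p = detection_probability(np.asarray(group_sizes), np.asarray(forested_flags))
    weights = 1.0 / (p * COUNT_BIAS)
    return float(np.sum(np.asarray(observed_counts) * weights))

# Three detected groups: observed counts, group-size covariate, cover (1 = forested)
print(corrected_abundance([40, 120, 15], [50, 150, 20], [1, 0, 1]))
```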

  7. Comparison of gene expression microarray data with count-based RNA measurements informs microarray interpretation.

    PubMed

    Richard, Arianne C; Lyons, Paul A; Peters, James E; Biasci, Daniele; Flint, Shaun M; Lee, James C; McKinney, Eoin F; Siegel, Richard M; Smith, Kenneth G C

    2014-08-04

    Although numerous investigations have compared gene expression microarray platforms, preprocessing methods and batch correction algorithms using constructed spike-in or dilution datasets, there remains a paucity of studies examining the properties of microarray data using diverse biological samples. Most microarray experiments seek to identify subtle differences between samples with variable background noise, a scenario poorly represented by constructed datasets. Thus, microarray users lack important information regarding the complexities introduced in real-world experimental settings. The recent development of a multiplexed, digital technology for nucleic acid measurement enables counting of individual RNA molecules without amplification and, for the first time, permits such a study. Using a set of human leukocyte subset RNA samples, we compared previously acquired microarray expression values with RNA molecule counts determined by the nCounter Analysis System (NanoString Technologies) in selected genes. We found that gene measurements across samples correlated well between the two platforms, particularly for high-variance genes, while genes deemed unexpressed by the nCounter generally had both low expression and low variance on the microarray. Confirming previous findings from spike-in and dilution datasets, this "gold-standard" comparison demonstrated signal compression that varied dramatically by expression level and, to a lesser extent, by dataset. Most importantly, examination of three different cell types revealed that noise levels differed across tissues. Microarray measurements generally correlate with relative RNA molecule counts within optimal ranges but suffer from expression-dependent accuracy bias and precision that varies across datasets. We urge microarray users to consider expression-level effects in signal interpretation and to evaluate noise properties in each dataset independently.

  8. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring on roads requires counting the number of vehicles passing, which is especially important for highway transportation management. Therefore, it is necessary to develop a system that is able to count the number of vehicles automatically. Video processing methods make automatic vehicle counting possible. This research develops a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on grayscale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy was in the evening, at 21.43%. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
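
    A minimal sketch of this kind of pipeline, using a static background frame, grayscale differencing, thresholding, binary morphology, and connected-component labelling, is given below. The thresholds, structuring elements, and minimum blob area are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame_gray, background_gray, thresh=30, min_area=400):
    """Count foreground blobs in a grayscale frame via background subtraction.

    frame_gray, background_gray : 2-D uint8 arrays of the same shape
    thresh   : absolute-difference threshold (illustrative)
    min_area : minimum blob size in pixels to count as a vehicle (illustrative)
    """
    diff = np.abs(frame_gray.astype(np.int16) - background_gray.astype(np.int16))
    mask = diff > thresh                                   # foreground mask
    mask = ndimage.binary_opening(mask, np.ones((3, 3)))   # remove speckle noise
    mask = ndimage.binary_closing(mask, np.ones((7, 7)))   # fill holes in blobs
    labels, n_blobs = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n_blobs + 1))
    return int(np.sum(areas >= min_area))

# Toy example: empty-road background plus one synthetic "vehicle" patch
bg = np.full((240, 320), 100, dtype=np.uint8)
frame = bg.copy()
frame[100:140, 150:220] = 180
print(count_vehicles(frame, bg))   # -> 1
```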

  9. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

    During Raman spectroscopy analysis, organic molecules and contaminations can obscure or swamp the Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, which were acquired with a BWTek i-Raman spectrometer. The background is corrected with the R package baselineWavelet. Principal component analysis and random forests are then used to perform clustering analysis. By analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of the fluorescence background on clustering analysis of Raman spectra is discussed. It is concluded that correcting the fluorescence background is important for further analysis, and an effective background correction solution is provided for clustering and other analyses.
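
    Although the study uses the wavelet-based baselineWavelet package, the general idea of estimating and subtracting a smooth fluorescence baseline can be illustrated with a common alternative, asymmetric least squares (AsLS) smoothing. The sketch below uses illustrative penalty and asymmetry parameters and a synthetic spectrum; it is not the paper's algorithm.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline estimate.

    y   : 1-D spectrum
    lam : smoothness penalty
    p   : asymmetry (points above the baseline get weight p, below get 1-p)
    """
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))  # 2nd differences
    w = np.ones(n)
    z = y.copy()
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve(W + lam * D.T @ D, w * y)
        w = np.where(y > z, p, 1 - p)
    return z

# Synthetic Raman-like spectrum: broad fluorescence background plus narrow peaks
x = np.linspace(0, 1, 1000)
background = 50 * np.exp(-((x - 0.6) ** 2) / 0.3)
peaks = 20 * np.exp(-((x - 0.3) ** 2) / 1e-4) + 15 * np.exp(-((x - 0.7) ** 2) / 1e-4)
spectrum = background + peaks
corrected = spectrum - asls_baseline(spectrum)
```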

  10. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Favalli, Andrea

    2017-10-01

    Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders, may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  11. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders, may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  12. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2017-07-16

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three measured counting rates, singles, doubles and triples are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders, may be accomplished by inspection based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  13. Advantages and challenges in automated apatite fission track counting

    NASA Astrophysics Data System (ADS)

    Enkelmann, E.; Ehlers, T. A.

    2012-04-01

    Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track method, interest emerged in developing an automated counting procedure to replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example used in this study is the commercial automated fission track counting procedure from Autoscan Systems Pty, which has been highlighted through several venues. We conducted experiments designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and in a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites from different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the measured fission track densities compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples. As a result of these differences, a manual correction of the fully automated fission track counts is necessary for each individual surface area and grain counted. This manual correction procedure significantly increases (up to four times) the time required to analyze a sample with the automated counting procedure compared to the conventional approach.

  14. Characterization of 176Lu background in LSO-based PET scanners

    NASA Astrophysics Data System (ADS)

    Conti, Maurizio; Eriksson, Lars; Rothfuss, Harold; Sjoeholm, Therese; Townsend, David; Rosenqvist, Göran; Carlier, Thomas

    2017-05-01

    LSO and LYSO are today the most common scintillators used in positron emission tomography. Lutetium contains traces of 176Lu, a radioactive isotope that decays by β- emission with a cascade of γ photons in coincidence. Therefore, Lutetium-based scintillators are characterized by a small natural radiation background. In this paper, we investigate and characterize the 176Lu radiation background via experiments performed on LSO-based PET scanners. LSO background was measured at different energy windows and different time coincidence windows, and by using shields to alter the original spectrum. The effect of radiation background in particularly count-starved applications, such as 90Y imaging, is analysed and discussed. Depending on the size of the PET scanner, between 500 and 1000 total random counts per second and between 3 and 5 total true coincidences per second were measured in standard coincidence mode. The LSO background counts in a Siemens mCT in the standard PET energy and time windows are in general negligible in terms of trues, and are comparable to those measured in a BGO scanner of similar size.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Church, J; Slaughter, D; Norman, E

    Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that this background variance was larger than is actually the case, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
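
    To make the link between background variance and sensitivity concrete, the sketch below evaluates the standard Currie-style decision threshold and detection limit as functions of the expected background counts; a smaller background variance lowers both. The formulas and error-rate choice are the generic ones, not the paper's system-specific analysis.

```python
import numpy as np

def currie_limits(background_counts, k=1.645):
    """Currie-style decision threshold and detection limit for paired counting.

    background_counts : expected background counts B in the counting window
    k : one-sided z-score for the chosen error rate (1.645 ~ 5% alpha and beta)
    Returns (L_C, L_D): critical level and a-priori detection limit in net counts.
    """
    L_C = k * np.sqrt(2.0 * background_counts)   # ~2.33*sqrt(B) for k = 1.645
    L_D = k ** 2 + 2.0 * L_C                     # ~2.71 + 4.65*sqrt(B)
    return L_C, L_D

# Halving the assumed background lowers both the threshold and the limit:
for B in (400.0, 200.0):
    print(B, currie_limits(B))
```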

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guinn, I.; Buuck, M.; Cuesta, C.

    The MAJORANA Collaboration will seek neutrinoless double beta decay (0νββ) in 76Ge using isotopically enriched p-type point contact (PPC) high purity germanium (HPGe) detectors. A tonne-scale array of HPGe detectors would require background levels below 1 count/ROI-tonne-year in the 4 keV region of interest (ROI) around the 2039 keV Q-value of the decay. In order to demonstrate the feasibility of such an experiment, the MAJORANA DEMONSTRATOR, a 40 kg HPGe detector array, is being constructed with a background goal of < 3 counts/ROI-tonne-year, which is expected to scale down to < 1 count/ROI-tonne-year for a tonne-scale experiment. The signal readout electronics, which must be placed in close proximity to the detectors, present a challenge toward reaching this background goal. This talk will discuss the materials and design used to construct signal readout electronics with low enough backgrounds for the MAJORANA DEMONSTRATOR.

  17. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  18. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device generating particles that exactly follow the flow, measurements are still subject to a major source of error, namely a particle counting bias wherein the probability of measuring a velocity is a function of that velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know whether the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way, and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
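
    One of the widely used correction schemes alluded to here weights each velocity sample by the inverse of its magnitude, compensating for the fact that faster particles cross the probe volume more often. The sketch below applies that weighting to synthetic velocity-biased samples; it is an illustration of the general idea, not the authors' simulator.

```python
import numpy as np

def bias_corrected_mean(velocities):
    """Velocity-bias-corrected mean via inverse-|u| weighting.

    Faster particles cross the probe volume more often, so each sample is
    down-weighted by 1/|u| to recover the time-averaged flow velocity.
    """
    u = np.asarray(velocities, dtype=float)
    w = 1.0 / np.abs(u)
    return np.sum(w * u) / np.sum(w)

# Simulated turbulent samples: arrival probability proportional to |u|
rng = np.random.default_rng(3)
true_u = rng.normal(10.0, 3.0, 200_000)             # underlying flow velocities
keep = rng.random(true_u.size) < np.abs(true_u) / np.abs(true_u).max()
sampled = true_u[keep]                              # velocity-biased samples
print(sampled.mean(), bias_corrected_mean(sampled)) # biased high vs ~10.0
```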

  19. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  20. The Chandra Source Catalog: Algorithms

    NASA Astrophysics Data System (ADS)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  1. Rigorous quantitative elemental microanalysis by scanning electron microscopy/energy dispersive x-ray spectrometry (SEM/EDS) with spectrum processing by NIST DTSA-II

    NASA Astrophysics Data System (ADS)

    Newbury, Dale E.; Ritchie, Nicholas W. M.

    2014-09-01

    Quantitative electron-excited x-ray microanalysis by scanning electron microscopy/silicon drift detector energy dispersive x-ray spectrometry (SEM/SDD-EDS) is capable of achieving high accuracy and high precision equivalent to that of the high spectral resolution wavelength dispersive x-ray spectrometer even when severe peak interference occurs. The throughput of the SDD-EDS enables high count spectra to be measured that are stable in calibration and resolution (peak shape) across the full deadtime range. With this high spectral stability, multiple linear least squares peak fitting is successful for separating overlapping peaks and spectral background. Careful specimen preparation is necessary to remove topography on unknowns and standards. The standards-based matrix correction procedure embedded in the NIST DTSA-II software engine returns quantitative results supported by a complete error budget, including estimates of the uncertainties from measurement statistics and from the physical basis of the matrix corrections. NIST DTSA-II is available free for Java platforms at http://www.cstl.nist.gov/div837/837.02/epq/dtsa2/index.html.

  2. Counting on fine motor skills: links between preschool finger dexterity and numerical skills.

    PubMed

    Fischer, Ursula; Suggate, Sebastian P; Schmirl, Judith; Stoeger, Heidrun

    2017-10-26

    Finger counting is widely considered an important step in children's early mathematical development. Presumably, children's ability to move their fingers during early counting experiences to aid number representation depends in part on their early fine motor skills (FMS). Specifically, FMS should link to children's procedural counting skills through consistent repetition of finger-counting procedures. Accordingly, we hypothesized that (a) FMS are linked to early counting skills, and (b) greater FMS relate to conceptual counting knowledge (e.g., cardinality, abstraction, order irrelevance) via procedural counting skills (i.e., one-one correspondence and correctness of verbal counting). Preschool children (N = 177) were administered measures of procedural counting skills, conceptual counting knowledge, FMS, and general cognitive skills along with parent questionnaires on home mathematics and fine motor environment. FMS correlated with procedural counting skills and conceptual counting knowledge after controlling for cognitive skills, chronological age, home mathematics and FMS environments. Moreover, the relationship between FMS and conceptual counting knowledge was mediated by procedural counting skills. Findings suggest that FMS play a role in early counting and therewith conceptual counting knowledge. © 2017 John Wiley & Sons Ltd.

  3. Pile-up corrections for high-precision superallowed β decay half-life measurements via γ-ray photopeak counting

    NASA Astrophysics Data System (ADS)

    Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hyland, B.; Kulp, W. D.; Leach, K. G.; Leslie, J. R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Williams, S. J.; Wong, J.; Wood, J. L.; Zganjar, E. F.

    2007-09-01

    A general technique that corrects γ-ray gated β decay-curve data for detector pulse pile-up is presented. The method includes corrections for non-zero time-resolution and energy-threshold effects in addition to a special treatment of saturating events due to cosmic rays. This technique is verified through a Monte Carlo simulation and experimental data using radioactive beams of Na26 implanted at the center of the 8π γ-ray spectrometer at the ISAC facility at TRIUMF in Vancouver, Canada. The β-decay half-life of Na26 obtained from counting 1809-keV γ-ray photopeaks emitted by the daughter Mg26 was determined to be T1/2 = 1.07167 ± 0.00055 s following a 27σ correction for detector pulse pile-up. This result is in excellent agreement with the result of a previous measurement that employed direct β counting and demonstrates the feasibility of high-precision β-decay half-life measurements through the use of high-purity germanium γ-ray detectors. The technique presented here, while motivated by superallowed-Fermi β decay studies, is general and can be used for all half-life determinations (e.g. α-, β-, X-ray, fission) in which a γ-ray photopeak is used to select the decays of a particular isotope.
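
    The half-life extraction described above reduces to fitting a γ-gated decay curve whose binned counts have first been corrected for rate-dependent losses. The sketch below uses a simple non-paralyzable dead-time correction as a stand-in for the paper's more detailed pile-up treatment; the dead-time value and count levels are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      DEAD_TIME = 1.0e-6   # s, assumed effective per-event dead time (illustrative value)

      def correct_pileup(counts, bin_width, tau=DEAD_TIME):
          """Non-paralyzable dead-time/pile-up correction of binned gamma-ray counts."""
          m = counts / bin_width                 # measured rate
          n = m / (1.0 - m * tau)                # corrected (true) rate
          return n * bin_width

      def decay_model(t, n0, half_life, bkg):
          return n0 * np.exp(-np.log(2.0) * t / half_life) + bkg

      # Synthetic gamma-gated decay curve with a ~1.07 s half-life and rate-dependent losses.
      bin_width = 0.1                            # s
      t = np.arange(0.0, 20.0, bin_width)
      true_counts = decay_model(t, 5.0e4, 1.07167, 50.0)
      true_rate = true_counts / bin_width
      measured = np.random.poisson(true_counts / (1.0 + true_rate * DEAD_TIME))

      corrected = correct_pileup(measured, bin_width)
      popt, _ = curve_fit(decay_model, t, corrected, p0=(4.0e4, 1.0, 10.0))
      print("fitted half-life: %.5f s" % popt[1])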

  4. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed to evenly-deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm^-2 to 1,000 mg cm^-2.
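
    A hedged sketch of the kind of correction the abstract describes is given below: a raw alpha count rate is multiplied by a correction factor chosen by sample volume (the proxy for dust loading), with the beta/(beta + alpha) activity ratio retained as a diagnostic. The bin edges and factors are invented for illustration and are not the paper's fitted values.

      # Hypothetical volume bins and raw-alpha multipliers (illustrative only).
      VOLUME_BINS_M3 = [0.0, 50.0, 150.0, 500.0]          # sampled air volume bin edges
      CORRECTION_FACTORS = [1.05, 1.20, 1.45]             # one factor per volume bin

      def corrected_alpha_rate(alpha_cpm, beta_cpm, volume_m3):
          ratio = beta_cpm / (beta_cpm + alpha_cpm)        # activity ratio used as a diagnostic
          for lo, hi, cf in zip(VOLUME_BINS_M3[:-1], VOLUME_BINS_M3[1:], CORRECTION_FACTORS):
              if lo <= volume_m3 < hi:
                  return alpha_cpm * cf, ratio
          return alpha_cpm * CORRECTION_FACTORS[-1], ratio # heavy loading: largest factor

      rate, ratio = corrected_alpha_rate(alpha_cpm=12.0, beta_cpm=40.0, volume_m3=220.0)
      print(rate, ratio)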

  5. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE PAGES

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...

    2018-03-03

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed to evenly-deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm^-2 to 1,000 mg cm^-2.

  6. A case study of the impact of inaccurate cause-of-death reporting on health disparity tracking: New York City premature cardiovascular mortality.

    PubMed

    Johns, Lauren E; Madsen, Ann M; Maduro, Gil; Zimmerman, Regina; Konty, Kevin; Begier, Elizabeth

    2013-04-01

    Heart disease death overreporting is problematic in New York City (NYC) and other US jurisdictions. We examined whether overreporting affects the premature (< 65 years) heart disease death rate disparity between non-Hispanic Blacks and non-Hispanic Whites in NYC. We identified overreporting hospitals and used counts of premature heart disease deaths at reference hospitals to estimate corrected counts. We then corrected citywide, age-adjusted premature heart disease death rates among Blacks and Whites and the White-Black premature heart disease death disparity. At overreporting hospitals, 51% of the decedents were White compared with 25% at reference hospitals. Correcting the heart disease death counts at overreporting hospitals decreased the age-adjusted premature heart disease death rate 10.1% (from 41.5 to 37.3 per 100,000) among Whites compared with 4.2% (from 66.2 to 63.4 per 100,000) among Blacks. Correction increased the White-Black disparity 6.1% (from 24.6 to 26.1 per 100,000). In 2008, NYC's White-Black premature heart disease death disparity was underestimated because of overreporting by hospitals serving larger proportions of Whites. Efforts to reduce overreporting may increase the observed disparity, potentially obscuring any programmatic or policy-driven advances.

  7. Influence of electrolytes in the QCM response: discrimination and quantification of the interference to correct microgravimetric data.

    PubMed

    Encarnação, João M; Stallinga, Peter; Ferreira, Guilherme N M

    2007-02-15

    In this work we demonstrate that the presence of electrolytes in solution generates desorption-like transients when the resonance frequency is measured. Using impedance spectroscopy analysis and Butterworth-Van Dyke (BVD) equivalent electrical circuit modeling we demonstrate that non-Kanazawa responses are obtained in the presence of electrolytes mainly due to the formation of a diffuse electric double layer (DDL) at the sensor surface, which also causes a capacitor-like signal. We extend the BVD equivalent circuit by including additional parallel capacitances in order to account for such a capacitor-like signal. Interfering signals from electrolytes and DDL perturbations were discriminated in this way. We further quantified the influence of the electrolytes on the sensor resonance frequency as 8.0 ± 0.5 Hz pF^-1 and used this factor to correct the data obtained by frequency counting measurements. The applicability of this approach is demonstrated by the detection of oligonucleotide sequences. After applying the corrective factor to the frequency counting data, the mass contribution to the sensor signal yields identical values when estimated by impedance analysis and frequency counting.

  8. Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis

    PubMed Central

    Hagen, Nils T.

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
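
    The harmonic counting rule itself can be written as a one-line formula: the author at byline rank k of N coauthors receives credit (1/k) divided by (1 + 1/2 + ... + 1/N), so the credits for one paper sum to one. A minimal sketch:

      def harmonic_credit(n_authors):
          """Harmonic allocation of one publication credit across an N-author byline:
          the author at rank k receives (1/k) / sum_{i=1..N} 1/i, so credits sum to 1."""
          norm = sum(1.0 / i for i in range(1, n_authors + 1))
          return [(1.0 / k) / norm for k in range(1, n_authors + 1)]

      print(harmonic_credit(4))   # [0.48, 0.24, 0.16, 0.12]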

  9. Stellar populations in the outskirts of M31: the mid-infrared view

    NASA Astrophysics Data System (ADS)

    Barmby, P.; Ravandi, M. Rafiei

    2017-03-01

    The mid-infrared provides a unique view of galaxy stellar populations, sensitive to both the integrated light of old, low-mass stars and to individual dusty mass-losing stars. We present results from an extended Spitzer/IRAC survey of M31 with total lengths of 6.6 and 4.4 degrees along the major and minor axes, respectively. The integrated surface brightness profile proves to be surprisingly difficult to trace in the outskirts of the galaxy, but we can also investigate the disk/halo transition via a star count profile, with careful correction for foreground and background contamination. Our point-source catalog allows us to report on mid-infrared properties of individual objects in the outskirts of M31, via cross-correlation with PAndAS, WISE, and other catalogs.

  10. Optical Design Considerations for Efficient Light Collection from Liquid Scintillation Counters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernacki, Bruce E.; Douglas, Matthew; Erchinger, Jennifer L.

    2015-01-01

    Liquid scintillation counters measure charged particle-emitting radioactive isotopes and are used for environmental studies, nuclear chemistry, and life science. Alpha and beta emissions arising from the material under study interact with the scintillation cocktail to produce light. The prototypical liquid scintillation counter employs low-level photon-counting detectors to measure the arrival of the scintillation light produced as a result of the dissolved material under study interacting with the scintillation cocktail. For reliable operation the counting instrument must convey the scintillation light to the detectors efficiently and predictably. Current best practices employ the use of two or more detectors for coincidence processing to discriminate true scintillation events from background events due to instrumental effects such as photomultiplier tube dark rates, tube flashing, or other light emission not generated in the scintillation cocktail vial. In low background liquid scintillation counters additional attention is paid to shielding the scintillation cocktail from naturally occurring radioactive material (NORM) present in the laboratory and within the instrument's construction materials. Low background design is generally at odds with optimal light collection. This study presents the evolution of a light collection design for liquid scintillation counting in a low background shield. The basic approach to achieve both good light collection and a low background measurement is described. The baseline signals arising from the scintillation vial are modeled and methods to efficiently collect scintillation light are presented as part of the development of a customized low-background, high sensitivity liquid scintillation counting system.

  11. Characterization of a spectroscopic detector for application in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Dooraghi, Alex A.; Fix, Brian J.; Smith, Jerel A.; Brown, William D.; Azevedo, Stephen G.; Martz, Harry E.

    2017-09-01

    Recent advances in cadmium telluride (CdTe) energy-discriminating pixelated detectors have enabled the possibility of Multi-Spectral X-ray Computed Tomography (MSXCT) to incorporate spectroscopic information into CT. MultiX ME 100 V2 is a CdTe-based spectroscopic x-ray detector array capable of recording energies from 20 to 160 keV in 1.1 keV energy bin increments. Hardware and software have been designed to perform radiographic and computed tomography tasks with this spectroscopic detector. Energy calibration is examined using the end-point energy of a bremsstrahlung spectrum and radioisotope spectral lines. When measuring the spectrum from Am-241 across 500 detector elements, the standard deviation of the peak-location and FWHM measurements are ±0.4 and ±0.6 keV, respectively. As these values are within the energy bin size (1.1 keV), detector elements are consistent with each other. The count rate is characterized, using a nonparalyzable model with a dead time of 64 ± 5 ns. This is consistent with the manufacturer's quoted per detector-element linear-deviation at 2 Mpps (million photons per sec) of 8.9% (typical) and 12% (max). When comparing measured and simulated spectra, a low-energy tail is visible in the measured data due to the spectral response of the detector. If no valid photon detections are expected in the low-energy tail, then a background subtraction may be applied to allow for a possible first-order correction. If photons are expected in the low-energy tail, a detailed model must be implemented. A radiograph of an aluminum step wedge with a maximum height of 20 mm shows an underestimation of attenuation by about 10% at 60 keV. This error is due to partial energy deposition from higher energy (>60 keV) photons into a lower-energy (~60 keV) bin, reducing the apparent attenuation. A radiograph of a polytetrafluoroethylene (PTFE) cylinder taken using a bremsstrahlung spectrum from an x-ray voltage of 100 kV filtered by 1.3 mm Cu is reconstructed using Abel inversion. As no counts are expected in the low energy tail, a first order background correction is applied to the spectrum. The measured linear attenuation coefficient (LAC) is within 10% of the expected value in the 60 to 100 keV range. Below 60 keV, low counts in the corrected spectrum and partial energy deposition from incident photons of energy greater than 60 keV into energy bins below 60 keV impact the LAC measurements. This report ends with a demonstration of the tomographic capability of the system. The quantitative understanding of the detector developed in this report will enable further study in evaluating the system for characterization of an object's chemical make-up for industrial and security purposes.
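
    The nonparalyzable count-rate model quoted above has a standard closed form, m = n/(1 + n*tau), which can be inverted to recover the true rate from the measured one. The short sketch below uses the 64 ns dead time reported in the abstract; the 2 Mpps example rate is taken from the manufacturer specification quoted there.

      TAU = 64e-9   # s, dead time reported in the abstract (64 ± 5 ns)

      def true_rate_nonparalyzable(measured_rate, tau=TAU):
          """Invert the nonparalyzable dead-time model m = n / (1 + n*tau) to recover the true rate n."""
          return measured_rate / (1.0 - measured_rate * tau)

      # At 2 Mpps per detector element the predicted loss is roughly m*tau ~ 13%,
      # of the same order as the manufacturer's quoted linear deviation.
      m = 2.0e6
      print(true_rate_nonparalyzable(m), m * TAU)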

  12. Characterization of a spectroscopic detector for application in x-ray computed tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dooraghi, A. A.; Fix, B. J.; Smith, J. A.

    Recent advances in cadmium telluride (CdTe) energy-discriminating pixelated detectors have enabled the possibility of Multi-Spectral X-ray Computed Tomography (MSXCT) to incorporate spectroscopic information into CT. MultiX ME 100 V2 is a CdTe-based spectroscopic x-ray detector array capable of recording energies from 20 to 160 keV in 1.1 keV energy bin increments. Hardware and software have been designed to perform radiographic and computed tomography tasks with this spectroscopic detector. Energy calibration is examined using the end-point energy of a bremsstrahlung spectrum and radioisotope spectral lines. When measuring the spectrum from Am-241 across 500 detector elements, the standard deviation of the peak-locationmore » and FWHM measurements are ±0.4 and ±0.6 keV, respectively. As these values are within the energy bin size (1.1 keV), detector elements are consistent with each other. The count rate is characterized, using a nonparalyzable model with a dead time of 64 ± 5 ns. This is consistent with the manufacturer’s quoted per detector-element linear-deviation at 2 Mpps (million photons per sec) of 8.9% (typical) and 12% (max). When comparing measured and simulated spectra, a low-energy tail is visible in the measured data due to the spectral response of the detector. If no valid photon detections are expected in the low-energy tail, then a background subtraction may be applied to allow for a possible first-order correction. If photons are expected in the low-energy tail, a detailed model must be implemented. A radiograph of an aluminum step wedge with a maximum height of about 20 mm shows an underestimation of attenuation by about 10% at 60 keV. This error is due to partial energy deposition from higher-energy (> 60 keV) photons into a lower-energy (~60 keV) bin, reducing the apparent attenuation. A radiograph of a PTFE cylinder taken using a bremsstrahlung spectrum from an x-ray voltage of 100 kV filtered by 1.3 mm Cu is reconstructed using Abel inversion. As no counts are expected in the low energy tail, a first order background correction is applied to the spectrum. The measured linear attenuation coefficient (LAC) is within 10% of the expected value in the 60 to 100 keV range. Below 60 keV, low counts in the corrected spectrum and partial energy deposition from incident photons of energy greater than 60 keV into energy bins below 60 keV impact the LAC measurements. This report ends with a demonstration of the tomographic capability of the system. The quantitative understanding of the detector developed in this report will enable further study in evaluating the system for characterization of an object’s chemical make-up for industrial and security purposes.« less

  13. 77 FR 33017 - Qualification of Drivers; Exemption Applications; Vision

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-04

    ... complete loss of vision in his left eye due to a traumatic injury sustained at age 29. The best corrected... traumatic injury sustained in 1981. The best corrected visual acuity in his right eye is finger count vision... left eye due to a traumatic incident sustained 25 years ago. The best corrected visual acuity in his...

  14. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
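
    A minimal sketch of the IBSC idea follows: estimate the scatter component by convolving the attenuation-corrected image with a scatter kernel, scale it by a scatter fraction, and subtract. The Gaussian kernel width and constant scatter fraction are placeholders for the paper's measured scatter function and image-based scatter-fraction function.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def ibsc_correct(image_ac, scatter_sigma_px=6.0, scatter_fraction=0.3):
          """Image-based scatter correction sketch: blur the attenuation-corrected image,
          scale by a scatter fraction to estimate the scatter component, then subtract."""
          scatter_estimate = scatter_fraction * gaussian_filter(image_ac, scatter_sigma_px)
          corrected = image_ac - scatter_estimate
          return np.clip(corrected, 0.0, None)

      # Toy example: a bright disc on an empty background.
      img = np.zeros((128, 128))
      yy, xx = np.mgrid[:128, :128]
      img[(yy - 64) ** 2 + (xx - 64) ** 2 < 400] = 100.0
      print(ibsc_correct(img).max())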

  15. Pilocarpine effect on dose rate of salivary gland in differentiated thyroid carcinoma patients treated with radioiodine.

    PubMed

    Haghighatafshar, Mahdi; Ghaedian, Mehrnaz; Etemadi, Zahra; Entezarmahdi, Seyed M; Ghaedian, Tahereh

    2018-05-01

    Although different methods have been suggested on reducing salivary gland radiation after radioiodine administration, an effective preventive or therapeutic measure is still up for debate. The aim of this study was to evaluate the effect of pilocarpine, as a sialagogue drug on the radioiodine content of the salivary gland, and radioiodine-induced symptoms of salivary gland dysfunction. Patients who were referred for radioiodine therapy were randomized into pilocarpine and placebo groups. The patients as well as the nurse who administered the tablets, and the specialist who analyzed the images, were all unaware of the patients' group. Anterior and posterior planar images including that of both the head and neck were obtained 2, 6, 12, 24, and 48 h after the administration of radioiodine in all patients, and round regions of interest were drawn for both left and right parotid glands, with a rectangular region of interest in the region of the cerebrum as background. All patients were interrogated once, 6 months after radioiodine administration, by a phone call for subjective evaluation of symptoms related to salivary gland damage. There was no significant difference between the two groups with regard to the mean age, sex, and initial iodine activity. The geometric mean of background-corrected count per administered dose and acquisition time was calculated for the bilateral parotid glands. This normalized parotid count showed a significant reduction in net parotid count in both groups during the first 48 h after radioiodine administration. However, no significant difference was found between the groups according to the amount and pattern of dose reduction in this time period. This study revealed that pilocarpine had no significant effect on the radioiodine content of parotid glands during the first 48 h after radioiodine administration. No significant difference was found in the incidence of symptoms between the two groups treated with placebo and pilocarpine.

  16. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
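
    Under the stated assumptions, the probability distribution of the net count OC = gross - IRR × blank can be built by convolving the two Poisson distributions directly. The sketch below does this numerically; the parameter names and truncation limit are choices made for the example, not the paper's tabulation method.

      from math import exp, factorial

      def poisson_pmf(k, mu):
          return exp(-mu) * mu ** k / factorial(k)

      def net_count_pdf(mu_signal, mu_blank, irr, k_max=60):
          """PDF of the net count OC = gross - IRR*blank. The background in the sample
          count time has mean irr*mu_blank (the blank expectation scaled by the
          count-time ratio IRR), so gross ~ Poisson(mu_signal + irr*mu_blank)."""
          pdf = {}
          for b in range(k_max):                       # blank counts
              pb = poisson_pmf(b, mu_blank)
              for g in range(k_max):                   # gross counts in the sample count time
                  pg = poisson_pmf(g, mu_signal + irr * mu_blank)
                  oc = g - irr * b
                  pdf[oc] = pdf.get(oc, 0.0) + pb * pg
          return pdf

      pdf = net_count_pdf(mu_signal=5.0, mu_blank=2.0, irr=3)
      print(sum(pdf.values()))        # ~1.0 (truncation error aside)
      print(max(pdf, key=pdf.get))    # most probable net count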

  17. 40 CFR 1065.650 - Emission calculations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... following sequence of preliminary calculations on recorded concentrations: (i) Correct all THC and CH4.... (iii) Calculate all THC and NMHC concentrations, including dilution air background concentrations, as... NMHC to background corrected mass of THC. If the background corrected mass of NMHC is greater than 0.98...

  18. Reevaluation of pollen quantitation by an automatic pollen counter.

    PubMed

    Muradil, Mutarifu; Okamoto, Yoshitaka; Yonekura, Syuji; Chazono, Hideaki; Hisamitsu, Minako; Horiguchi, Shigetoshi; Hanazawa, Toyoyuki; Takahashi, Yukie; Yokota, Kunihiko; Okumura, Satoshi

    2010-01-01

    Accurate and detailed pollen monitoring is useful for selection of medication and for allergen avoidance in patients with allergic rhinitis. Burkard and Durham pollen samplers are commonly used, but are labor and time intensive. In contrast, automatic pollen counters allow simple real-time pollen counting; however, these instruments have difficulty in distinguishing pollen from small nonpollen airborne particles. Misidentification and underestimation rates for an automatic pollen counter were examined to improve the accuracy of the pollen count. The characteristics of the automatic pollen counter were determined in a chamber study with exposure to cedar pollens or soil grains. The cedar pollen counts were monitored in 2006 and 2007, and compared with those from a Durham sampler. The pollen counts from the automatic counter showed a good correlation (r > 0.7) with those from the Durham sampler when pollen dispersal was high, but a poor correlation (r < 0.5) when pollen dispersal was low. The new correction method, which took into account the misidentification and underestimation, improved this correlation to r > 0.7 during the pollen season. The accuracy of automatic pollen counting can be improved using a correction to include rates of underestimation and misidentification in a particular geographical area.

  19. Leucocyte count in young adults with first-ever ischaemic stroke: associated factors and association on prognosis.

    PubMed

    Heikinheimo, Terttu; Putaala, Jukka; Haapaniemi, Elena; Kaste, Markku; Tatlisumak, Turgut

    2015-02-01

    Limited data exist on the associated factors and correlation of leucocyte count to outcome in young adults with first-ever ischaemic stroke. Our objectives were to investigate factors associated with elevated leucocyte count and whether there is correlation between leucocyte count and short- and long-term outcomes. Of our database of 1008 consecutive patients aged 15 to 49, we included those with leucocyte count measured within the first two days from stroke onset. Outcomes were three-month and long-term disability, death, and vascular events. Linear regression was used to explore baseline variables associated with leucocyte count. Logistic regression and Cox proportional models studied the association between leucocyte count and clinical outcomes. In our study cohort of 781 patients (61.7% males; mean age 41.4 years), mean leucocyte count was high: 8.8 ± 3.1 × 10(9) cells/L (Reference range: 3.4-8.2 × 10(9) cells/L). Higher leucocyte levels were associated with dyslipidaemia, smoking, peripheral arterial disease, stroke severity, and lesion size. After adjustment for age, gender, relevant risk factors, both continuous leucocyte count and the highest quartile of leucocyte count were independently associated with unfavourable three-month outcome. Regarding events in the long-term (follow-up 8.1 ± 4.2 years in survivors), no association between leucocyte count and the event risks appeared. Among young stroke patients, high leucocyte count was a common finding. It was associated with vascular disease and its risk factors as well as severity of stroke, but it was also independently associated with unfavourable three-month outcome in these patients. There was no association with the long-term outcome. [Correction added on 31 October 2013 after first online publication: In the Results section of the Abstract, the cohort of 797 patients in this study was corrected to 781 patients.]. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.

  20. Reference analysis of the signal + background model in counting experiments

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2012-01-01

    The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for the searches of rare or exotic events in presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
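
    The underlying model is n ~ Poisson(s + b) with the background expectation b known. For any chosen prior on the signal s, the posterior can be evaluated on a grid, as in the sketch below; this is a generic numerical evaluation, not the reference-prior construction derived in the paper.

      import numpy as np

      def signal_posterior(n_obs, b, prior, s_max=50.0, n_grid=5000):
          """Numerical posterior for the signal intensity s in the model n ~ Poisson(s + b),
          with known background expectation b and a user-supplied prior density on s."""
          s = np.linspace(0.0, s_max, n_grid)
          log_like = n_obs * np.log(s + b) - (s + b)      # Poisson log-likelihood up to a constant
          post = np.exp(log_like - log_like.max()) * prior(s)
          post /= np.trapz(post, s)                       # normalize
          return s, post

      # Example: 7 events observed with an expected background of 3.2, flat prior on s.
      s, post = signal_posterior(7, 3.2, prior=lambda s: np.ones_like(s))
      print(s[np.argmax(post)])                            # posterior mode
      print(np.trapz(s * post, s))                         # posterior mean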

  1. Data-based Considerations in Portal Radiation Monitoring of Cargo Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weier, Dennis R.; O'Brien, Robert F.; Ely, James H.

    2004-07-01

    Radiation portal monitoring of cargo vehicles often includes a configuration of four-panel monitors that record gamma and neutron counts from vehicles transporting cargo. As vehicles pass the portal monitors, they generate a count profile over time that can be compared to the average panel background counts obtained just prior to the time the vehicle entered the area of the monitors. Pacific Northwest National Laboratory has accumulated considerable data regarding such background radiation and vehicle profiles from portal installations, as well as in experimental settings using known sources and cargos. Several considerations have a bearing on how alarm thresholds are set in order to maintain sensitivity to radioactive sources while also controlling to a manageable level the rate of false or nuisance alarms. False alarms are statistical anomalies while nuisance alarms occur due to the presence of naturally occurring radioactive material (NORM) in cargo, for example, kitty litter. Considerations to be discussed include:
    • Background radiation suppression due to the shadow shielding from the vehicle.
    • The impact of the relative placement of the four panels on alarm decision criteria.
    • Use of plastic scintillators to separate gamma counts into energy windows.
    • The utility of using ratio criteria for the energy window counts rather than simply using total window counts.
    • Detection likelihood for these various decision criteria based on computer simulated injections of sources into vehicle profiles.

  2. Monitoring trends in bird populations: addressing background levels of annual variability in counts

    Treesearch

    Jared Verner; Kathryn L. Purcell; Jennifer G. Turner

    1996-01-01

    Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...

  3. Influence of the partial volume correction method on 18F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM

    NASA Astrophysics Data System (ADS)

    Bowen, Spencer L.; Byars, Larry G.; Michel, Christian J.; Chonde, Daniel B.; Catana, Ciprian

    2013-10-01

    Kinetic parameters estimated from dynamic 18F-fluorodeoxyglucose (18F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting 18F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.

  4. Influence of the partial volume correction method on (18)F-fluorodeoxyglucose brain kinetic modelling from dynamic PET images reconstructed with resolution model based OSEM.

    PubMed

    Bowen, Spencer L; Byars, Larry G; Michel, Christian J; Chonde, Daniel B; Catana, Ciprian

    2013-10-21

    Kinetic parameters estimated from dynamic (18)F-fluorodeoxyglucose ((18)F-FDG) PET acquisitions have been used frequently to assess brain function in humans. Neglecting partial volume correction (PVC) for a dynamic series has been shown to produce significant bias in model estimates. Accurate PVC requires a space-variant model describing the reconstructed image spatial point spread function (PSF) that accounts for resolution limitations, including non-uniformities across the field of view due to the parallax effect. For ordered subsets expectation maximization (OSEM), image resolution convergence is local and influenced significantly by the number of iterations, the count density, and background-to-target ratio. As both count density and background-to-target values for a brain structure can change during a dynamic scan, the local image resolution may also concurrently vary. When PVC is applied post-reconstruction the kinetic parameter estimates may be biased when neglecting the frame-dependent resolution. We explored the influence of the PVC method and implementation on kinetic parameters estimated by fitting (18)F-FDG dynamic data acquired on a dedicated brain PET scanner and reconstructed with and without PSF modelling in the OSEM algorithm. The performance of several PVC algorithms was quantified with a phantom experiment, an anthropomorphic Monte Carlo simulation, and a patient scan. Using the last frame reconstructed image only for regional spread function (RSF) generation, as opposed to computing RSFs for each frame independently, and applying perturbation geometric transfer matrix PVC with PSF based OSEM produced the lowest magnitude bias kinetic parameter estimates in most instances, although at the cost of increased noise compared to the PVC methods utilizing conventional OSEM. Use of the last frame RSFs for PVC with no PSF modelling in the OSEM algorithm produced the lowest bias in cerebral metabolic rate of glucose estimates, although by less than 5% in most cases compared to the other PVC methods. The results indicate that the PVC implementation and choice of PSF modelling in the reconstruction can significantly impact model parameters.

  5. A physics investigation of deadtime losses in neutron counting at low rates with Cf252

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Louise G; Croft, Stephen

    2009-01-01

    252Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using 252Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g. AmLi to a close approximation), DT losses do not vanish in the low-rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts', and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high efficiency 3He neutron counter with short die-away time, representing an ideal 3He based detection system. The physics of dead time losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
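
    The sketch below illustrates the pulse-train idea in miniature: generate time-correlated neutron bursts from random fission times, apply an updating (paralyzable) dead time, and compare counted with detected events to see that losses persist even at low fission rates. All parameters (rate, multiplicity, die-away time, efficiency, dead time) are illustrative, not the paper's detector model.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_pulse_train(n_fissions=20000, rate=50.0, nu_bar=3.76, die_away=50e-6,
                               efficiency=0.5, dead_time=2e-6):
          """Toy Monte Carlo pulse train: fissions at a low random rate, each producing a burst
          of detected neutrons spread over an exponential die-away time, followed by an
          updating (paralyzable) dead time. Returns (detected, counted) event totals."""
          fission_times = np.cumsum(rng.exponential(1.0 / rate, n_fissions))
          pulses = []
          for t0 in fission_times:
              n_det = rng.binomial(rng.poisson(nu_bar), efficiency)     # detected neutrons per burst
              pulses.extend(t0 + rng.exponential(die_away, n_det))      # spread over the die-away time
          pulses = np.sort(np.array(pulses))
          counted, last = 0, -np.inf
          for t in pulses:                  # updating dead time: every event extends the dead period
              if t - last >= dead_time:
                  counted += 1
              last = t
          return len(pulses), counted

      detected, counted = simulate_pulse_train()
      print(detected, counted, 1.0 - counted / detected)   # fractional loss despite the low fission rate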

  6. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  7. Association of pulmonary, cardiovascular, and hematologic metrics with carbon nanotube and nanofiber exposure among U.S. workers: a cross-sectional study.

    PubMed

    Schubauer-Berigan, Mary K; Dahm, Matthew M; Erdely, Aaron; Beard, John D; Eileen Birch, M; Evans, Douglas E; Fernback, Joseph E; Mercer, Robert R; Bertke, Stephen J; Eye, Tracy; de Perio, Marie A

    2018-05-16

    Commercial use of carbon nanotubes and nanofibers (CNT/F) in composites and electronics is increasing; however, little is known about health effects among workers. We conducted a cross-sectional study among 108 workers at 12 U.S. CNT/F facilities. We evaluated chest symptoms or respiratory allergies since starting work with CNT/F, lung function, resting blood pressure (BP), resting heart rate (RHR), and complete blood count (CBC) components. We conducted multi-day, full-shift sampling to measure background-corrected elemental carbon (EC) and CNT/F structure count concentrations, and collected induced sputum to measure CNT/F in the respiratory tract. We measured (nonspecific) fine and ultrafine particulate matter mass and count concentrations. Concurrently, we conducted physical examinations, BP measurement, and spirometry, and collected whole blood. We evaluated associations between exposures and health measures, adjusting for confounders related to lifestyle and other occupational exposures. CNT/F air concentrations were generally low, while 18% of participants had evidence of CNT/F in sputum. Respiratory allergy development was positively associated with inhalable EC (p=0.040) and number of years worked with CNT/F (p=0.008). No exposures were associated with spirometry-based metrics or pulmonary symptoms, nor were CNT/F-specific metrics related to BP or most CBC components. Systolic BP was positively associated with fine particulate matter (p-values: 0.015-0.054). RHR was positively associated with EC, at both the respirable (p=0.0074) and inhalable (p=0.0026) size fractions. Hematocrit was positively associated with the log of CNT/F structure counts (p=0.043). Most health measures were not associated with CNT/F. The positive associations between CNT/F exposure and respiratory allergies, RHR, and hematocrit counts may not be causal and require examination in other studies.

  8. A study of reconstruction accuracy for a cardiac SPECT system with multi-segmental collimation

    NASA Astrophysics Data System (ADS)

    Yu, D.-C.; Chang, W.; Pan, T.-S.

    1997-06-01

    To improve the geometric efficiency of cardiac SPECT imaging, the authors previously proposed to use a multi-segmental collimation with a cylindrical geometry. The proposed collimator consists of multiple parallel-hole collimators with most of the segments directed toward a small central region, where the patient's heart should be positioned. This technique provides a significantly increased detection efficiency for the central region, but at the expense of reduced efficiency for the surrounding region. The authors have used computer simulations to evaluate the implication of this technique on the accuracy of the reconstructed cardiac images. Two imaging situations were simulated: 1) the heart well placed inside the central region, and 2) the heart shifted and partially outside the central region. A neighboring high-uptake liver was simulated for both imaging situations. The images were reconstructed and corrected for attenuation with ML-EM and OS-EM methods using a complete attenuation map. The results indicate that errors caused by projection truncation are not significant and are not strongly dependent on the activity of the liver when the heart is well positioned within the central region. When the heart is partially outside the central region, hybrid emission data (a combination of high-count projections from the central region and low-count projections from the background region) can be used to restore the activity of the truncated section of the myocardium. However, the variance of the image in the section of the myocardium outside the central region is increased by 2-3 times when 10% of the collimator segments are used to image the background region.
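
    For reference, the ML-EM update used for such reconstructions has the compact form x <- x * (A^T (y / (A x))) / (A^T 1), where A is the system matrix and y the measured projections. A toy sketch with a generic system matrix (not the authors' SPECT geometry) is shown below.

      import numpy as np

      def mlem(system_matrix, projections, n_iter=50, eps=1e-12):
          """Textbook ML-EM update: x <- x * (A^T (y / (A x))) / (A^T 1)."""
          A = system_matrix
          x = np.ones(A.shape[1])                        # uniform initial image
          sens = A.T @ np.ones(A.shape[0]) + eps         # sensitivity image A^T 1
          for _ in range(n_iter):
              forward = A @ x + eps
              x *= (A.T @ (projections / forward)) / sens
          return x

      # Tiny demo: a 3-voxel "image" observed through 4 overlapping projection bins.
      A = np.array([[1.0, 0.5, 0.0],
                    [0.2, 1.0, 0.3],
                    [0.0, 0.6, 1.0],
                    [0.4, 0.4, 0.4]])
      truth = np.array([3.0, 1.0, 2.0])
      y = np.random.poisson(A @ truth * 100) / 100.0     # noisy measured projections
      print(mlem(A, y))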

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch

    We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts due to scalar perturbations, where the Bardeen potentials are replaced with line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations which are induced from scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.

  10. Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?

    ERIC Educational Resources Information Center

    Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard

    1997-01-01

    Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browser's computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, trawler software and presents a series of correction factors…

  11. Quasar X-Ray Spectra At z=1.5

    NASA Technical Reports Server (NTRS)

    Siemiginowska, Aneta

    2001-01-01

    The predicted counts for the ASCA observation were much higher than the counts actually observed from the quasar. However, there are three weak hard x-ray sources in the GIS field. We are adding them to the source counts in our modeling of the hard x-ray background. The work is in progress. We have published a paper in Ap.J. on the luminosity function and quasar evolution. Based on the theory described in this paper we are predicting the number of sources and their contribution to the x-ray background at different redshifts. These model predictions will be compared to the observed data in the final paper.

  12. Radiation analysis devices, radiation analysis methods, and articles of manufacture

    DOEpatents

    Roybal, Lyle Gene

    2010-06-08

    Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and combine the radiation count data of individual of sections to determine whether a selected radioactive material is present in the area of interest. An amount of the radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in the individual section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels of a range of interest.

  13. Software electron counting for low-dose scanning transmission electron microscopy.

    PubMed

    Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C

    2018-05-01

    The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered as the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning, however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
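
    A hedged sketch of the core step follows: remove the scintillator/photomultiplier response that smears a single electron over several fast-scan pixels, then quantize the restored signal into integer electron counts. The response kernel and per-electron gain below are assumed values; the paper extracts the response directly from the raw image rather than assuming it.

      import numpy as np
      from scipy.signal import fftconvolve, deconvolve

      def count_electrons(scan_line, response, single_electron_signal):
          """Per-scan-line sketch: deconvolve the multi-pixel detector response along the
          fast-scan direction, then convert the restored signal into integer electron counts."""
          restored, _ = deconvolve(scan_line, response)
          counts = np.rint(np.clip(restored, 0.0, None) / single_electron_signal)
          return counts.astype(int)

      # Toy example: three electrons along one scan line, smeared by a 4-pixel response.
      response = np.array([0.5, 0.3, 0.15, 0.05])          # assumed detector response kernel
      electrons = np.zeros(64)
      electrons[[10, 11, 40]] = 1.0
      line = fftconvolve(electrons * 200.0, response)      # 200 ADU per electron (assumed gain)
      print(count_electrons(line, response, single_electron_signal=200.0).sum())   # ~3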

  14. Fingerprint Ridge Count: A Polygenic Trait Useful in Classroom Instruction.

    ERIC Educational Resources Information Center

    Mendenhall, Gordon; And Others

    1989-01-01

    Describes the use of the polygenic trait of total fingerprint ridge count in the classroom as a laboratory investigation. Presents information on background of topic, fingerprint patterns which are classified into three major groups, ridge count, the inheritance model, and activities. Includes an example data sheet format for fingerprints. (RT)

  15. Remote Sensing of Vineyard FPAR, with Implications for Irrigation Scheduling

    NASA Technical Reports Server (NTRS)

    Johnson, Lee F.; Scholasch, Thibaut

    2004-01-01

    Normalized difference vegetation index (NDVI) data, acquired at two-meter resolution by an airborne ADAR System 5500, were compared with fraction of photosynthetically active radiation (FPAR) absorbed by commercial vineyards in Napa Valley, California. An empirical line correction was used to transform image digital counts to surface reflectance. "Apparent" NDVI (generated from digital counts) and "corrected" NDVI (from reflectance) were both strongly related to FPAR of range 0.14-0.50 (both r^2 = 0.97, P < 0.01). By suppressing noise, corrected NDVI should form a more spatially and temporally stable relationship with FPAR, reducing the need for repeated field support. Study results suggest the possibility of using optical remote sensing to monitor the transpiration crop coefficient, thus providing an enhanced spatial resolution component to crop water budget calculations and irrigation management.
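
    The empirical line correction is a per-band linear map from digital counts to reflectance anchored by reference targets of known reflectance, from which the corrected NDVI follows. A small sketch with made-up calibration targets and pixel values:

      import numpy as np

      def empirical_line(dn, dn_dark, dn_bright, refl_dark, refl_bright):
          """Empirical line correction: a per-band linear map from digital counts to surface
          reflectance, anchored by two reference targets of known reflectance."""
          gain = (refl_bright - refl_dark) / (dn_bright - dn_dark)
          offset = refl_dark - gain * dn_dark
          return gain * dn + offset

      def ndvi(nir, red):
          return (nir - red) / (nir + red)

      # Made-up calibration targets and pixel values for the red and NIR bands.
      red_refl = empirical_line(np.array([52.0, 80.0]), dn_dark=20, dn_bright=200,
                                refl_dark=0.02, refl_bright=0.45)
      nir_refl = empirical_line(np.array([140.0, 95.0]), dn_dark=15, dn_bright=210,
                                refl_dark=0.03, refl_bright=0.60)
      print(ndvi(nir_refl, red_refl))   # "corrected" NDVI from reflectance rather than raw counts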

  16. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.

  17. Pile-up corrections in laser-driven pulsed X-ray sources

    NASA Astrophysics Data System (ADS)

    Hernández, G.; Fernández, F.

    2018-06-01

    A formalism for treating the pile-up produced in solid-state detectors by laser-driven pulsed X-ray sources has been developed. It allows the direct use of X-ray spectroscopy without artificially decreasing the number of counts in the detector, assuming the duration of a pulse is much shorter than the detector response time and the loss of counts from the energy window of the detector can be modeled or neglected. Experimental application shows that allowing a small amount of pile-up and correcting for it afterwards improves the signal-to-noise ratio, which is more beneficial than the strict single-hit condition usually imposed on these detectors.
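
    In the regime the formalism addresses, the whole pulse arrives well within the detector response time, so the photons recorded in one shot pile up into a single summed event and the photon number per shot is Poisson distributed. The sketch below only estimates the per-shot pile-up fractions for a few mean intensities; it is not the paper's correction procedure.

      from math import exp, factorial

      def pileup_fractions(mean_photons_per_shot):
          """Poisson probabilities of recording 0, 1, or >=2 photons in a single shot when the
          pulse is much shorter than the detector response time (so all photons pile up)."""
          p = [exp(-mean_photons_per_shot) * mean_photons_per_shot ** k / factorial(k)
               for k in range(2)]
          return {"empty": p[0], "single": p[1], "pileup": 1.0 - p[0] - p[1]}

      for mu in (0.1, 0.5, 1.0):
          print(mu, pileup_fractions(mu))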

  18. Radioactive contamination of scintillators

    NASA Astrophysics Data System (ADS)

    Danevich, F. A.; Tretyak, V. I.

    2018-03-01

    Low counting experiments (searches for double β decay and dark matter particles, measurements of neutrino fluxes from different sources, searches for hypothetical nuclear and subnuclear processes, low background α, β, γ spectrometry) require an extremely low detector background. Scintillators are widely used to search for rare events, both as conventional scintillation detectors and as cryogenic scintillating bolometers. The radioactive contamination of a scintillation material plays a key role in reaching a low background level. The origin and nature of radioactive contamination of scintillators, experimental methods and results are reviewed. A programme to develop radiopure crystal scintillators for low counting experiments is discussed briefly.

  19. A new approach for measuring the work and quality of histopathology reporting.

    PubMed

    Sharma, Vijay; Davey, Jonathan G N; Humphreys, Catherine; Johnston, Peter W

    2013-07-01

    Cancer datasets drive report quality, but require more work to produce compliant reports. The aim of this study was to correlate the number of words with measures of quality, to examine the impact of the drive for improved quality on the workload of histopathology reporting over time. We examined the first 10 reports of colon, breast, renal, lung and ovarian carcinoma, melanoma resection, nodal lymphoma, appendicitis and seborrhoeic keratosis (SK) issued in 1991, 2001 and 2011. Correlations were analysed using Pearson's partial correlation coefficients. Word count increased significantly over time for most specimen types examined. Word count almost always correlated with units of information, indicating that the word count was a good measure of the amount of information contained within the reports; this correlation was preserved following correction for the effect of time. A good correlation with compliance with cancer datasets was also observed, but was weakened or lost following correction for the increase in word count and units of information that occurred between time points. These data indicate that word count could potentially be used as a measure of information content if its integrity and usefulness are continuously validated. Further prospective studies are required to assess and validate this approach. © 2013 John Wiley & Sons Ltd.

  20. The cosmic ray spectrum above 10^19 eV at Volcano Ranch and Haverah Park

    NASA Technical Reports Server (NTRS)

    Linsley, J.

    1986-01-01

    The cosmic ray energy per particle spectrum above 10^19 eV is measured the same way that energy spectra are measured at much lower energies, by counting all of the particles in a specified energy range that are incident per unit time with trajectories within specified geometrical limits. Difficulties with background or poorly known detection efficiency are markedly less than in some other cosmic ray measurements. The fraction of primary energy given to muons, neutrinos, and slow hadrons is less than 10% in this region, so the primary energy equals the track length integral of the secondary electrons with only a small correction for the energy given to other kinds of particles. Results from Volcano Ranch and Haverah Park are compared with results from the Yakutsk experiment.

  1. Reply-frequency interference/jamming detector

    NASA Astrophysics Data System (ADS)

    Bishop, Walton B.

    1995-01-01

    Received IFF reply-frequency signals are examined to determine whether they are being interfered with by enemy sources, and an indication of the extent of detected interference is provided. The number of correct replies received from selected range bins surrounding and including the center one in which a target leading edge is first declared is counted and compared with the number of friend-accept decisions made based on replies from the selected range bins. The level of interference is then indicated by the ratio between the two counts.

  2. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence this limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function reproduces the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring the reference prior to be evaluated a very large number of times.
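
    As a concrete illustration of the limiting form described above, the following minimal numerical sketch treats the approximate reference posterior as a Gamma density of shape n + 1/2 and unit rate in the total intensity λ = s + b, truncated to λ ≥ b and renormalized; this parametrization, the equal-tail 90% interval and the example counts are illustrative assumptions of this note, not values taken from the paper.

        # Sketch of a limiting-form posterior for a Poisson signal s with known
        # expected background b, given n observed counts.
        # Assumption: the posterior in lambda = s + b is Gamma(n + 1/2, rate 1),
        # truncated to lambda >= b and renormalized.
        import numpy as np
        from scipy import stats

        def approx_reference_posterior(n, b):
            lam = stats.gamma(a=n + 0.5, scale=1.0)   # Gamma density in lambda = s + b
            norm = lam.sf(b)                          # renormalization for the truncation lambda >= b

            def pdf(s):
                s = np.asarray(s, dtype=float)
                return np.where(s >= 0, lam.pdf(s + b) / norm, 0.0)

            def credible_interval(cl=0.90):
                # Equal-tail interval in lambda, mapped back to s = lambda - b.
                lo = lam.ppf(lam.cdf(b) + 0.5 * (1.0 - cl) * norm) - b
                hi = lam.ppf(lam.cdf(b) + (1.0 - 0.5 * (1.0 - cl)) * norm) - b
                return max(lo, 0.0), hi

            return pdf, credible_interval

        pdf, ci = approx_reference_posterior(n=10, b=3.2)
        print(ci(0.90))   # illustrative 90% credible interval for the signal intensity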

  3. Soudan Low Background Counting Facility (SOLO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attisha, Michael; Viveiros, Luiz de; Gaitksell, Richard

    2005-09-08

    The Soudan Low Background Counting Facility (SOLO) has been in operation at the Soudan Mine, MN since March 2003. In the past two years, we have gamma-screened samples for the Majorana, CDMS and XENON experiments. With individual sample exposure times of up to two weeks we have measured sample contamination down to the 0.1 ppb level for 238U / 232Th, and down to the 0.25 ppm level for 40K.

  4. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
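
    The Bayesian construction referred to here uses a flat prior on the source intensity S with the mean background B taken as known, so the posterior is proportional to the Poisson likelihood exp(-(S+B))(S+B)^N on S ≥ 0. A minimal numerical sketch of an upper limit computed from that posterior is given below; the grid size, the 90% confidence level and the example counts are illustrative choices, not values from the paper.

        # Numerical sketch of a Bayesian upper limit for a Poisson source intensity S,
        # given N observed counts and a known mean background B > 0 (flat prior on S >= 0).
        import numpy as np

        def bayesian_upper_limit(N, B, cl=0.90, s_max=50.0, n_grid=20001):
            s = np.linspace(0.0, s_max, n_grid)
            # Unnormalized posterior: Poisson likelihood for N counts with mean S + B.
            log_post = -(s + B) + N * np.log(s + B)
            post = np.exp(log_post - log_post.max())       # subtract max to avoid overflow
            post /= np.trapz(post, s)                      # normalize numerically
            cdf = np.cumsum(post) * (s[1] - s[0])
            return s[np.searchsorted(cdf, cl)]             # smallest S_up with P(S <= S_up) >= cl

        print(bayesian_upper_limit(N=3, B=1.2, cl=0.90))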

  5. Impact of relativistic effects on cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

    2018-01-01

    Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification on number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5-10% level. By contrast, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.

  6. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Dilution air background emission...

  7. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Dilution air background emission...

  8. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Dilution air background emission...

  9. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Dilution air background emission...

  10. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Dilution air background emission...

  11. Influence of slice overlap on positron emission tomography image quality

    NASA Astrophysics Data System (ADS)

    McKeown, Clare; Gillen, Gerry; Dempsey, Mary Frances; Findlay, Caroline

    2016-02-01

    PET scans use overlapping acquisition beds to correct for reduced sensitivity at bed edges. The optimum overlap size for the General Electric (GE) Discovery 690 has not been established. This study assesses how image quality is affected by slice overlap. Efficacy of 23% overlaps (recommended by GE) and 49% overlaps (maximum possible overlap) were specifically assessed. European Association of Nuclear Medicine (EANM) guidelines for calculating minimum injected activities based on overlap size were also reviewed. A uniform flood phantom was used to assess noise (coefficient of variation, (COV)) and voxel accuracy (activity concentrations, Bq ml-1). A NEMA (National Electrical Manufacturers Association) body phantom with hot/cold spheres in a background activity was used to assess contrast recovery coefficients (CRCs) and signal to noise ratios (SNR). Different overlap sizes and sphere-to-background ratios were assessed. COVs for 49% and 23% overlaps were 9% and 13% respectively. This increased noise was difficult to visualise on the 23% overlap images. Mean voxel activity concentrations were not affected by overlap size. No clinically significant differences in CRCs were observed. However, visibility and SNR of small, low contrast spheres (⩽13 mm diameter, 2:1 sphere to background ratio) may be affected by overlap size in low count studies if they are located in the overlap area. There was minimal detectable influence on image quality in terms of noise, mean activity concentrations or mean CRCs when comparing 23% overlap with 49% overlap. Detectability of small, low contrast lesions may be affected in low count studies—however, this is a worst-case scenario. The marginal benefits of increasing overlap from 23% to 49% are likely to be offset by increased patient scan times. A 23% overlap is therefore appropriate for clinical use. An amendment to EANM guidelines for calculating injected activities is also proposed which better reflects the effect overlap size has on image noise.

  12. Infant Maltreatment-Related Mortality in Alaska: Correcting the Count and Using Birth Certificates to Predict Mortality

    ERIC Educational Resources Information Center

    Parrish, Jared W.; Gessner, Bradford D.

    2010-01-01

    Objectives: To accurately count the number of infant maltreatment-related fatalities and to use information from the birth certificates to predict infant maltreatment-related deaths. Methods: A population-based retrospective cohort study of infants born in Alaska for the years 1992 through 2005 was conducted. Risk factor variables were ascertained…

  13. Does Learning to Count Involve a Semantic Induction?

    ERIC Educational Resources Information Center

    Davidson, Kathryn; Eng, Kortney; Barner, David

    2012-01-01

    We tested the hypothesis that, when children learn to correctly count sets, they make a semantic induction about the meanings of their number words. We tested the logical understanding of number words in 84 children that were classified as "cardinal-principle knowers" by the criteria set forth by Wynn (1992). Results show that these children often…

  14. A whole-system approach to x-ray spectroscopy in cargo inspection systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langeveld, Willem G. J.; Gozani, Tsahi; Ryge, Peter

    The bremsstrahlung x-ray spectrum used in high-energy, high-intensity x-ray cargo inspection systems is attenuated and modified by the materials in the cargo in a Z-dependent way. Therefore, spectroscopy of the detected x rays yields information about the Z of the x-rayed cargo material. It has previously been shown that such Z-Spectroscopy (Z-SPEC) is possible under certain circumstances. A statistical approach, Z-SCAN (Z-determination by Statistical Count-rate ANalysis), has also been shown to be effective, and it can be used either by itself or in conjunction with Z-SPEC when the x-ray count rate is too high for individual x-ray spectroscopy. Both techniques require fast x-ray detectors and fast digitization electronics. It is desirable (and possible) to combine all techniques, including x-ray imaging of the cargo, in a single detector array, to reduce costs, weight, and overall complexity. In this paper, we take a whole-system approach to x-ray spectroscopy in x-ray cargo inspection systems, and show how the various parts interact with one another. Faster detectors and read-out electronics are beneficial for both techniques. A higher duty-factor x-ray source allows lower instantaneous count rates at the same overall x-ray intensity, improving the range of applicability of Z-SPEC in particular. Using an intensity-modulated advanced x-ray source (IMAXS) allows reducing the x-ray count rate for cargoes with higher transmission, and a stacked-detector approach may help material discrimination for the lowest attenuations. Image processing and segmentation allow derivation of results for entire objects, and subtraction of backgrounds. We discuss R and D performed under a number of different programs, showing progress made in each of the interacting subsystems. We discuss results of studies into faster scintillation detectors, including ZnO, BaF2 and PbWO4, as well as suitable photo-detectors, read-out and digitization electronics. We discuss high-duty-factor linear-accelerator x-ray sources and their associated requirements, and how such sources improve spectroscopic techniques. We further discuss how image processing techniques help in correcting for backgrounds and overlapping materials. In sum, we present an integrated picture of how to optimize a cargo inspection system for x-ray spectroscopy.

  15. Perturbative corrections to B → D form factors in QCD

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ming; Wei, Yan-Bing; Shen, Yue-Long; Lü, Cai-Dian

    2017-06-01

    We compute perturbative QCD corrections to B → D form factors at leading power in Λ/m_b, at large hadronic recoil, from the light-cone sum rules (LCSR) with B-meson distribution amplitudes in HQET. QCD factorization for the vacuum-to-B-meson correlation function with an interpolating current for the D-meson is demonstrated explicitly at one loop with the power counting scheme m_c ~ O(√(Λ m_b)). The jet functions encoding information of the hard-collinear dynamics in the above-mentioned correlation function are complicated by the appearance of an additional hard-collinear scale m_c, compared to the counterparts entering the factorization formula of the vacuum-to-B-meson correlation function for the construction of B → π form factors. Inspecting the next-to-leading-logarithmic sum rules for the form factors of B → Dℓν indicates that perturbative corrections to the hard-collinear functions are more profound than those for the hard functions, with the default theory inputs, in the physical kinematic region. We further compute the subleading power correction induced by the three-particle quark-gluon distribution amplitudes of the B-meson at tree level employing the background gluon field approach. The LCSR predictions for the semileptonic B → Dℓν form factors are then extrapolated to the entire kinematic region with the z-series parametrization. Phenomenological implications of our determinations for the form factors f_BD^{+,0}(q^2) are explored by investigating the (differential) branching fractions and the R(D) ratio of B → Dℓν and by determining the CKM matrix element |V_cb| from the total decay rate of B → Dμν_μ.

  16. Improvement of semi-quantitative small-animal PET data with recovery coefficients: a phantom and rat study.

    PubMed

    Aide, Nicolas; Louis, Marie-Hélène; Dutoit, Soizic; Labiche, Alexandre; Lemoisson, Edwige; Briand, Mélanie; Nataf, Valérie; Poulain, Laurent; Gauduchon, Pascal; Talbot, Jean-Noël; Montravers, Françoise

    2007-10-01

    To evaluate the accuracy of semi-quantitative small-animal PET data, uncorrected for attenuation, and then of the same semi-quantitative data corrected by means of recovery coefficients (RCs) based on phantom studies. A phantom containing six fillable spheres (diameter range: 4.4-14 mm) was filled with an 18F-FDG solution (spheres/background activity=10.1, 5.1 and 2.5). RCs, defined as measured activity/expected activity, were calculated. Nude rats harbouring tumours (n=50) were imaged after injection of 18F-FDG and sacrificed. The standardized uptake value (SUV) in tumours was determined with small-animal PET and compared to ex-vivo counting (ex-vivo SUV). Small-animal PET SUVs were corrected with RCs based on the greatest tumour diameter. Tumour proliferation was assessed with cyclin A immunostaining and correlated to the SUV. RCs ranged from 0.33 for the smallest sphere to 0.72 for the largest. A sigmoidal correlation was found between RCs and sphere diameters (r(2)=0.99). Small-animal PET SUVs were well correlated with ex-vivo SUVs (y=0.48x-0.2; r(2)=0.71) and the use of RCs based on the greatest tumour diameter significantly improved regression (y=0.84x-0.81; r(2)=0.77), except for tumours with important necrosis. Similar results were obtained without sacrificing animals, by using PET images to estimate tumour dimensions. RC-based corrections improved correlation between small-animal PET SUVs and tumour proliferation (uncorrected data: Rho=0.79; corrected data: Rho=0.83). Recovery correction significantly improves both accuracy of small-animal PET semi-quantitative data in rat studies and their correlation with tumour proliferation, except for largely necrotic tumours.
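
    The correction described above amounts to fitting a sigmoidal recovery curve RC(diameter) to the phantom spheres and dividing each measured SUV by the RC predicted for the greatest tumour diameter. The sketch below illustrates this with a generic logistic function; the phantom numbers, the functional form and the parameter names are placeholders, not the published values or the authors' exact fit.

        # Sketch of a recovery-coefficient (RC) correction: fit a sigmoid RC(diameter)
        # to phantom data, then divide measured SUVs by the RC predicted from the
        # greatest tumour diameter. The phantom values below are placeholders.
        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(d, rc_max, d50, k):
            # Logistic recovery curve approaching rc_max for large sphere diameters d (mm).
            return rc_max / (1.0 + np.exp(-(d - d50) / k))

        diam = np.array([4.4, 6.0, 8.0, 10.0, 12.0, 14.0])   # sphere diameters (mm), placeholder
        rc = np.array([0.33, 0.42, 0.55, 0.63, 0.69, 0.72])  # measured/expected activity, placeholder
        params, _ = curve_fit(sigmoid, diam, rc, p0=[0.75, 7.0, 2.0], maxfev=10000)

        def corrected_suv(measured_suv, tumour_diameter_mm):
            return measured_suv / sigmoid(tumour_diameter_mm, *params)

        print(corrected_suv(measured_suv=1.8, tumour_diameter_mm=9.0))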

  17. Inventory count strategies.

    PubMed

    Springer, W H

    1996-02-01

    An important principle of accounting is that asset inventory needs to be correctly valued to ensure that the financial statements of the institution are accurate. Errors in recording the value of ending inventory in one fiscal year result in errors in published financial statements for that year as well as the subsequent fiscal year. Therefore, it is important that accurate physical counts be periodically taken. It is equally important that any system being used to generate inventory valuation, reordering or management reports be based on consistently accurate on-hand balances. At the foundation of conducting an accurate physical count of an inventory is a comprehensive understanding of the process coupled with a written plan. This article presents a guideline of the physical count processes involved in a traditional double-count approach.

  18. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biases in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled Count Per Second (CPS), based on backward extrapolation of the losses, obtained by imposing increasingly large artificial dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1-2%) in restoring the corrected count rate.
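
    The idea can be sketched in a few lines: impose increasingly large artificial dead times on the recorded event timestamps, measure the surviving count rate for each imposed value, and extrapolate the trend back to zero imposed dead time. In the sketch below a non-paralyzing filter and a simple linear extrapolation are used; both are simplifying assumptions of this note rather than the exact model of the cited work.

        # Backward-extrapolation sketch: apply growing artificial (non-paralyzing) dead
        # times tau to event timestamps, compute the surviving count rate, and
        # extrapolate rate(tau) back to tau = 0. The linear fit is a simplification.
        import numpy as np

        def apply_dead_time(timestamps, tau):
            # Keep only events separated from the last accepted event by at least tau.
            kept, last = 0, -np.inf
            for t in timestamps:
                if t - last >= tau:
                    kept += 1
                    last = t
            return kept

        def extrapolated_rate(timestamps, taus):
            duration = timestamps[-1] - timestamps[0]
            rates = np.array([apply_dead_time(timestamps, tau) / duration for tau in taus])
            slope, intercept = np.polyfit(taus, rates, 1)   # linear trend of rate vs. tau
            return intercept                                # value extrapolated back to tau = 0

        # Toy data: Poisson arrivals at 5e4 counts per second for about 0.1 s.
        rng = np.random.default_rng(0)
        ts = np.cumsum(rng.exponential(1.0 / 5e4, size=5000))
        print(extrapolated_rate(ts, taus=np.linspace(1e-6, 5e-6, 5)))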

  19. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    PubMed

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  20. Prioritizing CD4 Count Monitoring in Response to ART in Resource-Constrained Settings: A Retrospective Application of Prediction-Based Classification

    PubMed Central

    Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.

    2012-01-01

    Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings. Please see later in the article for the Editors' Summary PMID:22529752
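
    The two-stage structure of the PBC approach (predict the CD4 count from cheaper, repeatedly measured markers, then classify against a clinical threshold so that only predicted-low patients are sent for laboratory CD4 testing) can be sketched as below. Ordinary linear regression on synthetic data stands in for the paper's longitudinal first-stage model; the feature set, the threshold handling and all numbers are illustrative.

        # Minimal two-stage sketch of prediction-based classification (PBC): stage 1
        # predicts CD4 from white blood cell count and lymphocyte percent, stage 2
        # flags patients predicted below a clinical threshold for actual CD4 testing.
        # Linear regression on synthetic data is a stand-in for the paper's model.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 500
        wbc = rng.normal(6.0, 2.0, n)               # white blood cell count (10^3 cells/uL), synthetic
        lymph_pct = rng.uniform(10.0, 50.0, n)      # lymphocyte percent, synthetic
        cd4 = 0.3 * wbc * 1000.0 * lymph_pct / 100.0 + rng.normal(0.0, 60.0, n)  # synthetic CD4

        features = np.column_stack([wbc, lymph_pct])
        model = LinearRegression().fit(features, cd4)

        threshold = 200.0                           # cells/uL, clinically relevant cut-off
        predicted_low = model.predict(features) < threshold
        print("fraction triaged to laboratory CD4 testing:", predicted_low.mean())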

  1. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers in this vector (points that belong to chromatographic peaks) and to update those outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
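
    A minimal sketch of the described sequence (local minima as initial baseline nodes, iterative replacement of nodes that sit on peaks, then linear interpolation of the baseline over the full chromatogram) is given below. The outlier rule, window sizes and thresholds are illustrative simplifications, not the parameters of the published method.

        # Sketch of the described drift correction: local minima as baseline nodes,
        # iterative replacement of nodes that sit on peaks, then linear interpolation
        # of the baseline over the whole chromatogram (thresholds are illustrative).
        import numpy as np
        from scipy.signal import argrelmin, medfilt

        def correct_drift(signal, max_iter=20, kernel=21, z_thresh=2.0):
            idx = argrelmin(signal, order=5)[0]               # indices of local minima
            base = signal[idx].astype(float)
            for _ in range(max_iter):
                smooth = medfilt(base, kernel_size=kernel)
                resid = base - smooth
                outliers = resid > z_thresh * np.std(resid)   # nodes sitting on peaks
                if not outliers.any():
                    break
                base[outliers] = smooth[outliers]             # update outliers and iterate
            drift = np.interp(np.arange(signal.size), idx, base)
            return signal - drift, drift

        # Toy chromatogram: slow drift plus two Gaussian peaks plus noise.
        t = np.arange(2000)
        rng = np.random.default_rng(2)
        chrom = (0.02 * t + 50.0 * np.exp(-(t - 600.0) ** 2 / 200.0)
                 + 30.0 * np.exp(-(t - 1400.0) ** 2 / 400.0) + rng.normal(0.0, 0.5, t.size))
        corrected, baseline = correct_drift(chrom)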

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prior, P; Timmins, R; Wells, R G

    Dual isotope SPECT allows simultaneous measurement of two different tracers in vivo. With In111 (emission energies of 171 keV and 245 keV) and Tc99m (140 keV), quantification of Tc99m is degraded by cross talk from the In111 photons that scatter and are detected at an energy corresponding to Tc99m. TEW uses counts recorded in two narrow windows surrounding the Tc99m primary window to estimate scatter. Iterative TEW corrects for the bias introduced into the TEW estimate resulting from un-scattered counts detected in the scatter windows. The contamination in the scatter windows is iteratively estimated and subtracted as a fraction of the scatter-corrected primary window counts. The iterative TEW approach was validated with a small-animal SPECT/CT camera using a 2.5 mL plastic container holding thoroughly mixed Tc99m/In111 activity fractions of 0.15, 0.28, 0.52, 0.99, 2.47 and 6.90. Dose calibrator measurements were the gold standard. Uncorrected for scatter, the Tc99m activity was over-estimated by as much as 80%. Unmodified TEW underestimated the Tc99m activity by 13%. With iterative TEW corrections applied in projection space, the Tc99m activity was estimated within 5% of truth across all activity fractions above 0.15. This is an improvement over the non-iterative TEW, which could not sufficiently correct for scatter in the 0.15 and 0.28 phantoms.
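
    The iterative correction described above can be sketched for a single projection bin as follows: form the classical triple-energy-window (TEW) scatter estimate from the two narrow flanking windows, subtract an assumed fraction of the scatter-corrected primary counts from each flanking window to remove the primary spill-over, and repeat. The window widths, spill-over fractions and count values below are illustrative placeholders, not the acquisition settings of the cited study.

        # Sketch of iterative triple-energy-window (TEW) scatter correction for one
        # projection bin; f_lo and f_hi are the assumed fractions of unscattered
        # primary counts spilling into each narrow window (illustrative values).
        def tew_scatter(c_lo, c_hi, w_lo, w_hi, w_main):
            # Classical TEW trapezoid estimate of scatter inside the main window.
            return (c_lo / w_lo + c_hi / w_hi) * w_main / 2.0

        def iterative_tew(c_main, c_lo, c_hi, w_lo, w_hi, w_main,
                          f_lo=0.005, f_hi=0.005, n_iter=10):
            lo, hi = float(c_lo), float(c_hi)
            for _ in range(n_iter):
                scatter = tew_scatter(lo, hi, w_lo, w_hi, w_main)
                primary = max(c_main - scatter, 0.0)      # scatter-corrected main window
                # Remove the estimated primary spill-over and re-estimate the scatter.
                lo = max(c_lo - f_lo * primary, 0.0)
                hi = max(c_hi - f_hi * primary, 0.0)
            return primary

        print(iterative_tew(c_main=10000, c_lo=180, c_hi=90,
                            w_lo=4.0, w_hi=4.0, w_main=28.0))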

  3. A mercuric iodide detector system for X-ray astronomy. II - Results from flight tests of a balloon borne instrument

    NASA Technical Reports Server (NTRS)

    Vallerga, J. V.; Vanderspek, R. K.; Ricker, G. R.

    1983-01-01

    To establish the expected sensitivity of a new hard X-ray telescope design, described by Ricker et al., an experiment was conducted to measure the background counting rate at balloon altitudes (40 km) of mercuric iodide, a room temperature solid state X-ray detector. The prototype detector consisted of two thin mercuric iodide (HgI2) detectors surrounded by a large bismuth germanate scintillator operated in anticoincidence. The bismuth germanate shield vetoed most of the background counting rate induced by atmospheric gamma-rays, neutrons and cosmic rays. A balloon-borne gondola containing a prototype detector assembly was designed, constructed and flown twice in the spring of 1982 from Palestine, TX. The second flight of this instrument established a differential background counting rate of (4.2 ± 0.7) × 10^-5 counts s^-1 cm^-2 keV^-1 over the energy range of 40-80 keV. This measurement was within 50 percent of the predicted value. The measured rate is about 5 times lower than previously achieved in shielded NaI/CsI or Ge systems operating in the same energy range.

  4. Association of BPD and IVH with early neutrophil and white counts in VLBW neonates with gestational age <32 weeks

    PubMed Central

    Palta, Mari; Sadek-Badawi, Mona; Carlton, David P

    2008-01-01

    Objectives To investigate associations between early low neutrophil count from routine blood samples, white blood count (WBC), pregnancy complications and neonatal outcomes for very low birth weight infants (VLBW ≤1500 g) with gestational age <32 weeks. Patients and Methods Information was abstracted on all infants admitted to level III NICUs in Wisconsin 2003-2004. 1002 (78%) had differential and corrected total white counts within 2 ½ hours of birth. Data analyses included frequency tables, binary logistic, ordinal logistic and ordinary regression. Results Low neutrophil count (<1000/μL) was strongly associated with low WBC, pregnancy complications and antenatal steroids. Low neutrophil count predicted bronchopulmonary dysplasia severity level (BPD) (OR: 1.7, 95% CI: 1.1-2.7) and intraventricular hemorrhage (IVH) grade (OR: 2.2, 95% CI: 1.3-3.8). Conclusions Early neutrophil counts may have multiple causes interfering with their routine use as an inflammatory marker. Nonetheless, low neutrophil count has consistent independent associations with outcomes.

  5. Calibrating passive acoustic monitoring: correcting humpback whale call detections for site-specific and time-dependent environmental characteristics.

    PubMed

    Helble, Tyler A; D'Spain, Gerald L; Campbell, Greg S; Hildebrand, John A

    2013-11-01

    This paper demonstrates the importance of accounting for environmental effects on passive underwater acoustic monitoring results. The situation considered is the reduction in shipping off the California coast between 2008 and 2010 due to the recession and environmental legislation. The resulting variations in ocean noise change the probability of detecting marine mammal vocalizations. An acoustic model was used to calculate the time-varying probability of detecting humpback whale vocalizations under best-guess environmental conditions and varying noise. The uncorrected call counts suggest a diel pattern and an increase in calling over a two-year period; the corrected call counts show minimal evidence of these features.

  6. (Biased) Grading of Students' Performance: Students' Names, Performance Level, and Implicit Attitudes.

    PubMed

    Bonefeld, Meike; Dickhäuser, Oliver

    2018-01-01

    Biases in pre-service teachers' evaluations of students' performance may arise due to stereotypes (e.g., the assumption that students with a migrant background have lower potential). This study examines the effects of a migrant background, performance level, and implicit attitudes toward individuals with a migrant background on performance assessment (assigned grades and number of errors counted in a dictation). Pre-service teachers ( N = 203) graded the performance of a student who appeared to have a migrant background statistically significantly worse than that of a student without a migrant background. The differences were more pronounced when the performance level was low and when the pre-service teachers held relatively positive implicit attitudes toward individuals with a migrant background. Interestingly, only performance level had an effect on the number of counted errors. Our results support the assumption that pre-service teachers exhibit bias when grading students with a migrant background in a third-grade level dictation assignment.

  7. (Biased) Grading of Students’ Performance: Students’ Names, Performance Level, and Implicit Attitudes

    PubMed Central

    Bonefeld, Meike; Dickhäuser, Oliver

    2018-01-01

    Biases in pre-service teachers’ evaluations of students’ performance may arise due to stereotypes (e.g., the assumption that students with a migrant background have lower potential). This study examines the effects of a migrant background, performance level, and implicit attitudes toward individuals with a migrant background on performance assessment (assigned grades and number of errors counted in a dictation). Pre-service teachers (N = 203) graded the performance of a student who appeared to have a migrant background statistically significantly worse than that of a student without a migrant background. The differences were more pronounced when the performance level was low and when the pre-service teachers held relatively positive implicit attitudes toward individuals with a migrant background. Interestingly, only performance level had an effect on the number of counted errors. Our results support the assumption that pre-service teachers exhibit bias when grading students with a migrant background in a third-grade level dictation assignment. PMID:29867618

  8. The State of the World's Children 2014 in Numbers: Every Child Counts. Revealing Disparities, Advancing Children's Rights

    ERIC Educational Resources Information Center

    Aslam, Abid; Grojec, Anna; Little, Céline; Maloney, Ticiana; Tamagni, Jordan

    2014-01-01

    "The State of the World's Children 2014 In Numbers: Every Child Counts" highlights the critical role data and monitoring play in realizing children's rights. Credible data, disseminated effectively and used correctly, make it possible to target interventions that help right the wrong of exclusion. Data do not, of themselves, change the…

  9. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
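
    To make the pulse model concrete, the sketch below fits a bi-exponential pulse shape to a digitised waveform with a generic least-squares routine; this stands in for the fast non-iterative successive-integration solver of the cited work, and the rise/decay constants, sampling and noise level are illustrative.

        # Fitting a bi-exponential pulse model A * (exp(-t/tau_d) - exp(-t/tau_r)) to a
        # digitised detector pulse. A generic least-squares fit is used here instead of
        # the successive-integration solver of the cited work; parameters are illustrative.
        import numpy as np
        from scipy.optimize import curve_fit

        def pulse(t, amp, t0, tau_r, tau_d):
            dt = np.clip(t - t0, 0.0, None)
            return amp * (np.exp(-dt / tau_d) - np.exp(-dt / tau_r)) * (t >= t0)

        t = np.arange(0.0, 400.0)                  # sample index (arbitrary time units)
        rng = np.random.default_rng(3)
        data = pulse(t, 120.0, 50.0, 5.0, 60.0) + rng.normal(0.0, 1.0, t.size)

        popt, _ = curve_fit(pulse, t, data, p0=[100.0, 45.0, 4.0, 50.0], maxfev=5000)
        print("recovered amplitude (proportional to deposited energy):", popt[0])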

  10. "How do you know those particles are from cigarettes?": An algorithm to help differentiate second-hand tobacco smoke from background sources of household fine particulate matter.

    PubMed

    Dobson, Ruaraidh; Semple, Sean

    2018-06-18

    Second-hand smoke (SHS) at home is a target for public health interventions, such as air quality feedback interventions using low-cost particle monitors. However, these monitors also detect fine particles generated from non-SHS sources. The Dylos DC1700 reports particle counts in the coarse and fine size ranges. As tobacco smoke produces far more fine particles than coarse ones, and tobacco is generally the greatest source of particulate pollution in a smoking home, the ratio of coarse to fine particles may provide a useful method to identify the presence of SHS in homes. An algorithm was developed to differentiate smoking from smoke-free homes. Particle concentration data from 116 smoking homes and 25 non-smoking homes were used to test this algorithm. The algorithm correctly classified the smoking status of 135 of the 141 homes (96%), comparing favourably with a test of mean mass concentration. Applying this algorithm to Dylos particle count measurements may help identify the presence of SHS in homes or other indoor environments. Future research should adapt it to detect individual smoking periods within a 24 h or longer measurement period. Copyright © 2018 Elsevier Inc. All rights reserved.
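
    The decision rule can be sketched as a simple ratio test: because tobacco smoke is dominated by fine particles, a fine-to-coarse count ratio that stays high for part of the monitoring period points to a smoking home. The ratio threshold, the fraction-of-time rule and the toy data below are illustrative placeholders, not the published algorithm or its calibrated cut-offs.

        # Sketch of a ratio-based classifier for second-hand smoke: flag a home as
        # "smoking" if the fine-to-coarse particle count ratio exceeds a threshold for
        # a minimum fraction of the monitoring period (thresholds are illustrative).
        import numpy as np

        def classify_home(fine_counts, coarse_counts, ratio_threshold=50.0, min_fraction=0.05):
            fine = np.asarray(fine_counts, dtype=float)
            coarse = np.asarray(coarse_counts, dtype=float)
            ratio = fine / np.clip(coarse, 1.0, None)           # avoid division by zero
            smoky_fraction = np.mean(ratio > ratio_threshold)   # fraction of SHS-like samples
            return "smoking" if smoky_fraction >= min_fraction else "smoke-free"

        # Toy 24 h record at 1-minute resolution with a simulated evening smoking event.
        rng = np.random.default_rng(4)
        coarse = rng.poisson(5, 1440).astype(float)
        fine = rng.poisson(50, 1440).astype(float)
        fine[1100:1200] += 20000.0
        print(classify_home(fine, coarse))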

  11. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    PubMed

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real world observations are finite where the finite-size effects will set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance being almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to the finite-size effects in the real world; this is regardless of the advancement in sequencing technology since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates, bringing about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity, invoking an instant increase in signal-to-noise ratio by 50% and in statistical/detection sensitivity by as high as 30% regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (4 times difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth instance versus 13% for the lower instance), demonstrating that the simple power-law correction can increase significant detection with higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line against AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
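
    The idea of restoring the count distribution to a single exponent can be illustrated schematically: fit one power-law exponent to the well-sampled, high-abundance part of the log-log rank-abundance curve and map every count onto that single-exponent line. This is a naive rendering used only to make the concept concrete; the fitting window, the toy Zipf-like data and the mapping are not the authors' procedure.

        # Schematic power-law correction of a transcript count distribution: fit a
        # single exponent to the high-abundance part of the rank-abundance curve and
        # map every count onto that single-exponent line (illustrative only).
        import numpy as np

        def power_law_correct(counts, top_fraction=0.2):
            counts = np.sort(np.asarray(counts, dtype=float))[::-1]    # rank-abundance order
            ranks = np.arange(1, counts.size + 1, dtype=float)
            n_fit = max(int(top_fraction * counts.size), 2)
            slope, intercept = np.polyfit(np.log(ranks[:n_fit]), np.log(counts[:n_fit]), 1)
            corrected = np.exp(intercept + slope * np.log(ranks))      # single-exponent line
            return counts, corrected, slope

        # Toy data: an ideal Zipf-like tail distorted by finite sampling (Poisson noise).
        rng = np.random.default_rng(5)
        ideal = 1e5 / np.arange(1, 5001) ** 0.9
        observed = rng.poisson(ideal)
        obs_sorted, corrected, exponent = power_law_correct(observed[observed > 0])
        print("fitted exponent:", exponent)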

  12. Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.

    PubMed

    Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien

    2017-01-01

    Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However it suffers from subject motion which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [ 11 C]raclopride data using the Zubal brain phantom and real clinical [ 18 F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.

  13. Behavior and enterotoxin production by coagulase negative Staphylococcus in cooked ham, reconstituted skimmed milk, and confectionery cream.

    PubMed

    Oliveira, Ana Maria; Miya, Norma Teruko Nago; Sant'Ana, Anderson S; Pereira, José Luiz

    2010-09-01

    In this study, the behavior and enterotoxin production by 10 different coagulase negative Staphylococcus (CNS) strains inoculated in cooked ham, reconstituted skimmed milk, and confectionery cream in the presence or absence of background microbiota have been investigated. After inoculation (10³ CFU/g), foods were incubated at 25, 30, and 37 °C and aerobic mesophilic and CNS counts were carried out at 12, 24, 48, and 72 h. Staphylococcal enterotoxins (SE) detection was performed by SET-RPLA (Oxoid, Basingstoke, U.K.) and mini-Vidas® (bioMérieux, La Balme les Grottes, France). CNS counts increased during incubation and approached 10⁶ to 10⁷ CFU/g after 12 h at 37 °C in the 3 foods studied. At 25 °C, counts reached 10⁶ to 10⁷ CFU/g only after 24 to 48 h. The interference of background microbiota on CNS behavior was only observed when they grew in sliced cooked ham, which presented a high initial total count (10⁵ CFU/g). Significantly higher counts of CNS isolated from raw cow's milk in comparison with food handlers isolates were found in reconstituted milk and confectionery cream. Although CNS strains were able to produce SEA, SEB, and SED in culture media, in foods, in the presence or absence of background microbiota S. chromogenes LE0598 was the only strain able to produce SEs. Despite the scarcity of reports on CNS involvement with foodborne disease outbreaks, the results found here support the CNS growth and SE production in foods even in the presence of background microbiota and may affect food safety.

  14. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting method(s), conventional 228Ra counting methods have been compared with some promising techniques which are currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (˜30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (˜12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting for the 224Ra great-granddaughter, 228Ra could be back calculated, thereby yielding a method with high efficiency, where no sample processing is required. The efficiency and background of each of the three new methods have been estimated and are compared with those of the three methods currently employed to measure oceanic 228Ra. From efficiency and background, the relative figure of merit and the detection limit have been determined for each of the six counters. These data suggest that the new counting methods have the potential to measure most 228Ra samples with just 30 L of seawater, to better than 5% precision. Not only would this reduce the time, effort, and expense involved in sample collection, but 228Ra could then be measured on many small-volume samples (20-30 L) previously collected with only 226Ra in mind. By measuring 228Ra quantitatively on such small-volume samples, three analyses (large-volume 228Ra, large-volume 226Ra, and small-volume 226Ra) could be reduced to one, thereby dramatically improving analytical precision.
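
    The counter comparison above rests on two standard counting-statistics quantities: a relative figure of merit, commonly taken as efficiency squared divided by background rate, and a background-limited detection limit in the spirit of Currie's formula, L_D ≈ 2.71 + 4.65·sqrt(B·t) counts. The sketch below evaluates both for the α-spectrometry and β-γ coincidence numbers quoted in this abstract (6% is used as a midpoint of the quoted 3-9% α efficiency); the three-day counting time is an illustrative assumption.

        # Relative figure of merit (efficiency^2 / background rate) and a Currie-style
        # background-limited detection limit, L_D = 2.71 + 4.65 * sqrt(B * t) counts,
        # evaluated for the efficiencies/backgrounds quoted above (alpha efficiency
        # taken as 6%, the midpoint of 3-9%); the 3-day count time is illustrative.
        import math

        def figure_of_merit(efficiency, background_cpm):
            return efficiency ** 2 / background_cpm

        def detection_limit_counts(background_cpm, count_time_min):
            b = background_cpm * count_time_min            # expected background counts
            return 2.71 + 4.65 * math.sqrt(b)

        def minimum_detectable_decays(efficiency, background_cpm, count_time_min):
            # Detection limit expressed as decays required during the counting period.
            return detection_limit_counts(background_cpm, count_time_min) / efficiency

        for name, eff, bkg in [("alpha spectrometry", 0.06, 0.0015),
                               ("beta-gamma coincidence", 0.053, 0.0054)]:
            t_min = 3 * 24 * 60
            print(name, figure_of_merit(eff, bkg),
                  minimum_detectable_decays(eff, bkg, count_time_min=t_min))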

  15. A Neurological Enigma: The Inborn Numerical Competence of Humans and Animals

    NASA Astrophysics Data System (ADS)

    Gross, Hans J.

    2012-03-01

    "Subitizing" means our ability to recognize and memorize object numbers precisely under conditions where counting is impossible. This is an inborn archaic process which was named after the Latin "subito" = suddenly, immediately, indicating that the objects in question are presented to test persons only for the fraction of a second in order to prevent counting. Sequential counting, however, is an outstanding cultural achievement of mankind and means to count "1, 2, 3, 4, 5, 6, 7, 8 ..." without a limit. In contrast to inborn "subitizing", counting has to be trained, beginning in our early childhood with the help of our fingers. For humans we know since 140 years that we can "subitize" only up to 4 objects correctly and that mistakes occur from 5 objects on. Similar results have been obtained for a number of non-human vertebrates from salamanders to pigeons and dolphins. To our surprise, we have detected this inborn numerical competence for the first time in case of an invertebrate, the honeybee which recognizes and memorizes 3 to 4 objects under rigorous test conditions. This common ability of humans and honeybees to "subitize" up to 4 objects correctly and the miraculous but rare ability of persons with Savant syndrome to "subitize" more than hundred objects precisely raises a number of intriguing questions concerning the evolution and the significance of this biological enigma.

  16. A XMM-Newton Observation of Nova LMC 1995, a Bright Supersoft X-ray Source

    NASA Technical Reports Server (NTRS)

    Orio, Marina; Hartmann, Wouter; Still, Martin; Greiner, Jochen

    2003-01-01

    Nova LMC 1995, previously detected during 1995-1998 with ROSAT, was observed again as a luminous supersoft X-ray source with XMM-Newton in December of 2000. This nova offers the possibility of observing the spectrum of a hot white dwarf, burning hydrogen in a shell and not obscured by a wind or by nebular emission as in other supersoft X-ray sources. Notwithstanding uncertainties in the calibration of the EPIC instruments at energy E<0.5 keV, using atmospheric models in non-local thermodynamic equilibrium we derived an effective temperature in the range 400,000-450,000 K, a bolometric luminosity L_bol approximately equal to 2.3 × 10^37 erg s^-1, and we verified that the abundance of carbon is not significantly enhanced in the X-ray emitting shell. The RGS grating spectra do not show the emission lines (originating in a nebula or a wind) observed for some other supersoft X-ray sources. The crowded atmospheric absorption lines of the white dwarf cannot be resolved. There is no hard component (expected from a wind, a surrounding nebula or an accretion disk), with no counts above the background at E>0.6 keV, and an upper limit F_x,hard = 10^-14 erg s^-1 cm^-2 to the X-ray flux above this energy. The background corrected count rate measured by the EPIC instruments was variable on time scales of minutes and hours, but without the flares or sudden obscuration observed for other novae. The power spectrum shows a peak at 5.25 hours, possibly due to a modulation with the orbital period. We also briefly discuss the scenarios in which this nova may become a type Ia supernova progenitor.

  17. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally, skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
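
    The three checks mentioned for the first class (Poisson-distributed counts, uniformly distributed arrival times, exponentially distributed inter-arrival times) can each be written as a short test on the event list; the simulated data, interval length and choice of tests below are illustrative, not the procedures used in the cited study.

        # Illustrative checks of an ideal Poisson counting process: dispersion of counts
        # per interval, uniformity of arrival times, and exponentially distributed
        # inter-arrival times. Test choices and parameters are illustrative.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        rate, duration = 0.5, 20000.0                  # counts per second, seconds
        n_events = rng.poisson(rate * duration)
        arrivals = np.sort(rng.uniform(0.0, duration, n_events))

        # 1. Counts per 10 s interval: variance-to-mean ratio should be close to 1.
        counts = np.histogram(arrivals, bins=int(duration // 10), range=(0.0, duration))[0]
        print("variance-to-mean ratio:", counts.var() / counts.mean())

        # 2. Arrival times should be uniform over the measurement period.
        print(stats.kstest(arrivals / duration, "uniform"))

        # 3. Inter-arrival times should be exponential (p-value approximate because the
        #    mean is estimated from the same data).
        gaps = np.diff(arrivals)
        print(stats.kstest(gaps, "expon", args=(0.0, gaps.mean())))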

  18. Methodological considerations for global analysis of cellular FLIM/FRET measurements

    NASA Astrophysics Data System (ADS)

    Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.

    2012-02-01

    Global algorithms can improve the analysis of fluorescence energy transfer (FRET) measurement based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis on cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.

  19. Characterization of a neutron sensitive MCP/Timepix detector for quantitative image analysis at a pulsed neutron source

    NASA Astrophysics Data System (ADS)

    Watanabe, Kenichi; Minniti, Triestino; Kockelmann, Winfried; Dalgliesh, Robert; Burca, Genoveva; Tremsin, Anton S.

    2017-07-01

    The uncertainties and the stability of a neutron sensitive MCP/Timepix detector when operating in the event timing mode for quantitative image analysis at a pulsed neutron source were investigated. The dominant component to the uncertainty arises from the counting statistics. The contribution of the overlap correction to the uncertainty was concluded to be negligible from considerations based on the error propagation even if a pixel occupation probability is more than 50%. We, additionally, have taken into account the multiple counting effect in consideration of the counting statistics. Furthermore, the detection efficiency of this detector system changes under relatively high neutron fluxes due to the ageing effects of current Microchannel Plates. Since this efficiency change is position-dependent, it induces a memory image. The memory effect can be significantly reduced with correction procedures using the rate equations describing the permanent gain degradation and the scrubbing effect on the inner surfaces of the MCP pores.

  20. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    In spite of being an integral part of risk-based quality improvement efforts, studies improving the quality of corrective action prioritization using the FMEA technique are still limited in the literature, and none considers robustness and risk in selecting competing improvement initiatives. This study proposes a theoretical model to select among risk-based competing corrective actions by considering their robustness and risk. We incorporated the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  1. Neutron-induced reactions in the hohlraum to study reaction in flight neutrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boswell, M. S.; Elliott, S. R.; Tybo, J.

    2013-04-19

    We are currently developing the physics necessary to measure the Reaction In Flight (RIF) neutron flux from a NIF capsule. A measurement of the RIF neutron flux from a NIF capsule could be used to deduce the stopping power in the cold fuel of the NIF capsule. A foil irradiated at the Omega laser at LLE was counted at the LANL low-background counting facility at WIPP. The estimated production rate of 195Au was just below our experimental sensitivity. We have made several improvements to our counting facility in recent months. These improvements are designed to increase our sensitivity, and include installing two new low-background detectors, and taking steps to reduce noise in the signals.

  2. Lensing corrections to features in the angular two-point correlation function and power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam

    2008-01-15

    It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ) and the angular power spectrum C_l, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with that of the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_l occur for measurements at high redshifts (z ≳ 1.5) and low multipole moment (l ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ≈ 1.5, and by z ≈ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲ 1% and the width is typically increased by ~10%. Shifts of ≳ 0.5% and broadening ≳ 20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data is fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.

  3. AutoCellSeg: robust automatic colony forming unit (CFU)/cell analysis using adaptive image segmentation and easy-to-use post-editing techniques.

    PubMed

    Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert

    2018-05-08

    In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise and high variation in colony features demand a user-friendly, adaptive and robust image processing and analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It also allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end users.
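    A minimal Python sketch of the generic thresholding-plus-watershed colony counting pipeline that tools of this kind implement. It uses a single Otsu threshold rather than AutoCellSeg's multi-thresholding and feedback loop, and the input file name and area limits are hypothetical.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import io, filters, measure, segmentation, feature

        img = io.imread("plate.png", as_gray=True)       # hypothetical plate image

        # Global threshold (Otsu) to separate colonies from the plate background.
        mask = img > filters.threshold_otsu(img)

        # Watershed on the distance transform splits touching colonies.
        distance = ndi.distance_transform_edt(mask)
        peaks = feature.peak_local_max(distance, min_distance=5,
                                       labels=measure.label(mask))
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        labels = segmentation.watershed(-distance, markers, mask=mask)

        # Simple plausibility filter on object area, then count.
        regions = [r for r in measure.regionprops(labels) if 20 < r.area < 5000]
        print("colony count:", len(regions))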

  4. Dying dyons don't count

    NASA Astrophysics Data System (ADS)

    Cheng, Miranda C. N.; Verlinde, Erik P.

    2007-09-01

    The dyonic 1/4-BPS states in 4D string theory with N = 4 spacetime supersymmetry are counted by a Siegel modular form. The pole structure of the modular form leads to a contour dependence in the counting formula, obscuring its duality invariance. We exhibit the relation between this ambiguity and the (dis-)appearance of bound states of 1/2-BPS configurations. Using this insight we propose a precise moduli-dependent contour prescription for the counting formula. We then show that the degeneracies are duality-invariant and are correctly adjusted at the walls of marginal stability to account for the (dis-)appearance of the two-centered bound states. In particular, for large black holes none of these bound states exists at the attractor point and none of these ambiguous poles contributes to the counting formula. Using this fact we also propose a second, moduli-independent contour which counts the "immortal dyons" that are stable everywhere.

  5. Systematic measurement of fast neutron background fluctuations in an urban area using a mobile detection system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iyengar, Anagha; Beach, Matthew; Newby, Robert J.

    Neutron background measurements using a mobile trailer-based system were conducted in Knoxville, Tennessee. The 0.5 m² system consisting of 8 EJ-301 liquid scintillation detectors was used to collect neutron background measurements in order to better understand the systematic background variations that depend solely on the street-level measurement position in a local, downtown area. Data was collected along 5 different streets in the downtown Knoxville area, and the measurements were found to be repeatable. Using 10-min measurements, fractional uncertainty in each measured data point was <2%. Compared with fast neutron background count rates measured away from downtown Knoxville, a reduction in background count rates ranging from 10-50% was observed in the downtown area, sometimes varying substantially over distances of tens of meters. These reductions are attributed to the shielding of adjacent buildings, quantified in part here by the metric angle-of-open-sky. The adjacent buildings may serve to shield cosmic ray neutron flux.

  6. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) display a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would provide a better model of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible model based on a gamma-distributed signal and normally distributed background noise and develop the associated background correction, implemented in the R package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validity of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models is compared on two dilution data sets, through testing procedures representing various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this more realistic modeling opens the way for future investigations, in particular to examine the characteristics of pre-processing strategies.
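    A numerical sketch of the kind of model-based correction described here: with observed intensity X = S + N, signal S gamma-distributed and noise N normal, the corrected value is the posterior mean E[S | X]. The parameter values, grid resolution, and the direct numerical integration are illustrative assumptions, not the estimator implemented in the NormalGamma package.

        import numpy as np
        from scipy import stats

        def normal_gamma_correct(x, k, theta, mu, sigma, grid=2000):
            """Posterior mean E[S | X = x] for X = S + N with
            S ~ Gamma(k, scale=theta) and N ~ Normal(mu, sigma),
            computed by numerical integration over a signal grid."""
            s = np.linspace(0.0, x.max() + 10 * sigma, grid)         # signal grid
            prior = stats.gamma.pdf(s, a=k, scale=theta)             # p(s)
            out = np.empty_like(x, dtype=float)
            for i, xi in enumerate(x):
                like = stats.norm.pdf(xi - s, loc=mu, scale=sigma)   # p(x | s)
                post = prior * like
                out[i] = np.sum(s * post) / np.sum(post)
            return out

        # Illustrative use: observed intensities with noise parameters that would,
        # in practice, be estimated from the negative-control probes.
        observed = np.array([120.0, 95.0, 300.0, 80.0])
        corrected = normal_gamma_correct(observed, k=1.5, theta=100.0,
                                         mu=90.0, sigma=10.0)
        print(corrected)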

  7. The faint galaxy contribution to the diffuse extragalactic background light

    NASA Technical Reports Server (NTRS)

    Cole, Shaun; Treyer, Marie-Agnes; Silk, Joseph

    1992-01-01

    Models of the faint galaxy contribution to the diffuse extragalactic background light are presented, which are consistent with current data on faint galaxy number counts and redshifts. The autocorrelation function of surface brightness fluctuations in the extragalactic diffuse light is predicted, and the way in which these predictions depend on the cosmological model and assumptions of biasing is determined. It is confirmed that the recent deep infrared number counts are most compatible with a high density universe (Omega-0 is approximately equal to 1) and that the steep blue counts then require an extra population of rapidly evolving blue galaxies. The faintest presently detectable galaxies produce an interesting contribution to the extragalactic diffuse light, and still fainter galaxies may also produce a significant contribution. These faint galaxies still only produce a small fraction of the total optical diffuse background light, but on scales of a few arcminutes to a few degrees, they produce a substantial fraction of the fluctuations in the diffuse light.

  8. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC show significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections, with the authors’ dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
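    A minimal sketch of a generic dual-energy-window (DEW) scatter estimate of the kind referred to above, with an empirical multiplicative factor that could absorb the CZT low-energy tail. The window widths and the factor k are illustrative assumptions, not the validated values from this study.

        import numpy as np

        def dew_scatter_correct(peak_counts, scatter_counts,
                                peak_width=28.0, scatter_width=6.0, k=1.0):
            """Generic dual-energy-window (DEW) scatter correction.

            Scatter inside the photopeak window is estimated by scaling the counts
            in an adjacent lower-energy window by the ratio of window widths and an
            empirical factor k (tuned, for example, to account for the low-energy
            tail of CZT detectors). Returns scatter-corrected photopeak counts.
            """
            scatter_estimate = k * scatter_counts * (peak_width / scatter_width)
            return np.clip(peak_counts - scatter_estimate, 0, None)

        # Illustrative projection data (counts per pixel in each energy window).
        peak = np.array([[1200.0, 900.0], [800.0, 1500.0]])
        scat = np.array([[60.0, 45.0], [40.0, 70.0]])
        print(dew_scatter_correct(peak, scat, k=0.8))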

  9. Building and Activating Students' Background Knowledge: It's What They Already Know That Counts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy; Lapp, Diane

    2012-01-01

    Students enter the middle grades with varying amounts of background knowledge. Teachers must assess student background knowledge for gaps or misconceptions and then provide instruction to build on that base. This article discusses effective strategies for assessing and developing students' background knowledge so they can become independent…

  10. The ALMA Spectroscopic Survey in the Hubble Ultra Deep Field: Continuum Number Counts, Resolved 1.2 mm Extragalactic Background, and Properties of the Faintest Dusty Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Aravena, M.; Decarli, R.; Walter, F.; Da Cunha, E.; Bauer, F. E.; Carilli, C. L.; Daddi, E.; Elbaz, D.; Ivison, R. J.; Riechers, D. A.; Smail, I.; Swinbank, A. M.; Weiss, A.; Anguita, T.; Assef, R. J.; Bell, E.; Bertoldi, F.; Bacon, R.; Bouwens, R.; Cortes, P.; Cox, P.; Gónzalez-López, J.; Hodge, J.; Ibar, E.; Inami, H.; Infante, L.; Karim, A.; Le Fèvre, O.; Magnelli, B.; Ota, K.; Popping, G.; Sheth, K.; van der Werf, P.; Wagg, J.

    2016-12-01

    We present an analysis of a deep (1σ = 13 μJy) cosmological 1.2 mm continuum map based on ASPECS, the ALMA Spectroscopic Survey in the Hubble Ultra Deep Field. In the 1 arcmin² covered by ASPECS we detect nine sources at >3.5σ significance at 1.2 mm. Our ALMA-selected sample has a median redshift of z = 1.6 ± 0.4, with only one galaxy detected at z > 2 within the survey area. This value is significantly lower than that found in millimeter samples selected at a higher flux density cutoff and similar frequencies. Most galaxies have specific star formation rates (SFRs) similar to that of main-sequence galaxies at the same epoch, and we find median values of stellar mass and SFR of 4.0 × 10^10 M⊙ and ~40 M⊙ yr⁻¹, respectively. Using the dust emission as a tracer for the interstellar medium (ISM) mass, we derive depletion times that are typically longer than 300 Myr, and we find molecular gas fractions ranging from ~0.1 to 1.0. As noted by previous studies, these values are lower than those using CO-based ISM estimates by a factor of ~2. The 1 mm number counts (corrected for fidelity and completeness) are in agreement with previous studies that were typically restricted to brighter sources. With our individual detections only, we recover 55% ± 4% of the extragalactic background light (EBL) at 1.2 mm measured by the Planck satellite, and we recover 80% ± 7% of this EBL if we include the bright end of the number counts and additional detections from stacking. The stacked contribution is dominated by galaxies at z ~ 1-2, with stellar masses of (1-3) × 10^10 M⊙. For the first time, we are able to characterize the population of galaxies that dominate the EBL at 1.2 mm.

  11. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of a background current correction of 4 nA led to a substantial improvement in accuracy (improvement in absolute relative difference or absolute difference of 3.5-5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
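    A minimal sketch of calibrating a sensor with an assumed fixed background current rather than assuming zero background. The 4 nA default echoes the value quoted in the abstract, but the sensitivity, reference glucose, and raw currents below are illustrative assumptions.

        import numpy as np

        def calibrate_sensitivity(i_sensor_nA, glucose_ref_mgdl, i_background_nA=4.0):
            """Estimate sensitivity (nA per mg/dL) at a calibration point after
            subtracting an assumed background current."""
            return (i_sensor_nA - i_background_nA) / glucose_ref_mgdl

        def current_to_glucose(i_sensor_nA, sensitivity, i_background_nA=4.0):
            """Convert raw sensor current to glucose, correcting for background."""
            return (i_sensor_nA - i_background_nA) / sensitivity

        # Calibration against a fingerstick reference (illustrative values).
        sens = calibrate_sensitivity(i_sensor_nA=24.0, glucose_ref_mgdl=100.0)

        # Later raw currents converted with and without the correction.
        raw = np.array([14.0, 24.0, 44.0])
        print("corrected:  ", current_to_glucose(raw, sens))
        print("uncorrected:", raw / (24.0 / 100.0))   # background assumed zero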

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of background DE calcification signals using scatter-uncorrected data was reduced by 58% with scatter-corrected data by algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it is validated by a 5-cm-thick phantom with calcifications and homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  13. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for a formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution for seven of the 11 insect groups than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common features of insect count data. If not properly modelled, these properties can invalidate normal-distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
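    A small sketch of this kind of information-theoretic model comparison using statsmodels. The synthetic zero-heavy counts and the intercept-only design are illustrative assumptions; a real analysis would use the observed counts and relevant covariates.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                                      ZeroInflatedNegativeBinomialP)

        # Synthetic zero-heavy insect counts (illustrative only).
        rng = np.random.default_rng(1)
        y = np.where(rng.random(200) < 0.5, 0, rng.negative_binomial(2, 0.3, 200))
        X = np.ones((len(y), 1))                     # intercept-only design matrix

        models = {
            "Poisson":        sm.Poisson(y, X),
            "NegBinomial":    sm.NegativeBinomial(y, X),
            "ZI-Poisson":     ZeroInflatedPoisson(y, X),
            "ZI-NegBinomial": ZeroInflatedNegativeBinomialP(y, X),
        }
        for name, model in models.items():
            res = model.fit(disp=False)
            print(f"{name:15s}  AIC = {res.aic:8.1f}   BIC = {res.bic:8.1f}")

    Lower AIC/BIC indicates the preferred model; with heavy zero inflation the zero-inflated variants would typically win, mirroring the conclusion of the abstract.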

  14. Dual-fission chamber and neutron beam characterization for fission product yield measurements using monoenergetic neutrons

    NASA Astrophysics Data System (ADS)

    Bhatia, C.; Fallin, B.; Gooden, M. E.; Howell, C. R.; Kelley, J. H.; Tornow, W.; Arnold, C. W.; Bond, E. M.; Bredeweg, T. A.; Fowler, M. M.; Moody, W. A.; Rundberg, R. S.; Rusev, G.; Vieira, D. J.; Wilhelmy, J. B.; Becker, J. A.; Macri, R.; Ryan, C.; Sheets, S. A.; Stoyer, M. A.; Tonchev, A. P.

    2014-09-01

    A program has been initiated to measure the energy dependence of selected high-yield fission products used in the analysis of nuclear test data. We present our initial work on neutron activation using a dual-fission chamber with quasi-monoenergetic neutrons and a gamma-counting method. Quasi-monoenergetic neutrons with energies from 0.5 to 15 MeV are produced using the TUNL 10 MV FM tandem to provide high-precision and self-consistent measurements of fission product yields (FPY). The final FPY results will be coupled with theoretical analysis to provide a more fundamental understanding of the fission process. To accomplish this goal, we have developed and tested a set of dual-fission ionization chambers to provide an accurate determination of the number of fissions occurring in a thick target located in the middle plane of the chamber assembly. Details of the fission chamber and its performance are presented along with neutron beam production and characterization. Also presented are studies on the background issues associated with room-return and off-energy neutron production. We show that the off-energy neutron contribution can be significant, but correctable, while room-return neutron background levels contribute less than 1% to the fission signal.

  15. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
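    A minimal sketch of the orthogonal-projection idea: estimate the background spectral subspace from blank (background-only) spectra via SVD, then project the sample data onto its orthogonal complement before PARAFAC modeling. The matrix shapes, number of components, and synthetic data are illustrative assumptions.

        import numpy as np

        def orthogonal_background_projection(sample, blank, n_components=2):
            """Suppress background drift in a (time x wavelength) data matrix.

            The dominant right singular vectors of the blank spectra span the
            background spectral subspace; projecting the sample onto the
            orthogonal complement of that subspace removes its contribution.
            """
            _, _, vt = np.linalg.svd(blank, full_matrices=False)
            v = vt[:n_components].T                      # wavelength x n_components
            projector = np.eye(v.shape[0]) - v @ v.T     # orthogonal complement
            return sample @ projector

        # Illustrative shapes: 300 time points x 80 wavelengths.
        rng = np.random.default_rng(0)
        blank = rng.normal(size=(50, 80)).cumsum(axis=0) * 0.01   # drifting baseline
        sample = rng.normal(size=(300, 80)) * 0.01 + blank[:1]    # signal-free demo
        corrected = orthogonal_background_projection(sample, blank)
        print(corrected.shape)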

  16. Systematic measurement of fast neutron background fluctuations in an urban area using a mobile detection system

    DOE PAGES

    Iyengar, Anagha; Beach, Matthew; Newby, Robert J.; ...

    2015-11-12

    Neutron background measurements using a mobile trailer-based system were conducted in Knoxville, Tennessee. The 0.5 m² system consisting of 8 EJ-301 liquid scintillation detectors was used to collect neutron background measurements in order to better understand the systematic background variations that depend solely on the street-level measurement position in a local, downtown area. Data was collected along 5 different streets in the downtown Knoxville area, and the measurements were found to be repeatable. Using 10-min measurements, fractional uncertainty in each measured data point was <2%. Compared with fast neutron background count rates measured away from downtown Knoxville, a reduction in background count rates ranging from 10-50% was observed in the downtown area, sometimes varying substantially over distances of tens of meters. These reductions are attributed to the shielding of adjacent buildings, quantified in part here by the metric angle-of-open-sky. The adjacent buildings may serve to shield cosmic ray neutron flux.

  17. Systematic measurement of fast neutron background fluctuations in an urban area using a mobile detection system

    NASA Astrophysics Data System (ADS)

    Iyengar, A.; Beach, M.; Newby, R. J.; Fabris, L.; Heilbronn, L. H.; Hayward, J. P.

    2015-02-01

    Neutron background measurements using a mobile trailer-based system were conducted in Knoxville, Tennessee, USA. The 0.5 m2 system, consisting of eight EJ-301 liquid scintillation detectors, was used to collect neutron background measurements in order to better understand the systematic variations in background that depend solely on the street-level measurement position in a downtown area. Data was collected along 5 different streets, and the measurements were found to be repeatable. Using 10-min measurements, the fractional uncertainty in each measured data point was <2%. Compared with fast neutron background count rates measured away from downtown Knoxville, a reduction in background count rates ranging from 10% to 50% was observed in the downtown area, sometimes varying substantially over distances of tens of meters. These reductions are attributed to the net shielding of the cosmic ray neutron flux by adjacent buildings. For reference, the building structure as observed at street level is quantified in part here by a measured angle-of-open-sky metric.

  18. Energy-correction photon counting pixel for photon energy extraction under pulse pile-up

    NASA Astrophysics Data System (ADS)

    Lee, Daehee; Park, Kyungjin; Lim, Kyung Taek; Cho, Gyuseong

    2017-06-01

    A photon counting detector (PCD) has been proposed as an alternative to an energy-integrating detector (EID) in the medical imaging field due to its high resolution, high efficiency, and low noise. The PCD has expanded into a variety of fields such as spectral CT, k-edge imaging, and material decomposition owing to its capability to count incident photons and measure their energy. Nonetheless, pulse pile-up, the superimposition of pulses at the output of the charge sensitive amplifier (CSA) in each PC pixel, occurs frequently as the X-ray flux increases due to the finite pulse processing time (PPT) of CSAs. Pulse pile-up induces not only count loss but also distortion in the measured X-ray spectrum from each PC pixel, and thus it is a main constraint on the use of PCDs in high-flux X-ray applications. To minimize these effects, an energy-correction PC (ECPC) pixel is proposed to resolve pulse pile-up without shortening the PPT, by adding an energy correction logic (ECL) via a cross detection method (CDM). The ECPC pixel, with a size of 200 × 200 μm², was fabricated using a 6-metal 1-poly 0.18 μm CMOS process with a static power consumption of 7.2 μW/pixel. The maximum count rate of the ECPC pixel was extended to approximately three times that of a conventional PC pixel with a PPT of 500 ns. The 90 kVp X-ray spectrum, filtered by a 3 mm Al filter, was measured with the CdTe sensor and the ECPC pixel as the X-ray tube current was increased. As a result, the ECPC pixel dramatically reduced the energy spectrum distortion at 2 Mphotons/pixel/s when compared to a conventional PC pixel with the same 500 ns PPT.

  19. CT-based attenuation correction and resolution compensation for I-123 IMP brain SPECT normal database: a multicenter phantom study.

    PubMed

    Inui, Yoshitaka; Ichihara, Takashi; Uno, Masaki; Ishiguro, Masanobu; Ito, Kengo; Kato, Katsuhiko; Sakuma, Hajime; Okazawa, Hidehiko; Toyama, Hiroshi

    2018-06-01

    Statistical image analysis of brain SPECT images has improved diagnostic accuracy for brain disorders. However, the results of statistical analysis vary depending on the institution even when they use a common normal database (NDB), due to different intrinsic spatial resolutions or correction methods. The present study aimed to evaluate the correction of spatial resolution differences between equipment and examine the differences in skull bone attenuation to construct a common NDB for use in multicenter settings. The proposed acquisition and processing protocols were those routinely used at each participating center with additional triple energy window (TEW) scatter correction (SC) and computed tomography (CT) based attenuation correction (CTAC). A multicenter phantom study was conducted on six imaging systems in five centers, with either single photon emission computed tomography (SPECT) or SPECT/CT, and two brain phantoms. The gray/white matter I-123 activity ratio in the brain phantoms was 4, and they were enclosed in either an artificial adult male skull, 1300 Hounsfield units (HU), a female skull, 850 HU, or an acrylic cover. The cut-off frequency of the Butterworth filters was adjusted so that the spatial resolution was unified to a 17.9 mm full width at half maximum (FWHM), that of the lowest resolution system. The gray-to-white matter count ratios were measured from SPECT images and compared with the actual activity ratio. In addition, mean, standard deviation and coefficient of variation images were calculated after normalization and anatomical standardization to evaluate the variability of the NDB. The gray-to-white matter count ratio error without SC and attenuation correction (AC) was significantly larger for higher bone densities (p < 0.05). The count ratio error with TEW and CTAC was approximately 5% regardless of bone density. After adjustment of the spatial resolution in the SPECT images, the variability of the NDB decreased and was comparable to that of the NDB without correction. The proposed protocol showed potential for constructing an appropriate common NDB from SPECT images with SC, AC and spatial resolution compensation.
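    A minimal sketch of the standard triple-energy-window (TEW) scatter estimate mentioned above: scatter in the photopeak window is approximated by a trapezoid built from the count densities in two narrow sub-windows on either side of the peak. The window widths below are illustrative (roughly matched to a 159 keV I-123 photopeak), not the values used in this multicenter protocol.

        import numpy as np

        def tew_scatter_correct(c_peak, c_lower, c_upper,
                                w_peak=32.0, w_lower=6.0, w_upper=6.0):
            """Triple-energy-window (TEW) scatter correction.

            The scatter contribution inside the photopeak window is estimated as
            the area of a trapezoid whose sides are the count densities
            (counts per keV) in the narrow lower and upper sub-windows.
            """
            scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
            return np.clip(c_peak - scatter, 0, None)

        # Illustrative pixel counts in the three windows.
        print(tew_scatter_correct(c_peak=np.array([2000.0, 1500.0]),
                                  c_lower=np.array([120.0, 90.0]),
                                  c_upper=np.array([30.0, 20.0])))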

  20. Deep galaxy counts in the K band with the Keck telescope

    NASA Technical Reports Server (NTRS)

    Djorgovski, S.; Soifer, B. T.; Pahre, M. A.; Larkin, J. E.; Smith, J. D.; Neugebauer, G.; Smail, I.; Matthews, K.; Hogg, D. W.; Blandford, R. D.

    1995-01-01

    We present deep galaxy counts in the K (λ = 2.2 μm) band, obtained at the W. M. Keck 10 m telescope. The data reach limiting magnitudes K ≈ 24 mag, about 5 times deeper than the deepest published K-band images to date. The counts are performed in three small (approximately 1 arcmin), widely separated high-latitude fields. Extensive Monte Carlo tests were used to derive the completeness corrections and minimize photometric biases. The counts continue to rise, with no sign of a turnover, down to the limits of our data, with a logarithmic slope of d log N/dm = 0.315 ± 0.02 between K = 20 and 24 mag. This implies a cumulative surface density of approximately 5 × 10^5 galaxies per square degree, or approximately 2 × 10^10 over the entire sky, down to K = 24 mag. Our counts are in good agreement with, although slightly lower than, those from the Hawaii Deep Survey by Cowie and collaborators; the discrepancies may be due to the small differences in the aperture corrections. We compare our counts with some of the available theoretical predictions. The data do not require models with a high value of Omega_0, but can be well fitted by models with no (or little) evolution and cosmologies with a low value of Omega_0. Given the uncertainties in the models, it may be premature to put useful constraints on the value of Omega_0 from the counts alone. Optical-to-IR colors are computed using CCD data obtained previously at Palomar. We find a few red galaxies with (r-K) ≳ 5 mag, or (i-K) ≳ 5 mag; these may be ellipticals at z ≈ 1. While the redshift distribution of galaxies in our counts is still unknown, the flux limits reached would allow us to detect unobscured L* galaxies out to substantial redshifts (z > 3?).

  1. A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.

    PubMed

    Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi

    2010-04-01

    The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades the energy resolution and positioning accuracy. Event pile-ups normally exist in a high count-rate system, and baseline drift creates errors in the event pile-up correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pile-up correction. Many methods have been reported for BLR, from classic analog methods to digital filter solutions. However, a single-channel analog BLR can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. This method can self-track the baseline without a microcontroller. The circuit consists of two digital counters/timers, one comparator, one register and one subtraction unit. Simulations show a single channel works at a 30 Mcps count rate under pile-up conditions. 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
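    A software sketch of the underlying idea (estimate the baseline from samples that do not belong to gamma pulses, then subtract it), assuming a simple median-plus-threshold rule to exclude pulse samples. The threshold, synthetic waveform, and pulse shape are illustrative assumptions, not the FPGA implementation described in the record.

        import numpy as np

        def estimate_baseline(samples, pulse_threshold=20.0):
            """Estimate the baseline from ADC samples, excluding pulse regions.

            Samples more than `pulse_threshold` ADC units above the running median
            are treated as belonging to gamma pulses and excluded from the mean.
            """
            median = np.median(samples)
            quiet = samples[samples < median + pulse_threshold]
            return quiet.mean()

        # Synthetic DC-coupled waveform: drifting baseline plus occasional pulses.
        rng = np.random.default_rng(2)
        n = 5000
        baseline_drift = 50.0 + 0.002 * np.arange(n)
        waveform = baseline_drift + rng.normal(0, 2.0, n)
        for s in rng.choice(n - 50, 20, replace=False):
            waveform[s:s + 50] += 200.0 * np.exp(-np.arange(50) / 10.0)

        print("estimated baseline:", estimate_baseline(waveform))
        print("true mean baseline:", baseline_drift.mean())

    In an online system the same estimate would be maintained over a sliding window so that slow drift is tracked continuously rather than computed once over the whole record.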

  2. A mercuric iodide detector system for X-ray astronomy. 2. Results from flight tests of a balloon-borne instrument

    NASA Technical Reports Server (NTRS)

    Vallerga, J.; Vanderspek, R. K.; Ricker, G. R.

    1982-01-01

    To establish the expected sensitivity of a new hard X-ray telescope design, an experiment was conducted to measure the background counting rate at balloon altitudes (40 km) of mercuric iodide, a room-temperature solid-state X-ray detector. The prototype detector consisted of two thin mercuric iodide (HgI2) detectors surrounded by a large bismuth germanate (Bi4Ge3O12) scintillator operated in anticoincidence. The bismuth germanate shield vetoed most of the background counting rate induced by atmospheric gamma-rays, neutrons and cosmic rays. A balloon-borne gondola containing a prototype detector assembly was designed, constructed and flown twice in the spring of 1982 from Palestine, Texas. The second flight of this instrument established a differential background counting rate of 4.2 ± 0.7 × 10^-5 counts/sec cm keV over the energy range of 40 to 80 keV. This measurement was within 50% of the predicted value. The measured rate is approximately 5 times lower than previously achieved in shielded NaI/CsI or Ge systems operating in the same energy range. The prediction was based on a Monte Carlo simulation of the detector assembly in the radiation environment at float altitude.

  3. Radioactivities of Long Duration Exposure Facility (LDEF) materials: Baggage and bonanzas

    NASA Technical Reports Server (NTRS)

    Smith, Alan R.; Hurley, Donna L.

    1991-01-01

    Radioactivities in materials onboard the returned Long Duration Exposure Facility (LDEF) satellite were studied by a variety of techniques. Among the most powerful is low-background Ge semiconductor detector gamma-ray spectrometry. The observed radioactivities are of two origins: radionuclides produced by nuclear reactions with the radiation field in orbit, and radionuclides present initially as contaminants in materials used for construction of the spacecraft and experimental assemblies. In the first category are experiment-related monitor foils and tomato seeds, and such spacecraft materials as Al, stainless steel, and Ti. In the second category are Al, Be, Ti, V, and some special glasses. The measured peak-area count rates from both categories range from a high value of about 1 count per minute down to less than 0.001 count per minute. Successful measurement of count rates toward the low end of this range can be achieved only through low-background techniques, such as those used to obtain the results presented here.

  4. Radioactivities of Long Duration Exposure Facility (LDEF) materials: Baggage and bonanzas

    NASA Astrophysics Data System (ADS)

    Smith, Alan R.; Hurley, Donna L.

    1991-06-01

    Radioactivities in materials onboard the returned Long Duration Exposure Facility (LDEF) satellite were studied by a variety of techniques. Among the most powerful is low-background Ge semiconductor detector gamma-ray spectrometry. The observed radioactivities are of two origins: radionuclides produced by nuclear reactions with the radiation field in orbit, and radionuclides present initially as contaminants in materials used for construction of the spacecraft and experimental assemblies. In the first category are experiment-related monitor foils and tomato seeds, and such spacecraft materials as Al, stainless steel, and Ti. In the second category are Al, Be, Ti, V, and some special glasses. The measured peak-area count rates from both categories range from a high value of about 1 count per minute down to less than 0.001 count per minute. Successful measurement of count rates toward the low end of this range can be achieved only through low-background techniques, such as those used to obtain the results presented here.

  5. Pill counts and pill rental: unintended entrepreneurial opportunities.

    PubMed

    Viscomi, Christopher M; Covington, Melissa; Christenson, Catherine

    2013-07-01

    Prescription opioid diversion and abuse are becoming increasingly prevalent in many regions of the world, particularly the United States. One method advocated to assess compliance with opioid prescriptions is occasional "pill counts." Shortly before a scheduled appointment, a patient is notified that they must bring in the unused portion of their opioid prescription. It has been assumed that if a patient has the correct number and strength of pills that should be present at that point in a prescription interval, they are unlikely to be selling or abusing their opioids. Two cases are presented in which patients describe short-term rental of opioids from illicit opioid dealers in order to circumvent pill counts. Pill renting appears to be an established method of circumventing pill counts. Pill counts do not assure non-diversion of opioids and provide additional cash flow to illicit opioid dealers.

  6. Approach for counting vehicles in congested traffic flow

    NASA Astrophysics Data System (ADS)

    Tan, Xiaojun; Li, Jun; Liu, Wei

    2005-02-01

    More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper presents an approach to address this problem. The proposed approach consists of three main procedures. First, a new background subtraction algorithm is performed, with the aim of segmenting moving objects from an illumination-variant background. Second, object tracking is performed using the CONDENSATION algorithm, which avoids the problem of matching vehicles in successive frames. Third, an inspection procedure is executed to count the vehicles: when a bus first occludes a car and then moves away a few frames later, the car will appear in the scene, and the inspection procedure should find the "new" car and add it as a tracking object.
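    A minimal OpenCV sketch of the first step only (segmenting moving vehicles from an illumination-varying background). It uses the library's mixture-of-Gaussians subtractor rather than the paper's own algorithm, and the video path, thresholds, and area cutoff are illustrative assumptions; the tracking and occlusion-inspection stages are not reproduced.

        import cv2

        cap = cv2.VideoCapture("traffic.mp4")          # hypothetical input video
        # Adaptive mixture-of-Gaussians background model copes with slow
        # illumination changes; detectShadows marks shadow pixels separately.
        bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = bg.apply(frame)                       # foreground mask
            fg = cv2.medianBlur(fg, 5)
            _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)  # drop shadows
            contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            vehicles = [c for c in contours if cv2.contourArea(c) > 800]
            print("candidate vehicles in frame:", len(vehicles))

        cap.release()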

  7. Measurement of Body Composition: is there a Gold Standard?

    PubMed Central

    Branski, Ludwik K; Norbury, William B; Herndon, David N; Chinkes, David L; Cochran, Amalia; Suman, Oscar; Benjamin, Deb; Jeschke, Marc G

    2015-01-01

    Background: Maintaining lean body mass (LBM) after a severe burn is an essential goal of modern burn treatment. An accurate determination of LBM is necessary for short- and long-term therapeutic decisions. The aim of this study was to compare two measurement methods for body composition, whole-body potassium counting (K count) and dual X-ray absorptiometry (DEXA), in a large prospective clinical trial in severely burned pediatric patients. Methods: Two hundred seventy-nine patients admitted with burns covering 40% of total body surface area (TBSA) were enrolled in the study. Patients enrolled were controls or received long-term treatment with recombinant human growth hormone (rhGH). Near-simultaneous measurements of LBM with DEXA and fat-free mass (FFM) with K count were performed at hospital discharge and at 6, 9, 12, 18, and 24 months post injury. Results were correlated using Pearson's regression analysis. Agreement between the two methods was analyzed with the Bland-Altman method. Results: Age, gender distribution, weight, burn size, and admission time from injury were not significantly different between control and treatment groups. rhGH and control patients at all time points postburn showed a good correlation between LBM and FFM measurements (R² between 0.9 and 0.95). Bland-Altman analysis revealed that the mean bias and 95% limits of agreement depended only on patient weight and not on treatment or time postburn. The 95% limits ranged from 0.1 ± 2.9 kg for LBM or FFM in 7- to 18-kg patients to 16.3 ± 17.8 kg for LBM or FFM in patients >60 kg. Conclusions: DEXA can provide a sufficiently accurate determination of LBM and changes in body composition, but a correction factor must be included for older children and adolescents with more LBM. DEXA scans are easier, cheaper, and less stressful for the patient, and this method should be used rather than the K count. PMID:19884353
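    A small sketch of the Bland-Altman bias and 95% limits-of-agreement calculation used to compare the two body composition methods. The paired values below are synthetic placeholders, not data from this trial.

        import numpy as np

        def bland_altman(a, b):
            """Return mean bias and 95% limits of agreement between two methods."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Illustrative paired measurements (kg): DEXA LBM vs whole-body K-count FFM.
        dexa = np.array([18.2, 25.4, 33.1, 41.0, 55.3, 62.8])
        kcount = np.array([17.9, 24.6, 34.0, 39.5, 53.0, 60.1])
        bias, (lo, hi) = bland_altman(dexa, kcount)
        print(f"bias = {bias:.2f} kg, 95% limits of agreement = ({lo:.2f}, {hi:.2f}) kg")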

  8. Background considerations in the analysis of PIXE spectra by Artificial Neural Systems.

    NASA Astrophysics Data System (ADS)

    Correa, R.; Morales, J. R.; Requena, I.; Miranda, J.; Barrera, V. A.

    2016-05-01

    In order to study the importance of background in PIXE spectra for determining elemental concentrations in atmospheric aerosols using artificial neural systems (ANS), two independently trained ANS were constructed: one taking as input the net number of counts in the peak, and another including the background. In the training and validation phases, thirty-eight spectra of aerosols collected in Santiago, Chile, were used. In both cases the elemental concentration values were similar. This is due to the intrinsic characteristic of ANS operating with normalized values of the net and total number of counts under the peaks, something that was verified in the analysis of 172 spectra obtained from aerosols collected in Mexico City. Therefore, networks operating in the mode that includes background can reduce time and cost when dealing with large numbers of samples.

  9. Measurement of total-body cobalt-57 vitamin B12 absorption with a gamma camera.

    PubMed

    Cardarelli, J A; Slingerland, D W; Burrows, B A; Miller, A

    1985-08-01

    Previously described techniques for the measurement of the absorption of [57Co]vitamin B12 by total-body counting have required an iron room equipped with scanning or multiple detectors. The present study uses simplifying modifications which make the technique more available and include the use of static geometry, the measurement of body thickness to correct for attenuation, a simple formula to convert the capsule-in-air count to a 100% absorption count, and finally the use of an adequately shielded gamma camera obviating the need of an iron room.

  10. Noise suppressed partial volume correction for cardiac SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu

    Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher’s prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang’s method) was then applied on the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructed using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a 99mTc-tetrofosmin myocardial perfusion scan and a 99mTc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple pinhole SPECT/CT at both high and low count levels. The authors then applied the proposed method on a canine equilibrium blood pool study following injection with 99mTc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods, including Yang’s method and multitarget correction (MTC) applied on the MLEM reconstruction, and AMAP reconstruction without PVC. Results: The results showed that the Yang’s method improved quantification, however, yielded increased noise and reduced reproducibility in the regions with higher activity. MTC corrected for PVE on high count data with amplified noise, although yielded the worst performance among all the methods tested on low-count data. AMAP effectively suppressed noise and reduced the spill-in effect in the low activity regions. However it was unable to reduce the spill-out effect in high activity regions. NS-PVC yielded superior performance in terms of both quantitative assessment and visual image quality while improving reproducibility. Conclusions: The results suggest that NS-PVC may be a promising PVC algorithm for application in low-dose protocols, and in gated and dynamic cardiac studies with low counts.
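    A minimal sketch of the template-based voxel-wise (Yang-style) correction referred to above, using a simple Gaussian point spread function in place of the pinhole system's full template response. The 1D toy phantom and PSF width are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def yang_pvc(image, template, psf_sigma):
            """Voxel-wise partial volume correction (Yang-style).

            `template` holds the assumed relative activity in each anatomical
            region (e.g. from contrast CT); blurring it with the PSF models the
            partial volume effect, and the voxel-wise ratio restores the image.
            """
            blurred = gaussian_filter(template, psf_sigma)
            blurred = np.where(blurred > 1e-6, blurred, 1e-6)   # avoid divide-by-zero
            return image * template / blurred

        # Toy 1D "myocardium" example: a bright wall inside a low background.
        template = np.zeros(100)
        template[40:60] = 1.0
        template += 0.1
        truth = 10.0 * template
        observed = gaussian_filter(truth, 4.0)                  # PVE-degraded image
        corrected = yang_pvc(observed, template, psf_sigma=4.0)
        print("wall mean: truth {:.2f}  observed {:.2f}  corrected {:.2f}".format(
            truth[45:55].mean(), observed[45:55].mean(), corrected[45:55].mean()))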

  11. Enhanced identification and biological validation of differential gene expression via Illumina whole-genome expression arrays through the use of the model-based background correction methodology

    PubMed Central

    Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.

    2008-01-01

    Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815

  12. Dead-time correction for high-throughput fluorescence lifetime imaging microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Ruhlandt, Daja; Chithik, Anna; Ebrecht, René; Wouters, Fred S.; Gregor, Ingo

    2016-02-01

    Fluorescence lifetime microscopy has become an important bioimaging method, allowing not only intensity and spectral information but also lifetime information to be recorded across an image. One of the most widely used implementations of FLIM is based on time-correlated single photon counting (TCSPC). In TCSPC, the fluorescence decay curve is determined by exciting molecules with a periodic train of short laser pulses and measuring the time delay between each exciting laser pulse and the first fluorescence photon recorded after it. An important technical detail of TCSPC measurements is that delay times are always measured between a laser pulse and the first fluorescence photon detected after that pulse. At high count rates, this leads to so-called pile-up: "early" photons eclipse long-delay photons, resulting in heavily skewed TCSPC histograms. To avoid pile-up, a rule of thumb is to perform TCSPC measurements at photon count rates at least one hundred times smaller than the laser pulse excitation rate. The downside of this approach is that the fluorescence photon count rate is restricted to below one hundredth of the laser pulse excitation rate, reducing the overall speed with which a fluorescence signal can be measured. We present a new data evaluation method which provides pile-up-corrected fluorescence decay estimates from TCSPC measurements at high count rates, and we demonstrate our method on FLIM of fluorescently labeled cells.
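    For context, a sketch of the classic Coates-style pile-up correction for a TCSPC histogram, which rescales each bin by the number of excitation cycles still "available" to it. This is the textbook correction rather than the new estimator announced in the abstract; the synthetic decay, bin count, and rates are illustrative assumptions.

        import numpy as np

        def coates_correction(hist, n_cycles):
            """Classic Coates pile-up correction for a TCSPC histogram.

            hist     : counts per time bin, ordered from early to late delays
            n_cycles : number of excitation laser pulses in the measurement

            Because only the first photon after each pulse is recorded, bin i can
            only receive a count in cycles where no earlier bin already fired;
            the correction rescales each bin accordingly.
            """
            hist = np.asarray(hist, dtype=float)
            earlier = np.concatenate(([0.0], np.cumsum(hist)[:-1]))
            available = n_cycles - earlier
            return -n_cycles * np.log1p(-hist / available)

        # Illustrative: a heavily piled-up single-exponential decay.
        rng = np.random.default_rng(3)
        t = np.arange(256)
        true_rate = 0.02 * np.exp(-t / 40.0)          # mean photons per cycle per bin
        n_cycles = 200_000
        p_first = np.exp(-np.concatenate(([0.0], np.cumsum(true_rate)[:-1]))) \
                  * (1 - np.exp(-true_rate))          # prob. bin is the first to fire
        hist = rng.binomial(n_cycles, p_first)
        corrected = coates_correction(hist, n_cycles)
        print("distorted ratio bin0/bin120:", hist[0] / hist[120])
        print("corrected ratio bin0/bin120:", corrected[0] / corrected[120],
              "expected:", true_rate[0] / true_rate[120])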

  13. Quantifying regional cerebral blood flow by N-isopropyl-P-[I-123]iodoamphetamine (IMP) using a ring type single-photon emission computed tomography system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takahashi, N.; Odano, I.; Ohkubo, M.

    1994-05-01

    We developed a more accurate quantitative measurement of regional cerebral blood flow (rCBF) with the microsphere model using N-isopropyl-p-[I-123]iodoamphetamine (IMP) and a ring-type single photon emission computed tomography (SPECT) system. SPECT studies were performed in 17 patients with brain diseases. A dose of 222 MBq (6 mCi) of [I-123]IMP was injected i.v., and at the same time a 5 min period of arterial blood withdrawal was begun. SPECT data were acquired from 25 min to 60 min after tracer injection. To obtain the brain activity concentration at 5 min after IMP injection, total brain count collections and one-minute short-time SPECT studies were performed at 5, 20, and 60 min. The rCBF values were calculated using the short-time SPECT images at 5 min (rCBF), static SPECT images corrected with total cerebral counts (rCBF_Ct), and static SPECT images corrected with reconstructed counts on the short-time SPECT images (rCBF_Cb). There was a good relationship (r = 0.69) between rCBF and rCBF_Ct; however, rCBF_Ct tends to be underestimated in high-flow areas and overestimated in low-flow areas. There was a better relationship between rCBF and rCBF_Cb (r = 0.92). The overestimation and underestimation seen in rCBF_Ct were considered to be due to the correction of reconstructed counts using a total cerebral time-activity curve, because the kinetic behavior of [I-123]IMP differs in each region. We conclude that more accurate rCBF values can be obtained using regional time-activity curves.

  14. Evaluation of dead-time corrections for post-radionuclide-therapy (177)Lu quantitative imaging with low-energy high-resolution collimators.

    PubMed

    Celler, Anna; Piwowarska-Bilska, Hanna; Shcherbinin, Sergey; Uribe, Carlos; Mikolajczak, Renata; Birkenfeld, Bozena

    2014-01-01

    Dead-time (DT) effects rarely cause problems in diagnostic single-photon emission computed tomography (SPECT) studies; however, in post-radionuclide-therapy imaging, DT can be substantial. Therefore, corrections may be necessary if quantitative images are used in image-based dosimetry or for evaluation of therapy outcomes. This task is particularly challenging if low-energy collimators are used. Our goal was to design a simple method to determine the dead-time correction factor (DTCF) without the need for phantom experiments and complex calculations. Planar and SPECT/CT scans of a water phantom containing a 70 ml bottle filled with lutetium-177 (Lu) were acquired over 60 days. Two small Lu markers were used in all scans. The DTCF based on the ratio of observed to true count rates measured over the entire spectrum and using photopeak primary photons only was estimated for phantom (DT present) and marker (no DT) scans. In addition, variations in counts in SPECT projections (potentially caused by varying bremsstrahlung and scatter) were investigated. For count rates that were about two-fold higher than typically seen in post-therapy Lu scans, the maximum DTCF reached a level of about 17%. The DTCF values determined directly from the phantom experiments using the total energy spectrum and photopeak counts only were equal to 13 and 16%, respectively. They were closely matched by those from the proposed marker-based method, which uses only two energy windows and measures photopeak primary photons (15-17%). A simple, marker-based method allowing for determination of the DTCF in high-activity Lu imaging studies has been proposed and validated using phantom experiments.
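    A minimal sketch of the marker-based idea described here: a small source of known (decay-corrected) count rate, unaffected by dead time when measured alone, is re-measured during the high-activity scan, and the ratio of expected to observed marker rates gives the dead-time correction factor. The half-life constant is the approximate 177Lu value; the count rates are illustrative assumptions.

        LU177_HALF_LIFE_H = 6.65 * 24.0   # approximate 177Lu half-life, hours

        def expected_marker_rate(rate_ref_cps, elapsed_h,
                                 half_life_h=LU177_HALF_LIFE_H):
            """Decay-correct a marker count rate measured earlier with negligible dead time."""
            return rate_ref_cps * 0.5 ** (elapsed_h / half_life_h)

        def dead_time_correction_factor(observed_marker_cps, expected_marker_cps):
            """DTCF = true/observed count rate, applied multiplicatively to the image."""
            return expected_marker_cps / observed_marker_cps

        # Marker measured alone, then again during the high-activity phantom scan
        # 24 h later (illustrative values; the observed rate is set 15% low here).
        expected = expected_marker_rate(rate_ref_cps=150.0, elapsed_h=24.0)
        dtcf = dead_time_correction_factor(observed_marker_cps=expected / 1.15,
                                           expected_marker_cps=expected)
        print(f"DTCF = {dtcf:.3f}  (about {100 * (dtcf - 1):.0f}% count loss corrected)")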

  15. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  16. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem in high-count-rate radiation spectroscopy in many fields, including industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this approach may not eliminate the phenomenon completely, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured using a 60Co source and a NaI detector. Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.

  17. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.

  18. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated, based on a Poisson and a Gaussian distribution, respectively. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off the half-count image before saving had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
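
    The three approaches compared in the comment can be illustrated with a short numpy sketch: binomial thinning of the acquired counts (Poisson resampling) versus redrawing each pixel from a Poisson or Gaussian distribution with half the observed mean. This is an illustrative reimplementation, not the authors' Matlab code.

```python
import numpy as np

rng = np.random.default_rng(0)

def half_count_poisson_resampling(image):
    # Binomial thinning: keep each acquired count with probability 0.5,
    # which preserves Poisson statistics exactly.
    return rng.binomial(image, 0.5)

def half_count_poisson_redraw(image):
    # Redraw from a Poisson distribution with half the observed mean.
    return rng.poisson(image / 2.0)

def half_count_gaussian_redraw(image):
    # Redraw from a Gaussian with matched mean and variance, clipped at zero.
    half = image / 2.0
    return np.clip(rng.normal(half, np.sqrt(half)), 0.0, None)

full = rng.poisson(50.0, size=(256, 256))      # simulated full-count image
for name, method in [("Poisson resampling", half_count_poisson_resampling),
                     ("Poisson redraw", half_count_poisson_redraw),
                     ("Gaussian redraw", half_count_gaussian_redraw)]:
    half = method(full)
    print(f"{name:18s} mean ratio {half.mean() / full.mean():.3f}, "
          f"variance ratio {half.var() / full.var():.3f}")
```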

  19. Different binarization processes validated against manual counts of fluorescent bacterial cells.

    PubMed

    Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W

    2016-09-01

    State-of-the-art software methods (such as fixed-value approaches or statistical approaches) to create a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process the threshold and background subtraction values were incremented until the number of particles smaller than a typical bacterial cell was less than the number of bacterial cells with a certain area (see the sketch below). This research also shows that every image has a specific threshold that depends on the optical system, magnification and staining procedure as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
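
    A minimal sketch of the iterative rule described above is given below, using scipy.ndimage for particle labelling; the threshold step and the typical cell area are placeholder values, not the published parameters.

```python
import numpy as np
from scipy import ndimage

def biologically_significant_threshold(grey, min_cell_area=30, step=1):
    """Raise the threshold until particles smaller than a typical bacterial
    cell are outnumbered by bacteria-sized particles (sketch of the iterative
    rule in the abstract; min_cell_area and step are illustrative values)."""
    threshold = float(grey.min())
    while threshold < grey.max():
        binary = grey > threshold
        labels, n = ndimage.label(binary)
        if n == 0:
            break
        areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        small = np.count_nonzero(areas < min_cell_area)
        large = np.count_nonzero(areas >= min_cell_area)
        if small < large:
            return threshold, binary
        threshold += step
    return threshold, grey > threshold
```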

  20. Deep 3 GHz number counts from a P(D) fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam^-1, and the radio background temperature is ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.

  1. The NuSTAR Extragalactic Surveys: The Number Counts Of Active Galactic Nuclei And The Resolved Fraction Of The Cosmic X-ray Background

    NASA Technical Reports Server (NTRS)

    Harrison, F. A.; Aird, J.; Civano, F.; Lansbury, G.; Mullaney, J. R.; Ballentyne, D. R.; Alexander, D. M.; Stern, D.; Ajello, M.; Barret, D.; hide

    2016-01-01

    We present the 3-8 keV and 8-24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%-39% of the X-ray background in the 8-24 keV band, directly identifying AGNs with obscuring columns up to ~10^25 cm^-2. In the softer 3-8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 x 10^-15 ≲ S(3-8 keV)/erg s^-1 cm^-2 ≲ 10^-12 probed by NuSTAR. In the hard 8-24 keV band NuSTAR probes fluxes over the range 2 x 10^-14 ≲ S(8-24 keV)/erg s^-1 cm^-2 ≲ 10^-12, a factor of approximately 100 fainter than previous measurements. The 8-24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15-55 keV number counts measured at higher fluxes (S(15-55 keV) ≳ 10^-11 erg s^-1 cm^-2), reflecting the evolution of the AGN population between the Swift/BAT local (redshift z < 0.1) sample and NuSTAR's z ≈ 1 sample. CXB (Cosmic X-ray Background) synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.

  2. Hematological and Biochemical Parameters in Elite Soccer Players During A Competitive Half Season

    PubMed Central

    Anđelković, Marija; Baralić, Ivana; Đorđević, Brižita; Stevuljević, Jelena Kotur; Radivojević, Nenad; Dikić, Nenad; Škodrić, Sanja Radojević; Stojković, Mirjana

    2015-01-01

    Summary Background The purpose of the present study was to report and discuss the hematological and biochemical behavior of elite soccer players, in order to gain more insight into the physiological characteristics of these sportsmen and to provide trainers and sports doctors with useful indicators. Methods Nineteen male soccer players volunteered to participate in this study. We followed the young elite soccer players during a competitive half season. Venous blood samples were collected between 9:00 and 10:00 a.m. after an overnight fast (10 h) at baseline and after 45 and 90 days, and hematological and biochemical parameters were measured. Results Hemoglobin and hematocrit levels were significantly reduced over the observational period (p<0.05), but erythrocyte count and iron levels remained unchanged. Bilirubin and ferritin levels significantly increased in response to regular soccer training (p<0.05). We observed a significant decrease in muscle enzyme plasma activity during the 90-day study period. ANOVA revealed a significant increase in the leukocyte and neutrophil counts (p<0.05), in parallel with a significant decrease in the lymphocyte count (p<0.05), after the observational period of 90 days. Conclusions Elite soccer players are characterized by significant changes in biochemical and hematological parameters over the half season, which are linked to training workload as well as to adaptation induced by soccer training. Although the values of the measured parameters fell within the reference range, regular monitoring of the biochemical and hematological parameters is fundamental for sport doctors and trainers to identify a healthy status and the related optimal performance, and for trainers to select a correct workload. PMID:28356856

  3. Evaluation of 22 genetic variants with Crohn's Disease risk in the Ashkenazi Jewish population: a case-control study

    PubMed Central

    2011-01-01

    Background Crohn's disease (CD) has the highest prevalence among individuals of Ashkenazi Jewish (AJ) descent compared to non-Jewish Caucasian populations (NJ). We evaluated a set of well-established CD-susceptibility variants to determine whether they can explain the increased CD risk in the AJ population. Methods We recruited 369 AJ CD patients and 503 AJ controls, genotyped 22 single nucleotide polymorphisms (SNPs) at or near 10 CD-associated genes, NOD2, IL23R, IRGM, ATG16L1, PTGER4, NKX2-3, IL12B, PTPN2, TNFSF15 and STAT3, and assessed their association with CD status. We generated genetic scores based on the risk allele count alone and on the risk allele count weighted by the effect size, and evaluated their predictive value. Results Three NOD2 SNPs, two IL23R SNPs, and one SNP each at IRGM and PTGER4 were independently associated with CD risk. Carriage of 7 or more copies of these risk alleles, or a weighted genetic risk score of 7 or greater, correctly classified 92% (allelic count score) and 83% (weighted score) of the controls; however, only 29% and 47% of the cases were identified as having the disease, respectively. This cutoff was associated with a >4-fold increased disease risk (p < 10^-16). Conclusions CD-associated genetic risks were similar to those reported in the NJ population and are unlikely to explain the excess prevalence of the disease in AJ individuals. These results support the existence of novel, yet unidentified, genetic variants unique to this population. Understanding of ethnic and racial differences in disease susceptibility may help unravel the pathogenesis of CD, leading to new personalized diagnostic and therapeutic approaches. PMID:21548950
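
    A minimal sketch of the two genetic scores (risk-allele count alone, and count weighted by effect size) is shown below; the effect sizes, the individuals, and the use of log odds ratios as weights are illustrative assumptions, not the study's data.

```python
import numpy as np

# Rows: individuals; columns: SNPs; entries: risk-allele counts (0, 1 or 2).
allele_counts = np.array([[2, 1, 0, 1, 2, 0, 1],
                          [0, 0, 1, 0, 1, 0, 0]])
log_odds = np.array([0.9, 0.8, 0.5, 0.4, 0.6, 0.3, 0.5])  # illustrative effect sizes

count_score = allele_counts.sum(axis=1)        # risk-allele count alone
weighted_score = allele_counts @ log_odds      # count weighted by effect size

# Classification at a cutoff of 7 risk alleles, as in the abstract; a cutoff on
# the weighted score would be applied to weighted_score in the same way.
print(count_score, count_score >= 7)
print(weighted_score)
```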

  4. [Myocardial uptake ratio of iodine-123 labeled beta-methyl iodophenylpentadecanoic acid (123I-BMIPP) in relation to the concentration of the substrates of energy].

    PubMed

    Tsuchimochi, S; Tamaki, N; Kawamoto, M; Tadamura, E; Fujita, T; Nohara, R; Matsumori, A; Sasayama, S; Yonekura, Y; Konishi, J

    1995-06-01

    Iodine-123 beta-methyl iodophenylpentadecanoic acid (BMIPP) has been used for evaluating myocardial fatty acid metabolism in vivo. Whole-body BMIPP imaging was acquired in 26 patients (11 with HCM, 11 with CAD and 4 with DCM) to calculate the % uptake in the myocardium and to correlate its uptake with biochemical data, including blood sugar (BS), nonesterified fatty acid (NEFA) and insulin in the blood. BMIPP was administered at rest in an overnight fasting state, and anterior and posterior whole-body imaging was performed one hour later. The background-corrected whole myocardial counts were calculated to obtain the %BMIPP uptake. In addition, the heart-to-mediastinum count ratio (H/M ratio) was calculated from the mean counts in the heart and the upper mediastinum in the anterior view. The %BMIPP uptake was 3.70 +/- 1.22% and the H/M ratio was 2.30 +/- 0.23. The patients with DCM showed higher %BMIPP uptake values (DCM = 5.58 +/- 0.67% vs. CAD = 3.09 +/- 0.97% and HCM = 3.63 +/- 0.86%, both p < 0.01), but H/M ratios similar to those of the other patients (DCM = 2.43 +/- 0.20, CAD = 2.22 +/- 0.25 and HCM = 2.32 +/- 0.20). Although the biochemical data varied at the time of tracer administration, they were not significantly correlated with the %BMIPP uptake or the H/M ratio. However, there was a significant correlation between %BMIPP uptake and H/M ratio, with a correlation coefficient of 0.80 (p < 0.001). We conclude that the myocardial uptake of BMIPP is not influenced by the plasma substrate level under the fasting state.

  5. Method for detecting and correcting for isotope burn-in during long-term neutron dosimetry exposure

    DOEpatents

    Ruddy, Francis H.

    1988-01-01

    A method is described for detecting and correcting for isotope burn-in during long-term neutron dosimetry exposure. In one embodiment, duplicate pairs of solid state track recorder fissionable deposits are used, including a first fissionable deposit of lower mass to quantify the number of fissions occurring during the exposure, and a second deposit of higher mass to quantify the number of atoms of, for instance, 239Pu by alpha counting. In a second embodiment, only one solid state track recorder fissionable deposit is used and the resulting higher track densities are counted with a scanning electron microscope. This method is also applicable to other burn-in interferences, e.g., 233U in 232Th or 238Pu in 237Np.

  6. A 1.5k x 1.5k class photon counting HgCdTe linear avalanche photo-diode array for low background space astronomy in the 1-5 micron infrared

    NASA Astrophysics Data System (ADS)

    Hall, Donald

    Under a current award, NASA NNX 13AC13G "EXTENDING THE ASTRONOMICAL APPLICATION OF PHOTON COUNTING HgCdTe LINEAR AVALANCHE PHOTODIODE ARRAYS TO LOW BACKGROUND SPACE OBSERVATIONS" UH has used Selex SAPHIRA 320 x 256 MOVPE L-APD HgCdTe arrays developed for Adaptive Optics (AO) wavefront (WF) sensing to investigate the potential of this technology for low background space astronomy applications. After suppressing readout integrated circuit (ROIC) glow, we have placed upper limits on gain normalized dark current of 0.01 e-/sec at up to 8 volts avalanche bias, corresponding to avalanche gain of 5, and have operated with avalanche gains of up to several hundred at higher bias. We have also demonstrated detection of individual photon events. The proposed investigation would scale the format to 1536 x 1536 at 12um (the largest achievable in a standard reticule without requiring stitching) while incorporating reference pixels required at these low dark current levels. The primary objective is to develop, produce and characterize a 1.5k x 1.5k at 12um pitch MOVPE HgCdTe L-APD array, with nearly 30 times the pixel count of the 320 x 256 SAPHIRA, optimized for low background space astronomy. This will involve: 1) Selex design of a 1.5k x 1.5k at 12um pitch ROIC optimized for low background operation, silicon wafer fabrication at the German XFab foundry in 0.35 um 3V3 process and dicing/test at Selex, 2) provision by GL Scientific of a 3-side close-buttable carrier building from the heritage of the HAWAII xRG family, 3) Selex development and fabrication of 1.5k x 1.5k at 12 um pitch MOVPE HgCdTe L-APD detector arrays optimized for low background applications, 4) hybridization, packaging into a sensor chip assembly (SCA) with initial characterization by Selex and, 5) comprehensive characterization of low background performance, both in the laboratory and at ground based telescopes, by UH. The ultimate goal is to produce and eventually market a large format array, the L-APD equivalent of the Teledyne H1RG and H2RG, able to achieve sub-electron read noise and count 1 - 5 um photons with high quantum efficiency and low dark count rate while preserving their Poisson statistics and noise.

  7. Study on radiometric consistency of LANDSAT-4 multispectral scanner. [borders between North and South Carolina and between the Imperial Valley of California and Mexico

    NASA Technical Reports Server (NTRS)

    Malila, W. A. (Principal Investigator)

    1983-01-01

    Two full frames of radiometrically corrected LANDSAT-4 MSS data were examined to determine a number of radiometric properties. It was found that LANDSAT-4 MSS produces data of good quality with dynamic ranges and target responses qualitatively similar to those of previous MSS sensors. Banding appears to be quite well corrected, with a residual rms error of about 0.3 digital counts being measured; the histogram equalization algorithm appears to be working as advertised. A low-level coherent noise effect was found in all bands, appearing in uniform areas as a diagonal striping pattern. The principal component of this noise was found by Fourier analysis to be a highly consistent wavelength of 3.6 pixels along a scan line (28 kHz). The magnitude of this effect ranged from about 0.75 of one count in the worst band (Band 1) to only about 0.25 counts in the best band (Band 4). Preparations were made for establishing a relative radiometric calibration of MSS 4 data with respect to MSS 3.
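
    The Fourier identification of the along-scan coherent noise can be illustrated with a short sketch: a simulated scan line carrying a weak 3.6-pixel ripple is transformed and the dominant period read off the spectrum. The amplitudes and line length are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated scan line: flat background plus a weak coherent ripple with a
# 3.6-pixel period (an illustrative stand-in for corrected MSS counts).
n = 2048
pixels = np.arange(n)
scan_line = 40.0 + 0.5 * np.sin(2 * np.pi * pixels / 3.6) + rng.normal(0.0, 0.3, n)

spectrum = np.abs(np.fft.rfft(scan_line - scan_line.mean()))
freqs = np.fft.rfftfreq(n, d=1.0)              # cycles per pixel
peak = freqs[np.argmax(spectrum)]
print(f"dominant period = {1.0 / peak:.2f} pixels")   # expect ~3.6
```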

  8. Support of selected X-ray studies to be performed using data from the Uhuru (SAS-A) satellite

    NASA Technical Reports Server (NTRS)

    Garmire, G. P.

    1976-01-01

    A new measurement of the diffuse X-ray emission sets more stringent upper limits than previous measurements on the fluctuations of the background and on the number counts of X-ray sources at galactic latitudes |b| > 20 deg. A random sample of background data from the Uhuru satellite gives a relative fluctuation in excess of statistics of 2.0% between 2.4 and 6.9 keV. The hypothesis that the relative fluctuation exceeds 2.9% can be rejected at the 90% confidence level. No discernible energy dependence is evident in the fluctuations in the pulse height data when separated into three energy channels of nearly equal width from 1.8 to 10.0 keV. The probability distribution of fluctuations was convolved with the photon noise and cosmic ray background deviation (obtained from the earth-viewing data) to yield the differential source count distribution for high latitude sources. The results imply that a maximum of 160 sources could lie between 1.7 and 5.1 x 10^-11 erg/sq cm/sec (1-3 Uhuru counts).

  9. How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology

    NASA Astrophysics Data System (ADS)

    Ekers, Ron

    2012-09-01

    In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.

  10. Using Pinochle to motivate the restricted combinations with repetitions problem

    NASA Astrophysics Data System (ADS)

    Gorman, Patrick S.; Kunkel, Jeffrey D.; Vasko, Francis J.

    2011-07-01

    A standard example used in introductory combinatoric courses is to count the number of five-card poker hands possible from a straight deck of 52 distinct cards. A more interesting problem is to count the number of distinct hands possible from a Pinochle deck in which there are multiple, but obviously limited, copies of each type of card (two copies for single-deck, four for double deck). This problem is more interesting because our only concern is to count the number of distinguishable hands that can be dealt. In this note, under various scenarios, we will discuss two combinatoric techniques for counting these hands; namely, the inclusion-exclusion principle and generating functions. We will then show that these Pinochle examples motivate a general counting formula for what are called 'regular' combinations by Riordan. Finally, we prove the correctness of this formula using generating functions.
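
    The generating-function count mentioned above can be written out directly: the number of k-card hands from a deck with r_i copies of each distinct card is the coefficient of x^k in the product of (1 + x + ... + x^{r_i}). The sketch below evaluates that coefficient by polynomial multiplication; the 12-card hand size is an illustrative choice for a four-player single-deck deal.

```python
def restricted_combination_count(copies, hand_size):
    """Number of distinguishable hands: coefficient of x**hand_size in the
    product over cards of (1 + x + ... + x**r), where r is the number of
    available copies of that card."""
    poly = [1]                                   # polynomial coefficients, constant term 1
    for r in copies:
        new = [0] * (len(poly) + r)
        for i, c in enumerate(poly):             # multiply by (1 + x + ... + x**r)
            for j in range(r + 1):
                new[i + j] += c
        poly = new
    return poly[hand_size] if hand_size < len(poly) else 0

# Single-deck Pinochle: 24 distinct cards with 2 copies each, 12-card hands.
print(restricted_combination_count([2] * 24, 12))
```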

  11. Reducing background contributions in fluorescence fluctuation time-traces for single-molecule measurements in solution.

    PubMed

    Földes-Papp, Zeno; Liao, Shih-Chu Jeff; You, Tiefeng; Barbieri, Beniamino

    2009-08-01

    We first report on the development of new microscope means that reduce background contributions in fluorescence fluctuation methods: i) excitation shutter, ii) electronic switches, and iii) early and late time-gating. These elements allow for measuring molecules at low analyte concentrations. We found conditions of early and late time-gating with time-correlated single-photon counting that made the fluorescence signal as bright as possible compared with the fluctuations in the background count rate in a diffraction-limited optical set-up. We measured about a 140-fold increase in the amplitude of autocorrelated fluorescence fluctuations at the lowest analyte concentration of about 15 pM, which gave a signal-to-background advantage of more than two orders of magnitude. The results of this original article pave the way for single-molecule detection in solution and in live cells without immobilization or hydrodynamic/electrokinetic focusing at longer observation times than are currently available.

  12. Initial characterization of unequal-length, low-background proportional counters for absolute gas-counting applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, E. K.; Aalseth, C. E.; Bonicalzi, R.

    Characterization of two sets of custom unequal-length proportional counters is underway at Pacific Northwest National Laboratory (PNNL). These detectors will be used in measurements to determine the absolute activity concentration of gaseous radionuclides (e.g., {sup 37}Ar). A set of three detectors has been fabricated based on previous PNNL ultra-low-background proportional counter designs and now operates in PNNL's shallow underground counting laboratory. A second set of four counters has also been fabricated using clean assembly of Oxygen-Free High-Conductivity copper components for use in a shielded above-ground counting laboratory. Characterization of both sets of detectors is underway with measurements of background rates, gas gain, and energy resolution. These results will be presented along with a shielding study for the above-ground cave.

  13. Low-energy particle experiments-electron analyzer (LEPe) onboard the Arase spacecraft

    NASA Astrophysics Data System (ADS)

    Kazama, Yoichi; Wang, Bo-Jhou; Wang, Shiang-Yu; Ho, Paul T. P.; Tam, Sunny W. Y.; Chang, Tzu-Fang; Chiang, Chih-Yu; Asamura, Kazushi

    2017-12-01

    In this report, we describe the low-energy electron instrument LEPe (low-energy particle experiments-electron analyzer) onboard the Arase (ERG) spacecraft. The instrument measures a three-dimensional distribution function of electrons with energies of ˜ 19 eV-19 keV. Electrons in this energy range dominate in the inner magnetosphere, and measurement of such electrons is important in terms of understanding the magnetospheric dynamics and wave-particle interaction. The instrument employs a toroidal tophat electrostatic energy analyzer with a passive 6-mm aluminum shield. To minimize background radiation effects, the analyzer has a background channel, which monitors counts produced by background radiation. Background counts are then subtracted from measured counts. Electronic components are radiation tolerant, and 5-mm-thick shielding of the electronics housing ensures that the total dose is less than 100 kRad for the one-year nominal mission lifetime. The first in-space measurement test was done on February 12, 2017, showing that the instrument functions well. On February 27, the first all-instrument run test was done, and the LEPe instrument measured an energy dispersion event probably related to a substorm injection occurring immediately before the instrument turn-on. These initial results indicate that the instrument works fine in space, and the measurement performance is good for science purposes.

  14. X-ray detection of Nova Del 2013 with Swift

    NASA Astrophysics Data System (ADS)

    Castro-Tirado, Alberto J.; Martin-Carrillo, Antonio; Hanlon, Lorraine

    2013-08-01

    Continuous X-ray monitoring by Swift of Nova Del 2013 (see CBET #3628) shows an increase of X-ray emission at the source location compared to previous observations (ATEL #5283, ATEL #5305) during a 3.9 ksec observation at UT 2013-08-22 12:05. With the XRT instrument operating in window timing mode, 744 counts were extracted from a 50 pixel long source region and 324 counts from a similar box for a background region, resulting in a 13-sigma detection with a net count rate of 0.11±0.008 counts/sec.
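
    The quoted numbers follow from simple counting statistics; below is a sketch assuming equal-area source and background regions and the 3.9-ks exposure.

```python
import math

src_counts, bkg_counts = 744, 324   # equal-area source and background boxes
exposure_s = 3.9e3                  # 3.9 ks exposure

net_counts = src_counts - bkg_counts
net_rate = net_counts / exposure_s
rate_error = math.sqrt(src_counts + bkg_counts) / exposure_s
significance = net_counts / math.sqrt(src_counts + bkg_counts)

print(f"net rate = {net_rate:.2f} +/- {rate_error:.3f} counts/s, "
      f"~{significance:.0f}-sigma detection")   # ~0.11 +/- 0.008 counts/s, ~13 sigma
```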

  15. Galaxy And Mass Assembly: the evolution of the cosmic spectral energy distribution from z = 1 to z = 0

    NASA Astrophysics Data System (ADS)

    Andrews, S. K.; Driver, S. P.; Davies, L. J. M.; Kafle, P. R.; Robotham, A. S. G.; Vinsen, K.; Wright, A. H.; Bland-Hawthorn, J.; Bourne, N.; Bremer, M.; da Cunha, E.; Drinkwater, M.; Holwerda, B.; Hopkins, A. M.; Kelvin, L. S.; Loveday, J.; Phillipps, S.; Wilkins, S.

    2017-09-01

    We present the evolution of the cosmic spectral energy distribution (CSED) from z = 1 to 0. Our CSEDs originate from stacking individual spectral energy distribution (SED) fits based on panchromatic photometry from the Galaxy And Mass Assembly (GAMA) and COSMOS data sets in 10 redshift intervals with completeness corrections applied. Below z = 0.45, we have credible SED fits from 100 nm to 1 mm. Due to the relatively low sensitivity of the far-infrared data, our far-infrared CSEDs contain a mix of predicted and measured fluxes above z = 0.45. Our results include appropriate errors to highlight the impact of these corrections. We show that the bolometric energy output of the Universe has declined by a factor of roughly 4, from (5.1 ± 1.0) × 10^35 h70 W Mpc^-3 at z ˜ 1 to (1.3 ± 0.3) × 10^35 h70 W Mpc^-3 at the current epoch. We show that this decrease is robust to cosmic sample variance, the SED modelling and other various types of error. Our CSEDs are also consistent with an increase in the mean age of stellar populations. We also show that dust attenuation has decreased over the same period, with the photon escape fraction at 150 nm increasing from 16 ± 3 per cent at z ˜ 1 to 24 ± 5 per cent at the current epoch, equivalent to a decrease in A_FUV of 0.4 mag. Our CSEDs account for 68 ± 12 and 61 ± 13 per cent of the cosmic optical and infrared backgrounds, respectively, as defined from integrated galaxy counts and are consistent with previous estimates of the cosmic infrared background with redshift.

  16. Low-background Gamma Spectroscopy at Sanford Underground Laboratory

    NASA Astrophysics Data System (ADS)

    Chiller, Christopher; Alanson, Angela; Mei, Dongming

    2014-03-01

    Rare-event physics experiments require the use of material with unprecedented radio-purity. Low background counting assay capabilities and detectors are critical for determining the sensitivity of the planned ultra-low background experiments. A low-background counting, LBC, facility has been built at the 4850-Level Davis Campus of the Sanford Underground Research Facility to perform screening of material and detector parts. Like many rare event physics experiments, our LBC uses lead shielding to mitigate background radiation. Corrosion of lead brick shielding in subterranean installations creates radon plate-out potential as well as human risks of ingestible or respirable lead compounds. Our LBC facilities employ an exposed lead shield requiring clean smooth surfaces. A cleaning process of low-activity silica sand blasting and borated paraffin hot coating preservation was employed to guard against corrosion due to chemical and biological exposures. The resulting lead shield maintains low background contribution integrity while fully encapsulating the lead surface. We report the performance of the current LBC and a plan to develop a large germanium well detector for PMT screening. Support provided by Sd governors research center-CUBED, NSF PHY-0758120 and Sanford Lab.

  17. SU-E-I-79: Source Geometry Dependence of Gamma Well-Counter Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, M; Belanger, A; Kijewski, M

    Purpose: To determine the effect of liquid sample volume and geometry on counting efficiency in a gamma well-counter, and to assess the relative contributions of sample geometry and self-attenuation. Gamma well-counters are standard equipment in clinical and preclinical studies, for measuring patient blood radioactivity and quantifying animal tissue uptake for tracer development and other purposes. Accurate measurements are crucial. Methods: Count rates were measured for aqueous solutions of 99m-Tc at four liquid volume values in a 1-cm-diam tube and at six volume values in a 2.2-cm-diam vial. Total activity was constant for all volumes, and data were corrected for decay. Count rates from a point source in air, supported by a filter paper, were measured at seven heights between 1.3 and 5.7 cm from the bottom of a tube. Results: Sample volume effects were larger for the tube than for the vial. For the tube, count efficiency relative to a 1-cc volume ranged from 1.05 at 0.05 cc to 0.84 at 3 cc. For the vial, relative count efficiency ranged from 1.02 at 0.05 cc to 0.87 at 15 cc. For the point source, count efficiency relative to 1.3 cm from the tube bottom ranged from 0.98 at 1.8 cm to 0.34 at 5.7 cm. The relative efficiency of a 3-cc liquid sample in a tube compared to a 1-cc sample is 0.84; the average relative efficiency for the solid sample in air between heights in the tube corresponding to the surfaces of those volumes (1.3 and 4.8 cm) is 0.81, implying that the major contribution to efficiency loss is geometry, rather than attenuation. Conclusion: Volume-dependent correction factors should be used for accurate quantitation of radioactive liquid samples. Solid samples should be positioned at the bottom of the tube for maximum count efficiency.
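
    The recommended volume-dependent correction can be sketched by interpolating the relative efficiencies quoted for the tube geometry; the linear interpolation and the helper function are illustrative, not part of the published work.

```python
import numpy as np

# Relative counting efficiency vs. sample volume for the 1-cm tube, anchored
# on the values quoted in the abstract (1.05 at 0.05 cc, 1.00 at 1 cc,
# 0.84 at 3 cc); intermediate points are linearly interpolated (an assumption).
volumes_cc = np.array([0.05, 1.0, 3.0])
relative_efficiency = np.array([1.05, 1.00, 0.84])

def volume_corrected_counts(measured_counts, sample_volume_cc):
    efficiency = np.interp(sample_volume_cc, volumes_cc, relative_efficiency)
    return measured_counts / efficiency

print(volume_corrected_counts(1.0e5, 2.0))   # counts referred to the 1-cc geometry
```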

  18. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.

  19. Low gamma counting for measuring NORM/TENORM with a radon reducing system

    NASA Astrophysics Data System (ADS)

    Paschoa, Anselmo S.

    2001-06-01

    A detection system for counting low levels of gamma radiation was built by upgrading an existing rectangular chamber made of 18 metric tonne of steel fabricated before World War II. The internal walls, the ceiling, and the floor of the chamber are covered with copper sheets. The new detection system consists of a stainless steel hollow cylinder with variable circular apertures in the cylindrical wall and in the base, to allow introduction of a NaI (Tl) crystal, or alternatively, a HPGe detector in its interior. This counting system is mounted inside the larger chamber, which in turn is located in a subsurface air-conditioned room. The access to the subsurface room is made from a larger entrance room through a tunnel plus a glass anteroom to decrease the air-exchange rate. Both sample and detector are housed inside the stainless steel cylinder. This cylinder is filled with hyper pure nitrogen gas, before counting a sample, to prevent radon coming into contact with the detector surface. As a consequence, the contribution of the 214Bi photopeaks to the background gamma spectra is minimized. The reduction of the gamma radiation background near the detector facilitates measurement of naturally occurring radioactive materials (NORM), and/or technologically enhanced NORM (TENORM), which are usually at concentration levels only slightly higher than those typically found in the natural radioactive background.

  20. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  1. Economic Impacts of Prison Growth

    DTIC Science & Technology

    2010-04-13

    allow collective bargaining for public sector correctional workers, proposals to alter rules for the 2010 Census count, and rural development efforts...number of rural areas have chosen to tie their economies to prisons, viewing the institutions as recession-proof development engines. Though many local...correctional authorities. 80 Beale, Calvin L., “ Rural Prisons: An Update”, Rural Development Perspectives, vol. 11, no. 2, March 2001, p. 25. http

  2. Characterization Results for the March 2016 H-Tank Farm 2H Evaporator Overhead Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, J. C.

    This report contains the radioanalytical results of the 2H evaporator overhead sample received at SRNL on March 16, 2016. Specifically, concentrations of 137Cs, 90Sr, and 129I are reported and compared to the corresponding Waste Acceptance Criteria (WAC) limits of the Effluent Treatment Project (ETP) Waste Water Collection Tank (WWCT) (rev. 6). All of the radionuclide concentrations in the sample were found to be in compliance with the ETP WAC limits. Revision 1 of this document corrects the cumulative beta count initially reported for 90Sr content with the sole 90Sr count obtained after recharacterization of the sample. The initial data was found to be a cumulative beta count rather than the 90Sr count requested.

  3. Montana Kids Count 1996 Data Book.

    ERIC Educational Resources Information Center

    Healthy Mothers, Healthy Babies--The Montana Coalition, Helena.

    This 1996 KIDS COUNT data book presents comparative data on child well-being for each county in Montana and for the state as a whole. Data in the county profiles, which comprise the bulk of the report, are grouped into: background facts (demographic, mental health, education, security, and income support information); charts showing changes in…

  4. Optimal measurement counting time and statistics in gamma spectrometry analysis: The time balance

    NASA Astrophysics Data System (ADS)

    Joel, Guembou Shouop Cebastien; Penabei, Samafou; Maurice, Ndontchueng Moyo; Gregoire, Chene; Jilbert, Nguelem Mekontso Eric; Didier, Takoukam Serge; Werner, Volker; David, Strivay

    2017-01-01

    The optimal measurement counting time for gamma-ray spectrometry analysis using HPGe detectors was determined in our laboratory by comparing a twelve-hour counting period during the day with a twelve-hour counting period at night. For the same sample, the daytime spectrum does not fully overlap the nighttime spectrum; the perturbation is attributed to sunlight. Several investigations made it clear that, to remove all effects of external radiation (from the Earth, the Sun, and the cosmos) on our system, the background must be measured for 24, 48 or 72 hours. In the same way, the samples have to be measured for 24, 48 or 72 hours so that full day-night cycles are averaged and the measurement is purified (equality of day and night measurements). Likewise, a background measured in winter should not be used in summer. Whatever the energy of the radionuclide sought, the most important steps of a gamma spectrometry measurement remain the preparation of the sample and the calibration of the detector.

  5. The Measurement of Human Body-Fluid Volumes: Resting Fluid Volumes Before and After Heat Acclimation

    DTIC Science & Technology

    2001-01-01

    equilibration period. Erythrocyte aliquots were haemolysed with saponin before counting. Both counts were used to correct the derived ECFV, which was...was largely in accordance with the procedures of Greenleaf et al. (1980). This technique used an extraction procedure in which the dye was first...collection. Therefore, the above extraction procedure was not used. A major limitation of using a cellulose column is the possibility of not collecting all

  6. Corrigendum to "Multiple-quantum spin counting in magic-angle-spinning NMR via low-power symmetry-based dipolar recoupling" [J. Magn. Reson. 236 (2013) 31-40]

    NASA Astrophysics Data System (ADS)

    Teymoori, Gholamhasan; Pahari, Bholanath; Viswanathan, Elumalai; Edén, Mattias

    2017-03-01

    The authors regret that an inappropriate NMR data processing procedure, not known to all authors at the time of publication, was used to produce the multiple-quantum coherence (MQC) spin counting data presented in our article: this led to artificially enhanced results, particularly concerning those obtained at long MQC excitation intervals (τexc). Here we reproduce Figs. 4-7 with correctly processed data.

  7. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    USGS Publications Warehouse

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent per year. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on skill of the individual observer, size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town /Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
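
    The core of a simultaneous double-count is a two-observer mark-resight estimate; the sketch below uses a Chapman-modified Lincoln-Petersen estimator as a stand-in for that core, whereas the published technique additionally models sightability covariates (observer skill, group size, vegetation, terrain, sun position), which are omitted here. The numbers are invented.

```python
def double_observer_estimate(seen_by_1, seen_by_2, seen_by_both):
    """Chapman-modified Lincoln-Petersen estimate of the number of groups
    present, given two observers counting simultaneously and independently.
    Only the simple core of a double-count survey; the sightability covariates
    used in the published method are not modelled here."""
    n1, n2, m = seen_by_1, seen_by_2, seen_by_both
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative numbers: observer 1 saw 60 groups, observer 2 saw 55, 45 in common.
print(round(double_observer_estimate(60, 55, 45)))   # about 73 groups present
```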

  8. The NuSTAR Extragalactic Surveys: The Number Counts of Active Galactic Nuclei and The Resolved Fraction of The Cosmic X-Ray Background

    DOE PAGES

    Harrison, F. A.; Aird, J.; Civano, F.; ...

    2016-11-07

    Here, we present the 3–8 keV and 8–24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%–39% of the X-ray background in the 8–24 keV band, directly identifying AGNs with obscuring columns up to ~10^25 cm^-2. In the softer 3–8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 × 10^-15 ≲ S(3–8 keV)/erg s^-1 cm^-2 ≲ 10^-12 probed by NuSTAR. In the hard 8–24 keV band NuSTAR probes fluxes over the range 2 × 10^-14 ≲ S(8–24 keV)/erg s^-1 cm^-2 ≲ 10^-12, a factor ~100 fainter than previous measurements. The 8–24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15–55 keV number counts measured at higher fluxes (S(15–55 keV) ≳ 10^-11 erg s^-1 cm^-2), reflecting the evolution of the AGN population between the Swift/BAT local (z < 0.1) sample and NuSTAR's z ~ 1 sample. CXB synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.

  9. Universal Decoder for PPM of any Order

    NASA Technical Reports Server (NTRS)

    Moision, Bruce E.

    2010-01-01

    A recently developed algorithm for demodulation and decoding of a pulse-position- modulation (PPM) signal is suitable as a basis for designing a single hardware decoding apparatus to be capable of handling any PPM order. Hence, this algorithm offers advantages of greater flexibility and lower cost, in comparison with prior such algorithms, which necessitate the use of a distinct hardware implementation for each PPM order. In addition, in comparison with the prior algorithms, the present algorithm entails less complexity in decoding at large orders. An unavoidably lengthy presentation of background information, including definitions of terms, is prerequisite to a meaningful summary of this development. As an aid to understanding, the figure illustrates the relevant processes of coding, modulation, propagation, demodulation, and decoding. An M-ary PPM signal has M time slots per symbol period. A pulse (signifying 1) is transmitted during one of the time slots; no pulse (signifying 0) is transmitted during the other time slots. The information intended to be conveyed from the transmitting end to the receiving end of a radio or optical communication channel is a K-bit vector u. This vector is encoded by an (N,K) binary error-correcting code, producing an N-bit vector a. In turn, the vector a is subdivided into blocks of m = log2(M) bits and each such block is mapped to an M-ary PPM symbol. The resultant coding/modulation scheme can be regarded as equivalent to a nonlinear binary code. The binary vector of PPM symbols, x is transmitted over a Poisson channel, such that there is obtained, at the receiver, a Poisson-distributed photon count characterized by a mean background count nb during no-pulse time slots and a mean signal-plus-background count of ns+nb during a pulse time slot. In the receiver, demodulation of the signal is effected in an iterative soft decoding process that involves consideration of relationships among photon counts and conditional likelihoods of m-bit vectors of coded bits. Inasmuch as the likelihoods of all the m-bit vectors of coded bits mapping to the same PPM symbol are correlated, the best performance is obtained when the joint mbit conditional likelihoods are utilized. Unfortunately, the complexity of decoding, measured in the number of operations per bit, grows exponentially with m, and can thus become prohibitively expensive for large PPM orders. For a system required to handle multiple PPM orders, the cost is even higher because it is necessary to have separate decoding hardware for each order. This concludes the prerequisite background information. In the present algorithm, the decoding process as described above is modified by, among other things, introduction of an lbit marginalizer sub-algorithm. The term "l-bit marginalizer" signifies that instead of m-bit conditional likelihoods, the decoder computes l-bit conditional likelihoods, where l is fixed. Fixing l, regardless of the value of m, makes it possible to use a single hardware implementation for any PPM order. One could minimize the decoding complexity and obtain an especially simple design by fixing l at 1, but this would entail some loss of performance. An intermediate solution is to fix l at some value, greater than 1, that may be less than or greater than m. This solution makes it possible to obtain the desired flexibility to handle any PPM order while compromising between complexity and loss of performance.
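
    The signal model in the description can be sketched directly: an M-ary PPM symbol places a pulse in one of M slots, the receiver observes Poisson counts with mean nb in empty slots and ns+nb in the pulsed slot, and per-slot log-likelihood ratios are what a soft decoder would consume. The sketch below stops at hard symbol decisions; parameter values are illustrative and this is not the decoder described in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppm_modulate(symbols, M):
    """Map integer symbols (0..M-1) to one-pulse-per-M-slot PPM frames."""
    frames = np.zeros((len(symbols), M))
    frames[np.arange(len(symbols)), symbols] = 1.0
    return frames

def poisson_channel(frames, ns, nb):
    """Photon counts: mean nb in empty slots, ns + nb in the pulsed slot."""
    return rng.poisson(nb + ns * frames)

def slot_log_likelihood_ratios(counts, ns, nb):
    """log P(k | pulse) - log P(k | no pulse) for Poisson counts; the factorials
    cancel, leaving k*log(1 + ns/nb) - ns. A soft decoder would work from these."""
    return counts * np.log1p(ns / nb) - ns

M, ns, nb = 16, 5.0, 0.2                      # illustrative order and count means
symbols = rng.integers(0, M, size=1000)
counts = poisson_channel(ppm_modulate(symbols, M), ns, nb)
decided = np.argmax(slot_log_likelihood_ratios(counts, ns, nb), axis=1)
print("hard-decision symbol error rate:", np.mean(decided != symbols))
```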

  10. Observation-Corrected Precipitation Estimates in GEOS-5

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Liu, Qing

    2014-01-01

    Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite and or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
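
    The disaggregation step can be illustrated with a one-function sketch: the model-background hourly precipitation is rescaled so that it sums to the observed daily total. The handling of zero-background days and the spatial downscaling mentioned above are simplified away; function and variable names are illustrative, not GEOS-5 code.

```python
import numpy as np

def disaggregate_daily_to_hourly(observed_daily_total, background_hourly):
    """Rescale background hourly precipitation so it sums to the observed daily
    total (a sketch of the disaggregation idea; GEOS-5 also treats zero-background
    days and spatial downscaling in ways not represented here)."""
    background_total = background_hourly.sum()
    if background_total <= 0.0:
        # Nothing to redistribute: spread the observed total evenly (an assumption).
        return np.full_like(background_hourly, observed_daily_total / background_hourly.size)
    return background_hourly * (observed_daily_total / background_total)

background = np.array([0.0, 0.2, 1.1, 0.6, 0.0, 0.1] + [0.0] * 18)  # mm/hour, illustrative
corrected = disaggregate_daily_to_hourly(3.5, background)
print(corrected.sum())   # 3.5, matching the observed daily accumulation
```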

  11. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in distance of eye to camera; illumination intensity of light on the eye; and counting sensitivity of scan lines used to measure diameter, and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle similar to the cosine function predicted by theory: this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  12. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.

    2013-01-01

    The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 {mu}Jy beam{sup -1} rms noise. The images (centered at R.A. 00{sup h}35{sup m}00{sup s}, decl. -67° 00'00'' and R.A. 00{sup h}59{sup m}17{sup s}, decl. -67° 00'00'', J2000 epoch) cover 8.42 deg{sup 2} sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area correction, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists-as opposed to component lists-and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  13. Characterization of the Photon Counting CHASE Jr., Chip Built in a 40-nm CMOS Process With a Charge Sharing Correction Algorithm Using a Collimated X-Ray Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krzyżanowska, A.; Deptuch, G. W.; Maj, P.

    This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a CMOS 40-nm process, operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge-sharing-related uncertainties, namely, the dependence of the number of registered photons on the discriminator's threshold, set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely, an offset correction for two discriminators independently, two-stage gain correction, and different operation modes of the digital blocks. The results of tests of the C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to a 3.5-μm-wide pencil beam of 8-keV synchrotron radiation photons. We studied how sensitive the algorithm performance is to the chip settings and to the uniformity of parameters of the analog front-end blocks. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.

  14. Development of a Low-Level Ar-37 Calibration Standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Richard M.; Aalseth, Craig E.; Bowyer, Ted W.

    Argon-37 is an important environmental signature of an underground nuclear explosion. Producing and quantifying low-level 37Ar standards is an important step in the development of sensitive field measurement instruments for use during an On-Site Inspection, a key provision of the Comprehensive Nuclear-Test-Ban Treaty. This paper describes progress at Pacific Northwest National Laboratory (PNNL) in the development of a process to generate and quantify low-level 37Ar standard material, which can then be used to calibrate sensitive field systems at activities consistent with soil background levels. The 37Ar used for our work was generated using a laboratory-scale, high-energy neutron source to irradiate powdered samples of calcium carbonate. Small aliquots of 37Ar were then extracted from the head space of the irradiated samples. The specific activity of the head space samples, mixed with P10 (90% stable argon:10% methane by mole fraction) count gas, was then derived using the accepted Length-Compensated Internal-Source Proportional Counting method. Due to the low activity of the samples, a set of three ultra-low-background proportional counters designed and fabricated at PNNL from radio-pure electroformed copper was used to make the measurements in PNNL's shallow underground counting laboratory. Very low background levels (<10 counts/day) have been observed in the spectral region near the 37Ar emission feature at 2.8 keV. Two separate samples from the same irradiation were measured. The first sample was counted for 12 days beginning 28 days after irradiation; the second sample was counted for 24 days beginning 70 days after irradiation (the half-life of 37Ar is 35.0 days). Both sets of measurements were analyzed and yielded very similar results for the starting activity (~0.1 Bq) and activity concentration (0.15 mBq/ccSTP argon) after P10 count gas was added. A detailed uncertainty model was developed based on the ISO Guide to the Expression of Uncertainty in Measurement. This paper presents a discussion of the measurement analysis, along with assumptions and uncertainty estimates.
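
    Because the two aliquots were counted weeks after the irradiation, each measured activity must be decay-corrected back to a common reference time before the starting activities can be compared. A minimal sketch of that correction, assuming the 35.0-day half-life quoted above; the function name and the example numbers are illustrative only, not values from the record:

        import math

        T_HALF_AR37_D = 35.0  # 37Ar half-life in days, as quoted in the record

        def decay_corrected_activity(measured_bq, delay_days, half_life_days=T_HALF_AR37_D):
            """Correct a measured activity back to a reference time (e.g., end of irradiation)."""
            lam = math.log(2.0) / half_life_days  # decay constant in 1/day
            return measured_bq * math.exp(lam * delay_days)

        # Illustrative numbers only: a counting result obtained 28 days after irradiation.
        print(decay_corrected_activity(measured_bq=0.057, delay_days=28.0))  # ~0.1 Bq at t0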

  15. Rejection of randomly coinciding events in ZnMoO4 scintillating bolometers

    NASA Astrophysics Data System (ADS)

    Chernyak, D. M.; Danevich, F. A.; Giuliani, A.; Mancuso, M.; Nones, C.; Olivieri, E.; Tenconi, M.; Tretyak, V. I.

    2014-06-01

    Random coincidence of events (particularly from two neutrino double beta decay) could be one of the main sources of background in the search for neutrinoless double beta decay with cryogenic bolometers due to their poor time resolution. Pulse-shape discrimination methods, including front edge analysis and the mean-time parameter, were applied to discriminate randomly coinciding events in ZnMoO4 cryogenic scintillating bolometers. These events can be effectively rejected at the level of 99 % by the analysis of the heat signals with rise-time of about 14 ms and signal-to-noise ratio of 900, and at the level of 92 % by the analysis of the light signals with rise-time of about 3 ms and signal-to-noise ratio of 30, under the requirement to detect 95 % of single events. These rejection efficiencies are compatible with extremely low background levels in the region of interest of neutrinoless double beta decay of 100Mo for enriched ZnMoO4 detectors, of the order of counts/(y keV kg). Pulse-shape parameters have been chosen on the basis of the performance of a real massive ZnMoO4 scintillating bolometer. The importance of the signal-to-noise ratio, correct identification of the signal start, and the choice of an appropriate sampling frequency are discussed.
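
    The mean-time parameter mentioned above is simply the amplitude-weighted mean arrival time of the digitized pulse; randomly coinciding (piled-up) events shift it away from the value expected for a single pulse. A minimal sketch, with a toy waveform whose shape and time constants are assumptions rather than values from the paper:

        import numpy as np

        def mean_time(samples, dt, baseline=0.0):
            """Amplitude-weighted mean arrival time of a digitized pulse.

            samples : 1-D array of waveform samples
            dt      : sampling period (e.g., in ms)
            """
            s = np.asarray(samples, dtype=float) - baseline
            s = np.clip(s, 0.0, None)            # keep the positive part of the pulse
            t = np.arange(s.size) * dt
            return np.sum(t * s) / np.sum(s)

        # Toy example: a pile-up of two pulses shifts the mean-time relative to a single pulse.
        t = np.arange(0.0, 200.0, 1.0)                        # ms
        single = np.exp(-t / 50.0) * (1 - np.exp(-t / 14.0))  # assumed pulse shape
        piled = single + np.roll(single, 40)                  # second event 40 ms later
        print(mean_time(single, 1.0), mean_time(piled, 1.0))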

  16. Analysis of astronomical data from optical superconducting tunnel junctions

    NASA Astrophysics Data System (ADS)

    de Bruijne, J. H.; Reynolds, A. P.; Perryman, Michael A.; Favata, Fabio; Peacock, Anthony J.

    2002-06-01

    Currently operating optical superconducting tunnel junction (STJ) detectors, developed at the European Space Agency (ESA), can simultaneously measure the wavelength ({Delta}{lambda} = 50 nm at 500 nm) and arrival time (to within approximately 5 μs) of individual photons in the range 310 to 720 nm with an efficiency of approximately 70%, and with count rates of the order of 5000 photons s{sup -1} per junction. A number of STJs placed in an array format generates 4-D data: photon arrival time, energy, and array element (X, Y). Such STJ cameras are ideally suited for, e.g., high-time-resolution spectrally resolved monitoring of variable sources or low-resolution spectroscopy of faint extragalactic objects. The reduction of STJ data involves detector efficiency correction, atmospheric extinction correction, sky background subtraction, and, unlike that of data from CCD-based systems, a more complex energy calibration, barycentric arrival time correction, energy range selection, and time binning; these steps are, in many respects, analogous to procedures followed in high-energy astrophysics. We discuss these calibration steps in detail using a representative observation of the cataclysmic variable UZ Fornacis; these data were obtained with ESA's S-Cam2 6 × 6-pixel device. We furthermore discuss issues related to telescope pointing and guiding, differential atmospheric refraction, and atmosphere-induced image motion and image smearing (`seeing') in the focal plane. We also present a simple and effective recipe for extracting the evolution of atmospheric seeing with time from any science exposure and discuss a number of caveats in the interpretation of STJ-based time-binned data, such as light curves and hardness ratio plots.

  17. Endothelial Progenitor Cells (EPC) Count by Multicolor Flow Cytometry in Healthy Individuals and Diabetes Mellitus (DM) Patients.

    PubMed

    Falay, Mesude; Aktas, Server

    2016-11-01

    The present study aimed to determine circulating Endothelial Progenitor Cell (EPC) counts by multicolor flow cytometry in healthy individuals and diabetic subjects by means of forming an analysis procedure using a combination of monoclonal antibodies (moAbs), which would correctly detect the circulating EPC count. The circulating EPC count was detected in 40 healthy individuals (20 female, 20 male; age range: 26-50 years) and 30 Diabetes Mellitus (DM) patients (15 female, 15 male; age range: 42-55 years) by multicolor flow cytometry (FCM) in a single-tube panel consisting of CD45/CD31/CD34/CD309 moAbs and SYTO® 16. Circulating EPC count was 11.33 (7.89-15.25) cells/µL in the healthy control group and 4.80 (0.70-10.85) cells/µL in the DM group. EPC counts were significantly lower in DM cases that developed coronary artery disease (53.3%) as compared to those that did not (p < 0.001). In the present study, we describe a method that identifies circulating EPC counts by multicolor flow cytometry in a single tube and determines the circulating EPC count in healthy individuals. This is the first study conducted on EPC counts in the Turkish population. We think that the EPC count found in the present study will be a guide for future studies.

  18. Evaluation of the automated hematology analyzer ADVIA® 120 for cerebrospinal fluid analysis and usage of unique hemolysis reagent.

    PubMed

    Tanada, H; Ikemoto, T; Masutani, R; Tanaka, H; Takubo, T

    2014-02-01

    In this study, we evaluated the performance of the ADVIA 120 hematology system for cerebrospinal fluid (CSF) assay. Cell counts and leukocyte differentials in CSF were examined with the ADVIA 120 hematology system, while simultaneously confirming an effective hemolysis agent for automated CSF cell counts. The detection limit for both white blood cell (WBC) and red blood cell (RBC) counts in CSF measured by the ADVIA 120 hematology system was 2 cells/μL. The WBC count was linear up to 9850 cells/μL, and the RBC count was linear up to approximately 20 000 cells/μL. The intrarun reproducibility indicated good precision. The leukocyte differential of CSF cells, performed by the ADVIA 120 hematology system, showed good correlation with the microscopic procedure. The VersaLyse hemolysis solution efficiently lysed the samples without interfering with cell counts and leukocyte differentials, even in a sample that included approximately 50 000 RBC/μL. These data show the ADVIA 120 hematology system correctly measured the WBC count and leukocyte differential in CSF. The VersaLyse hemolysis solution is considered to be optimal for hemolysis treatment of CSF when measuring cell counts and differentials by the ADVIA 120 hematology system. © 2013 John Wiley & Sons Ltd.

  19. High-energy electrons from the muon decay in orbit: Radiative corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szafron, Robert; Czarnecki, Andrzej

    2015-12-07

    We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum that constitutes a background for muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. As a result, the correction suppresses the background by about 20%.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied to reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° angle acquisition, its applicability for 180° angle acquisition remains a question, with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied to reconstructed Jaszczak phantom images acquired in both 360° and 180° modes. Methods: The Jaszczak phantom filled with 20 mCi of diluted Tc-99m was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (n = 1) dual head gamma cameras, centered in both the lateral and axial directions. A total of 3 scans were done in 180° and 2 scans in 360° orbit acquisition modes. Thirty-two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with a pre-reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique where photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor. The inverse of the attenuation factor was used to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360° acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis illustrated improved uniformity with the proposed algorithm compared to the camera software.
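
    The modified technique described above averages the attenuation over photon paths through each pixel and applies the inverse as a multiplicative correction. A hedged sketch of such a first-order, Chang-style correction map for a uniform circular attenuator follows; the function name, the circular-boundary assumption and all parameter values are illustrative, and the angular sum can be restricted to the acquisition arc for a 180° orbit:

        import numpy as np

        def chang_correction_map(nx, ny, radius_cm, pixel_cm, mu=0.153, n_angles=72):
            """First-order Chang-style correction factors for a uniform circular attenuator.

            For each pixel inside the circle, average exp(-mu*L) over projection angles,
            where L is the chord length from the pixel to the boundary; the correction
            is the inverse of that average. mu is in 1/cm (Tc-99m in water assumed).
            """
            y, x = np.mgrid[0:ny, 0:nx]
            cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
            px = (x - cx) * pixel_cm
            py = (y - cy) * pixel_cm
            inside = px**2 + py**2 < radius_cm**2

            angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
            atten = np.zeros((ny, nx))
            for th in angles:
                ux, uy = np.cos(th), np.sin(th)
                # distance from (px, py) along (ux, uy) to the circle x^2 + y^2 = R^2
                b = px * ux + py * uy
                L = -b + np.sqrt(np.maximum(radius_cm**2 - (px**2 + py**2 - b**2), 0.0))
                atten += np.exp(-mu * L)
            atten /= n_angles

            corr = np.ones((ny, nx))
            corr[inside] = 1.0 / atten[inside]
            return corr

        # Hypothetical usage on a reconstructed slice (128x128, 0.48 cm pixels, 10.8 cm phantom radius):
        # corrected_slice = reconstructed_slice * chang_correction_map(128, 128, 10.8, 0.48)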

  1. Development of a stained cell nuclei counting system

    NASA Astrophysics Data System (ADS)

    Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori

    2011-03-01

    This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca sexta, or the Tobacco Hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with non-uniform background. Our solution adapts FRST: a computer vision algorithm which is designed to detect points of interest in circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system with fourteen digital images of the Tobacco Hornworm's nervous system collected for this study, with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and a mean error of 16.68%, which is at least forty-four percent better than the algorithm without FRST.
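
    The morphology and connected-component stage mentioned above can be illustrated with a short sketch. This is not the authors' FRST implementation; it only shows a downstream counting step applied to a symmetry-enhanced (or otherwise background-flattened) image, and the function name, threshold and minimum blob size are assumptions:

        import numpy as np
        from scipy import ndimage as ndi

        def count_nuclei(symmetry_map, threshold, min_size=20):
            """Count bright blobs in a symmetry-enhanced image.

            symmetry_map : 2-D float array, e.g. the output of a radial-symmetry transform
            threshold    : intensity threshold separating nuclei from background
            min_size     : minimum blob area (pixels) kept as a nucleus
            """
            mask = symmetry_map > threshold
            mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))  # remove specks
            labels, n = ndi.label(mask)
            sizes = ndi.sum(mask, labels, index=np.arange(1, n + 1))
            return int(np.sum(sizes >= min_size))

        # Toy usage with a synthetic image containing three Gaussian "nuclei":
        yy, xx = np.mgrid[0:100, 0:100]
        img = np.zeros((100, 100))
        for cx, cy in [(20, 30), (50, 60), (80, 20)]:
            img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 20.0)
        print(count_nuclei(img, threshold=0.5))  # -> 3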

  2. Utility of the serum C-reactive protein for detection of occult bacterial infection in children.

    PubMed

    Isaacman, Daniel J; Burke, Bonnie L

    2002-09-01

    To assess the utility of serum C-reactive protein (CRP) as a screen for occult bacterial infection in children. Febrile children ages 3 to 36 months who visited an urban children's hospital emergency department and received a complete blood cell count and blood culture as part of their evaluation were prospectively enrolled from February 2, 2000, through May 30, 2001. Informed consent was obtained for the withdrawal of an additional 1-mL aliquot of blood for use in CRP evaluation. Logistic regression and receiver operating characteristic (ROC) curves were modeled for each predictor to identify optimal test values, and were compared using likelihood ratio tests. Two hundred fifty-six patients were included in the analysis, with a median age of 15.3 months (range, 3.1-35.2 months) and a median temperature at triage of 40.0°C (range, 39.0°C-41.3°C). Twenty-nine (11.3%) cases of occult bacterial infection (OBI) were identified, including 17 cases of pneumonia, 9 cases of urinary tract infection, and 3 cases of bacteremia. The median white blood cell count in this data set was 12.9 x 10(3)/µL (range, 3.6-39.1 x 10(3)/µL), the median absolute neutrophil count (ANC) was 7.12 x 10(3)/µL (range, 0.56-28.16 x 10(3)/µL), and the median CRP level was 1.7 mg/dL (range, 0.2-43.3 mg/dL). The optimal cut-off point for CRP in this data set (4.4 mg/dL) achieved a sensitivity of 63% and a specificity of 81% for detection of OBI in this population. Comparing models using cut-off values from individual laboratory predictors (ANC, white blood cell count, and CRP) that maximized sensitivity and specificity revealed that a model using an ANC of 10.6 x 10(3)/µL (sensitivity, 69%; specificity, 79%) was the best predictive model. Adding CRP to the model insignificantly increased sensitivity to 79%, while significantly decreasing specificity to 50%. Active monitoring of emergency department blood cultures drawn during the study period from children between 3 and 36 months of age showed an overall bacteremia rate of 1.1% during this period. An ANC cut-off point of 10.6 x 10(3)/µL offers the best predictive model for detection of occult bacterial infection using a single test. The addition of CRP to ANC adds little diagnostic utility. Furthermore, the lowered incidence of occult bacteremia in our population supports a decrease in the use of diagnostic screening in this population.

  3. A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source

    NASA Astrophysics Data System (ADS)

    Spina, G.; Lantieri, M.

    2014-01-01

    In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless fs factor of the source at the position of the sample is needed. A novel method to evaluate fs for a 57Co source is proposed. The method uses the standard transmission experimental set-up and does not need further measurements beyond the ones that are mandatory in order to center the Mössbauer line and to calibrate the Mössbauer transducer. Firstly, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metal iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same live-time, setting the maximum velocity of the transducer to the same value as for the MCS spectrum. Secondly, fs is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.

  4. ON THE FERMI-GBM EVENT 0.4 s AFTER GW150914

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, J.; Yu, H.-F.; Burgess, J. M.

    In view of the recent report by Connaughton et al., we analyze continuous time-tagged event (TTE) data of the Fermi gamma-ray burst monitor (GBM) around the time of the gravitational-wave event GW150914. We find that after proper accounting for low-count statistics, the GBM transient event 0.4 s after GW150914 is likely not due to an astrophysical source, but consistent with a background fluctuation, removing the tension between the INTEGRAL/ACS non-detection and GBM. Additionally, reanalysis of other short GRBs shows that without proper statistical modeling the fluence of faint events is over-predicted, as verified for some joint GBM–ACS detections of short GRBs. We detail the statistical procedure to correct these biases. As a result, faint short GRBs, verified by ACS detections, with significances in the broadband light curve even smaller than that of the GBM–GW150914 event are recovered as proper non-zero sources, while the GBM–GW150914 event is consistent with zero fluence.

  5. Background-free beta-decay half-life measurements by in-trap decay and high-resolution MR-ToF mass analysis

    NASA Astrophysics Data System (ADS)

    Wolf, R. N.; Atanasov, D.; Blaum, K.; Kreim, S.; Lunney, D.; Manea, V.; Rosenbusch, M.; Schweikhard, L.; Welker, A.; Wienholtz, F.; Zuber, K.

    2016-06-01

    In-trap decay in ISOLTRAP's radiofrequency quadrupole (RFQ) ion beam cooler and buncher was used to determine the lifetime of short-lived nuclides. After various storage times, the remaining mother nuclides were mass separated from accompanying isobaric contaminations by the multi-reflection time-of-flight mass separator (MR-ToF MS), allowing for background-free ion counting. A feasibility study with several online measurements shows that the applications of the ISOLTRAP setup can be further extended by exploiting the high resolving power of the MR-ToF MS in combination with in-trap decay and single-ion counting.
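
    The half-life follows from fitting the surviving mother-nuclide counts against storage time with an exponential decay law. A minimal sketch with purely hypothetical numbers; the storage times, counts and starting guess are illustrative, not values from the paper:

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, n0, half_life):
            return n0 * np.exp(-np.log(2.0) * t / half_life)

        # Hypothetical data: surviving mother-ion counts after different storage times (ms)
        storage_ms = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
        counts = np.array([1000.0, 790.0, 630.0, 400.0, 160.0])
        sigma = np.sqrt(counts)  # Poisson uncertainties

        popt, pcov = curve_fit(decay, storage_ms, counts, p0=(1000.0, 150.0),
                               sigma=sigma, absolute_sigma=True)
        print("half-life = %.1f +/- %.1f ms" % (popt[1], np.sqrt(pcov[1, 1])))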

  6. Progress towards barium daughter tagging in Xe136 decay using single molecule fluorescence imaging

    NASA Astrophysics Data System (ADS)

    McDonald, Austin; NEXT Collaboration

    2017-09-01

    The existence of Majorana fermions is of great interest as it may be related to the asymmetry between matter and antimatter particles in the universe. However, the search for them has proven to be a difficult one. Neutrinoless double beta decay (NLDB) offers a possible opportunity for direct observation of a Majorana fermion. The rate for NLDB decay may be as low as 1 count/ton/year if the mass ordering is inverted. Current detector technologies have background rates between 4 and 300 counts/ton/year/ROI at the 100-kg scale, which is much larger than the universal goal of 0.1 counts/ton/year/ROI desired for ton-scale detectors. The premise of my research is to develop new detector technologies that will allow for a background-free experiment. My current work is to develop a sensor that will tag the daughter ion Ba++ from the Xe136 decay. The development of a sensor that is sensitive to single barium ion detection based on the single molecule fluorescence imaging technique is the major focus of this work. If successful, this could provide a path to a background-free experiment.

  7. Progress towards barium daughter tagging in Xe136 decay using single molecule fluorescence imaging

    NASA Astrophysics Data System (ADS)

    McDonald, Austin; Jones, Ben; Benson, Jordan; Nygren, David; NEXT Collaboration

    2017-01-01

    The existence of Majorana fermions has been predicted, and is of great interest as it may be related to the asymmetry between matter and antimatter particles in the universe. However, the search for them has proven to be a difficult one. Neutrinoless double beta decay (NLDB) offers a possible opportunity for direct observation of a Majorana fermion. The rate for NLDB decay may be as low as 1 count/ton/year. Current detector technologies have background rates between 4 and 300 counts/ton/year/ROI, which is much larger than the universal goal of 0.1 counts/ton/year/ROI desired for ton-scale detectors. The premise of my research is to develop new detector technologies that will allow for a background-free experiment. My current work is to develop a sensor that will tag the daughter ion Ba++ from the Xe136 decay. The development of a sensor that is sensitive to single barium ion detection based on the single molecule fluorescence imaging technique is the major focus of this work. If successful, this could provide a path to a background-free experiment.

  8. Atmospheric Correction and Vicarious Calibration of Oceansat-1 Ocean Color Monitor (OCM) Data in Coastal Case 2 Waters

    DTIC Science & Technology

    2012-06-08

    Earth Scan Laboratory, Louisiana State University. Raw OCM data were calibrated by converting raw counts to radiance values for the eight OCM spectral bands using the SeaSpace TeraScan™ ... Aerosol path radiance (La(λi)) is the contribution of scattering by particles similar to or larger than the wavelength of light, such as dust and pollen ...

  9. Swift J1822.3-1606: pre-outburst ROSAT limits (plus erratum)

    NASA Astrophysics Data System (ADS)

    Esposito, P.; Rea, N.; Israel, G. L.; Tiengo, A.

    2011-07-01

    We report on a pre-outburst ROSAT PSPC observation of the new SGR discovered by Swift-BAT on 2011 July 14 (Cummings et al., ATel #3488). The PSPC observation was performed on 1993 September 12 for ~6.7 ks. We find a source at R.A. (2000) = 18 22 18.1 and decl. (2000) = -16 04 26.4, with a 5σ detection significance. The count rate (corrected for the PSPC PSF, sampling dead time, and vignetting) is about 0.012 counts/s.

  10. A Prescription for List-Mode Data Processing Conventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef

    There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error values associated with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list-mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.
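
    The core issue described above is that the multiplicity histogram and the processed time tp must be built from the same list-mode time stamps so that counts and count rates stay consistent. A hedged sketch of a pulse-triggered gate analysis follows; the predelay and gate widths, the choice of tp convention and the simulated pulse train are illustrative assumptions, not the INCC v6.0 conventions:

        import numpy as np

        def multiplicity_histogram(times_us, predelay_us=4.5, gate_us=64.0, max_mult=16):
            """Build a pulse-triggered (R+A) multiplicity histogram from list-mode time stamps.

            times_us : sorted 1-D array of pulse time stamps in microseconds
            Returns  : histogram h where h[k] counts triggers with k pulses inside the gate.
            """
            t = np.asarray(times_us, dtype=float)
            lo = np.searchsorted(t, t + predelay_us, side='right')
            hi = np.searchsorted(t, t + predelay_us + gate_us, side='right')
            mult = hi - lo
            return np.bincount(np.clip(mult, 0, max_mult), minlength=max_mult + 1)

        # Simulated 5 kHz pulse train over 1 s (hypothetical), and one possible tp convention:
        times = np.sort(np.random.default_rng(1).uniform(0.0, 1.0e6, size=5000))
        h = multiplicity_histogram(times)
        t_p = times[-1] - times[0]                  # processed time in microseconds
        singles_rate = times.size / (t_p * 1e-6)    # counts per second
        print(h[:5], singles_rate)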

  11. Potential errors in body composition as estimated by whole body scintillation counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.

    Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of /sup 40/K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent /sup 40/K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting /sup 214/Bi, a progeny of radon. /sup 40/K and /sup 214/Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in /sup 40/K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from /sup 214/Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by /sup 40/K counting.

  12. Potential errors in body composition as estimated by whole body scintillation counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.

    Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of /sup 40/K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent /sup 40/K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting /sup 214/Bi, a progeny of radon. /sup 40/K and /sup 214/Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in /sup 40/K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from /sup 214/Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by /sup 40/K counting.
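
    A correction for radon progeny amounts to subtracting the 214Bi-associated contribution from the 40K counting window. A minimal sketch, in which the spill factor and all count values are hypothetical and would in practice come from a radon-progeny-only calibration measurement:

        def radon_corrected_k40(k40_counts, bi214_counts, spill_factor):
            """Subtract the 214Bi (radon-progeny) contribution from the 40K window.

            spill_factor : counts appearing in the 40K window per count recorded in a
                           214Bi window, taken from a progeny-only calibration (assumed).
            """
            return k40_counts - spill_factor * bi214_counts

        # Illustrative numbers only:
        print(radon_corrected_k40(k40_counts=5200.0, bi214_counts=800.0, spill_factor=0.35))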

  13. Effective estimation of correct platelet counts in pseudothrombocytopenia using an alternative anticoagulant based on magnesium salt

    PubMed Central

    Schuff-Werner, Peter; Steiner, Michael; Fenger, Sebastian; Gross, Hans-Jürgen; Bierlich, Alexa; Dreissiger, Katrin; Mannuß, Steffen; Siegert, Gabriele; Bachem, Maximilian; Kohlschein, Peter

    2013-01-01

    Pseudothrombocytopenia remains a challenge in the haematological laboratory. The pre-analytical problem that platelets tend to aggregate easily in vitro, giving rise to lower platelet counts, has been known since ethylenediaminetetraacetic acid (EDTA) and automated platelet counting procedures were introduced in the haematological laboratory. Different approaches to avoid the time and temperature dependent in vitro aggregation of platelets in the presence of EDTA were tested, but none of them proved optimal for routine purposes. Patients with unexpectedly low platelet counts or samples flagged for suspected aggregates were selected, and smears were examined for platelet aggregates. In these cases patients were asked to consent to the drawing of an additional sample of blood anti-coagulated with a magnesium additive. Magnesium was used at the beginning of the last century as an anticoagulant for microscopic platelet counts. Using this approach, we documented 44 patients with pseudothrombocytopenia. In all cases, platelet counts were markedly higher in samples anti-coagulated with the magnesium-containing anticoagulant when compared to EDTA-anticoagulated blood samples. We conclude that in patients with known or suspected pseudothrombocytopenia, magnesium-anticoagulant blood samples may be recommended for platelet counting. PMID:23808903

  14. Development of an automated asbestos counting software based on fluorescence microscopy.

    PubMed

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  15. Critical Configuration and Physics Measurements for Beryllium Reflected Assemblies of U(93.15)O₂ Fuel Rods (1.506-cm Pitch and 7-Tube Clusters)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, Margaret A.; Bess, John D.; Briggs, J. Blair

    2015-03-01

    Cadmium ratios were measured with enriched uranium metal foils at various locations in the assembly with the fuel tubes at the 1.506-cm spacing. They are described in the following subsections. The experiment configuration was the same as the first critical configuration described in HEU-COMP-FAST-004 (Case 1). The experimenter placed 0.75-cm-diameter × 0.010-cm-thick 93.15%-235U-enriched uranium metal foils with and without 0.051-cm-thick cadmium covers at various locations in the core and top reflector. One part of the cadmium cover was cup-shaped and contained the uranium foil. The other part was a lid that fit over the exposed side of the foil when it was in the cup-shaped section of the cover. As can be seen in the logbook, two runs were required to obtain all the measurements necessary for the cadmium ratio. The bare foil measurements within the top reflector were run first as part of the axial foil activation measurements. The results of this run are used for both the axial activation results and the cadmium ratios. Cadmium-covered foils were then placed at the same locations through the top reflector in a different run. Three pairs of bare and cadmium-covered foils were also placed through the core tank. One pair was placed at the axial center of a fuel tube 11.35 cm from the center of the core. Two pairs of foils were placed on top of fuel tubes 3.02 and 12.06 cm from the center of the core. The activation of the uranium metal foils was measured after removal from the assembly using two lead-shielded NaI scintillation detectors as follows. The NaI scintillators were carefully matched and had detection efficiencies for counting delayed-fission-product gamma rays with energies above 250 keV that agreed within 5%. In all foil activation measurements, one foil at a specific location was used as a normalizing foil to remove the effects of the decay of fission products during the counting measurements with the NaI detectors. The normalization foil was placed on one NaI scintillator and the other foil on the other NaI detector, and the activities were measured simultaneously. The activation of a particular foil was compared to that of the normalization foil by dividing the count rate for each foil by that of the normalization foil. To correct for the differing efficiencies of the two NaI detectors, the normalization foil was counted in Detector 1 simultaneously with the foil at position x in Detector 2, and then the normalization foil was counted simultaneously in Detector 2 with the foil from position x in Detector 1. The activity of the foil from position x was divided by the activity of the normalization foil counted simultaneously. This resulted in two values of the ratio, which were then averaged. This procedure essentially removed the effect of the differing efficiencies of the two NaI detectors. Detector efficiency differences of 10% resulted in errors in the measured ratios of less than 1%. The background counting rates obtained with the foils on the NaI detectors before their irradiation were subtracted from all count rates. The results of the cadmium ratio measurements are given in Table 1.3-1 and Figure 1.3-1. “No correction has been made for self shielding in the foils” (Reference 3).
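
    The detector-swap procedure described above can be summarized in a few lines: the two ratios obtained with the foils exchanged between the detectors are averaged, which cancels the efficiency difference between the two NaI detectors to first order. A sketch with hypothetical, background-subtracted count rates:

        def efficiency_free_ratio(foil_in_det2_with_norm_in_det1,
                                  foil_in_det1_with_norm_in_det2):
            """Average the two detector-swapped activity ratios.

            Each argument is (foil_count_rate, normalization_foil_count_rate) measured
            simultaneously; averaging the two ratios removes, to first order, the
            difference in efficiency between the two NaI detectors.
            """
            r1 = foil_in_det2_with_norm_in_det1[0] / foil_in_det2_with_norm_in_det1[1]
            r2 = foil_in_det1_with_norm_in_det2[0] / foil_in_det1_with_norm_in_det2[1]
            return 0.5 * (r1 + r2)

        # Hypothetical background-subtracted count rates (counts/s):
        print(efficiency_free_ratio((120.0, 210.0), (132.0, 231.0)))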

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Jeter C.; Aalseth, Craig E.; Bonicalzi, Ricco

    Age dating groundwater and seawater using 39Ar/Ar ratios is an important tool to understand water mass flow rates and mean residence time. For modern or contemporary argon, the 39Ar activity is 1.8 mBq per liter of argon. Radiation measurements at these activity levels require ultra-low-background detectors. Low-background proportional counters have been developed at Pacific Northwest National Laboratory. These detectors use traditional mixtures of argon and methane as counting gas, and the residual 39Ar from commercial argon has become a predominant source of background activity in these detectors. We demonstrated sensitivity to 39Ar by using geological or ancient argon from gas wells in place of commercial argon. The low-level counting performance of these proportional counters is then demonstrated for sensitivities to 39Ar/Ar ratios sufficient to date water masses as old as 1000 years.
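
    Once the 39Ar/Ar ratio of a sample has been measured relative to modern argon (1.8 mBq per liter of argon), the mean residence time follows from the radioactive decay law. A minimal sketch; the 269-year half-life of 39Ar is a commonly quoted literature value and is not stated in the record, and the example ratio is illustrative:

        import math

        T_HALF_AR39_YR = 269.0  # 39Ar half-life in years (literature value, assumed)

        def groundwater_age(ratio_to_modern, half_life_yr=T_HALF_AR39_YR):
            """Mean residence time from the measured 39Ar/Ar ratio relative to modern argon."""
            return -half_life_yr / math.log(2.0) * math.log(ratio_to_modern)

        # A sample whose 39Ar specific activity is 10% of the modern value:
        print(groundwater_age(0.10))  # roughly 890 years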

  17. Direct Validation of the Wall Interference Correction System of the Ames 11-Foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Boone, Alan R.

    2003-01-01

    Data from the test of a large semispan model were used to perform a direct validation of a wall interference correction system for a transonic slotted-wall wind tunnel. First, different sets of uncorrected aerodynamic coefficients were generated by physically changing the boundary condition of the test section walls. Then, wall interference corrections were computed and applied to all data points. Finally, an interpolation of the corrected aerodynamic coefficients was performed. This interpolation ensured that the corrected Mach number of a given run would be constant. Overall, the agreement between corresponding interpolated lift, drag, and pitching moment coefficient sets was very good. Buoyancy corrections were also investigated. These studies showed that the accuracy goal of one drag count may only be achieved if reliable estimates of the wall-interference-induced buoyancy correction are available during a test.

  18. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

    In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018
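
    The correction described above can be sketched as follows: sample adipose-tissue voxels as a function of radius, fit a smooth radial background trend, and add the difference between a target adipose value and that trend to every voxel. The function name, binning and polynomial order below are assumptions rather than the authors' exact implementation:

        import numpy as np

        def cupping_corrected(slice_img, adipose_mask, target_value, n_bins=50, poly_order=4):
            """Additive cupping correction using a circularly symmetric background model.

            slice_img    : 2-D front-view breast CT slice
            adipose_mask : boolean mask of voxels classified as adipose tissue
            target_value : desired (constant) adipose CT number throughout the volume
            """
            ny, nx = slice_img.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
            r = np.hypot(xx - cx, yy - cy)

            # Radial trend of the adipose signal: bin-average, then fit a low-order polynomial
            r_fat, v_fat = r[adipose_mask], slice_img[adipose_mask]
            bins = np.linspace(0.0, r_fat.max(), n_bins + 1)
            idx = np.digitize(r_fat, bins) - 1
            centers, means = [], []
            for b in range(n_bins):
                sel = idx == b
                if np.any(sel):
                    centers.append(0.5 * (bins[b] + bins[b + 1]))
                    means.append(v_fat[sel].mean())
            coeffs = np.polyfit(centers, means, poly_order)

            background = np.polyval(coeffs, r)   # estimated adipose trend everywhere
            return slice_img + (target_value - background)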

  19. Examples of Mesh and NURBS modelling for in vivo lung counting studies.

    PubMed

    Farah, Jad; Broggio, David; Franck, Didier

    2011-03-01

    Realistic calibration coefficients for in vivo counting installations are assessed using voxel phantoms and Monte Carlo calculations. However, the construction of voxel phantoms is time consuming and their flexibility is extremely limited. This paper uses Mesh and non-uniform rational B-spline (NURBS) graphical formats, of greater flexibility, to optimise the calibration of in vivo counting installations. Two studies validating the use of such phantoms, involving geometry deformation and modelling, were carried out to study the morphologic effect on lung counting efficiency. The created 3D models fitted the reference ones, with volumetric differences of <5 %. Moreover, it was found that counting efficiency varies with the inverse of the lungs' volume and that this effect dominates over that of chest wall thickness. Finally, a series of different thoracic female phantoms of various cup sizes, chest girths and internal organ volumes was created, starting from the International Commission on Radiological Protection (ICRP) adult female reference computational phantom, to give correction factors for the lung monitoring of female workers.

  20. Low Background Signal Readout Electronics for the MAJORANA DEMONSTRATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guinn, I.; Abgrall, N.; Arnquist, Isaac J.

    2015-03-18

    The Majorana Demonstrator (MJD) [1] is an array of p-type point contact (PPC) high purity germanium (HPGe) detectors intended to search for neutrinoless double beta decay (0νββ decay) in 76Ge. MJD will consist of 40 kg of detectors, 30 kg of which will be isotopically enriched to 87% 76Ge. The array will consist of 14 strings of four or five detectors placed in two separate cryostats. One of the main goals of the experiment is to demonstrate the feasibility of building a tonne-scale array of detectors to search for 0νββ decay with a much higher sensitivity. This involves achieving backgrounds in the 4 keV region of interest (ROI) around the 2039 keV Q-value of the ββ decay of less than 1 count/ROI-t-y. Because many backgrounds will not directly scale with detector mass, the specific background goal of MJD is less than 3 counts/ROI-t-y.

  1. The coincidence counting technique for orders of magnitude background reduction in data obtained with the magnetic recoil spectrometer at OMEGA and the NIF.

    PubMed

    Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S

    2011-07-01

    A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which areal densities in both warm-capsule and cryogenic-DT implosions have been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the required level for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.

  2. Improved detection of radioactive material using a series of measurements

    NASA Astrophysics Data System (ADS)

    Mann, Jenelle

    The goal of this project is to develop improved algorithms for detection of radioactive sources that have low signal compared to background. The detection of low-signal sources is of interest in national security applications where the source may have weak ionizing radiation emissions, is heavily shielded, or the counting time is short (such as portal monitoring). Traditionally, to distinguish signal from background, the decision threshold (y*) is calculated by taking a long background count and limiting the false positive error (alpha error) to 5%. Some problems with this method include: the background is constantly changing due to natural environmental fluctuations, and large amounts of data taken as the detector continuously scans are not utilized. Rather than looking at a single measurement, this work investigates looking at a series of N measurements and develops an appropriate criterion for exceeding the single-measurement decision threshold n times in a series of N. This methodology is investigated for rectangular, triangular, sinusoidal, Poisson, and Gaussian distributions.
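
    For the series-of-measurements approach described above, a natural decision rule under a background-only hypothesis is binomial: require at least n exceedances of the single-measurement threshold in N trials such that the overall false-positive probability stays below the chosen level. A minimal sketch, with illustrative numbers:

        from scipy.stats import binom

        def n_of_N_threshold(N, p_single, alpha=0.05):
            """Smallest n such that, under background only, the probability of seeing the
            single-measurement threshold exceeded at least n times in N trials is <= alpha.

            p_single : false-positive probability of one measurement (e.g., 0.05)
            """
            for n in range(N + 1):
                if binom.sf(n - 1, N, p_single) <= alpha:  # P(X >= n)
                    return n
            return N + 1

        # Example: with 20 scans and a 5% per-scan false-positive rate, how many
        # exceedances are needed before declaring a source at the 5% overall level?
        print(n_of_N_threshold(N=20, p_single=0.05))  # -> 4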

  3. Rigid-body transformation of list-mode projection data for respiratory motion correction in cardiac PET.

    PubMed

    Livieratos, L; Stegger, L; Bloomfield, P M; Schafers, K; Bailey, D L; Camici, P G

    2005-07-21

    High-resolution cardiac PET imaging with emphasis on quantification would benefit from eliminating the problem of respiratory movement during data acquisition. Respiratory gating on the basis of list-mode data has been employed previously as one approach to reduce motion effects. However, it results in poor count statistics with degradation of image quality. This work reports on the implementation of a technique to correct for respiratory motion in the area of the heart at no extra cost for count statistics and with the potential to maintain ECG gating, based on rigid-body transformations on list-mode data event-by-event. A motion-corrected data set is obtained by assigning, after pre-correction for detector efficiency and photon attenuation, individual lines-of-response to new detector pairs with consideration of respiratory motion. Parameters of respiratory motion are obtained from a series of gated image sets by means of image registration. Respiration is recorded simultaneously with the list-mode data using an inductive respiration monitor with an elasticized belt at chest level. The accuracy of the technique was assessed with point-source data showing a good correlation between measured and true transformations. The technique was applied on phantom data with simulated respiratory motion, showing successful recovery of tracer distribution and contrast on the motion-corrected images, and on patient data with C15O and 18FDG. Quantitative assessment of preliminary C15O patient data showed improvement in the recovery coefficient at the centre of the left ventricle.
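
    The event-by-event correction described above amounts to applying a rigid-body motion estimate to both end points of each line of response before assigning it to a new detector pair. A minimal sketch of the geometric step only; the re-binning to physical detector pairs and the efficiency/attenuation pre-corrections are not shown, and the coordinates are hypothetical:

        import numpy as np

        def transform_lor(p1, p2, rotation, translation):
            """Apply a rigid-body (rotation + translation) motion estimate to both
            end points of a line of response, event by event.

            p1, p2      : (N, 3) arrays of LOR end-point coordinates
            rotation    : (3, 3) rotation matrix for the current respiratory gate
            translation : (3,) translation vector
            """
            R = np.asarray(rotation)
            t = np.asarray(translation)
            return p1 @ R.T + t, p2 @ R.T + t

        # Toy example: shift all events 8 mm cranio-caudally with no rotation.
        p1 = np.array([[0.0, 100.0, 5.0]])
        p2 = np.array([[0.0, -100.0, 7.0]])
        print(transform_lor(p1, p2, np.eye(3), np.array([0.0, 0.0, 8.0])))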

  4. High-Performance X-ray Detection in a New Analytical Electron Microscope

    NASA Technical Reports Server (NTRS)

    Lyman, C. E.; Goldstein, J. I.; Williams, D. B.; Ackland, D. W.; vonHarrach, S.; Nicholls, A. W.; Statham, P. J.

    1994-01-01

    X-ray detection by energy-dispersive spectrometry in the analytical electron microscope (AEM) is often limited by low collected X-ray intensity (P), modest peak-to-background (P/B) ratios, and limitations on total counting time (tau) due to specimen drift and contamination. A new AEM has been designed with maximization of P, P/B, and tau as the primary considerations. Maximization of P has been accomplished by employing a field-emission electron gun, X-ray detectors with high collection angles, high-speed beam blanking to allow only one photon into the detector at a time, and simultaneous collection from two detectors. P/B has been maximized by reducing extraneous background signals generated at the specimen holder, the polepieces, and the detector collimator. The maximum practical tau has been increased by reducing specimen contamination and employing electronic drift correction. Performance improvements have been measured using the NIST standard Cr thin film. The 0.3 steradian solid angle of X-ray collection is the highest value available. The beam blanking scheme for X-ray detection provides 3-4 times greater throughput of X-rays at high count rates into a recorded spectrum than normal systems employing pulse-pileup rejection circuits. Simultaneous X-ray collection from two detectors allows the highest X-ray intensity yet recorded to be collected from the NIST Cr thin film. The measured P/B of 6300 is the highest level recorded for an AEM. In addition to collected X-ray intensity (cps/nA) and P/B measured on the standard Cr film, the product of these can be used as a figure-of-merit to evaluate instruments. Estimated minimum mass fraction (MMF) for Cr measured on the standard NIST Cr thin film is also proposed as a figure-of-merit for comparing X-ray detection in AEMs. Determinations here of the MMF of Cr detectable show at least a threefold improvement over previous instruments.

  5. Atmospheric deposition of {sup 7}Be by rain events in central Argentina

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayub, J. Juri; Velasco, H.; Rizzotto, M.

    2008-08-07

    Beryllium-7 is a natural radionuclide that enters into ecosystems through wet and dry deposition and has numerous environmental applications in terrestrial and aquatic ecosystems. Atmospheric wet deposition of {sup 7}Be was measured in central Argentina. Rain traps were installed (1 m above ground) and individual rain events were collected. Rain samples were filtered and analyzed by gamma spectrometry. The gamma counting was undertaken using a 40%-efficient p-type coaxial intrinsic high-purity natural germanium crystal built by Princeton Gamma-Tech. The cryostat was made from electroformed high-purity copper using ultralow-background technology. The detector was surrounded by 50 cm of lead bricks to provide shielding against radioactive background. The detector gamma efficiency was determined using a water solution with known amounts of chemical compounds containing long-lived naturally occurring radioisotopes, {sup 176}Lu, {sup 138}La and {sup 40}K. Due to the geometry of the sample and its position close to the detector, the efficiency points from the {sup 176}Lu decay had to be corrected for summing effects. The measured samples were 400 ml in size and were counted during one day. The {sup 7}Be detection limit for the present measurements was as low as 0.2 Bq l{sup -1}. Thirty-two rain events were sampled and analyzed (November 2006-May 2007). The measured values show that the events corresponding to low rainfall (<20 mm) are characterized by significantly higher activity concentrations (Bq l{sup -1}). The activity concentration of each individual event varied from 0.8 to 3.5 Bq l{sup -1}, while precipitation varied between 4 and 70 mm. The integrated activity per event of {sup 7}Be was fitted with a model that takes into account the precipitation amount and the elapsed time between two rain events. The integrated activities calculated with this model show good agreement with the experimental values.

  6. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  7. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose: Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods: Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results: In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions: While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
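
    The polynomial schemes compared above fit a smooth spatial model to the phase (velocity) measured in static tissue and subtract it everywhere. A minimal sketch of a whole-image polynomial correction; the masking strategy, coordinate normalization and polynomial order are assumptions rather than the exact LPC/WBPC implementations used in the study:

        import numpy as np

        def polynomial_phase_correction(phase_img, static_mask, order=2):
            """Fit a 2-D polynomial to phase values in static tissue and subtract it
            from the whole image.

            phase_img   : 2-D array of phase (or velocity) values
            static_mask : boolean mask of pixels known to be stationary tissue
            """
            ny, nx = phase_img.shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            x = (xx - nx / 2.0) / nx  # normalized coordinates for numerical conditioning
            y = (yy - ny / 2.0) / ny

            terms = [x**i * y**j for i in range(order + 1)
                                 for j in range(order + 1 - i)]
            A = np.stack([t[static_mask] for t in terms], axis=1)
            coeffs, *_ = np.linalg.lstsq(A, phase_img[static_mask], rcond=None)

            background = sum(c * t for c, t in zip(coeffs, terms))
            return phase_img - background

        # Hypothetical usage: corrected = polynomial_phase_correction(phase_img, static_mask)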

  8. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    PubMed

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  9. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    NASA Astrophysics Data System (ADS)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP), were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS-PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with higher SpCs.
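
    The two normalizations that performed well above are straightforward to compute from a protein-by-run spectral count matrix. A minimal sketch, with a toy three-protein, two-run matrix and hypothetical protein lengths:

        import numpy as np

        def tspc_normalize(spc):
            """Total spectral count normalization: scale each run so its total SpC
            matches the average total across runs.

            spc : (n_proteins, n_runs) array of raw spectral counts
            """
            totals = spc.sum(axis=0)
            return spc * (totals.mean() / totals)

        def nsaf(spc, lengths):
            """Normalized spectral abundance factor: (SpC/L) / sum over proteins of (SpC/L)."""
            saf = spc / lengths[:, None]
            return saf / saf.sum(axis=0)

        spc = np.array([[50.0, 40.0], [10.0, 14.0], [200.0, 260.0]])  # 3 proteins, 2 runs
        lengths = np.array([450.0, 120.0, 900.0])                     # protein lengths (residues)
        print(tspc_normalize(spc))
        print(nsaf(spc, lengths))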

  10. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
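
    For orientation, the snippet below shows the live-time idea in its simplest textbook form (non-extending dead time, homogeneous Poisson process); the general time-stamped formulae of the paper are more involved, so treat this only as a sketch with assumed values for the rate and dead time.

```python
import numpy as np

def live_timed_rate(n_recorded, total_time, tau):
    """Non-extending dead-time correction: every recorded event blocks the
    system for tau seconds, so the live time is the total time minus n*tau."""
    live_time = total_time - n_recorded * tau
    rate = n_recorded / live_time                 # dead-time-corrected rate
    sigma = np.sqrt(n_recorded) / live_time       # Poisson-counting uncertainty
    return rate, sigma

# Simulated pulse train with assumed true rate, duration and dead time
rng = np.random.default_rng(0)
true_rate, T, tau = 5.0e4, 10.0, 2.0e-6
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, size=int(2 * true_rate * T)))
arrivals = arrivals[arrivals < T]

kept = [arrivals[0]]                              # apply the non-extending dead time
for t in arrivals[1:]:
    if t - kept[-1] >= tau:
        kept.append(t)

print(live_timed_rate(len(kept), T, tau))         # should recover roughly 5.0e4 s^-1
```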

  11. Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs

    PubMed Central

    Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.

    2010-01-01

    Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
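
    A minimal sketch of the same idea using OpenCV's projective (homography) tools is given below; the registration coordinates, file names, and image size are placeholders, and nothing here reproduces the authors' specific calibration targets.

```python
import numpy as np
import cv2  # OpenCV

# Registration points: where known features (e.g. from a patterned white-light
# image and atomic emission lines) appear on the raw detector frame, and where
# they should fall on an ideal, distortion-free grid. Coordinates are made up.
src = np.float32([[12, 11], [500, 14], [8, 250], [505, 247]])
dst = np.float32([[10, 10], [502, 10], [10, 248], [502, 248]])

H, _ = cv2.findHomography(src, dst)                 # 3x3 projective transform
raw = cv2.imread("raw_ccd_frame.png", cv2.IMREAD_GRAYSCALE)
corrected = cv2.warpPerspective(raw, H, (raw.shape[1], raw.shape[0]))
cv2.imwrite("corrected_frame.png", corrected)
```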

  12. Low Background Counting at LBNL

    DOE PAGES

    Smith, A. R.; Thomas, K. J.; Norman, E. B.; ...

    2015-03-24

    The Low Background Facility (LBF) at Lawrence Berkeley National Laboratory in Berkeley, California provides low background gamma spectroscopy services to a wide array of experiments and projects. The analysis of samples takes place within two unique facilities; locally within a carefully-constructed, low background cave and remotely at an underground location that historically has operated underground in Oroville, CA, but has recently been relocated to the Sanford Underground Research Facility (SURF) in Lead, SD. These facilities provide a variety of gamma spectroscopy services to low background experiments primarily in the form of passive material screening for primordial radioisotopes (U, Th, K) or common cosmogenic/anthropogenic products, as well as active screening via Neutron Activation Analysis for specific applications. The LBF also provides hosting services for general R&D testing in low background environments on the surface or underground for background testing of detector systems or similar prototyping. A general overview of the facilities, services, and sensitivities is presented. Recent activities and upgrades will also be presented, such as the completion of a 3π anticoincidence shield at the surface station and environmental monitoring of Fukushima fallout. The LBF is open to any users for counting services or collaboration on a wide variety of experiments and projects.

  13. Reducing the Child Poverty Rate. KIDS COUNT Indicator Brief

    ERIC Educational Resources Information Center

    Shore, Rima; Shore, Barbara

    2009-01-01

    In 2007, nearly one in five or 18 percent of children in the U.S. lived in poverty (KIDS COUNT Data Center, 2009). Many of these children come from minority backgrounds. African American (35 percent), American Indian (33 percent) and Latino (27 percent) children are more likely to live in poverty than their white (11 percent) and Asian (12…

  14. Cosmic-ray effects on diffuse gamma-ray measurements.

    NASA Technical Reports Server (NTRS)

    Fishman, G. J.

    1972-01-01

    Evaluation of calculations and experimental evidence from 600-MeV proton irradiation indicating that cosmic-ray-induced radioactivity in detectors used to measure the diffuse gamma-ray background produces a significant counting rate in the energy region around 1 MeV. It is concluded that these counts may be responsible for the observed flattening of the diffuse photon spectrum at this energy.

  15. Retrospective determination of the contamination in the HML's counting chambers.

    PubMed

    Kramer, Gary H; Hauck, Barry; Capello, Kevin; Phan, Quoc

    2008-09-01

    The original documentation surrounding the purchase of the Human Monitoring Laboratory's (HML) counting chambers clearly showed that the steel contained low levels of radioactivity, presumably as a result of A-bomb fallout or perhaps due to the inadvertent mixing of radioactive sources with scrap steel. Monte Carlo simulations have been combined with experimental measurements to estimate the level of contamination in the steel of the HML's whole body counting chamber. A 24-h empty chamber background count showed the presence of 137Cs and 60Co. The estimated activity of 137Cs in the 51 tons of steel was 2.7 kBq in 2007 (51.3 microBq g(-1) steel) which would have been 8 kBq at the time of manufacture. The 60Co that was found in the background spectrum is postulated to be contained in the bed-frame. The estimated amount in 2007 was 5 Bq and its origin is likely to be contaminated scrap metal entering the steel production cycle sometime in the past. The estimated activities are 10 to 25 times higher than the estimated minimum detectable activity for this measurement. These amounts have no impact on the usefulness of the whole body counter.

  16. Relativistic Corrections to the Sunyaev-Zeldovich Effect for Clusters of Galaxies. III. Polarization Effect

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu

    2000-04-01

    We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.

  17. Spatial variation of ultrafine particles and black carbon in two cities: results from a short-term measurement campaign.

    PubMed

    Klompmaker, Jochem O; Montagne, Denise R; Meliefste, Kees; Hoek, Gerard; Brunekreef, Bert

    2015-03-01

    Recently, short-term monitoring campaigns have been carried out to investigate the spatial variation of air pollutants within cities. Typically, such campaigns are based on short-term measurements at relatively large numbers of locations. It is largely unknown how well these studies capture the spatial variation of long term average concentrations. The aim of this study was to evaluate the within-site temporal and between-site spatial variation of the concentration of ultrafine particles (UFPs) and black carbon (BC) in a short-term monitoring campaign. In Amsterdam and Rotterdam (the Netherlands) measurements of number counts of particles larger than 10 nm as a surrogate for UFP and BC were performed at 80 sites per city. Each site was measured in three different seasons of 2013 (winter, spring, summer). Sites were selected from busy urban streets, urban background, regional background and near highways, waterways and green areas, to obtain sufficient spatial contrast. Continuous measurements were performed for 30 min per site between 9 and 16 h to avoid traffic spikes of the rush hour. Concentrations were simultaneously measured at a reference site to correct for temporal variation. We calculated within- and between-site variance components reflecting temporal and spatial variations. Variance ratios were compared with previous campaigns with longer sampling durations per sample (24 h to 14 days). The within-site variance was 2.17 and 2.44 times higher than the between-site variance for UFP and BC, respectively. In two previous studies based upon longer sampling duration much smaller variance ratios were found (0.31 and 0.09 for UFP and BC). Correction for temporal variation from a reference site was less effective for the short-term monitoring campaign compared to the campaigns with longer duration. Concentrations of BC and UFP were on average 1.6 and 1.5 times higher at urban street compared to urban background sites. No significant differences between the other site types and urban background were found. The high within-site to between-site concentration variance ratios may result in the loss of precision and low explained variance when average concentrations from short-term campaigns are used to develop land use regression models. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Review of approaches to the recording of background lesions in toxicologic pathology studies in rats.

    PubMed

    McInnes, E F; Scudamore, C L

    2014-08-17

    Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, L.G.; Norman, P.I.; Leadbeater, T.W.

    Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
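
    The core of such a simulation is small; the sketch below generates a Poisson pulse train and filters it with the two ideal dead-time models named above, using an arbitrary rate and dead-time parameter. It illustrates the technique only and is not the authors' code.

```python
import numpy as np

def apply_dead_time(times, tau, paralyzable):
    """Filter an ordered pulse train with dead time tau. Non-paralyzable:
    only *recorded* pulses start a dead window; paralyzable: every arrival
    (recorded or lost) extends the dead window."""
    recorded, last = [], -np.inf
    for t in times:
        live = (t - last) >= tau
        if live:
            recorded.append(t)
        if paralyzable or live:
            last = t
    return np.array(recorded)

rng = np.random.default_rng(1)
rate, T, tau = 2.0e5, 1.0, 1.0e-6                  # assumed rate, duration, dead time
arrivals = np.cumsum(rng.exponential(1.0 / rate, size=int(1.5 * rate * T)))
arrivals = arrivals[arrivals < T]

for model in (False, True):
    observed = len(apply_dead_time(arrivals, tau, model)) / T
    print("paralyzable" if model else "non-paralyzable", round(observed))
# Expected observed rates: rate/(1 + rate*tau) ~ 1.67e5 cps (non-paralyzable)
# and rate*exp(-rate*tau) ~ 1.64e5 cps (paralyzable)
```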

  20. Characteristics of Congenital Hepatic Fibrosis in a Large Cohort of Patients With Autosomal Recessive Polycystic Kidney Disease

    PubMed Central

    Gunay–Aygun, Meral; Font–Montgomery, Esperanza; Lukose, Linda; Gerstein, Maya Tuchman; Piwnica–Worms, Katie; Choyke, Peter; Daryanani, Kailash T.; Turkbey, Baris; Fischer, Roxanne; Bernardini, Isa; Sincan, Murat; Zhao, Xiongce; Sandler, Netanya G.; Roque, Annelys; Douek, Daniel C.; Graf, Jennifer; Huizing, Marjan; Bryant, Joy C.; Mohan, Parvathi; Gahl, William A.; Heller, Theo

    2013-01-01

    BACKGROUND & AIMS Autosomal recessive polycystic kidney disease (ARPKD), the most common ciliopathy of childhood, is characterized by congenital hepatic fibrosis and progressive cystic degeneration of kidneys. We aimed to describe congenital hepatic fibrosis in patients with ARPKD, confirmed by detection of mutations in PKHD1. METHODS Patients with ARPKD and congenital hepatic fibrosis were evaluated at the National Institutes of Health from 2003 to 2009. We analyzed clinical, molecular, and imaging data from 73 patients (age, 1–56 years; average, 12.7 ± 13.1 years) with kidney and liver involvement (based on clinical, imaging, or biopsy analyses) and mutations in PKHD1. RESULTS Initial symptoms were liver related in 26% of patients, and others presented with kidney disease. One patient underwent liver and kidney transplantation, and 10 others received kidney transplants. Four presented with cholangitis and one with variceal bleeding. Sixty-nine percent of patients had enlarged left lobes on magnetic resonance imaging, 92% had increased liver echogenicity on ultrasonography, and 65% had splenomegaly. Splenomegaly started early in life; 60% of children younger than 5 years had enlarged spleens. Spleen volume had an inverse correlation with platelet count and prothrombin time but not with serum albumin level. Platelet count was the best predictor of spleen volume (area under the curve of 0.88905), and spleen length corrected for patient’s height correlated inversely with platelet count (R2 = 0.42, P < .0001). Spleen volume did not correlate with renal function or type of PKHD1 mutation. Twenty-two of 31 patients who underwent endoscopy were found to have varices. Five had variceal bleeding, and 2 had portosystemic shunts. Forty percent had Caroli syndrome, and 30% had an isolated dilated common bile duct. CONCLUSIONS Platelet count is the best predictor of the severity of portal hypertension, which has early onset but is underdiagnosed in patients with ARPKD. Seventy percent of patients with ARPKD have biliary abnormalities. Kidney and liver disease are independent, and variability in severity is not explainable by type of PKHD1 mutation. PMID:23041322

  1. Counting in Lattices: Combinatorial Problems from Statistical Mechanics.

    NASA Astrophysics Data System (ADS)

    Randall, Dana Jill

    In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self -avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of California dissertation year fellowship and Esprit working group "RAND". Part of this work was done while visiting ICSI and the University of Edinburgh.

  2. Automated vehicle counting using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand. However, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed via the use of a camera at the count site recording video of the traffic, with counting being performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure would utilize a Raspberry Pi micro-computer to detect when a car is in a lane, and generate an accurate count of vehicle movements. The method utilized in this paper would use background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids fatigue issues that are encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
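
    As a rough sketch of the background-subtraction stage only (the machine-learning classification step described above is omitted), the fragment below flags frames in which foreground pixels exceed an area threshold and counts rising edges; the video file name and threshold are assumptions.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")                       # placeholder video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)
count, vehicle_present = 0, False

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                          # foreground mask
    mask = cv2.medianBlur(mask, 5)                          # suppress speckle noise
    in_lane = cv2.countNonZero(mask) > 5000                 # assumed area threshold
    if in_lane and not vehicle_present:
        count += 1                                          # rising edge = new vehicle
    vehicle_present = in_lane

cap.release()
print("vehicles counted:", count)
```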

  3. An algorithm for determining the rotation count of pulsars

    NASA Astrophysics Data System (ADS)

    Freire, Paulo C. C.; Ridolfi, Alessandro

    2018-06-01

    We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar; an essential step for the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotational count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint Millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.

  4. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  5. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed in 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest size objects (˜2 cm diameter) showed ˜15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and object boundary obtained from emission data significantly reduces this error in non-lung regions (˜1% for smallest size and ˜6% for largest size). In lungs, emissions values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the uses of uniform or actual attenuation map (e.g., only ˜0.5% for largest size in PET studies). The scatter correction was not significant for smaller size objects, but became increasingly important for larger sizes objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ˜ 4 cm, scatter correction is not required even in lung regions. 
For larger sizes, if accurate quantitation is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
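
    To make the uniform-attenuation idea concrete, the sketch below computes PET attenuation correction factors for a uniform circular object whose boundary (radius) is known: for a line of response at offset s, the photon pair traverses a chord of length 2*sqrt(R^2 - s^2). The attenuation coefficient and radius are assumed values, not taken from the study.

```python
import numpy as np

mu = 0.096                      # 1/cm, roughly water at 511 keV (assumed)
R = 1.0                         # cm, radius of a ~2 cm diameter object (assumed)
s = np.linspace(-R, R, 65)      # radial offsets of the sinogram bins

chord = 2.0 * np.sqrt(np.clip(R**2 - s**2, 0.0, None))
acf = np.exp(mu * chord)        # multiply measured coincidences by this factor

print(round(acf.max(), 3))      # ~1.21, i.e. ~17% of true counts lost at the centre
```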

  6. Detection of bremsstrahlung radiation of 90Sr-90Y for emergency lung counting.

    PubMed

    Ho, A; Hakmana Witharana, S S; Jonkmans, G; Li, L; Surette, R A; Dubeau, J; Dai, X

    2012-09-01

    This study explores the possibility of developing a field-deployable (90)Sr detector for rapid lung counting in emergency situations. The detection of beta-emitters (90)Sr and its daughter (90)Y inside the human lung via bremsstrahlung radiation was performed using a 3″ × 3″ NaI(Tl) crystal detector and a polyethylene-encapsulated source to emulate human lung tissue. The simulation results show that this method is a viable technique for detecting (90)Sr with a minimum detectable activity (MDA) of 1.07 × 10(4) Bq, using a realistic dual-shielded detector system in a 0.25-µGy h(-1) background field for a 100-s scan. The MDA is sufficiently sensitive to meet the requirement for emergency lung counting of Type S (90)Sr intake. The experimental data were verified using Monte Carlo calculations, including an estimate for internal bremsstrahlung, and an optimisation of the detector geometry was performed. Optimisations in background reduction techniques and in the electronic acquisition systems are suggested.

  7. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.

  8. Gross beta determination in drinking water using scintillating fiber array detector.

    PubMed

    Lv, Wen-Hui; Yi, Hong-Chang; Liu, Tong-Qing; Zeng, Zhi; Li, Jun-Li; Zhang, Hui; Ma, Hao

    2018-04-04

    A scintillating fiber array detector for gross beta counting is developed to monitor the real-time radioactivity in drinking water. The detector, placed in a stainless-steel tank, consists of 1096 scintillating fibers, both sides of which are connected to a photomultiplier tube. The detector parameters, including working voltage, background counting rate and stability, are tested, and the detection efficiency is calibrated using standard potassium chloride solution. Water samples are measured with the detector and the results are consistent with those obtained by the evaporation method. The background counting rate of the detector is 38.131 ± 0.005 cps, and the detection efficiency for β particles is 0.37 ± 0.01 cps/(Bq/l). The MDAC of this system can be less than 1.0 Bq/l for β particles in 120 min without pre-concentration. Copyright © 2018 Elsevier Ltd. All rights reserved.
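
    A Currie-style calculation roughly reproduces the quoted sensitivity from the numbers in the abstract; the formula below is the standard approximation, not necessarily the authors' exact procedure.

```python
from math import sqrt

background_cps = 38.131           # background counting rate (cps), from the abstract
efficiency = 0.37                 # cps per (Bq/l), from the abstract
t = 120 * 60                      # counting time in seconds

background_counts = background_cps * t
ld = 2.71 + 4.65 * sqrt(background_counts)    # Currie detection limit, in counts
mdac = ld / (efficiency * t)                  # minimum detectable activity concentration
print(round(mdac, 2))                         # ~0.9 Bq/l, consistent with "< 1.0 Bq/l"
```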

  9. White blood cell subsets are associated with carotid intima-media thickness and pulse wave velocity in an older Chinese population: the Guangzhou Biobank Cohort Study.

    PubMed

    Phillips, A C; Jiang, C Q; Thomas, G N; Lin, J M; Yue, X J; Cheng, K K; Jin, Y L; Zhang, W S; Lam, T H

    2012-08-01

    Cross-sectional associations between white blood cell (WBC) count, lymphocyte and granulocyte numbers, and carotid intima-media thickness (IMT) and brachial-ankle pulse wave velocity (PWV) were examined in a novel older Chinese community sample. A total of 817 men and 760 women from a sub-study of the Guangzhou Biobank Cohort Study had a full blood count measured by an automated hematology analyzer, carotid IMT by B-mode ultrasonography and brachial-ankle PWV by a non-invasive automatic waveform analyzer. Following adjustment for confounders, WBC count (β=0.07, P<0.001) and granulocyte (β=0.07, P<0.001) number were significantly positively related to PWV, but not lymphocyte number. Similarly, WBC count (β=0.08, P=0.03), lymphocyte (β=0.08, P=0.002) and granulocyte (β=0.03, P=0.04) number were significantly positively associated with carotid IMT, but only the association with lymphocyte count survived correction for other cardiovascular risk factors. In conclusion, higher WBC, particularly lymphocyte and granulocyte, count could be used, respectively, as markers of cardiovascular disease risk, measured through indicators of atherosclerosis and arterial stiffness. The associations for WBC count previously observed by others were likely driven by higher granulocytes; an index of systemic inflammation.

  10. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.
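
    As a baseline for comparison with such correlation tests, the fragment below shows the elementary single-channel Poisson test: given a well-characterized background rate, it computes the chance probability of the gross counts registered while the carrier passes. The rates, window, and alarm budget are invented numbers, and this is not the disclosed multi-channel statistic.

```python
from scipy.stats import poisson

background_rate = 120.0      # background counts per second (assumed known)
window = 2.0                 # seconds during which the source carrier is in view
observed = 290               # gross counts registered in that window

expected = background_rate * window
p_value = poisson.sf(observed - 1, expected)   # P(N >= observed | background only)
alarm = p_value < 1e-3                         # assumed false-alarm budget
print(p_value, alarm)
```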

  11. A new approach to counting measurements: Addressing the problems with ISO-11929

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow-up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non constancy of background, including possible time correlation of background.

  12. A new approach to counting measurements: Addressing the problems with ISO-11929

    DOE PAGES

    Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie

    2017-12-23

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow-up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non constancy of background, including possible time correlation of background.

  13. A new approach to counting measurements: Addressing the problems with ISO-11929

    NASA Astrophysics Data System (ADS)

    Klumpp, John; Miller, Guthrie; Poudel, Deepesh

    2018-06-01

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength" that depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow-up on unusually high samples using an "action threshold" on the measurement strength which is similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize non constancy of background, including possible time correlation of background.
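
    The paper's measurement-strength formula is not reproduced here, but the posterior-odds bookkeeping it plugs into can be sketched with a plain Poisson Bayes factor; all numbers below, and the fixed source contribution in the alternative model, are illustrative assumptions rather than the authors' construction.

```python
from scipy.stats import poisson

gross_counts = 18            # counts observed for the sample (assumed)
background_mean = 9.0        # expected background counts (assumed well characterized)
source_mean = 12.0           # assumed mean contribution from a genuinely active sample

bayes_factor = (poisson.pmf(gross_counts, background_mean + source_mean)
                / poisson.pmf(gross_counts, background_mean))
prior_odds = 1 / 99          # e.g. one sample in a hundred is expected to be active
posterior_odds = bayes_factor * prior_odds
print(round(bayes_factor, 2), round(posterior_odds, 3))
```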

  14. Low-background gamma-ray spectrometry for the international monitoring system

    DOE PAGES

    Greenwood, L. R.; Cantaloub, M. G.; Burnett, J. L.; ...

    2016-12-28

    PNNL has developed two low-background gamma-ray spectrometers in a new shallow underground laboratory, thereby significantly improving its ability to detect low levels of gamma-ray emitting fission or activation products in airborne particulate in samples from the IMS (International Monitoring System). Furthermore, the combination of cosmic veto panels, dry nitrogen gas to reduce radon and low background shielding results in a reduction of the background count rate by about a factor of 100 compared to detectors operating above ground at our laboratory.

  15. Effects of Two Application Methods of Plantaricin BM-1 on Control of Listeria monocytogenes and Background Spoilage Bacteria in Sliced Vacuum-Packaged Cooked Ham Stored at 4°C.

    PubMed

    Zhou, Huimin; Xie, Yuanhong; Liu, Hui; Jin, Junhua; Duan, Huixia; Zhang, Hongxing

    2015-10-01

    Two application methods were used to investigate the effect of plantaricin BM-1 on the control of Listeria monocytogenes and background spoilage bacteria in sliced vacuum-packaged cooked ham without the addition of any chemical preservatives, including sodium nitrite, during 35 days of storage at 4°C. Regardless of the application method, plantaricin BM-1 treatment (320, 640, or 1,280 arbitrary units [AU]/g of sliced cooked ham) significantly (P < 0.05) reduced the survival of L. monocytogenes (inoculated at 4 log CFU/g of sliced ham) compared with its survival in the control during the first 21 days of storage at 4°C. The inhibitory effect of plantaricin applied to the surface of the ham was significantly better than the same concentration of plantaricin incorporated into the cooked ham (P < 0.0001) during storage. Even 320 AU/g plantaricin applied to the surface exhibited greater inhibition of L. monocytogenes than 1,280 AU/g plantaricin incorporated into the cooked ham on days 1, 14, and 28. A level of 1,280 AU/g plantaricin applied to the surface of the ham reduced L. monocytogenes counts to below the detection limit from the 1st to the 21st day of storage at 4°C. Afterwards, L. monocytogenes was able to regrow, and the viable counts of L. monocytogenes at the end of storage reached 2.76 log CFU/g (6.11 log CFU/g lower than in the control). In the control ham, the counts of background spoilage bacteria increased gradually and surpassed the microbiological spoilage limitation level on the 21st day of storage. However, plantaricin BM-1 treatment significantly (P < 0.05) reduced the survival of background spoilage bacteria in ham compared with their survival in the control from day 21 to 35 of storage at 4°C. A level of 1,280 AU/g plantaricin incorporated into cooked ham was the most effective, reducing the count of background spoilage bacteria from an initial 2.0 log CFU/g to 1.5 log CFU/g on day 7. This was then maintained for another 14 days and finally increased to 2.76 log CFU/g at the end of the storage at 4°C (2.85 log CFU/g lower than in the control). In conclusion, plantaricin BM-1 application inhibited the growth of L. monocytogenes and background spoilage bacteria in cooked ham during storage at 4°C and could be used as an antimicrobial additive for meat preservation.

  16. Neonatal nucleated red blood cells in infants of overweight and obese mothers.

    PubMed

    Sheffer-Mimouni, Galit; Mimouni, Francis B; Dollberg, Shaul; Mandel, Dror; Deutsch, Varda; Littner, Yoav

    2007-06-01

    The perinatal outcome of the infant of obese mother is adversely affected and in theory, may involve fetal hypoxia. We hypothesized that an index of fetal hypoxia, the neonatal nucleated red blood cell (NRBC) count, is elevated in infants of overweight and obese mothers. Absolute NRBC counts taken during the first 12 hours of life in 41 infants of overweight and obese mothers were compared to 28 controls. Maternal body mass index and infant birthweight were significantly higher in the overweight and obese group (P < 0.01). Hematocrit, corrected white blood cell and lymphocyte counts did not differ between groups. The absolute NRBC count was higher (P = 0.01), and the platelet count lower (P = 0.05) in infants of overweight and obese mothers than in controls. In stepwise regression analysis, the absolute NRBC count in infants of overweight and obese mothers remained significantly higher even after taking into account birthweight or gestational age and Apgar scores (P < 0.02). Infants of overweight and obese mothers have increased nucleated red blood cells at birth compared with controls. We speculate that even apparently healthy fetuses of overweight and obese mothers are exposed to a subtle hypoxemic environment.

  17. The Herschel-ATLAS: Extragalatic Number Counts from 250 to 500 Microns

    NASA Technical Reports Server (NTRS)

    Clements, D. L.; Rigby, E.; Maddox, S.; Dunne, L.; Mortier, A.; Amblard, A.; Auld, R.; Bonfield, D.; Cooray, A.; Dariush, A.; hide

    2010-01-01

    Aims. The Herschel-ATLAS survey (H-ATLAS) will be the largest area survey to be undertaken by the Herschel Space Observatory. It will cover 550 sq. deg. of extragalactic sky at wavelengths of 100, 160, 250, 350 and 500 microns when completed, reaching flux limits (5σ) from 32 to 145 mJy. We here present galaxy number counts obtained for SPIRE observations of the first ~14 sq. deg. observed at 250, 350 and 500 μm. Methods. Number counts are a fundamental tool in constraining models of galaxy evolution. We use source catalogs extracted from the H-ATLAS maps as the basis for such an analysis. Correction factors for completeness and flux boosting are derived by applying our extraction method to model catalogs and then applied to the raw observational counts. Results. We find a steep rise in the number counts at flux levels of 100-200 mJy in all three SPIRE bands, consistent with results from BLAST. The counts are compared to a range of galaxy evolution models. None of the current models is an ideal fit to the data but all ascribe the steep rise to a population of luminous, rapidly evolving dusty galaxies at moderate to high redshift.

  18. Exploring the effects of transfers and readmissions on trends in population counts of hospital admissions for coronary heart disease: a Western Australian data linkage study.

    PubMed

    Lopez, Derrick; Nedkoff, Lee; Knuiman, Matthew; Hobbs, Michael S T; Briffa, Thomas G; Preen, David B; Hung, Joseph; Beilby, John; Mathur, Sushma; Reynolds, Anna; Sanfilippo, Frank M

    2017-11-17

    To develop a method for categorising coronary heart disease (CHD) subtype in linked data accounting for different CHD diagnoses across records, and to compare hospital admission numbers and ratios of unlinked versus linked data for each CHD subtype over time, and across age groups and sex. Cohort study. Person-linked hospital administrative data covering all admissions for CHD in Western Australia from 1988 to 2013. Ratios of (1) unlinked admission counts to contiguous admission (CA) counts (accounting for transfers), and (2) 28-day episode counts (accounting for transfers and readmissions) to CA counts stratified by CHD subtype, sex and age group. In all CHD subtypes, the ratios changed in a linear or quadratic fashion over time and the coefficients of the trend term differed across CHD subtypes. Furthermore, for many CHD subtypes the ratios also differed by age group and sex. For example, in women aged 35-54 years, the ratio of unlinked to CA counts for non-ST elevation myocardial infarction admissions in 2000 was 1.10, and this increased in a linear fashion to 1.30 in 2013, representing an annual increase of 0.0148. The use of unlinked counts in epidemiological estimates of CHD hospitalisations overestimates CHD counts. The CA and 28-day episode counts are more aligned with epidemiological studies of CHD. The degree of overestimation of counts using only unlinked counts varies in a complex manner with CHD subtype, time, sex and age group, and it is not possible to apply a simple correction factor to counts obtained from unlinked data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Background correction in separation techniques hyphenated to high-resolution mass spectrometry - Thorough correction with mass spectrometry scans recorded as profile spectra.

    PubMed

    Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda

    2017-04-07

    Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separation techniques. Such instruments not only provide unmatched resolution, but they also allow measuring the peaks' accurate masses that permit identifying monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contribution to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially with minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology that is described here, unlike any other approach, is used to correct the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits an improved visualisation of secondary peaks in the chromatographic domain, but it also facilitates the complete assignment of each MS scan, which is invaluable for detecting possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.
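
    For readers who want to experiment with profile-mode background removal, a generic asymmetric least-squares baseline (an Eilers-type smoother, not the method of the paper) is easy to set up; the synthetic scan below stands in for a real profile spectrum.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e6, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline estimate (generic illustration)."""
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve(sparse.csc_matrix(W + lam * (D @ D.T)), w * y)
        w = p * (y > z) + (1 - p) * (y < z)
    return z

# Synthetic profile scan: one analyte peak on a drifting background plus noise
mz = np.linspace(100, 1000, 4000)
peak = 500 * np.exp(-0.5 * ((mz - 400) / 2.0) ** 2)
background = 50 + 0.05 * mz
scan = peak + background + np.random.default_rng(2).normal(0, 2, mz.size)

corrected = scan - asls_baseline(scan)
```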

  20. Development of a low-level 39Ar calibration standard – Analysis by absolute gas counting measurements augmented with simulation

    DOE PAGES

    Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...

    2017-02-17

    Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).

  1. Development of a low-level 39Ar calibration standard – Analysis by absolute gas counting measurements augmented with simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.

    Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).

  2. New Swift UVOT data reduction tools and AGN variability studies

    NASA Astrophysics Data System (ADS)

    Gelbord, Jonathan; Edelson, Rick

    2017-08-01

    The efficient slewing and flexible scheduling of the Swift observatory have made it possible to conduct monitoring campaigns that are both intensive and prolonged, with multiple visits per day sustained over weeks and months. Recent Swift monitoring campaigns of a handful of AGN provide simultaneous optical, UV and X-ray light curves that can be used to measure variability and interband correlations on timescales from hours to months, providing new constraints for the structures within AGN and the relationships between them. However, the first of these campaigns, thrice-per-day observations of NGC 5548 through four months, revealed anomalous dropouts in the UVOT light curves (Edelson, Gelbord, et al. 2015). We identified the cause as localized regions of reduced detector sensitivity that are not corrected by standard processing. Properly interpreting the light curves required identifying and screening out the affected measurements. We are now using archival Swift data to better characterize these low sensitivity regions. Our immediate goal is to produce a more complete mapping of their locations so that affected measurements can be identified and screened before further analysis. Our longer-term goal is to build a more quantitative model of the effect in order to define a correction for measured fluxes, if possible, or at least to put limits on the impact upon any observation. We will combine data from numerous background stars in well-monitored fields in order to quantify the strength of the effect as a function of filter as well as location on the detector, and to test for other dependencies such as evolution over time or sensitivity to the count rate of the target. Our UVOT sensitivity maps and any correction tools will be provided to the community of Swift users.

  3. Evaluation of sliding baseline methods for spatial estimation for cluster detection in the biosurveillance system

    PubMed Central

    Xing, Jian; Burkom, Howard; Moniz, Linda; Edgerton, James; Leuze, Michael; Tokars, Jerome

    2009-01-01

    Background The Centers for Disease Control and Prevention's (CDC's) BioSense system provides near-real time situational awareness for public health monitoring through analysis of electronic health data. Determination of anomalous spatial and temporal disease clusters is a crucial part of the daily disease monitoring task. Our study focused on finding useful anomalies at manageable alert rates according to available BioSense data history. Methods The study dataset included more than 3 years of daily counts of military outpatient clinic visits for respiratory and rash syndrome groupings. We applied four spatial estimation methods in implementations of space-time scan statistics cross-checked in Matlab and C. We compared the utility of these methods according to the resultant background cluster rate (a false alarm surrogate) and sensitivity to injected cluster signals. The comparison runs used a spatial resolution based on the facility zip code in the patient record and a finer resolution based on the residence zip code. Results Simple estimation methods that account for day-of-week (DOW) data patterns yielded a clear advantage both in background cluster rate and in signal sensitivity. A 28-day baseline gave the most robust results for this estimation; the preferred baseline is long enough to remove daily fluctuations but short enough to reflect recent disease trends and data representation. Background cluster rates were lower for the rash syndrome counts than for the respiratory counts, likely because of seasonality and the large scale of the respiratory counts. Conclusion The spatial estimation method should be chosen according to characteristics of the selected data streams. In this dataset with strong day-of-week effects, the overall best detection performance was achieved using subregion averages over a 28-day baseline stratified by weekday or weekend/holiday behavior. Changing the estimation method for particular scenarios involving different spatial resolution or other syndromes can yield further improvement. PMID:19615075
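
    A compact version of the preferred estimator (28-day sliding baseline, stratified into weekday versus weekend/holiday days) can be written with pandas; the minimum-period setting and holiday handling below are assumptions, and spatial aggregation into subregions is left out.

```python
import pandas as pd

def expected_count(daily: pd.Series, holidays=()) -> pd.Series:
    """daily: counts indexed by calendar date (DatetimeIndex). Returns the
    28-day sliding-baseline expectation, computed separately for the
    weekday stratum and the weekend/holiday stratum."""
    holidays = pd.DatetimeIndex(list(holidays))
    is_offday = (daily.index.dayofweek >= 5) | daily.index.isin(holidays)
    parts = []
    for stratum in (True, False):
        sub = daily[is_offday == stratum]
        # mean of the preceding 28 calendar days within the same stratum
        parts.append(sub.rolling("28D", min_periods=7, closed="left").mean())
    return pd.concat(parts).sort_index()

# Hypothetical usage: flag days whose observed count greatly exceeds the baseline
# counts = pd.read_csv("clinic_visits.csv", index_col=0, parse_dates=True)["respiratory"]
# flags = counts > expected_count(counts) + 3 * expected_count(counts).pow(0.5)
```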

  4. Generalized uncertainty principle impact onto the black holes information flux and the sparsity of Hawking radiation

    NASA Astrophysics Data System (ADS)

    Alonso-Serrano, Ana; Dąbrowski, Mariusz P.; Gohar, Hussain

    2018-02-01

    We investigate the generalized uncertainty principle (GUP) corrections to the entropy content and the information flux of black holes, as well as the corrections to the sparsity of the Hawking radiation at the late stages of evaporation. We find that due to these quantum gravity motivated corrections, the entropy flow per particle reduces its value on the approach to the Planck scale due to a better accuracy in counting the number of microstates. We also show that the radiation flow is no longer sparse when the mass of a black hole approaches Planck mass which is not the case for non-GUP calculations.

  5. The beam stop array method to measure object scatter in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook

    2014-03-01

    Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One of the investigations to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The measured scatter by BSA includes not only the scattered radiation within the object (object scatter), but also the external scatter source. The components of external scatter source include the X-ray tube, detector, collimator, x-ray filter, and BSA. Excluding background scattered radiation can be applied to different scanner geometry by simple parameter adjustments without prior knowledge of the scanned object. In this study, a method using BSA to differentiate scatter in phantom (object scatter) from external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. In order to confirm background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from chest wall. This result indicated that the measured scatter by BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter because the total radiation profiles of object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from chest wall. As a result, the BSA method to measure object scatter could be used to remove background scatter. This method could be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective in correcting object scatter.
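
    The bookkeeping behind the separation can be written in a few lines: the disc-shadow signal measured with the phantom approximates total scatter, and the same measurement without the phantom approximates the external (background) contribution to be subtracted. All numbers below are illustrative, not measured data.

```python
import numpy as np

total_open   = np.array([1200.0, 1150.0, 1300.0])  # open-field signal with phantom
shadow_phant = np.array([ 260.0,  240.0,  285.0])  # signal under the lead discs, with phantom
shadow_empty = np.array([  60.0,   55.0,   70.0])  # disc signal without phantom (external scatter)

object_scatter = shadow_phant - shadow_empty        # scatter generated inside the object
sf_uncorrected = shadow_phant / total_open          # scatter fraction including background
sf_object      = object_scatter / total_open        # scatter fraction after background removal
print(sf_uncorrected.round(3), sf_object.round(3))
```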

  6. Determination of serum aluminum by electrothermal atomic absorption spectrometry: A comparison between Zeeman and continuum background correction systems

    NASA Astrophysics Data System (ADS)

    Kruger, Pamela C.; Parsons, Patrick J.

    2007-03-01

    Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performances of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass (m0), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line. However, the instrumental detection limits (3 SD) for Al were very similar: 3.0, 3.2, and 4.1 μg L-1 for the Z5100, 4100ZL, and 3110, respectively. Serum Al method detection limits (3 SD) were 9.8, 6.9, and 7.3 μg L-1, respectively. Accuracy was assessed using archived serum (and plasma) reference materials from various external quality assessment schemes (EQAS). Values found with all three instruments were within the acceptable EQAS ranges. The data indicate that relatively modest ETAAS instrumentation equipped with continuum background correction is adequate for routine serum Al monitoring.
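
    The two figures of merit quoted above come from simple arithmetic, sketched below with invented blank readings and an invented calibration slope; only the 0.0044 A·s definition of characteristic mass is standard.

```python
import numpy as np

blank = np.array([0.0021, 0.0018, 0.0025, 0.0016, 0.0023,
                  0.0019, 0.0022, 0.0020, 0.0017, 0.0024])   # blank integrated absorbances (assumed)
slope = 0.0007                      # integrated absorbance per (ug/L) Al, assumed calibration slope

detection_limit = 3 * blank.std(ddof=1) / slope               # instrumental DL in ug/L (3 SD criterion)

mass_injected_pg = 500.0            # assumed Al mass delivered to the furnace
signal = 0.182                      # assumed integrated absorbance (A*s) for that mass
m0 = 0.0044 * mass_injected_pg / signal                       # characteristic mass, pg
print(round(detection_limit, 2), round(m0, 1))
```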

  7. Revision of the NIST Standard for (223)Ra: New Measurements and Review of 2008 Data.

    PubMed

    Zimmerman, B E; Bergeron, D E; Cessna, J T; Fitzgerald, R; Pibida, L

    2015-01-01

    After discovering a discrepancy in the transfer standard currently being disseminated by the National Institute of Standards and Technology (NIST), we have performed a new primary standardization of the alpha-emitter (223)Ra using Live-timed Anticoincidence Counting (LTAC) and the Triple-to-Double Coincidence Ratio Method (TDCR). Additional confirmatory measurements were made with the CIEMAT-NIST efficiency tracing method (CNET) of liquid scintillation counting, integral γ-ray counting using a NaI(Tl) well counter, and several High Purity Germanium (HPGe) detectors in an attempt to understand the origin of the discrepancy and to provide a correction. The results indicate that a -9.5 % difference exists between activity values obtained using the former transfer standard relative to the new primary standardization. During one of the experiments, a 2 % difference in activity was observed between dilutions of the (223)Ra master solution prepared using the composition used in the original standardization and those prepared using 1 mol·L(-1) HCl. This effect appeared to be dependent on the number of dilutions or the total dilution factor to the master solution, but the magnitude was not reproducible. A new calibration factor ("K-value") has been determined for the NIST Secondary Standard Ionization Chamber (IC "A"), thereby correcting the discrepancy between the primary and secondary standards.

  8. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention and covariate-specific heterogeneity can produce biased estimates of covariate effects and random effects. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.

  9. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  10. HSCT Propulsion Airframe Integration Studies

    NASA Technical Reports Server (NTRS)

    Chaney, Steve

    1999-01-01

    The Lockheed Martin spillage study was a substantial effort and is worthy of a separate paper. However, since a paper was not submitted, a few of the most pertinent results have been pulled out and included in this paper. The reader is urged to obtain a copy of the complete Boeing Configuration Aerodynamics final 1995 contract report for the complete Lockheed documentation of the spillage work. The supersonic cruise studies presented here focus on the bifurcated-axisymmetric inlet drag delta. In the process of analyzing this delta, several test/CFD data correlation problems arose that led to a correction of the measured drag delta from 4.6 counts to 3.1 counts. This study also led to a much better understanding of the OVERFLOW gridding and solution process, and to increased accuracy of the force and moment data. Detailed observations of the CFD results led to the conclusion that the 3.1 count difference between the two inlet types could be reduced to approximately 2 counts, with an absolute lower bound of 1.2 counts due to friction drag and the bifurcated lip bevel.

  11. A unified genetic association test robust to latent population structure for a count phenotype.

    PubMed

    Song, Minsun

    2018-06-04

    Confounding caused by latent population structure in genome-wide association studies has been a big concern despite the success of genome-wide association studies at identifying genetic variants associated with complex diseases. In particular, because of the growing interest in association mapping using count phenotype data, it would be interesting to develop a testing framework for genetic associations that is immune to population structure when phenotype data consist of count measurements. Here, I propose a solution for testing associations between single nucleotide polymorphisms and a count phenotype in the presence of an arbitrary population structure. I consider a classical range of models for count phenotype data. Under these models, a unified test for genetic associations that protects against confounding was derived. An algorithm was developed to efficiently estimate the parameters that are required to fit the proposed model. I illustrate the proposed approach using simulation studies and an empirical study. Both simulated and real-data examples suggest that the proposed method successfully corrects population structure. Copyright © 2018 John Wiley & Sons, Ltd.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hausladen, Paul; Blessinger, Christopher S; Guzzardo, Tyler

    A complete understanding of both the steady state and transient background measured by Radiation Portal Monitors (RPMs) is essential to predictable system performance, as well as maximization of detection sensitivity. To facilitate this understanding, a test bed for the study of natural background in RPMs has been established at the Oak Ridge National Laboratory. This work was performed in support of the Second Line of Defense Program's mission to detect the illicit movement of nuclear material. In the present work, transient increases in gamma ray counting rates in RPMs due to rain are investigated. The increase in background activity associated with rain, which has been well documented in the field of environmental radioactivity, originates from the atmospheric deposition of two radioactive daughters of radon-222, namely lead-214 and bismuth-214 (henceforth (222)Rn, (214)Pb and (214)Bi). In this study, rainfall rates recorded by a co-located weather station are compared with RPM count rates and High Purity Germanium spectra. The data verify that these radionuclides are responsible for the dominant transient natural background fluctuations in RPMs. Effects on system performance and potential mitigation strategies are discussed.

  13. [Septic arthritis caused by Streptococcus suis].

    PubMed

    Hedegaard, Sofie Sommer; Zaccarin, Matthias; Lindberg, Jens

    2013-05-27

    Streptococcus suis is a global endemic swine pathogen. S. suis can cause meningitis, endocarditis and severe sepsis in humans who are exposed to swine. Human infection with S. suis was first reported in 1968; since then, human infections have been sporadic, although an outbreak in China counted 215 cases. In a rare case of disseminated arthritis we found that correct clinical diagnosis was difficult due to nonspecific symptomatology and slow-growing bacterial cultures. However, conducting thorough examinations is crucial, and if treated correctly the outcome is favourable.

  14. LROC Investigation of Three Strategies for Reducing the Impact of Respiratory Motion on the Detection of Solitary Pulmonary Nodules in SPECT

    NASA Astrophysics Data System (ADS)

    Smyczynski, Mark S.; Gifford, Howard C.; Dey, Joyoni; Lehovich, Andre; McNamara, Joseph E.; Segars, W. Paul; King, Michael A.

    2016-02-01

    The objective of this investigation was to determine the effectiveness of three motion reducing strategies in diminishing the degrading impact of respiratory motion on the detection of small solitary pulmonary nodules (SPNs) in single-photon emission computed tomographic (SPECT) imaging in comparison to a standard clinical acquisition and the ideal case of imaging in the absence of respiratory motion. To do this, nonuniform rational B-spline cardiac-torso (NCAT) phantoms based on human-volunteer CT studies were generated spanning the respiratory cycle for a normal background distribution of Tc-99m NeoTect. Similarly, spherical phantoms of 1.0-cm diameter were generated to model small SPNs for each of 150 uniquely located sites within the lungs, whose respiratory motion was based on the motion of normal structures in the volunteer CT studies. The SIMIND Monte Carlo program was used to produce SPECT projection data from these. Normal and single-lesion containing SPECT projection sets with a clinically realistic Poisson noise level were created for the cases of 1) the end-expiration (EE) frame with all counts, 2) respiration-averaged motion with all counts, 3) one fourth of the 32 frames centered around EE (Quarter Binning), 4) one half of the 32 frames centered around EE (Half Binning), and 5) eight temporally binned frames spanning the respiratory cycle. Each of the sets of combined projection data was reconstructed with RBI-EM with system spatial-resolution compensation (RC). Based on the known motion for each of the 150 different lesions, the reconstructed volumes of the respiratory bins were shifted so as to superimpose the locations of the SPN onto that in the first bin (Reconstruct and Shift). Five human observers performed localization receiver operating characteristic (LROC) studies of SPN detection. The observer results were analyzed for statistically significant differences in SPN detection accuracy among the three correction strategies, the standard acquisition, and the ideal case of the absence of respiratory motion. Our human-observer LROC study determined that the Quarter Binning and Half Binning strategies resulted in SPN detection accuracy statistically significantly below that of the standard clinical acquisition, whereas the Reconstruct and Shift strategy resulted in a detection accuracy not statistically significantly different from that of the ideal case. This investigation demonstrates that tumor detection based on acquisitions that use fewer than all of the potentially available counts may result in poorer detection despite limiting the motion of the lesion. The Reconstruct and Shift method results in tumor detection that is equivalent to ideal motion correction.
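
    The Reconstruct and Shift step is, in essence, a rigid realignment of each reconstructed respiratory bin so that the known lesion location coincides with its position in the first bin, after which the bins are summed so that all counts contribute. A minimal sketch under those assumptions is shown below; the displacement inputs and interpolation settings are illustrative, not those used in the study.

        import numpy as np
        from scipy.ndimage import shift

        def reconstruct_and_shift(bins, displacements):
            """Superimpose respiratory-gated reconstructions onto the first bin.

            bins          : list of 3D reconstructed volumes, one per respiratory bin
            displacements : list of (dz, dy, dx) lesion displacements of each bin
                            relative to bin 0 (assumed known from the phantom motion)
            """
            summed = np.zeros_like(bins[0], dtype=float)
            for volume, d in zip(bins, displacements):
                # shift each bin so its lesion lands on the bin-0 location, then accumulate
                summed += shift(volume, [-d[0], -d[1], -d[2]], order=1, mode="nearest")
            return summed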

  15. Holographic corrections to the Veneziano amplitude

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-08-01

    We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.

  16. Revised radiometric calibration technique for LANDSAT-4 Thematic Mapper data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    Observations of raw image data, raw radiometric calibration data, and background measurements extracted from the raw data streams on high density tape reveal major shortcomings in a technique proposed by the Canada Centre for Remote Sensing in 1982 for the radiometric correction of TM data. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. How the revised technique can be incorporated into an operational environment is demonstrated.

  17. Elementary review of electron microprobe techniques and correction requirements

    NASA Technical Reports Server (NTRS)

    Hart, R. K.

    1968-01-01

    Report contains requirements for correction of instrumented data on the chemical composition of a specimen, obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material for obtaining X-ray intensity data corrections as well as absorption, atomic number, and fluorescence corrections.

  18. Blade counting tool with a 3D borescope for turbine applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.; Gu, Jiajun; Tao, Li; Song, Guiju; Han, Jie

    2014-07-01

    Video borescopes are widely used for turbine and aviation engine inspection to guarantee the health of blades and prevent blade failure during running. When the moving components of a turbine engine are inspected with a video borescope, the operator must view every blade in a given stage. The blade counting tool is video interpretation software that runs simultaneously in the background during inspection. It identifies moving turbine blades in a video stream, tracks and counts the blades as they move across the screen. This approach includes blade detection to identify blades in different inspection scenarios and blade tracking to perceive blade movement even in hand-turning engine inspections. The software is able to label each blade by comparing counting results to a known blade count for the engine type and stage. On-screen indications show the borescope user labels for each blade and how many blades have been viewed as the turbine is rotated.
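
    At its core, the counting step of such a tool reduces to registering each time a tracked blade edge crosses a fixed reference line in the video and mapping the running count to a blade label for the known blade count of the stage. The sketch below illustrates only that counting logic under stated assumptions; the per-frame edge positions are taken to be the hypothetical output of the detection and tracking steps, which are not reproduced here.

        def count_blade_crossings(edge_positions, line_x):
            """Count crossings of a tracked blade edge over a fixed reference line.

            edge_positions : per-frame x-coordinate of the tracked leading blade edge
                             (assumed output of the blade detection/tracking stage)
            line_x         : x-coordinate of the counting line in the image
            """
            crossings = 0
            for prev_x, cur_x in zip(edge_positions, edge_positions[1:]):
                if prev_x < line_x <= cur_x:      # the edge moved across the line this frame
                    crossings += 1
            return crossings

        def blade_label(crossings, known_blade_count):
            """Map the running crossing count to a 1-based blade label for on-screen display."""
            if crossings == 0:
                return None                        # no blade has crossed the line yet
            return (crossings - 1) % known_blade_count + 1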

  19. One Year Follow-Up of Children and Adolescents With Chronic Immune Thrombocytopenic Purpura (ITP) Treated With Rituximab

    PubMed Central

    Mueller, Brigitta U.; Bennett, Carolyn M.; Feldman, Henry A.; Bussel, James B.; Abshire, Thomas C.; Moore, Theodore B.; Sawaf, Hadi; Loh, Mignon L.; Rogers, Zora R.; Glader, Bertil E.; McCarthy, Maggie C.; Mahoney, Donald H.; Olson, Thomas A.; Feig, Stephen A.; Lorenzana, Adonis N.; Mentzer, William C.; Buchanan, George R.; Neufeld, Ellis J.

    2017-01-01

    Background We previously showed in a prospective study that rituximab appears to be effective in some children and adolescents with severe chronic immune thrombocytopenia. Eleven of 36 patients achieved and maintained platelet counts over 50,000/mm3 within the first 12 weeks. These patients were followed for the next year. Methods Platelet counts were monitored monthly, and all subsequent bleeding manifestations and any need for further treatment were noted. Results Eight of the 11 initial responders maintained a platelet count over 150,000/mm3 without further treatment intervention. Three patients had a late relapse. One initial non-responder achieved a remission after 16 weeks, and two additional patients maintained platelet counts around 50,000/mm3 without the need for further intervention. Conclusions Rituximab resulted in sustained efficacy with platelet counts of 50,000/mm3 or higher in 11 of 36 patients (31%). PMID:18937333

  20. Morphological spot counting from stacked images for automated analysis of gene copy numbers by fluorescence in situ hybridization.

    PubMed

    Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli

    2002-01-01

    Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescent labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid (DNA). Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
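
    The core of the counting step, a per-slice top-hat transform to flatten the irregular background, thresholding of the remaining intensity spikes, grouping of slice signals into three-dimensional spots, and a size filter to reject fluorescent noise, can be sketched in a few lines. The code below is a simplified illustration of that pipeline, not the published algorithm; the structuring-element size, threshold, and minimum spot size are assumed placeholder values.

        import numpy as np
        from scipy import ndimage

        def count_fish_spots(stack, selem_size=5, threshold=50.0, min_voxels=2):
            """Global spot counting in a z-stack of FISH slice images (slices, rows, cols)."""
            # per-slice white top-hat: keeps intensity spikes, removes the irregular background
            footprint = np.ones((1, selem_size, selem_size))
            tophat = ndimage.white_tophat(stack.astype(float), footprint=footprint)
            candidates = tophat > threshold
            # group in-slice signals into 3D spots using 26-connectivity across slices
            labels, n = ndimage.label(candidates, structure=np.ones((3, 3, 3)))
            sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
            # discard very small detections as fluorescent noise
            return int(np.sum(sizes >= min_voxels))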

  1. On the use of positron counting for radio-Assay in nuclear pharmaceutical production.

    PubMed

    Maneuski, D; Giacomelli, F; Lemaire, C; Pimlott, S; Plenevaux, A; Owens, J; O'Shea, V; Luxen, A

    2017-07-01

    Current techniques for the measurement of radioactivity at various points during PET radiopharmaceutical production and R&D are based on the detection of the annihilation gamma rays from the radionuclide in the labelled compound. The detection systems to measure these gamma rays are usually variations of NaI or CsF scintillation based systems requiring costly and heavy lead shielding to reduce background noise. These detectors inherently suffer from low detection efficiency, high background noise and very poor linearity. They are also unable to provide any reasonably useful position information. A novel positron counting technique is proposed for the radioactivity assay during radiopharmaceutical manufacturing that overcomes these limitations. Detection of positrons instead of gammas offers an unprecedented level of position resolution of the radiation source (down to sub-mm) thanks to the nature of the positron interaction with matter. Counting capability instead of charge integration in the detector brings the sensitivity down to the statistical limits at the same time as offering very high dynamic range and linearity from zero to any arbitrarily high activity. This paper reports on a quantitative comparison between conventional detector systems and the proposed positron counting detector. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Assessment of background hydrogen by the Monte Carlo computer code MCNP-4A during measurements of total body nitrogen.

    PubMed

    Ryde, S J; al-Agel, F A; Evans, C J; Hancock, D A

    2000-05-01

    The use of a hydrogen internal standard to enable the estimation of absolute mass during measurement of total body nitrogen by in vivo neutron activation is an established technique. Central to the technique is a determination of the H prompt gamma ray counts arising from the subject. In practice, interference counts from other sources--e.g., neutron shielding--are included. This study reports use of the Monte Carlo computer code, MCNP-4A, to investigate the interference counts arising from shielding both with and without a phantom containing a urea solution. Over a range of phantom size (depth 5 to 30 cm, width 20 to 40 cm), the counts arising from shielding increased by between 4% and 32% compared with the counts without a phantom. For any given depth, the counts increased approximately linearly with width. For any given width, there was little increase for depths exceeding 15 centimeters. The shielding counts comprised between 15% and 26% of those arising from the urea phantom. These results, although specific to the Swansea apparatus, suggest that extraneous hydrogen counts can be considerable and depend strongly on the subject's size.
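
    In practice, the correction implied by these results is a subtraction of the size-dependent shielding contribution from the measured hydrogen prompt-gamma counts. The sketch below illustrates that bookkeeping only; the linear dependence on phantom width at fixed depth follows the trend reported above, but the calibration constants are hypothetical placeholders that would have to be determined for a given apparatus.

        def corrected_hydrogen_counts(measured_h_counts, phantom_width_cm, slope, intercept):
            """Remove the shielding-derived H prompt-gamma counts from the measured signal.

            slope, intercept : hypothetical calibration constants describing the roughly
                               linear increase of shielding counts with phantom width
                               at a fixed depth
            """
            shielding_counts = intercept + slope * phantom_width_cm
            return measured_h_counts - shielding_counts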

  3. Why is the conclusion of the Gerda experiment not justified

    NASA Astrophysics Data System (ADS)

    Klapdor-Kleingrothaus, H. V.; Krivosheina, I. V.

    2013-12-01

    The first results of the GERDA double beta experiment in Gran Sasso were recently presented. They are fully consistent with the HEIDELBERG-MOSCOW experiment, but because of their low statistics they cannot prove anything at this moment. It is no surprise that the statistics are still far from being able to test the signal claimed by the HEIDELBERG-MOSCOW experiment. The energy resolution of the coaxial detectors is a factor of 1.5 worse than in the HEIDELBERG-MOSCOW experiment. The original goal of background reduction to 10^-2 counts/(kg y keV), or by an order of magnitude compared to the HEIDELBERG-MOSCOW experiment, has not been reached. The background is only a factor of 2.3 lower if referred to the experimental line width, i.e. in units of counts/(kg y energy resolution). With pulse shape analysis (PSA) the background in the HEIDELBERG-MOSCOW experiment around Q_ββ is 4 × 10^-3 counts/(kg y keV) [1], which is a factor of 4 (5 referring to the line width) lower than that of GERDA with pulse shape analysis. The amount of enriched material used in the GERDA measurement is 14.6 kg, only a factor of 1.34 larger than that used in the HEIDELBERG-MOSCOW experiment. The background model is oversimplified and not yet adequate. It is not shown that the lines of their background can be identified. GERDA has to continue the measurement for a further ~5 years, until they can responsibly present an understood background. The present half-life limit presented by GERDA of T_1/2^0ν > 2.1 × 10^25 y (90% confidence level, i.e. 1.6σ) is still lower than the half-life of T_1/2^0ν = 2.23 (+0.44/-0.31) × 10^25 y [1] determined in the HEIDELBERG-MOSCOW experiment.

  4. Verification survey report of the south waste tank farm training/test tower and hazardous waste storage lockers at the West Valley demonstration project, West Valley, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weaver, Phyllis C.

    A team from ORAU's Independent Environmental Assessment and Verification Program performed verification survey activities on the South Test Tower and four Hazardous Waste Storage Lockers. Scan data collected by ORAU determined that both the alpha and alpha-plus-beta activity was representative of radiological background conditions. The count rate distribution showed no outliers that would be indicative of alpha or alpha-plus-beta count rates in excess of background. It is the opinion of ORAU that independent verification data collected support the site's conclusions that the South Tower and Lockers sufficiently meet the site criteria for release to recycle and reuse.

  5. Observer error structure in bull trout redd counts in Montana streams: Implications for inference on true redd numbers

    USGS Publications Warehouse

    Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.

    2006-01-01

    Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.
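
    The error structure described above, a binomial probability of detecting each true redd combined with a Poisson count of false identifications, is straightforward to simulate and invert. The sketch below assumes illustrative parameter values (the paper reports a mean detection probability near 83%, but the Poisson rate for false identifications used here is a placeholder) and uses a simple moment-based correction rather than the authors' full model.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_observer_counts(true_redds, p_detect=0.83, false_id_rate=1.0, n_observers=5):
            """Observer counts = binomial detections of true redds + Poisson false identifications."""
            detected = rng.binomial(true_redds, p_detect, size=n_observers)
            false_ids = rng.poisson(false_id_rate, size=n_observers)
            return detected + false_ids

        def estimate_true_redds(observed_count, p_detect=0.83, false_id_rate=1.0):
            """Moment-based correction: invert E[observed] = p_detect * true + false_id_rate."""
            return max((observed_count - false_id_rate) / p_detect, 0.0)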

  6. Assessment of functional liver reserve: old and new in 99mTc-sulfur colloid scintigraphy.

    PubMed

    Matesan, Manuela M; Bowen, Stephen R; Chapman, Tobias R; Miyaoka, Robert S; Velez, James W; Wanner, Michele F; Nyflot, Matthew J; Apisarnthanarax, Smith; Vesselle, Hubert J

    2017-07-01

    A semiquantitative assessment of hepatic reticuloendothelial system function using colloidal particles scintigraphy has been proposed previously as a surrogate for liver function evaluation. In this article, we present an updated method for the overall assessment of technetium-99m (Tc)-sulfur colloid (SC) biodistribution that combines information from planar and attenuation-corrected Tc-SC single-photon emission computed tomography (SPECT) images. The imaging protocol described here was developed as an easy-to-implement method to assess overall and regional liver function changes associated with chronic liver disease. Thirty patients with chronic liver disease and primary liver cancers underwent Tc-SC whole-body planar imaging and upper-abdomen SPECT/computed tomography (CT) imaging before external beam radiation therapy. Liver plus spleen and bone marrow counts as a fraction of whole-body total counts were calculated from SC planar imaging. Attenuation correction Tc-SC images were rigidly coregistered with treatment planning CT images that contained liver and spleen regions-of-interest. Ratios of total liver counts to total spleen counts were obtained from the aligned Tc-SC SPECT and CT images, and were subsequently used to separate liver plus spleen counts obtained on the planar images. This hybrid SPECT/CT and planar scintigraphy approach yielded an updated estimation of whole-body SC distribution. These biodistribution estimates were compared with historical data for reference. Statistical associations of Tc-SC biodistribution to liver function parameters and liver disease scoring systems (Child-Pugh) were evaluated by Spearman rank correlation. Percentages of Tc-SC uptake ranged from 19.3 to 77.3% for the liver; 3.4 to 40.7% for the spleen; and 19.0 to 56.7% for the bone marrow. Spearman's correlation coefficient showed a significant statistical association between Child-Pugh score and bone marrow uptake at 0.55 (P≤0.05), liver uptake at 0.71 (P≤0.001), spleen uptake at 0.56 (P≤0.05), and spleen plus bone marrow uptake at 0.71 (P≤0.001). There was also a good correlation of SC uptake percentages with individual quantitative liver function components such as albumin and total bilirubin, and qualitative liver function components (varices, portal hypertension, ascites). For albumin: r=0.64 (P<0.001) compared with liver uptake percentage from the whole-body counts, r=0.49 (P<0.001) compared with splenic uptake percentage, and r=0.45 (P≤0.05) compared with bone marrow uptake percentage. We describe a novel liver function quantitative assessment method that combines whole-body planar images and SPECT/CT attenuation-corrected images of Tc-SC distribution. Attenuation-corrected SC images provide valuable regional liver function information, which is a unique feature compared with other imaging methods available. The results of our study indicate that the Tc-SC uptake by the liver, spleen, and bone marrow correlates with liver function parameters in patients with diffuse liver disease and the correlation with liver disease severity is slightly better for liver uptake percentages than for individual values of bone marrow and spleen uptake percentages.
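
    The hybrid step that separates the planar liver-plus-spleen counts can be reduced to a short calculation: planar imaging gives the combined liver-plus-spleen fraction of whole-body counts, and the coregistered SPECT/CT gives the liver-to-spleen count ratio used to split that fraction. The sketch below shows only this arithmetic, with illustrative numbers; it is not the full imaging workflow described above.

        def split_liver_spleen(planar_liver_spleen_fraction, spect_liver_to_spleen_ratio):
            """Split the planar liver+spleen whole-body fraction into per-organ fractions."""
            r = spect_liver_to_spleen_ratio
            liver = planar_liver_spleen_fraction * r / (r + 1.0)
            spleen = planar_liver_spleen_fraction - liver
            return liver, spleen

        # Example (illustrative values): 60% of whole-body counts in liver+spleen on planar
        # imaging and a SPECT liver:spleen count ratio of 3 gives 45% liver and 15% spleen.
        liver_pct, spleen_pct = split_liver_spleen(0.60, 3.0)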

  7. Time-of-day Corrections to Aircraft Noise Metrics

    NASA Technical Reports Server (NTRS)

    Clevenson, S. (Editor); Shepherd, W. T. (Editor)

    1980-01-01

    The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; criteria needs; and government policy and regulation are also reported.

  8. Detector response function of an energy-resolved CdTe single photon counting detector.

    PubMed

    Liu, Xin; Lee, Hyoung Koo

    2014-01-01

    While spectral CT using single photon counting detector has shown a number of advantages in diagnostic imaging, knowledge of the detector response function of an energy-resolved detector is needed to correct the signal bias and reconstruct the image more accurately. The objective of this paper is to study the photo counting detector response function using laboratory sources, and investigate the signal bias correction method. Our approach is to model the detector response function over the entire diagnostic energy range (20 keV

  9. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High precision, high spatial resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding of the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high precision compositional analysis of minor components is feasible in very high Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High collection efficiency spectrometers and high performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher spatial resolution quantitative analysis. The U L(III) edge at 17.2 keV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th M(IV) and M(V) absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation volume, as well as interferences from fluorescence at a distance from adjacent phases or distinct compositional domains in the same phase. Interference corrections for elements detected during boundary fluorescence are further complicated by X-ray focusing geometry considerations. Additional complications arise from the high current densities required for high spatial resolution and high count precision, such as fluctuations in internal charge distribution and peak shape changes as satellite production efficiency varies from calibration to analysis. No flawless method has yet emerged. Extreme care in interference corrections, especially where multiple and sometimes mutual overlaps are present, and maximum care (and precision) in background characterization to account for interferences and curvature (e.g., WDS scan or multipoint regression), are crucial developments. Calibration curves from multiple peak and interference calibration measurements at different concentrations, and iterative software methodologies for incorporating absorption edge effects and non-linearities in interference corrections due to peak shape changes and off-axis X-ray defocussing during boundary fluorescence at a distance, are directions with significant potential.

  10. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
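
    A toy version of the orthographic component, generating candidates by string similarity and ranking them by corpus frequency, is sketched below. It is an illustration only and omits the contextual-similarity model that the ensemble described above combines with these features; the vocabulary and frequency table are assumed inputs.

        from difflib import get_close_matches

        def correct_token(token, vocabulary, word_freq, cutoff=0.8):
            """Suggest a correction for one token.

            vocabulary : list of known words
            word_freq  : dict mapping word -> corpus frequency
            """
            if token in word_freq:
                return token                                  # already a known word
            candidates = get_close_matches(token, vocabulary, n=5, cutoff=cutoff)
            if not candidates:
                return token                                  # leave unknown tokens untouched
            return max(candidates, key=lambda w: word_freq.get(w, 0))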

  11. The limit of detection in scintigraphic imaging with I-131 in patients with differentiated thyroid carcinoma

    NASA Astrophysics Data System (ADS)

    Hänscheid, H.; Lassmann, M.; Buck, A. K.; Reiners, C.; Verburg, F. A.

    2014-05-01

    Radioiodine scintigraphy influences staging and treatment in patients with differentiated thyroid carcinoma. The limit of detection for fractional uptake in an iodine avid focus in a scintigraphic image was determined from the number of lesion net counts and the count density of the tissue background. The count statistics were used to calculate the diagnostic activity required to elevate the signal from a lesion with a given uptake significantly above a homogeneous background with randomly distributed counts per area. The dependences of the minimal uptake and the minimal size of lesions visible in a scan on several parameters of influence were determined by linking the typical biokinetics observed in iodine avid tissue to the lesion mass and to the absorbed dose received in a radioiodine therapy. The detection limits for fractional uptake in a neck lesion of a typical patient are about 0.001% after therapy with 7000 MBq, 0.01% for activities typically administered in diagnostic assessments (74-185 MBq), and 0.1% after the administration of 10 MBq I-131. Lesions at the limit of detection in a diagnostic scan with biokinetics eligible for radioiodine therapy are small with diameters of a few millimeters. Increasing the diagnostic activity by a factor of 4 reduces the diameter of visible lesions by 25% or about 1 mm. Several other determinants have a comparable or higher influence on the limit of detection than the administered activity; most important are the biokinetics in both blood pool and target tissue and the time of measurement. A generally valid recommendation for the timing of the scan is impossible as the time of the highest probability to detect iodine avid tissue depends on the administered activity as well as on the biokinetics in the lesion and background in the individual patient.
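
    The dependence of the detection limit on lesion net counts and background count density can be illustrated with a generic k-sigma detectability rule: a focus is visible only when its net counts exceed a multiple of the statistical fluctuation of the background within its footprint. The sketch below uses that assumed rule and a hypothetical counts-per-activity calibration factor; it is not the exact statistical criterion or the parameter values of the paper.

        import numpy as np

        def minimal_detectable_net_counts(background_density, lesion_area_px, k=3.0):
            """Net counts needed to exceed k standard deviations of the Poisson background."""
            background_counts = background_density * lesion_area_px
            return k * np.sqrt(background_counts)

        def minimal_detectable_uptake(administered_mbq, counts_per_mbq_per_percent,
                                      background_density, lesion_area_px, k=3.0):
            """Convert the minimal net counts into a minimal fractional uptake (in percent)."""
            needed = minimal_detectable_net_counts(background_density, lesion_area_px, k)
            return needed / (administered_mbq * counts_per_mbq_per_percent)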

  12. Refinement of moisture calibration curves for nuclear gage : interim report no. 1.

    DOT National Transportation Integrated Search

    1972-01-01

    This study was initiated to determine the correct moisture calibration curves for different nuclear gages. It was found that the Troxler Model 227 had a linear response between count ratio and moisture content. Also, the two calibration curves for th...

  13. Development and evaluation of the Kids Count Farm Safety Lesson.

    PubMed

    Liller, K D; Noland, V; Rijal, P; Pesce, K; Gonzalez, R

    2002-11-01

    The Kids Count Farm Safety Lesson was delivered to nearly 2,000 fifth-grade students in 15 rural schools in Hillsborough County, Florida. The lesson covered animal, machinery, water, and general safety topics applicable to farming in Florida. A staggered pretest-posttest study design was followed whereby five schools received a multiple-choice pretest and posttest and the remainder of the schools (N = 10) received the posttest only. Results of the study showed a significant increase in the mean number of correct answers on the posttests compared to the pretests. There was no significant difference in the mean number of correct answers between students who received the pretest and those who had not, eliminating a "pretest" effect. This study fills an important gap in the literature by evaluating a farm safety curriculum offered in the elementary school setting. It also included migrant schoolchildren in the study population. It is strongly recommended that agricultural safety information be included in the health education curriculum of these elementary schools.

  14. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia.

    PubMed

    Oh, Jooyoung; Chun, Ji-Won; Lee, Jung Suk; Kim, Jae-Jin

    2014-04-16

    Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by an eye tracker. Patients exhibited a significantly lower correct rate for theme identification and fewer fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia.

  15. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors in flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be a useful figure of merit for identifying the settings that give the best image quality from a commercial or an R&D detector.
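
    The measurement underlying the PTC, and the separation of fixed pattern noise from temporal (photon) noise, can be sketched with flat-field images alone. The code below is a generic illustration of that procedure under the usual photon-transfer assumptions (pairs of flats at the same flux; the fixed pattern removed by image differencing); it is not the analytical fit function developed in the paper, and the array shapes are assumed.

        import numpy as np

        def photon_transfer_curve(flat_fields):
            """Mean signal and total noise per exposure from a (n_exposures, rows, cols) stack."""
            means = flat_fields.mean(axis=(1, 2))
            sigmas = flat_fields.std(axis=(1, 2))
            return means, sigmas

        def estimate_prnu(flat_a, flat_b):
            """PRNU from two flats at the same flux: spatial noise left after removing temporal noise."""
            mean_signal = 0.5 * (flat_a.mean() + flat_b.mean())
            temporal_var = 0.5 * (flat_a - flat_b).var()        # differencing cancels the fixed pattern
            spatial_var = 0.5 * (flat_a.var() + flat_b.var())   # single-image variance includes FPN
            fpn_var = max(spatial_var - temporal_var, 0.0)
            return np.sqrt(fpn_var) / mean_signal               # fixed-pattern noise as a fraction of signal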

  16. Combination of Heat Shock and Enhanced Thermal Regime to Control the Growth of a Persistent Legionella pneumophila Strain

    PubMed Central

    Bédard, Emilie; Boppe, Inès; Kouamé, Serge; Martin, Philippe; Pinsonneault, Linda; Valiquette, Louis; Racine, Jules; Prévost, Michèle

    2016-01-01

    Following nosocomial cases of Legionella pneumophila, the investigation of a hot water system revealed that 81.5% of sampled taps were positive for L. pneumophila, despite the presence of protective levels of copper in the water. A significant reduction of L. pneumophila counts was observed by culture after heat shock disinfection. The following corrective measures were implemented to control L. pneumophila: increasing the hot water temperature (55 to 60 °C), flushing taps weekly with hot water, removing excess lengths of piping and maintaining a water temperature of 55 °C throughout the system. A gradual reduction in L. pneumophila counts was observed using the culture method and qPCR in the 18 months after implementation of the corrective measures. However, low level contamination was retained in areas with hydraulic deficiencies, highlighting the importance of maintaining a good thermal regime at all points within the system to control the population of L. pneumophila. PMID:27092528

  17. Generalized scaling relationships on transition metals: Influence of adsorbate-coadsorbate interactions

    NASA Astrophysics Data System (ADS)

    Majumdar, Paulami; Greeley, Jeffrey

    2018-04-01

    Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
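
    The pairwise-interaction picture invoked above can be written compactly. In the sketch below the notation is assumed rather than taken from the paper: E_d(M) is the descriptor (reference adsorbate) energy on metal M, γ_bond is the zero-coverage bond-counting slope, z the number of interacting neighbours, θ the coadsorbate coverage, and ω_AB(M) the metal-dependent pairwise interaction parameter.

        \begin{align}
          E_{A}(M,\theta) &\approx E_{A}(M,0) + z\,\theta\,\omega_{AB}(M)
                           = \gamma_{\mathrm{bond}}\,E_{d}(M) + \xi_{0} + z\,\theta\,\omega_{AB}(M),\\
          \left.\frac{\partial E_{A}}{\partial E_{d}}\right|_{\theta}
                          &= \gamma_{\mathrm{bond}} + z\,\theta\,\frac{\partial \omega_{AB}}{\partial E_{d}}.
        \end{align}

    In this form the slope at fixed coadsorbate coverage is the bond-counting value plus a coverage-dependent correction, consistent with the deviations described above.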

  18. Demons versus Level-Set motion registration for coronary 18F-sodium fluoride PET.

    PubMed

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R; Fletcher, Alison; Motwani, Manish; Thomson, Louise E; Germano, Guido; Dey, Damini; Berman, Daniel S; Newby, David E; Slomka, Piotr J

    2016-02-27

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically plausible. Therefore, level-set technique will likely require additional post-processing steps. On the other hand, the observed TBR increases were the highest for the level-set technique. Further investigations of the optimal registration technique of this novel coronary PET imaging technique are warranted.

  19. Demons versus level-set motion registration for coronary 18F-sodium fluoride PET

    NASA Astrophysics Data System (ADS)

    Rubeaux, Mathieu; Joshi, Nikhil; Dweck, Marc R.; Fletcher, Alison; Motwani, Manish; Thomson, Louise E.; Germano, Guido; Dey, Damini; Berman, Daniel S.; Newby, David E.; Slomka, Piotr J.

    2016-03-01

    Ruptured coronary atherosclerotic plaques commonly cause acute myocardial infarction. It has been recently shown that active microcalcification in the coronary arteries, one of the features that characterizes vulnerable plaques at risk of rupture, can be imaged using cardiac gated 18F-sodium fluoride (18F-NaF) PET. We have shown in previous work that a motion correction technique applied to cardiac-gated 18F-NaF PET images can enhance image quality and improve uptake estimates. In this study, we further investigated the applicability of different algorithms for registration of the coronary artery PET images. In particular, we aimed to compare demons vs. level-set nonlinear registration techniques applied for the correction of cardiac motion in coronary 18F-NaF PET. To this end, fifteen patients underwent 18F-NaF PET and prospective coronary CT angiography (CCTA). PET data were reconstructed in 10 ECG gated bins; subsequently these gated bins were registered using demons and level-set methods guided by the extracted coronary arteries from CCTA, to eliminate the effect of cardiac motion on PET images. Noise levels, target-to-background ratios (TBR) and global motion were compared to assess image quality. Compared to the reference standard of using only diastolic PET image (25% of the counts from PET acquisition), cardiac motion registration using either level-set or demons techniques almost halved image noise due to the use of counts from the full PET acquisition and increased TBR difference between 18F-NaF positive and negative lesions. The demons method produces smoother deformation fields, exhibiting no singularities (which reflects how physically plausible the registration deformation is), as compared to the level-set method, which presents between 4 and 8% of singularities, depending on the coronary artery considered. In conclusion, the demons method produces smoother motion fields as compared to the level-set method, with a motion that is physiologically plausible. Therefore, level-set technique will likely require additional post-processing steps. On the other hand, the observed TBR increases were the highest for the level-set technique. Further investigations of the optimal registration technique of this novel coronary PET imaging technique are warranted.

  20. Prior image constrained image reconstruction in emerging computed tomography applications

    NASA Astrophysics Data System (ADS)

    Brunner, Stephen T.

    Advances have been made in computed tomography (CT), especially in the past five years, by incorporating prior images into the image reconstruction process. In this dissertation, we investigate prior image constrained image reconstruction in three emerging CT applications: dual-energy CT, multi-energy photon-counting CT, and cone-beam CT in image-guided radiation therapy. First, we investigate the application of Prior Image Constrained Compressed Sensing (PICCS) in dual-energy CT, which has been called "one of the hottest research areas in CT." Phantom and animal studies are conducted using a state-of-the-art 64-slice GE Discovery 750 HD CT scanner to investigate the extent to which PICCS can enable radiation dose reduction in material density and virtual monochromatic imaging. Second, we extend the application of PICCS from dual-energy CT to multi-energy photon-counting CT, which has been called "one of the 12 topics in CT to be critical in the next decade." Numerical simulations are conducted to generate multiple energy bin images for a photon-counting CT acquisition and to investigate the extent to which PICCS can enable radiation dose efficiency improvement. Third, we investigate the performance of a newly proposed prior image constrained scatter correction technique to correct scatter-induced shading artifacts in cone-beam CT, which, when used in image-guided radiation therapy procedures, can assist in patient localization, and potentially, dose verification and adaptive radiation therapy. Phantom studies are conducted using a Varian 2100 EX system with an on-board imager to investigate the extent to which the prior image constrained scatter correction technique can mitigate scatter-induced shading artifacts in cone-beam CT. Results show that these prior image constrained image reconstruction techniques can reduce radiation dose in dual-energy CT by 50% in phantom and animal studies in material density and virtual monochromatic imaging, can lead to radiation dose efficiency improvement in multi-energy photon-counting CT, and can mitigate scatter-induced shading artifacts in cone-beam CT in full-fan and half-fan modes.
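
    For reference, the PICCS objective used throughout this line of work is commonly written as a prior-image-weighted compressed-sensing problem; the notation below follows the usual literature convention and is a sketch rather than the exact formulation used in the dissertation: x is the image being reconstructed, x_p the prior image, A and y the system matrix and measured data, Ψ1 and Ψ2 sparsifying transforms, and α the prior-image weight.

        \begin{equation}
          \min_{x}\; \alpha \,\lVert \Psi_{1}\,(x - x_{p}) \rVert_{1}
                   + (1-\alpha)\,\lVert \Psi_{2}\, x \rVert_{1}
          \quad \text{subject to} \quad A x = y .
        \end{equation}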

  1. An Investigation of the Effects of Deposit Feeding Invertebrates on the Structural Properties of Clay Minerals.

    DTIC Science & Technology

    1981-07-01

    Excerpted acknowledgements and table captions from the report front matter: thanks to Dennis M. Lavoie of NORDA for chemical analysis of clay minerals with the X-ray energy dispersive spectrometer, and to Fred Bowles and Peter Fleischer; diffractogram of Nuculana acuta fecal pellet residue (illite experiment); Table 1, X-ray energy dispersive spectrometer chemical analysis for montmorillonite experiments (counts for elements after background counts removed); Table 2, X-ray energy dispersive spectrometer chemical analysis ...

  2. Lunar Cratering Chronology: Calibrating Degree of Freshness of Craters to Absolute Ages

    NASA Astrophysics Data System (ADS)

    Trang, D.; Gillis-Davis, J.; Boyce, J. M.

    2013-12-01

    The use of impact craters to age-date surfaces and/or geomorphological features on planetary bodies is a decades-old practice. Various dating techniques use different aspects of impact craters in order to determine ages. One approach is based on the degree of freshness of primary-impact craters. This method examines the degradation state of craters through visual inspection of seven criteria: polygonality, crater ray, continuous ejecta, rim crest sharpness, satellite craters, radial channels, and terraces. These criteria are used to rank craters in order of age from 0.0 (oldest) to 7.0 (youngest). However, the relative decimal scale used in this technique has not been tied to a classification of absolute ages. In this work, we calibrate the degree of freshness to absolute ages through crater counting. We link the degree of freshness to absolute ages through crater counting of fifteen craters with diameters ranging from 5-22 km and degrees of freshness from 6.3 to 2.5. We use the Terrain Camera data set on Kaguya to count craters on the continuous ejecta of each crater in our sample suite. Specifically, we divide the crater's ejecta blanket into quarters and count craters between the rim of the main crater out to one crater radius from the rim for two of the four sections. From these crater counts, we are able to estimate the absolute model age of each main crater using the Craterstats2 tool in ArcGIS. Next, we compare the degree of freshness with the crater count-derived age of our main craters to obtain a linear inverse relation that links these two metrics. So far, for craters with degree of freshness from 6.3 to 5.0, the linear regression has an R2 value of 0.7, which corresponds to a relative uncertainty of ±230 million years. At this point, this tool that links degree of freshness to absolute ages cannot be used with craters <8 km in diameter because this class of crater degrades more quickly than larger craters. A graphical solution exists for correcting the degree of freshness for craters <8 km in diameter. We convert this graphical solution to a single function of two independent variables, observed degree of freshness and crater diameter. This function, which yields a corrected degree of freshness, is found through a curve-fitting routine and corrects the degree of freshness for craters <8 km in diameter. As a result, we are able to derive absolute ages from the degree of freshness of craters with diameters from about 20 km down to 1 km, with a precision of ±230 million years.

  3. Corrections Officer Candidate Information Booklet and User's Manual. Standards and Training for Corrections Program.

    ERIC Educational Resources Information Center

    California State Board of Corrections, Sacramento.

    This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…

  4. Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity

    ERIC Educational Resources Information Center

    Simmons-Mackie, Nina; Damico, Jack S.

    2008-01-01

    Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…

  5. Application of the coincidence counting technique to DD neutron spectrometry data at the NIF, OMEGA, and Z.

    PubMed

    Lahmann, B; Milanese, L M; Han, W; Gatu Johnson, M; Séguin, F H; Frenje, J A; Petrasso, R D; Hahn, K D; Jones, B

    2016-11-01

    A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for the measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement in the DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. These results are in excellent agreement with previous work applied to DT neutrons.
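
    As a schematic illustration of the coincidence idea, the sketch below keeps only tracks that appear at nearly the same coordinates in two scans of the same CR-39 piece. The coordinates, track rates, and matching radius are invented, and this is not the analysis chain actually used at these facilities.

      # Illustrative coincidence test between two sets of track coordinates
      # (e.g., two scans of a CR-39 piece). All values are synthetic.
      import numpy as np

      def coincident_tracks(tracks_a, tracks_b, radius_um=10.0):
          """Return tracks from scan A that have a partner in scan B within radius."""
          kept = []
          for xy in tracks_a:
              d = np.hypot(tracks_b[:, 0] - xy[0], tracks_b[:, 1] - xy[1])
              if np.any(d < radius_um):
                  kept.append(xy)
          return np.array(kept)

      rng = np.random.default_rng(0)
      signal = rng.uniform(0, 1000, size=(50, 2))                    # correlated (real) tracks
      scan_a = np.vstack([signal, rng.uniform(0, 1000, (200, 2))])   # plus uncorrelated background
      scan_b = np.vstack([signal + rng.normal(0, 2, signal.shape),
                          rng.uniform(0, 1000, (200, 2))])

      print(len(coincident_tracks(scan_a, scan_b)), "coincident tracks kept")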

  6. Application of the coincidence counting technique to DD neutron spectrometry data at the NIF, OMEGA, and Z

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lahmann, B.; Milanese, L. M.; Han, W.

    A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for the measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement of a DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. In conclusion, these results are in excellent agreement with previous work applied to DT neutrons.

  7. Application of the coincidence counting technique to DD neutron spectrometry data at the NIF, OMEGA, and Z

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lahmann, B., E-mail: lahmann@mit.edu; Milanese, L. M.; Han, W.

    A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for the measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement of a DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. These results are in excellent agreement with previous work applied to DT neutrons.

  8. Application of the coincidence counting technique to DD neutron spectrometry data at the NIF, OMEGA, and Z

    DOE PAGES

    Lahmann, B.; Milanese, L. M.; Han, W.; ...

    2016-07-20

    A compact neutron spectrometer, based on a CH foil for the production of recoil protons and CR-39 detection, is being developed for the measurements of the DD-neutron spectrum at the NIF, OMEGA, and Z facilities. As a CR-39 detector will be used in the spectrometer, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). To reject the background to the required level for measurements of the down-scattered and primary DD-neutron components in the spectrum, the Coincidence Counting Technique (CCT) must be applied to the data. Using a piece of CR-39 exposed to 2.5-MeV protons at the MIT HEDP accelerator facility and DD-neutrons at Z, a significant improvement of a DD-neutron signal-to-background level has been demonstrated for the first time using the CCT. In conclusion, these results are in excellent agreement with previous work applied to DT neutrons.

  9. Evaluation of amplitude-based sorting algorithm to reduce lung tumor blurring in PET images using 4D NCAT phantom.

    PubMed

    Wang, Jiali; Byrne, James; Franquiz, Juan; McGoron, Anthony

    2007-08-01

    Develop and validate a PET sorting algorithm based on respiratory amplitude to correct for abnormal respiratory cycles. Using the 4D NCAT phantom model, 3D PET images were simulated in the lung and other structures at different times within a respiratory cycle, and noise was added. To validate the amplitude binning algorithm, the NCAT phantom was used to simulate one case with five different respiratory periods and another case with five respiratory periods along with five respiratory amplitudes. Comparisons were performed between gated and un-gated images, and between the new amplitude binning algorithm and the time binning algorithm, by calculating the mean number of counts in the ROI (region of interest). An average improvement of 8.87 ± 5.10% was reported for a total of 16 tumors with different tumor sizes and different T/B (tumor-to-background) ratios using the new sorting algorithm. As both the T/B ratio and tumor size decrease, image degradation due to respiration increases. The greater benefit for smaller-diameter tumors and lower T/B ratios indicates a potential improvement in detecting more problematic tumors.
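
    To make the amplitude-binning idea concrete, the sketch below assigns samples of a synthetic respiratory trace to amplitude bins rather than phase bins. The trace, bin count, and equal-count bin edges are assumptions for illustration, not the algorithm validated in the study.

      # Sketch of amplitude-based respiratory binning (as opposed to time/phase binning).
      # The respiratory trace is synthetic; in practice it comes from a respiratory monitor.
      import numpy as np

      t = np.linspace(0, 60, 6000)      # 60 s trace sampled at 100 Hz
      amplitude = np.sin(2 * np.pi * t / 4.5) + 0.1 * np.random.default_rng(1).normal(size=t.size)

      n_bins = 5
      edges = np.quantile(amplitude, np.linspace(0, 1, n_bins + 1))   # equal-count amplitude bins
      bin_index = np.clip(np.digitize(amplitude, edges) - 1, 0, n_bins - 1)

      # Events acquired at time t[i] would be routed to the sinogram for bin_index[i].
      for b in range(n_bins):
          print(f"bin {b}: {np.sum(bin_index == b)} samples")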

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwood, L. R.; Cantaloub, M. G.; Burnett, J. L.

    PNNL has developed two low-background gamma-ray spectrometers in a new shallow underground laboratory, thereby significantly improving its ability to detect low levels of gamma-ray-emitting fission or activation products in airborne particulate samples from the IMS (International Monitoring System). Furthermore, the combination of cosmic veto panels, dry nitrogen gas to reduce radon, and low-background shielding results in a reduction of the background count rate by about a factor of 100 compared to detectors operating above ground at our laboratory.

  11. Probing cluster potentials through gravitational lensing of background X-ray sources

    NASA Technical Reports Server (NTRS)

    Refregier, A.; Loeb, A.

    1996-01-01

    The gravitational lensing effect of a foreground galaxy cluster, on the number count statistics of background X-ray sources, was examined. The lensing produces a deficit in the number of resolved sources in a ring close to the critical radius of the cluster. The cluster lens can be used as a natural telescope to study the faint end of the (log N)-(log S) relation for the sources which account for the X-ray background.

  12. Modeling OPC complexity for design for manufacturability

    NASA Astrophysics Data System (ADS)

    Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong

    2005-11-01

    Increasing design complexity in sub-90 nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and on the performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
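
    As a generic illustration of fitting a fracture-count model to OPC tolerance and a layout parameter, the least-squares sketch below uses placeholder data and one plausible functional form; the actual MCC models and their covariates are not specified here.

      # Generic least-squares sketch of a fracture-count (FC) model as a function of
      # OPC tolerance and a layout parameter. Data points and model form are placeholders.
      import numpy as np

      tolerance_nm = np.array([2, 2, 4, 4, 6, 6, 8, 8], dtype=float)
      edge_density = np.array([0.8, 1.2, 0.8, 1.2, 0.8, 1.2, 0.8, 1.2])
      fracture_cnt = np.array([950, 1400, 700, 1050, 520, 800, 400, 640], dtype=float)

      # FC ≈ a + b/tolerance + c*edge_density  (one plausible functional form)
      X = np.column_stack([np.ones_like(tolerance_nm), 1.0 / tolerance_nm, edge_density])
      coef, *_ = np.linalg.lstsq(X, fracture_cnt, rcond=None)
      print("model coefficients (a, b, c):", np.round(coef, 1))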

  13. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA, in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how the various components of our method contribute to improving the final estimate and why each is necessary.
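
    The sketch below shows only the double-observer ingredient of such a correction: estimating each observer's detection probability from the overlap of their sightings and scaling up the raw count. The numbers are invented, and the published hybrid model additionally uses sightability covariates and radio-collar mark-resight.

      # Simplified two-observer correction for undercount bias (not the authors' hybrid model).
      n1_only = 18   # groups seen by observer 1 only
      n2_only = 12   # groups seen by observer 2 only
      both    = 45   # groups seen by both observers

      p1 = both / (both + n2_only)            # detection probability of observer 1
      p2 = both / (both + n1_only)            # detection probability of observer 2
      p_either = 1 - (1 - p1) * (1 - p2)      # probability at least one observer detects a group

      seen_total = n1_only + n2_only + both
      estimated_groups = seen_total / p_either
      print(f"p1={p1:.2f}, p2={p2:.2f}, corrected group count ≈ {estimated_groups:.0f}")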

  14. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed in order to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
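
    For orientation, the sketch below computes the first few reduced factorial moments (singles, doubles, triples, quads) from a multiplicity histogram. The histogram values are invented and no dead-time correction is applied, so this only illustrates where the higher-order moments come from, not the report's algorithms.

      # Reduced factorial moments from a neutron multiplicity histogram P(n).
      # The histogram is illustrative; no dead-time correction is shown.
      import numpy as np
      from math import comb

      p_n = np.array([0.52, 0.27, 0.13, 0.055, 0.018, 0.006, 0.001])  # P(0), P(1), ...
      p_n = p_n / p_n.sum()

      def factorial_moment(p, order):
          """Sum_n C(n, order) * P(n): singles for order=1, doubles for 2, and so on."""
          return sum(comb(n, order) * pn for n, pn in enumerate(p))

      for order, name in enumerate(["singles", "doubles", "triples", "quads"], start=1):
          print(f"{name}: {factorial_moment(p_n, order):.4f}")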

  15. Investigation of image distortion due to MCP electronic readout misalignment and correction via customized GUI application

    NASA Astrophysics Data System (ADS)

    Vitucci, G.; Minniti, T.; Tremsin, A. S.; Kockelmann, W.; Gorini, G.

    2018-04-01

    The MCP-based neutron counting detector is a novel device that allows high spatial resolution and time-resolved neutron radiography and tomography with epithermal, thermal and cold neutrons. Time resolution is made possible by high readout speeds of ~1200 frames/s, allowing high-resolution event counting at relatively high rates without spatial resolution degradation due to event overlaps. The electronic readout is based on a Timepix sensor, a CMOS pixel readout chip developed at CERN. Currently, a quad Timepix detector geometry is used, with an active format of 28 × 28 mm² limited by the size of the Timepix quad (2 × 2 chips) readout. Measurements of a set of high-precision micrometer test samples have been performed at the Imaging and Materials Science & Engineering (IMAT) beamline operating at the ISIS spallation neutron source (U.K.). The aim of these experiments was the full characterization of the chip misalignment and of the gaps between each pad in the quad Timepix sensor. Such misalignment causes distortions of the recorded shape of the analyzed sample. We present in this work a post-processing image procedure that accounts for and corrects these effects. Results of the correction will be discussed and the efficacy of this method evaluated.

  16. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well-established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improved optical penetration of infrared light compared with linear excitation, owing to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a spatial light modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics, implemented by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024 pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.

  17. Accounting for imperfect detection is critical for inferring marine turtle nesting population trends.

    PubMed

    Pfaller, Joseph B; Bjorndal, Karen A; Chaloupka, Milani; Williams, Kristina L; Frick, Michael G; Bolten, Alan B

    2013-01-01

    Assessments of population trends based on time-series counts of individuals are complicated by imperfect detection, which can lead to serious misinterpretations of data. Population trends of threatened marine turtles worldwide are usually based on counts of nests or nesting females. We analyze 39 years of nest-count, female-count, and capture-mark-recapture (CMR) data for nesting loggerhead turtles (Caretta caretta) on Wassaw Island, Georgia, USA. Annual counts of nests and females, not corrected for imperfect detection, yield significant, positive trends in abundance. However, multistate open robust design modeling of CMR data that accounts for changes in imperfect detection reveals that the annual abundance of nesting females has remained essentially constant over the 39-year period. The dichotomy could result from improvements in surveys or increased within-season nest-site fidelity in females, either of which would increase detection probability. For the first time in a marine turtle population, we compare results of population trend analyses that do and do not account for imperfect detection and demonstrate the potential for erroneous conclusions. Past assessments of marine turtle population trends based exclusively on count data should be interpreted with caution and re-evaluated when possible. These concerns apply equally to population assessments of all species with imperfect detection.
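
    The toy sketch below illustrates the core point: a constant population appears to grow if detection probability improves over time, and dividing the raw counts by an estimated detection probability removes the spurious trend. All numbers are invented; the study itself uses multistate open robust design CMR models.

      # Toy illustration of imperfect detection masquerading as a trend.
      import numpy as np

      true_females = np.full(10, 400)                     # constant true abundance
      p_detect = np.linspace(0.55, 0.85, 10)              # detection improves over the years
      observed = np.round(true_females * p_detect)        # expected raw counts

      corrected = observed / p_detect                     # correction given an estimated p
      print("raw counts:      ", observed.astype(int))
      print("corrected counts:", corrected.round().astype(int))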

  18. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
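
    For readers unfamiliar with the class of algorithm under discussion, the sketch below runs a minimal MLEM update on a toy 1-D system with Poisson data, which is where the non-negativity constraint and low-count bias concerns originate. It is a generic illustration, not the MOLAR implementation, and the system matrix is random.

      # Minimal MLEM update on a toy system (not MOLAR).
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.uniform(0.0, 1.0, size=(40, 20))     # toy system matrix (detectors x voxels)
      x_true = rng.uniform(0.5, 2.0, size=20)
      y = rng.poisson(A @ x_true)                  # low-count Poisson data

      x = np.ones(20)
      sens = A.sum(axis=0)                          # sensitivity image
      for _ in range(50):                           # analogous to "iterations x subsets"
          ratio = y / np.clip(A @ x, 1e-12, None)
          x *= (A.T @ ratio) / sens

      print("mean relative error:", np.mean((x - x_true) / x_true).round(3))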

  19. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.

  20. The use of flow cytometry to accurately ascertain total and viable counts of Lactobacillus rhamnosus in chocolate.

    PubMed

    Raymond, Yves; Champagne, Claude P

    2015-04-01

    The goals of this study were to evaluate the precision and accuracy of flow cytometry (FC) methodologies in the evaluation of populations of probiotic bacteria (Lactobacillus rhamnosus R0011) in two commercial dried forms, and ascertain the challenges in enumerating them in a chocolate matrix. FC analyses of total (FCT) and viable (FCV) counts in liquid or dried cultures were almost two times more precise (reproducible) than traditional direct microscopic counts (DMC) or colony forming units (CFU). With FC, it was possible to ascertain low levels of dead cells (FCD) in fresh cultures, which is not possible with traditional CFU and DMC methodologies. There was no interference of chocolate solids on FC counts of probiotics when inoculation was above 10^7 bacteria per g. Addition of probiotics in chocolate at 40 °C resulted in a 37% loss in viable cells. Blending of the probiotic powder into chocolate was not uniform, which raised a concern that the precision of viable counts could suffer. FCT data can serve to identify the correct inoculation level of a sample, and viable counts (FCV or CFU) can subsequently be better interpreted. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  1. Error correction and statistical analyses for intra-host comparisons of feline immunodeficiency virus diversity from high-throughput sequencing data.

    PubMed

    Liu, Yang; Chiaromonte, Francesca; Ross, Howard; Malhotra, Raunaq; Elleder, Daniel; Poss, Mary

    2015-06-30

    Infection with feline immunodeficiency virus (FIV) causes an immunosuppressive disease whose consequences are less severe if cats are co-infected with an attenuated FIV strain (PLV). We use virus diversity measurements, which reflect replication ability and the virus response to various conditions, to test whether diversity of virulent FIV in lymphoid tissues is altered in the presence of PLV. Our data consisted of the 3' half of the FIV genome from three tissues of animals infected with FIV alone, or with FIV and PLV, sequenced by 454 technology. Since rare variants dominate virus populations, we had to carefully distinguish sequence variation from errors due to experimental protocols and sequencing. We considered an exponential-normal convolution model used for background correction of microarray data, and modified it to formulate an error correction approach for minor allele frequencies derived from high-throughput sequencing. Similar to accounting for over-dispersion in counts, this accounts for error-inflated variability in frequencies - and quite effectively reproduces empirically observed distributions. After obtaining error-corrected minor allele frequencies, we applied ANalysis Of VAriance (ANOVA) based on a linear mixed model and found that conserved sites and transition frequencies in FIV genes differ among tissues of dual- and single-infected cats. Furthermore, analysis of minor allele frequencies at individual FIV genome sites revealed 242 sites significantly affected by infection status (dual vs. single) or infection status by tissue interaction. Altogether, our results demonstrated a decrease in FIV diversity in bone marrow in the presence of PLV. Importantly, these effects were weakened or undetectable when error correction was performed with other approaches (thresholding of minor allele frequencies; probabilistic clustering of reads). We also queried the data for cytidine deaminase activity on the viral genome, which causes an asymmetric increase in G to A substitutions, but found no evidence for this host defense strategy. Our error correction approach for minor allele frequencies (more sensitive and computationally efficient than other algorithms) and our statistical treatment of variation (ANOVA) were critical for effective use of high-throughput sequencing data in understanding viral diversity. We found that co-infection with PLV shifts FIV diversity from bone marrow to lymph node and spleen.
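
    For reference, the sketch below applies the standard conditional-expectation formula of the exponential-normal ("normexp"-style) convolution model used for microarray background correction to a vector of observed frequencies. The parameter values are assumed rather than fitted, and the authors' modified version of the model may differ in detail.

      # Normexp-style correction: E[signal | observed] when observed = signal + background,
      # background ~ N(mu, sigma^2) and signal ~ Exponential with mean alpha.
      # Parameter values here are assumptions; in practice they are estimated from the data.
      import numpy as np
      from scipy.stats import norm

      def normexp_correct(x, mu, sigma, alpha):
          mu_sf = x - mu - sigma**2 / alpha
          return mu_sf + sigma**2 * norm.pdf(0, loc=mu_sf, scale=sigma) / norm.sf(0, loc=mu_sf, scale=sigma)

      observed = np.array([0.002, 0.005, 0.01, 0.03, 0.08])   # e.g. raw minor allele frequencies
      corrected = normexp_correct(observed, mu=0.003, sigma=0.002, alpha=0.02)
      print(corrected.round(4))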

  2. Rain-induced increase in background radiation detected by Radiation Portal Monitors.

    PubMed

    Livesay, R J; Blessinger, C S; Guzzardo, T F; Hausladen, P A

    2014-11-01

    A complete understanding of both the steady state and transient background measured by Radiation Portal Monitors (RPMs) is essential to predictable system performance, as well as maximization of detection sensitivity. To facilitate this understanding, a test bed for the study of natural background in RPMs has been established at the Oak Ridge National Laboratory. This work was performed in support of the Second Line of Defense Program's mission to enhance partner country capability to deter, detect, and interdict the illicit movement of special nuclear material. In the present work, transient increases in gamma-ray counting rates in RPMs due to rain are investigated. The increase in background activity associated with rain, which has been well documented in the field of environmental radioactivity, originates primarily from the wet-deposition of two radioactive daughters of (222)Rn, namely, (214)Pb and (214)Bi. In this study, rainfall rates recorded by a co-located weather station are compared with RPM count rates and high-purity germanium spectra. The data verify that these radionuclides are responsible for the largest environmental background fluctuations in RPMs. Analytical expressions for the detector response function in Poly-Vinyl Toluene have been derived. Effects on system performance and potential mitigation strategies are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Psychosocial outcomes in a cohort of perinatally HIV-infected adolescents in Western Jamaica.

    PubMed

    Evans-Gilbert, Tracy; Kasimbie, Kazie; Reid, Gail; Williams, Shelly Ann

    2018-02-05

    Background Psychosocial factors interact with adolescent development and affect the ability of HIV-infected adolescents to cope with and adhere to treatment. Aim To evaluate psychosocial outcomes in perinatally HIV-infected adolescents (PHIVAs) in Western Jamaica after psychosocial intervention. Methods The Bright Futures Paediatric Symptom Checklist (BF-PSC) was used for psychological screening of PHIVAs in Western Jamaica. Referred patients were evaluated using the Youth version of the Columbia Impairment Scale (CIS). Demographic, laboratory and clinical data obtained between July 2014 and June 2016 were evaluated retrospectively and outcomes were reviewed before and after psychosocial intervention. Results Sixty PHIVAs were enrolled and 36 (60%) had a positive BF-PSC score that necessitated referral. The BF-PSC correctly identified 89% of patients with impaired psychosocial assessment by CIS scores. Referred patients were less likely to adhere to treatment, to be virologically suppressed or to have a CD4+ count of >500 cells/μl, and were more likely to be in the late teenage group or to be of orphan status. After intervention, the prevalence of viral suppression increased and median viral load decreased. A difference in mean CD4+ cell count was detected before but not after intervention in teenage and orphan groups. Conclusions The BF-PSC identified at-risk PHIVAs with impaired psychosocial functioning. Increased vulnerability was noted in orphans and older teenagers. Psychosocial interventions (including family therapy) reduced psychosocial impairment and improved virological suppression. Mental health intervention should be instituted to facilitate improved clinical outcomes, autonomy of care and transition to adult care.

  4. 133Xe contamination found in internal bacteria filter of xenon ventilation system.

    PubMed

    Hackett, Michael T; Collins, Judith A; Wierzbinski, Rebecca S

    2003-09-01

    We report on (133)Xe contamination found in the reusable internal bacteria filter of our xenon ventilation system. Internal bacteria filters (n = 6) were evaluated after approximately 1 mo of normal use. The ventilation system was evacuated twice to eliminate (133)Xe in the system before removal of the filter. Upon removal, the filter was monitored using a survey meter with an energy-compensated probe and was imaged on a scintillation camera. The filter was monitored and imaged over several days and was stored in a fume hood. Estimated (133)Xe activity in each filter immediately after removal ranged from 132 to 2,035 kBq (3.6-55.0 μCi), based on imaging. Initial surface radiation levels ranged from 0.4 to 4.5 μSv/h (0.04-0.45 mrem/h). The (133)Xe activity did not readily leave the filter over time (i.e., time to reach half the counts of the initial decay-corrected image ranged from <6 to >72 h). The majority of the image counts (approximately 70%) were seen in 2 distinctive areas in the filter. They corresponded to sites where the manufacturer used polyurethane adhesive to attach the fiberglass filter medium to the filter housing. (133)Xe contamination within the reusable internal bacteria filter of our ventilation system was easily detected by a survey meter and imaging. Although initial activities and surface radiation levels were low, radiation safety practices would dictate that a (133)Xe-contaminated bacteria filter be stored preferably in a fume hood until it cannot be distinguished from background before autoclaving or disposal.

  5. Parallel Low-Loss Measurement of Multiple Atomic Qubits

    NASA Astrophysics Data System (ADS)

    Kwon, Minho; Ebert, Matthew F.; Walker, Thad G.; Saffman, M.

    2017-11-01

    We demonstrate low-loss measurement of the hyperfine ground state of rubidium atoms by state dependent fluorescence detection in a dipole trap array of five sites. The presence of atoms and their internal states are minimally altered by utilizing circularly polarized probe light and a strictly controlled quantization axis. We achieve mean state detection fidelity of 97% without correcting for imperfect state preparation or background losses, and 98.7% when corrected. After state detection and correction for background losses, the probability of atom loss due to the state measurement is <2 % and the initial hyperfine state is preserved with >98 % probability.

  6. Clustering method for counting passengers getting in a bus with single camera

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling- and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature-point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment were captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves accuracy rates of up to 96.5%.
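
    The sketch below illustrates the projection-and-clustering idea: each tracked feature point is reduced to its (appearance time, disappearance time) pair and the pairs are clustered, with one cluster per passenger. DBSCAN stands in for the paper's online clustering, and the trajectories are synthetic.

      # Group feature-point trajectories by appearance/disappearance times.
      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(2)
      # Three passengers, ~15 tracked feature points each; times in frames.
      appear    = np.concatenate([rng.normal(m, 2, 15) for m in (100, 180, 260)])
      disappear = appear + rng.normal(40, 3, appear.size)

      features = np.column_stack([appear, disappear])
      labels = DBSCAN(eps=10, min_samples=5).fit_predict(features)

      n_people = len(set(labels) - {-1})   # -1 marks noise points
      print("estimated passenger count:", n_people)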

  7. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.

    PubMed

    Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran

    2016-05-01

    In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.
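
    As a generic illustration of component-based adjustment for cell-type heterogeneity, the sketch below removes the top principal components from a synthetic methylation matrix before downstream testing. This is plain PCA, not the ReFACTor algorithm itself, which uses a sparse, feature-selecting variant.

      # Generic PCA adjustment: remove top components from a samples x sites matrix.
      import numpy as np

      rng = np.random.default_rng(3)
      meth = rng.uniform(0, 1, size=(200, 5000))        # samples x CpG sites (synthetic)

      centered = meth - meth.mean(axis=0)
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)
      k = 5                                             # number of components to remove
      adjusted = centered - (U[:, :k] * s[:k]) @ Vt[:k, :]

      print("variance before:", centered.var(), "after:", adjusted.var())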

  8. Intacs for early pellucid marginal degeneration.

    PubMed

    Kymionis, George D; Aslanides, Ioannis M; Siganos, Charalambos S; Pallikaris, Ioannis G

    2004-01-01

    A 42-year-old man had Intacs (Addition Technology Inc.) implantation for early pellucid marginal degeneration (PMD). Two Intacs segments (0.45 mm thickness) were inserted uneventfully in the fashion typically used for low myopia correction (nasal-temporal). Eleven months after the procedure, the uncorrected visual acuity was 20/200, compared with counting fingers preoperatively, while the best spectacle-corrected visual acuity improved to 20/25 from 20/50. Corneal topographic pattern also improved. Although the results are encouraging, concern still exists regarding the long-term effect of this approach for the management of patients with PMD.

  9. Determination of the volume activity concentration of alpha artificial radionuclides with alpha spectrometer.

    PubMed

    Liu, B; Zhang, Q; Li, Y

    1997-12-01

    This paper introduces a method to determine the volume activity concentration of alpha and/or beta artificial radionuclides in the environment, with radon/thoron progeny background compensation, based on a Si surface-barrier detector. By measuring the alpha peak counts of 218Po and 214Po in two time intervals, the activity concentrations of 218Po, 214Pb and 214Bi aerosol particles were determined; meanwhile, the total beta count of 214Pb and 214Bi aerosols was also calculated from their decay scheme. With the average equilibrium factor of thoron progeny in the general environment, the alpha and beta counts of thoron progeny were approximately evaluated from the 212Po alpha peak counts. The alpha count of transuranic aerosols was determined by subtracting the tail counts of the radon/thoron progeny alpha peaks. The total count of beta artificial radionuclides was determined by subtracting the beta counts of radon/thoron progeny aerosol particles. In our preliminary experiments, if the radon progeny concentration is less than 15 Bq m^-3, the lower limit of detection of the transuranics concentration is less than 0.1 Bq m^-3. Even if the radon progeny concentration is as high as 75 Bq m^-3, the lower limit of detection of the total beta activity concentration of artificial nuclide aerosols is less than 1 Bq m^-3.

  10. Impact of double counting and transfer bias on estimated rates and outcomes of acute myocardial infarction.

    PubMed

    Westfall, J M; McGloin, J

    2001-05-01

    Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. The objective was to describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. The study analyzed state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease, and the algorithm was validated. Patients were those reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5); the main outcome measure was the number of patients counted twice for one episode of acute MI. It is estimated that double-count rates ranged from 10% to 15% across all states and increased over the 3 years. Moderate-sized rural counties had the highest estimated double-count rates, at 15% to 20%, with a few counties having estimated double-count rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P <0.05). Double counting patients has resulted in a significant overestimation in the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double counting and transfer bias should be considered when conducting and interpreting research on ischemic heart disease, particularly in rural regions.
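
    The toy sketch below flags probable transfer pairs by matching records on a few patient attributes and requiring the second admission to begin within a day of the first discharge. The field names and matching rules are hypothetical and do not reflect the actual SID layout or the authors' validated algorithm.

      # Toy transfer/double-count flagging on discharge records (hypothetical schema).
      import pandas as pd

      df = pd.DataFrame({
          "dob":       ["1941-03-02", "1941-03-02", "1950-07-19"],
          "sex":       ["M", "M", "F"],
          "zip":       ["67501", "67501", "80203"],
          "admit":     pd.to_datetime(["2016-01-10", "2016-01-11", "2016-02-01"]),
          "discharge": pd.to_datetime(["2016-01-11", "2016-01-20", "2016-02-05"]),
      })

      pairs = df.merge(df, on=["dob", "sex", "zip"], suffixes=("_a", "_b"))
      pairs = pairs[(pairs["admit_b"] > pairs["admit_a"]) &
                    (pairs["admit_b"] - pairs["discharge_a"] <= pd.Timedelta("1D"))]
      print(len(pairs), "probable transfer pair(s) found")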

  11. --No Title--

    Science.gov Websites

    Background information: bias reduction = ( |domain-averaged ensemble mean bias| − |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged bias-corrected ensemble mean bias|

  12. Neonatal nucleated red blood cells in G6PD deficiency.

    PubMed

    Yeruchimovich, Mark; Shapira, Boris; Mimouni, Francis B; Dollberg, Shaul

    2002-05-01

    The objective of this study was to determine the absolute number of nucleated red blood cells (RBCs) at birth, an index of active fetal erythropoiesis, in infants with G6PD deficiency and in controls. We tested the hypothesis that hematocrit and hemoglobin would be lower, and absolute nucleated RBC counts higher, in the G6PD-deficient infants, and that these changes would be more prominent in infants exposed passively to fava beans through the maternal diet. Thirty-two term infants with G6PD deficiency were compared with 30 term controls. Complete blood counts with manual differential counts were obtained within 12 hours of life. Absolute nucleated RBC and corrected leukocyte counts were computed from the Coulter results and the differential count. G6PD-deficient patients did not differ from controls in terms of gestational age, birth weight, or Apgar scores, or in any of the hematologic parameters studied, whether or not the mother reported fava bean consumption in the days prior to delivery. Although intrauterine hemolysis is possible in G6PD-deficient fetuses exposed passively to fava beans, our study suggests that such events must be very rare.

  13. Modeling zero-modified count and semicontinuous data in health services research part 2: case studies.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    This article is the second installment of a two-part tutorial on the analysis of zero-modified count and semicontinuous data. Part 1, which appears as a companion piece in this issue of Statistics in Medicine, provides a general background and overview of the topic, with particular emphasis on applications to health services research. Here, we present three case studies highlighting various approaches for the analysis of zero-modified data. The first case study describes methods for analyzing zero-inflated longitudinal count data. Case study 2 considers the use of hurdle models for the analysis of spatiotemporal count data. The third case study discusses an application of marginalized two-part models to the analysis of semicontinuous health expenditure data. Copyright © 2016 John Wiley & Sons, Ltd.
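
    As a minimal, self-contained example of the model family discussed here, the sketch below fits a zero-inflated Poisson to simulated counts with statsmodels. The data and covariate are invented, and the tutorial's case studies involve richer longitudinal, spatiotemporal, and two-part extensions.

      # Minimal zero-inflated Poisson fit on simulated counts.
      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(4)
      n = 2000
      x = rng.normal(size=n)
      lam = np.exp(0.3 + 0.5 * x)                     # Poisson mean
      structural_zero = rng.random(n) < 0.30          # 30% excess zeros
      y = np.where(structural_zero, 0, rng.poisson(lam))

      exog = sm.add_constant(x)
      model = ZeroInflatedPoisson(y, exog, exog_infl=np.ones((n, 1)), inflation="logit")
      result = model.fit(disp=False)
      print(result.params)                            # inflation intercept + count-model coefficients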

  14. Image Reconstruction for a Partially Collimated Whole Body PET Scanner

    PubMed Central

    Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.

    2008-01-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731

  15. Image Reconstruction for a Partially Collimated Whole Body PET Scanner.

    PubMed

    Alessio, Adam M; Schmitz, Ruth E; Macdonald, Lawrence R; Wollenweber, Scott D; Stearns, Charles W; Ross, Steven G; Ganin, Alex; Lewellen, Thomas K; Kinahan, Paul E

    2008-06-01

    Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary.

  16. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PICsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  17. Attenuation correction strategies for multi-energy photon emitters using SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-12-31

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to those of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows separately.
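
    The sketch below shows only the arithmetic of strategy 3, the weighted mean of the two attenuation maps. The map values and weights are placeholders; the study does not specify them here, and in practice the weights would reflect the relative contribution of each photopeak.

      # Weighted-mean combination of two energy-specific attenuation maps (strategy 3).
      import numpy as np

      mu_93  = np.full((64, 64), 0.176)    # cm^-1, placeholder value near 93 keV
      mu_185 = np.full((64, 64), 0.146)    # cm^-1, placeholder value near 185 keV

      w_93, w_185 = 0.6, 0.4               # assumed relative photopeak weights
      mu_combined = w_93 * mu_93 + w_185 * mu_185
      print("combined map value:", mu_combined[0, 0])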

  18. New semi-quantitative 123I-MIBG estimation method compared with scoring system in follow-up of advanced neuroblastoma: utility of total MIBG retention ratio versus scoring method.

    PubMed

    Sano, Yuko; Okuyama, Chio; Iehara, Tomoko; Matsushima, Shigenori; Yamada, Kei; Hosoi, Hajime; Nishimura, Tsunehiko

    2012-07-01

    The purpose of this study was to evaluate a new semi-quantitative estimation method using the (123)I-MIBG retention ratio to assess response to chemotherapy for advanced neuroblastoma. Thirteen children with advanced neuroblastoma (International Neuroblastoma Risk Group Staging System: stage M) were examined for a total of 51 studies with (123)I-MIBG scintigraphy (before and during chemotherapy). We proposed a new semi-quantitative method using the MIBG retention ratio (count obtained with the delayed image/count obtained with the early image, with decay correction) to estimate MIBG accumulation. We analyzed the total (123)I-MIBG retention ratio (TMRR: total body count obtained with the delayed image/total body count obtained with the early image, with decay correction) and compared it with a scoring method in terms of correlation with tumor markers. TMRR showed significantly higher correlations with urinary catecholamine metabolites before chemotherapy (VMA: r² = 0.45, P < 0.05, HVA: r² = 0.627, P < 0.01) than the MIBG score (VMA: r² = 0.19, P = 0.082, HVA: r² = 0.25, P = 0.137). There were relatively good correlations between serial changes of TMRR and those of urinary catecholamine metabolites (VMA: r² = 0.274, P < 0.001, HVA: r² = 0.448, P < 0.0001) compared with serial changes of the MIBG score and those of tumor markers (VMA: r² = 0.01, P = 0.537, HVA: r² = 0.084, P = 0.697) during chemotherapy for advanced neuroblastoma. TMRR could be a useful semi-quantitative method for estimating early response to chemotherapy of advanced neuroblastoma because of its high correlation with urinary catecholamine metabolites.
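
    The sketch below shows the TMRR arithmetic: the delayed total-body count is decay-corrected back to the early time point and divided by the early count. The count values and acquisition times are invented; the 123I half-life of about 13.2 h is the only physical constant assumed.

      # Total MIBG retention ratio (TMRR) with decay correction of the delayed image.
      import math

      early_counts   = 1.85e6     # total-body counts, early image (invented)
      delayed_counts = 0.62e6     # total-body counts, delayed image (invented)
      delta_t_h      = 20.0       # hours between early and delayed images (invented)
      half_life_h    = 13.2       # approximate 123I half-life

      decay_corrected_delayed = delayed_counts * 2 ** (delta_t_h / half_life_h)
      tmrr = decay_corrected_delayed / early_counts
      print(f"TMRR = {tmrr:.2f}")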

  19. Evaluation of a standardized procedure for [corrected] microscopic cell counts [corrected] in body fluids.

    PubMed

    Emerson, Jane F; Emerson, Scott S

    2005-01-01

    A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.
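
    The comparison of replicate variability against the Poisson prediction can be made with a simple index-of-dispersion test, sketched below on invented replicate counts. This is a generic statistical check, not the specific analysis used in the study.

      # Index-of-dispersion test: are replicate counts more variable than Poisson predicts?
      import numpy as np
      from scipy.stats import chi2

      counts = np.array([52, 47, 61, 55, 49, 58, 44, 60])   # replicate counts from one pool (invented)
      n = counts.size
      dispersion = (n - 1) * counts.var(ddof=1) / counts.mean()   # ~chi2(n-1) under Poisson
      p_value = chi2.sf(dispersion, df=n - 1)
      print(f"dispersion statistic = {dispersion:.2f}, p = {p_value:.3f}")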

  20. The prediction of progression-free and overall survival in women with an advanced stage of epithelial ovarian carcinoma.

    PubMed

    Gerestein, C G; Eijkemans, M J C; de Jong, D; van der Burg, M E L; Dykgraaf, R H M; Kooi, G S; Baalbergen, A; Burger, C W; Ansink, A C

    2009-02-01

    Prognosis in women with ovarian cancer mainly depends on International Federation of Gynecology and Obstetrics stage and the ability to perform optimal cytoreductive surgery. Since ovarian cancer has a heterogeneous presentation and clinical course, predicting progression-free survival (PFS) and overall survival (OS) in the individual patient is difficult. The objective of this study was to determine predictors of PFS and OS in women with advanced stage epithelial ovarian cancer (EOC) after primary cytoreductive surgery and first-line platinum-based chemotherapy. Retrospective observational study. Two teaching hospitals and one university hospital in the south-western part of the Netherlands. Women with advanced stage EOC. All women who underwent primary cytoreductive surgery for advanced stage EOC followed by first-line platinum-based chemotherapy between January 1998 and October 2004 were identified. To investigate independent predictors of PFS and OS, a Cox proportional hazards model was used. Nomograms were generated with the identified predictive parameters. The primary outcome measure was OS and the secondary outcome measures were response and PFS. A total of 118 women entered the study protocol. Median PFS and OS were 15 and 44 months, respectively. Preoperative platelet count (P = 0.007) and residual disease <1 cm (P = 0.004) predicted PFS with an optimism-corrected c-statistic of 0.63. Predictive parameters for OS were preoperative haemoglobin serum concentration (P = 0.012), preoperative platelet counts (P = 0.031) and residual disease <1 cm (P = 0.028), with an optimism-corrected c-statistic of 0.67. PFS could be predicted by postoperative residual disease and preoperative platelet counts, whereas residual disease, preoperative platelet counts and preoperative haemoglobin serum concentration were predictive for OS. The proposed nomograms need to be externally validated.

  1. Single-molecule fluorescence detection: autocorrelation criterion and experimental realization with phycoerythrin.

    PubMed Central

    Peck, K; Stryer, L; Glazer, A N; Mathies, R A

    1989-01-01

    A theory for single-molecule fluorescence detection is developed and then used to analyze data from subpicomolar solutions of B-phycoerythrin (PE). The distribution of detected counts is the convolution of a Poissonian continuous background with bursts arising from the passage of individual fluorophores through the focused laser beam. The autocorrelation function reveals single-molecule events and provides a criterion for optimizing experimental parameters. The transit time of fluorescent molecules through the 120-fl imaged volume was 800 microseconds. The optimal laser power (32 mW at 514.5 nm) gave an incident intensity of 1.8 × 10^23 photons cm^-2 s^-1, corresponding to a mean time of 1.1 ns between absorptions. The mean incremental count rate was 1.5 per 100 microseconds for PE monomers and 3.0 for PE dimers above a background count rate of 1.0. The distribution of counts and the autocorrelation function for 200 fM monomer and 100 fM dimer demonstrate that single-molecule detection was achieved. At this concentration, the mean occupancy was 0.014 monomer molecules in the probed volume. A hard-wired version of this detection system was used to measure the concentration of PE down to 1 fM. This single-molecule counter is 3 orders of magnitude more sensitive than conventional fluorescence detection systems. PMID:2726766
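
    The sketch below illustrates the autocorrelation criterion on a synthetic count trace: a Poisson background with occasional multi-bin bursts produces an autocorrelation that decays over roughly the transit time. All rates, bin widths, and burst parameters are invented and only loosely inspired by the numbers quoted above.

      # Autocorrelation of a binned photon-count trace: Poisson background plus bursts.
      import numpy as np

      rng = np.random.default_rng(5)
      n_bins = 20000                          # e.g. 100 us bins
      background = rng.poisson(1.0, n_bins)   # mean background count per bin
      bursts = np.zeros(n_bins)
      for start in rng.choice(n_bins - 8, size=30, replace=False):
          bursts[start:start + 8] += rng.poisson(1.5, 8)   # ~0.8 ms transits
      trace = background + bursts

      d = trace - trace.mean()
      acf = np.correlate(d, d, mode="full")[n_bins - 1:] / (d.var() * n_bins)
      print("autocorrelation at lags 0-10:", np.round(acf[:11], 3))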

  2. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    PubMed

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution can be compared with the film-measured dose distribution using a dose constraint of 4% (relative to the measured dose) for doses between 1 and 3 Gy. At lower doses, the dose constraint must be relaxed.
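
    The sketch below illustrates the correction-matrix idea in miniature: nonuniformity maps measured at several dose levels are linearly interpolated to a film's approximate dose and subtracted from the scanned dose map. The maps, dose levels, and interpolation scheme are assumptions for illustration, not the published procedure.

      # Dose-dependent correction-matrix subtraction (illustrative only).
      import numpy as np

      dose_levels = np.array([0.5, 1.0, 2.0, 3.0])        # Gy (placeholder levels)
      rng = np.random.default_rng(6)
      # Nonuniformity (measured minus mean) maps at each dose level, 128x128 pixels.
      corr_maps = np.stack([d * 0.01 * np.linspace(-1, 1, 128)[None, :].repeat(128, 0)
                            + rng.normal(0, 1e-4, (128, 128)) for d in dose_levels])

      def correction_for(dose):
          """Linearly interpolate the correction matrix to the requested dose."""
          i = np.clip(np.searchsorted(dose_levels, dose) - 1, 0, len(dose_levels) - 2)
          w = (dose - dose_levels[i]) / (dose_levels[i + 1] - dose_levels[i])
          return (1 - w) * corr_maps[i] + w * corr_maps[i + 1]

      scanned_dose_map = np.full((128, 128), 2.0) + correction_for(2.0)   # toy "measured" film
      corrected = scanned_dose_map - correction_for(2.0)
      print("max residual nonuniformity:", np.abs(corrected - 2.0).max())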

  3. Response to Arend Flick

    ERIC Educational Resources Information Center

    RiCharde, R. Stephen

    2009-01-01

    This article presents the author's response to Arend Flick. The author states that Flick is correct that the issue of rubrics is broader than interrater reliability, though it is the assessment practitioner's primary armament against what the author has heard dubbed "refried bean counting" (insinuating that assessment statistics are not just bean…

  4. Effect of Videotape Playback and Teacher Comment on Anxiety During Subsequent Task Performance.

    ERIC Educational Resources Information Center

    Breen, Myles P.; Diehl, Roderick

    Feedback by teacher comment, by television playback, and by self-analysis, singly, or together, reduced anxiety in subsequent performance as measured by nonfluencies in speech. Nonfluencies were counted in eight categories: the sounds "ah," "um," or "uh"; correction; sentence incompletion; repetition; stutter; intruding incoherent sound; tongue…

  5. Using EDA, ANOVA and Regression to Optimise Some Microbiology Data

    ERIC Educational Resources Information Center

    Binnie, Neil

    2004-01-01

    Bacteria are cultured in medical laboratories to identify them so patients can be treated correctly. The tryptone dataset contains measurements of bacteria counts following the culturing of five strains of "Staphylococcus aureus". It also contains the time of incubation, temperature of incubation and concentration of tryptone, a nutrient. The…

  6. The Absolute Measurement of Beta Activities; SOBRE LA MEDIDA ABSOLUTA DE ACTIVIDADES BETA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Rio, C.S.; Reynaldo, O.J.; Mayquez, E.R.

    1956-01-01

    A new method for the absolute beta counting of solid samples is given. The measurements are made with an inside Geiger-Muller tube of new construction. The backscattering correction, when using an "infinite" thick mounting, is discussed and results for different materials given. (auth)

  7. Implication of the first decision on visual information-sampling in the spatial frequency domain in pulmonary nodule recognition

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David; Donovan, Tim; Dix, Alan

    2010-02-01

    Aim: To investigate the impact on visual sampling strategy and pulmonary nodule recognition of image-based properties of background locations in dwelled regions where the first overt decision was made. Background: Recent studies in mammography show that the first overt decision (TP or FP) has an influence on further image reading, including the correctness of the following decisions. Furthermore, a correlation between the spatial frequency properties of the local background of the following decision sites and the correctness of the first decision has been reported. Methods: Subjects with different radiological experience were eye-tracked during detection of pulmonary nodules from PA chest radiographs. The number of outcomes and the overall quality of performance are analysed in terms of the cases where correct or incorrect decisions were made. JAFROC methodology is applied. The spatial frequency properties of selected local backgrounds related to certain decisions were studied. ANOVA was used to compare the logarithmic values of energy carried by non-redundant stationary wavelet packet coefficients. Results: A strong correlation was found between the number of TP as a first decision and the JAFROC score (r = 0.74). The number of FP as a first decision was found to be negatively correlated with JAFROC (r = -0.75). Moreover, the differential spatial frequency profiles of outcomes depend on the correctness of the first choice.

  8. Background correction in forensic photography. II. Photography of blood under conditions of non-uniform illumination or variable substrate color--practical aspects and limitations.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by division of a linear photographic image taken at or near 415 nm with an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
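
    The correction itself is a simple per-pixel ratio (or difference) of registered images. Below is a minimal sketch of that arithmetic under the assumption that the three images are already linearized and spatially registered; the function and variable names are illustrative.

        import numpy as np

        def blood_enhancement(im_415, im_395, im_435, linear=True):
            """Three-wavelength background correction for putative bloodstains.

            For linear images the 415 nm image is divided by the average of the
            395 nm and 435 nm images; for nonlinear images subtraction replaces
            the division. Inputs are 2-D arrays registered to each other."""
            baseline = 0.5 * (np.asarray(im_395, float) + np.asarray(im_435, float))
            if linear:
                return im_415 / np.clip(baseline, 1e-6, None)  # < 1 where blood absorbs
            return im_415 - baseline                            # < 0 where blood absorbs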

  9. Isotopic analysis of uranium in natural waters by alpha spectrometry

    USGS Publications Warehouse

    Edwards, K.W.

    1968-01-01

    A method is described for the determination of U234/U238 activity ratios for uranium present in natural waters. The uranium is coprecipitated from solution with aluminum phosphate, extracted into ethyl acetate, further purified by ion exchange, and finally electroplated on a titanium disc for counting. The individual isotopes are determined by measurement of the alpha-particle energy spectrum using a high resolution low-background alpha spectrometer. Overall chemical recovery of about 90 percent and a counting efficiency of 25 percent allow analyses of water samples containing as little as 0.10 µg/l of uranium. The accuracy of the method is limited, on most samples, primarily by counting statistics.

  10. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  11. People counting in classroom based on video surveillance

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbin; Huang, Xiang; Su, Juan

    2014-11-01

    Currently, the switches of the lights and other electronic devices in the classroom mainly rely on manual control; as a result, many lights are on while no one, or only a few people, are in the classroom. It is important to change the current situation and control the electronic devices intelligently according to the number and the distribution of the students in the classroom, so as to reduce the considerable waste of electrical resources. This paper studies the problem of people counting in classrooms based on video surveillance. As the camera in the classroom cannot capture the full shape contour of bodies or clear facial features, most classical algorithms, such as pedestrian detection based on HOG (histograms of oriented gradient) features and face detection based on machine learning, are unable to obtain a satisfactory result. A new dual background updating model based on sparse and low-rank matrix decomposition is proposed in this paper, exploiting the fact that most of the students in the classroom are almost stationary and body movement occurs only occasionally. Firstly, the frame difference is combined with the sparse and low-rank matrix decomposition to predict the moving areas, and the background model is updated with different parameters according to the positional relationship between the pixels of the current video frame and the predicted motion regions. Secondly, the regions of moving objects are determined from the updated background using the background subtraction method. Finally, several operations, including binarization, median filtering, morphological processing, and connected component detection, are performed on the regions obtained by background subtraction in order to reduce the effects of noise and obtain the number of people in the classroom. The experimental results show the validity of the people counting algorithm.
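
    For orientation only, here is a heavily simplified sketch of the later stages (background subtraction, cleanup, and connected-component counting) using OpenCV. The paper's dual background model with sparse and low-rank updating is not reproduced; the fixed background image, thresholds, and minimum component area below are assumptions.

        import cv2
        import numpy as np

        def count_people(frame_bgr, background_gray, diff_thresh=30, min_area=400):
            """Background subtraction, cleanup, and connected-component counting.

            The dual background model (sparse + low-rank updating) of the paper is
            replaced here by a fixed grayscale background image for illustration."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            diff = cv2.absdiff(gray, background_gray)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.medianBlur(mask, 5)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
            # label 0 is the background; keep components large enough to be a person
            return sum(1 for i in range(1, n_labels)
                       if stats[i, cv2.CC_STAT_AREA] >= min_area)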

  12. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
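
    A minimal NumPy sketch of the thresholding idea follows: the dark-field mean gives the per-pixel offset and the dark-field variance sets a per-pixel counting threshold. The multiplier k and the decision rule are assumptions; the abstract does not state how the variance is converted into a threshold.

        import numpy as np

        def build_pixel_thresholds(dark_frames, k=3.0):
            """Per-pixel offset and counting threshold from a stack of dark fields.

            dark_frames : array of shape (n_frames, rows, cols), no x-ray exposure.
            The offset is the dark-field mean; the threshold is k standard
            deviations of the dark noise (k is an assumed choice)."""
            offset = dark_frames.mean(axis=0)
            threshold = k * np.sqrt(dark_frames.var(axis=0))
            return offset, threshold

        def photon_counting_projection(frames, offset, threshold):
            """Sum binary photon-counting frames into a single projection image."""
            corrected = frames - offset                 # offset correction, broadcast
            return (corrected > threshold).sum(axis=0)  # counts per pixel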

  13. Relationship between automated total nucleated cell count and enumeration of cells on direct smears of canine synovial fluid.

    PubMed

    Dusick, Allison; Young, Karen M; Muir, Peter

    2014-12-01

    Canine osteoarthritis is a common disorder seen in veterinary clinical practice and causes considerable morbidity in dogs as they age. Synovial fluid analysis is an important tool for diagnosis and treatment of canine joint disease and obtaining a total nucleated cell count (TNCC) is particularly important. However, the low sample volumes obtained during arthrocentesis are often insufficient for performing an automated TNCC, thereby limiting diagnostic interpretation. The aim of the present study was to investigate whether estimation of TNCC in canine synovial fluid could be achieved by performing manual cell counts on direct smears of fluid. Fifty-eight synovial fluid samples, taken by arthrocentesis from 48 dogs, were included in the study. Direct smears of synovial fluid were prepared, and hyaluronidase added before cell counts were obtained using a commercial laser-based instrument. A protocol was established to count nucleated cells in a specific region of the smear, using a serpentine counting pattern; the mean number of nucleated cells per 400 × field was then calculated. There was a positive correlation between the automated TNCC and mean manual cell count, with more variability at higher TNCC. Regression analysis was performed to estimate TNCC from manual counts. By this method, 78% of the samples were correctly predicted to fall into one of three categories (within the reference interval, mildly to moderately increased, or markedly increased) relative to the automated TNCC. Intra-observer and inter-observer agreement was good to excellent. The results of the study suggest that interpretation of canine synovial fluid samples of low volume can be aided by methodical manual counting of cells on direct smears. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing

    NASA Astrophysics Data System (ADS)

    Geach, James E.; Peacock, John A.

    2018-03-01

    Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.

  15. Hematology of healthy Florida manatees (Trichechus manatus)

    USGS Publications Warehouse

    Harvey, J.W.; Harr, K.E.; Murphy, D.; Walsh, M.T.; Nolan, E.C.; Bonde, R.K.; Pate, M.G.; Deutsch, C.J.; Edwards, H.H.; Clapp, W.L.

    2009-01-01

    Background: Hematologic analysis is an important tool in evaluating the general health status of free-ranging manatees and in the diagnosis and monitoring of rehabilitating animals. Objectives: The purpose of this study was to evaluate diagnostically important hematologic analytes in healthy manatees (Trichechus manatus) and to assess variations with respect to location (free ranging vs captive), age class (small calves, large calves, subadults, and adults), and gender. Methods: Blood was collected from 55 free-ranging and 63 captive healthy manatees. Most analytes were measured using a CELL-DYN 3500R; automated reticulocytes were measured with an ADVIA 120. Standard manual methods were used for differential leukocyte counts, reticulocyte and Heinz body counts, and plasma protein and fibrinogen concentrations. Results: Rouleaux, slight polychromasia, stomatocytosis, and low numbers of schistocytes and nucleated RBCs (NRBCs) were seen often in stained blood films. Manual reticulocyte counts were higher than automated reticulocyte counts. Heinz bodies were present in erythrocytes of most manatees. Compared with free-ranging manatees, captive animals had slightly lower MCV, MCH, and eosinophil counts and slightly higher heterophil and NRBC counts, and fibrinogen concentration. Total leukocyte, heterophil, and monocyte counts tended to be lower in adults than in younger animals. Small calves tended to have higher reticulocyte counts and NRBC counts than older animals. Conclusions: Hematologic findings were generally similar between captive and free-ranging manatees. Higher manual reticulocyte counts suggest the ADVIA detects only reticulocytes containing large amounts of RNA. Higher reticulocyte and NRBC counts in young calves probably reflect an increased rate of erythropoiesis compared with older animals. © 2009 American Society for Veterinary Clinical Pathology.

  16. Photon-Counting Kinetic Inductance Detectors for the Origins Space Telescope

    NASA Astrophysics Data System (ADS)

    Noroozian, Omid

    We propose to develop photon-counting Kinetic Inductance Detectors (KIDs) for the Origins Space Telescope (OST) and any predecessor missions, with the goal of producing background-limited photon-counting sensitivity, and with a preliminary technology demonstration in time to inform the Decadal Survey planning process. The OST, a mid- to far-infrared observatory concept, is being developed as a major NASA mission to be considered by the next Decadal Survey with support from NASA Headquarters. The objective of such a facility is to allow rapid spectroscopic surveys of the high redshift universe at 420-800 μm, using arrays of integrated spectrometers with moderate resolutions (R = λ/Δλ ~ 1000), to create a powerful new data set for exploring galaxy evolution and the growth of structure in the Universe. A second objective of OST is to perform higher resolution (R ~ 10,000-100,000) spectroscopic surveys at 20-300 µm, a uniquely powerful tool for exploring the evolution of protoplanetary disks into fledgling solar systems. Finally, the OST aims to obtain sensitive mid-infrared (5-40 µm) spectroscopy of thermal emission from rocky planets in the habitable zone using the transit method. These OST science objectives are very exciting and represent a well-organized community agreement. However, they are all impossible to reach without new detector technology, and the OST can’t be recommended or approved if suitable detectors do not exist. In all of the above instrument concepts, photon-counting direct detectors are mission-enabling and essential for reaching the sensitivity permitted by the cryogenic Origins Space Telescope and the performance required for its important science programs. Our group has developed an innovative design for an optically-coupled KID that can reach the photon-counting sensitivity required by the ambitious science goals of the OST mission. A KID is a planar microwave resonator patterned from a superconducting thin film, which responds to incident photons with a change in its resonance frequency and dissipation. This detector response is intrinsically frequency multiplexed, and consequently KIDs at different resonance frequencies can be read out using standard digital radio techniques, which enables multiplexing of 10,000s of detectors. In our photon-counting KID design we employ a small-volume (and thin) superconducting Al inductor to enhance the per-photon responsivity, and large parallel-plate NbTiN capacitors on single-crystal silicon-on-insulator (SOI) substrates to eliminate frequency noise. We have developed a comprehensive design demonstrating that photon-counting sensitivity is possible in a small-volume Al KID. In addition, we have already demonstrated ultra-high quality factors in resonators made of very thin (~10 nm) Al films with long electron lifetimes. These are the critical material parameters for reaching photon-counting sensitivity levels. In our proposed work plan our objective is to implement these high quality films into our optically-coupled small-volume KID design and demonstrate photon-counting sensitivity. The successful development of our photon-counting technology will significantly increase the sensitivity of the OST mission, making it more scientifically competitive than one based on power detectors. Photon-counting at the background limit provides a 4× increase in observation speed over that of background-limited power detection, since there is no need to measure and subtract a zero point. Photon-counting detectors will enable an instrument on the OST to observe the fine structure lines of galaxies, which are currently only observable at redshifts of z ~ 1, out to redshifts of z = 6, probing the early stages of galaxy, star and planet formation. Our photon-counting detectors will also enable entirely new science, including the mapping of the composition and evolution of water and other key volatiles in planet-forming materials around large samples of nearby young stars.

  17. Experience with a gastrointestinal marker (51CrCl3) in a combined study of ileal function using 75SeHCAT and 58CoB12 measured by whole body counting.

    PubMed Central

    Smith, T; Bjarnason, I

    1990-01-01

    Introduction of a radioactive gastrointestinal marker (51CrCl3) into a combined study (75SeHCAT + 58CoB12) of ileal function by whole body counting has been undertaken. The technique was assessed in 23 subjects (15 patients with inflammatory bowel disease, six on non-steroidal anti-inflammatory drugs for rheumatoid arthritis, and two normal subjects). Mean (SD) 51CrCl3 retention was only 4.1 (6.0)% on day 4, and was similar on day 7 in subjects given a second dose of 51CrCl3 on day 4. Only one subject had more than 20% 51CrCl3 retention after four days. A 51CrCl3 correction method adequately corrected for colonic hold-up of 58CoB12, when compared with final equilibrium values of 58CoB12 retention. Use of the non-absorbed 58CoB12 fraction as a gastrointestinal marker gave good agreement with the 51CrCl3 method in correcting 75SeHCAT values. In all subjects studied, corrections for colonic retention of 75SeHCAT on day 4 were small (less than 7% of dose) and did not affect the assessment of any subject. In conclusion, an additional gastrointestinal marker such as 51CrCl3 is unnecessary in our combined study, since that role can be fulfilled, when indicated, by the non-absorbed 58CoB12 fraction. PMID:2128070

  18. On the accuracy of gamma spectrometric isotope ratio measurements of uranium

    NASA Astrophysics Data System (ADS)

    Ramebäck, H.; Lagerkvist, P.; Holmgren, S.; Jonsson, S.; Sandström, B.; Tovedal, A.; Vesterlund, A.; Vidmar, T.; Kastlander, J.

    2016-04-01

    The isotopic composition of uranium was measured using high resolution gamma spectrometry. Two acid solutions and two samples in the form of UO2 pellets were measured. The measurements were done in close geometries, i.e. directly on the endcap of the high purity germanium (HPGe) detector. Applying no corrections for count losses due to true coincidence summing (TCS) resulted in up to about 40% deviation in the abundance of 235U from the results obtained with mass spectrometry. However, after correction for TCS, excellent agreement was achieved between the results obtained using the two different measurement methods, or with a certified value. Moreover, after corrections, the fitted relative response curves of the HPGe detector correlated excellently with simulated responses for the different geometries.

  19. Correction of bias in belt transect studies of immotile objects

    USGS Publications Warehouse

    Anderson, D.R.; Pospahala, R.S.

    1970-01-01

    Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects are not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
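
    As a rough illustration of the idea (not the authors' equation, which is estimated from their data), the sketch below fits a half-normal detectability curve to the perpendicular-distance distribution of detections, assumes perfect detection at the transect centerline, and divides the raw count by the average detection probability. The functional form, bin layout, and variable names are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def half_normal(x, sigma):
            """Detection probability vs. perpendicular distance (1 at the centerline)."""
            return np.exp(-x**2 / (2.0 * sigma**2))

        def corrected_density(distances, half_width, area_searched):
            """Correct a belt-transect count for objects missed away from the center."""
            distances = np.asarray(distances, dtype=float)
            bins = np.linspace(0.0, half_width, 11)
            centers = 0.5 * (bins[:-1] + bins[1:])
            counts, _ = np.histogram(distances, bins=bins)
            rel = counts / max(counts[0], 1)            # normalize to the centerline bin
            popt, _ = curve_fit(half_normal, centers, rel, p0=[half_width / 2.0])
            grid = np.linspace(0.0, half_width, 200)
            p_bar = half_normal(grid, popt[0]).mean()   # mean detectability over the strip
            return distances.size / p_bar / area_searched

        # toy usage: detections concentrated near the centerline of a 50 m half-width strip
        rng = np.random.default_rng(1)
        d = np.abs(rng.normal(0.0, 20.0, 150))
        d = d[d <= 50.0]
        print(corrected_density(d, half_width=50.0, area_searched=2.4))  # objects per unit area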

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W.; Mikell, J. K.; Kappadath, S. C.

    Purpose: To develop a practical background compensation (BC) technique to improve quantitative {sup 90}Y-bremsstrahlung single-photon emission computed tomography (SPECT)/computed tomography (CT) using a commercially available imaging system. Methods: All images were acquired using medium-energy collimation in six energy windows (EWs), ranging from 70 to 410 keV. The EWs were determined based on the signal-to-background ratio in planar images of an acrylic phantom of different thicknesses (2–16 cm) positioned below a {sup 90}Y source and set at different distances (15–35 cm) from a gamma camera. The authors adapted the widely used EW-based scatter-correction technique by modeling the BC as scaled images. The BC EW was determined empirically in SPECT/CT studies using an IEC phantom based on the sphere activity recovery and residual activity in the cold lung insert. The scaling factor was calculated from 20 clinical planar {sup 90}Y images. Reconstruction parameters were optimized in the same SPECT images for improved image quantification and contrast. A count-to-activity calibration factor was calculated from 30 clinical {sup 90}Y images. Results: The authors found that the most appropriate imaging EW range was 90–125 keV. BC was modeled as 0.53× images in the EW of 310–410 keV. The background-compensated clinical images had higher image contrast than uncompensated images. The maximum deviation of their SPECT calibration in clinical studies was lowest (<10%) for SPECT with attenuation correction (AC) and SPECT with AC + BC. Using the proposed SPECT-with-AC + BC reconstruction protocol, the authors found that the recovery coefficient of a 37-mm sphere (in a 10-mm volume of interest) increased from 39% to 90% and that the residual activity in the lung insert decreased from 44% to 14% over that of SPECT images with AC alone. Conclusions: The proposed EW-based BC model was developed for {sup 90}Y bremsstrahlung imaging. SPECT with AC + BC gave improved lesion detectability and activity quantification compared to SPECT with AC only. The proposed methodology can readily be used to tailor {sup 90}Y SPECT/CT acquisition and reconstruction protocols with different SPECT/CT systems for quantification and improved image quality in clinical settings.
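
    The compensation itself reduces to a scaled subtraction of the high-energy window from the imaging window, followed by a count-to-activity calibration. The sketch below shows only that arithmetic; the 0.53 scale factor and window limits come from the abstract, while the clipping behavior and calibration value are assumptions.

        import numpy as np

        def background_compensated(img_90_125kev, img_310_410kev, scale=0.53):
            """Energy-window background compensation for 90Y bremsstrahlung SPECT.

            The background in the 90-125 keV imaging window is modeled as a scaled
            copy of the 310-410 keV window (scale factor from the abstract);
            negative values after subtraction are clipped to zero (an assumption)."""
            return np.clip(img_90_125kev - scale * img_310_410kev, 0.0, None)

        def counts_to_activity(compensated_counts, bq_per_count):
            """Convert compensated counts to activity with a system calibration
            factor; the numerical value is site- and system-specific."""
            return compensated_counts * bq_per_count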

  1. SYNCHROTRON RADIATION, FREE ELECTRON LASER, APPLICATION OF NUCLEAR TECHNOLOGY, ETC. Physical design of positronium time of flight spectroscopy apparatus

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao-Pan; Zhang, Zi-Liang; Qin, Xiu-Bo; Yu, Run-Sheng; Wang, Bao-Yi

    2010-12-01

    Positronium time of flight spectroscopy (Ps-TOF) is an effective technique for porous material research. It has advantages over other techniques for analyzing the porosity and pore tortuosity of materials. This paper describes a design for Ps-TOF apparatus based on the Beijing intense slow positron beam, supplying a new material characterization technique. In order to improve the time resolution and increase the count rate of the apparatus, the detector system is optimized. For 3 eV o-Ps, the time broadening is 7.66 ns and the count rate is 3 cps after correction.

  2. Inappropriate ICD discharges due to "triple counting" during normal sinus rhythm.

    PubMed

    Khan, Ejaz; Voudouris, Apostolos; Shorofsky, Stephen R; Peters, Robert W

    2006-11-01

    To describe the clinical course of a patient with multiple ICD shocks in the setting of advanced renal failure and hyperkalemia. The patient was brought to the Electrophysiology Laboratory where the ICD was interrogated. The patient was found to be hyperkalemic (serum potassium 7.6 mg/dl). Analysis of stored intracardiac electrograms from the ICD revealed "triple counting" (twice during his QRS complex and once during the T wave) and multiple inappropriate shocks. Correction of his electrolyte abnormality normalized his electrogram and no further ICD activations were observed. Electrolyte abnormalities can distort the intracardiac electrogram in patients with ICDs, and these changes can lead to multiple inappropriate shocks.

  3. Holographic shell model: Stack data structure inside black holes?

    NASA Astrophysics Data System (ADS)

    Davidson, Aharon

    2014-03-01

    Rather than tiling the black hole horizon by Planck area patches, we suggest that bits of information inhabit, universally and holographically, the entire black core interior, a bit per a light sheet unit interval of order Planck area difference. The number of distinguishable (tagged by a binary code) configurations, counted within the context of a discrete holographic shell model, is given by the Catalan series. The area entropy formula is recovered, including Cardy's universal logarithmic correction, and the equipartition of mass per degree of freedom is proven. The black hole information storage resembles, in the count procedure, the so-called stack data structure.

  4. VizieR Online Data Catalog: Sample of faint X-ray pulsators (Israel+, 2016)

    NASA Astrophysics Data System (ADS)

    Israel, G. L.; Esposito, P.; Rodriguez Castillo, G. A.; Sidoli, L.

    2018-04-01

    As of 2015 December 31, we extracted about 430000 time series from sources with more than 10 counts (after background subtraction); ~190000 of them have more than 50 counts and their PSDs were searched for significant peaks. At the time of writing, the total number of searched Fourier frequencies was about 4.3 × 10^9. After a detailed screening, we obtained a final sample of 41 (42) new X-ray pulsators (signals), which are listed in Table 1. (1 data file).

  5. EFFECT OF THERMIONIC EMISSION AT ROOM TEMPERATURES IN PHOTOSENSITIVE GEIGER-MUELLER TUBES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grotowski, K.; Hrynkiewicz, A.Z.; Niewodniczanski, H.

    1953-01-01

    The temperature dependence of the background of Geiger-Mueller counting tubes was compared for nonsensitized and sensitized (treated by electric discharge) tubes. A strong increase of background with increasing temperature was observed for photosensitive counters, while no change was observed in nonsensitized counters. It is shown that the increase is due to thermionic emission of the brass cathode. (T.R.H.)

  6. Investigation of background radiation levels and geologic unit profiles in Durango, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Triplett, G.H.; Foutz, W.L.; Lesperance, L.R.

    1989-11-01

    As part of the Uranium Mill Tailings Remedial Action (UMTRA) Project, Oak Ridge National Laboratory (ORNL) has performed radiological surveys on 435 vicinity properties (VPs) in the Durango area. This study was undertaken to establish the background radiation levels and geologic unit profiles in the Durango VP area. During the months of May through June, 1986, extensive radiometric measurements and surface soil samples were collected in the Durango VP area by personnel from ORNL's Grand Junction Office. A majority of the Durango VP surveys were conducted at sites underlain by Quaternary alluvium, older Quaternary gravels, and Cretaceous Lewis and Mancos shales. These four geologic units were selected to be evaluated. The data indicated no formation anomalies and established regional background radiation levels. Durango background radionuclide concentrations in surface soil were determined to be 20.3 {plus minus} 3.4 pCi/g for {sup 40}K, 1.6 {plus minus} 0.5 pCi/g for {sup 226}Ra, and 1.2 {plus minus} 0.3 pCi/g for {sup 232}Th. The Durango background gamma exposure rate was found to be 16.5 {plus minus} 1.3 {mu}R/h. Average gamma spectral count rate measurements for {sup 40}K, {sup 226}Ra and {sup 232}Th were determined to be 553, 150, and 98 counts per minute (cpm), respectively. Geologic unit profiles and Durango background radiation measurements are presented and compared with other areas. 19 refs., 15 figs., 5 tabs.

  7. Low-background germanium radioassay for the MAJORANA Collaboration

    NASA Astrophysics Data System (ADS)

    Trimble, James E., Jr.

    The focus of the MAJORANA Collaboration is the search for nuclear neutrinoless double beta decay. If discovered, this process would prove that the neutrino is its own anti-particle, or a Majorana particle. Being constructed at the Sanford Underground Research Facility, the MAJORANA DEMONSTRATOR aims to show that a background rate of 3 counts per region of interest (ROI) per tonne per year in the 4 keV ROI surrounding the 2039-keV Q-value energy of 76Ge is achievable and to demonstrate the technological feasibility of building a tonne-scale Ge-based experiment. Because of the rare nature of this process, detectors in the system must be isolated from ionizing radiation backgrounds as much as possible. This involved building the system with materials containing very low levels of naturally occurring and anthropogenic radioactive isotopes at a deep underground site. In order to measure the levels of radioactive contamination in some components, the Majorana Demonstrator uses a low background counting facility managed by the Experimental Nuclear and Astroparticle Physics (ENAP) group at UNC. The UNC low background counting (LBC) facility is located at the Kimballton Underground Research Facility (KURF) in Ripplemead, VA. The facility was used for a neutron activation analysis of samples of polytetrafluoroethylene (PTFE) and fluorinated ethylene propylene (FEP) tubing intended for use in the Demonstrator. Calculated initial activity limits (90% C.L.) of 238U and 232Th in the 0.002-in PTFE samples were 7.6 ppt and 5.1 ppt, respectively. The same limits in the FEP tubing sample were 150 ppt and 45 ppt, respectively. The UNC LBC was also used to gamma-assay a modified stainless steel flange to be used as a vacuum feedthrough. Trace activities of both 238U and 232Th were found in the sample, but all were orders of magnitude below the acceptable threshold for the Majorana experiment. Also discussed is a proposed next generation ultra-low background system designed to utilize technology designed for the Majorana Demonstrator. Finally, a discussion is presented on the design and construction of an azimuthal scanner used by the Majorana collaboration.

  8. Proton-induced radioactivity in NaI (Tl) scintillation detectors

    NASA Technical Reports Server (NTRS)

    Fishman, G. J.

    1977-01-01

    Radioactivity induced by protons in sodium iodide scintillation crystals was calculated and directly measured. These data are useful in determining trapped-radiation- and cosmic-ray-induced background counting rates in spaceborne detectors.

  9. 76 FR 56949 - Biomass Crop Assistance Program; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    .... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...

  10. Revised Radiometric Calibration Technique for LANDSAT-4 Thematic Mapper Data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Center for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.

  11. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
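
    At its core the method compares each sample spectrum point-to-point with a set of reference (blank-gradient) spectra over a chosen spectral window and subtracts the closest match. The sketch below shows that selection step only; the sum-of-squared-differences criterion and the array layout are assumptions, since the abstract does not specify the exact matching metric.

        import numpy as np

        def background_correct(sample_spectrum, reference_set, window):
            """Select the closest reference (blank-gradient) spectrum point-to-point
            within a spectral window and subtract it from the sample spectrum.

            sample_spectrum : 1-D absorbance spectrum at one retention time
            reference_set   : 2-D array (n_reference_spectra, n_wavenumbers)
            window          : slice of wavenumber indices used for the comparison
            """
            diffs = reference_set[:, window] - sample_spectrum[window]
            best = np.argmin(np.sum(diffs**2, axis=1))  # closest eluent spectrum
            return sample_spectrum - reference_set[best]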

  12. White blood cell counting analysis of blood smear images using various segmentation strategies

    NASA Astrophysics Data System (ADS)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose hidden infections within the human body. The current practice of manual WBC counting is laborious and a very subjective assessment, which has led to the invention of computer aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step to ensure the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed by using a combination of several color analysis subtractions (RGB, CMYK and HSV) and Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those within clumped regions. From the experiment, it is found that G-S yields the best performance.
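
    A compressed sketch of such a pipeline is given below for orientation: an Otsu threshold on one HSV channel, morphological cleanup, and circle detection via the Hough transform. The specific color-subtraction combinations (RGB, CMYK, HSV) evaluated in the paper are not reproduced, and the channel choice, kernel size, and Hough parameters are assumptions.

        import cv2
        import numpy as np

        def count_wbc(bgr_image, min_radius=15, max_radius=60):
            """Otsu threshold on the HSV saturation channel, morphological cleanup,
            and circle detection; a simplified stand-in for the paper's pipeline."""
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            saturation = hsv[:, :, 1]
            _, mask = cv2.threshold(saturation, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5,
                                       minDist=2 * min_radius, param1=100, param2=20,
                                       minRadius=min_radius, maxRadius=max_radius)
            return 0 if circles is None else circles.shape[1]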

  13. Prompt remediation of water intrusion corrects the resultant mold contamination in a home.

    PubMed

    Rockwell, William

    2005-01-01

    More patients are turning to their allergists with symptoms compatible with allergic rhinitis, allergic sinusitis, and/or bronchial asthma after exposure to mold-contaminated indoor environments. These patients often seek guidance from their allergists in the remediation of the contaminated home or office. The aim of this study was to determine baseline mold spore counts for noncontaminated homes and report a successful mold remediation in one mold-contaminated home. Indoor air quality was tested using volumetric spore counts in 50 homes where homeowners reported no mold-related health problems and in one mold-contaminated home that was remediated. The health of the occupant of the mold-contaminated home also was assessed. Indoor volumetric mold spore counts ranged from 300 to 1200 spores/m3 in the baseline homes. For the successful remediation, the mold counts started at 300 spores/m3, increased to 2800 spores/m3 at the height of the mold contamination, and then fell to 800 spores/m3 after remediation. The occupant's allergic symptoms ceased on complete remediation of the home. Indoor volumetric mold counts taken with the Allergenco MK-3 can reveal a potential indoor mold contamination, with counts above 1000 spores/m3 suggesting indoor mold contamination. Once the presence of indoor mold growth is found, a prompt and thorough remediation can bring mold levels back to near-baseline level and minimize negative health effects for occupants.

  14. Lorentz-Shaped Comet Dust Trail Cross Section from New Hybrid Visual and Video Meteor Counting Technique - Implications for Future Leonid Storm Encounters

    NASA Technical Reports Server (NTRS)

    Jenniskens, Peter; Crawford, Chris; Butow, Steven J.; Nugent, David; Koop, Mike; Holman, David; Houston, Jane; Jobse, Klaas; Kronk, Gary

    2000-01-01

    A new hybrid technique of visual and video meteor observations was developed to provide high precision near real-time flux measurements for satellite operators from airborne platforms. A total of 33,000 Leonids, recorded on video during the 1999 Leonid storm, were watched by a team of visual observers using a video head display and an automatic counting tool. The counts reveal that the activity profile of the Leonid storm is a Lorentz profile. By assuming a radial profile for the dust trail that is also a Lorentzian, we make predictions for future encounters. If that assumption is correct, we passed 0.0003 AU deeper into the 1899 trailet than expected during the storm of 1999, and future encounters with the 1866 trailet will be less intense than predicted elsewhere.
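
    A Lorentz (Cauchy) profile has three parameters: peak flux, time of maximum, and full width at half maximum. The sketch below fits such a profile to binned meteor counts with SciPy; the synthetic data, bin layout, and starting values are illustrative and do not reproduce the 1999 campaign data.

        import numpy as np
        from scipy.optimize import curve_fit

        def lorentz(t, peak, t_max, fwhm):
            """Lorentz (Cauchy) activity profile: flux versus time around the peak."""
            return peak / (1.0 + ((t - t_max) / (0.5 * fwhm))**2)

        # illustrative fit to synthetic binned meteor counts (hours, counts per bin)
        t_bins = np.linspace(-2.0, 2.0, 41)
        counts = lorentz(t_bins, 3000.0, 0.0, 0.75)
        counts += np.random.default_rng(2).normal(0.0, 30.0, t_bins.size)
        popt, _ = curve_fit(lorentz, t_bins, counts, p0=[counts.max(), 0.0, 1.0])
        print("peak %.0f, time of max %.2f h, FWHM %.2f h" % tuple(popt))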

  15. Collection, analysis, and archival of LDEF activation data

    NASA Technical Reports Server (NTRS)

    Laird, C. E.; Harmon, B. A.; Fishman, G. J.; Parnell, T. A.

    1993-01-01

    The study of the induced radioactivity of samples intentionally placed aboard the Long Duration Exposure Facility (LDEF) and samples obtained from the LDEF structure is reviewed. The eight laboratories involved in the gamma-ray counting are listed and the scientists and the associated counting facilities are described. Presently, most of the gamma-ray counting has been completed and the spectra are being analyzed and corrected for efficiency and self absorption. The acquired spectra are being collected at Eastern Kentucky University for future reference. The results of these analyses are being compiled and reviewed for possible inconsistencies as well as for comparison with model calculations. These model calculations are being revised to include the changes in trapped-proton flux caused by the onset of the period of maximum solar activity and the rapidly decreasing spacecraft orbit. Tentative plans are given for the storage of the approximately 1000 gamma-ray spectra acquired in this study and the related experimental data.

  16. Diffusion processes in tumors: A nuclear medicine approach

    NASA Astrophysics Data System (ADS)

    Amaya, Helman

    2016-07-01

    The number of counts used in nuclear medicine imaging techniques only provides physical information about the disintegration of the nuclei present in the radiotracer molecules taken up in a particular anatomical region, but that is not real metabolic information. For this reason a mathematical method was used to find a correlation between the number of counts and 18F-FDG mass concentration. This correlation allows a better interpretation of the results obtained in the study of diffusive processes in an agar phantom, and based on it, an image from the PETCETIX DICOM sample image set of the OsiriX-viewer software was processed. PET-CT gradient magnitude and Laplacian images could show direct information on diffusive processes for radiopharmaceuticals that enter cells by simple diffusion. In the case of the radiopharmaceutical 18F-FDG, it is necessary to include pharmacokinetic models to make a correct interpretation of the gradient magnitude and Laplacian-of-counts images.

  17. Low noise and conductively cooled microchannel plates

    NASA Technical Reports Server (NTRS)

    Feller, W. B.

    1990-01-01

    Microchannel plate (MCP) dynamic range has recently been enhanced for both very low and very high input flux conditions. Improvements in MCP manufacturing technology reported earlier have led to MCPs with substantially reduced radioisotope levels, giving dramatically lower internal background-counting rates. An update is given on the Galileo low noise MCP. Also, new results in increasing the MCP linear counting range for high input flux densities are presented. By bonding the active face of a very low resistance MCP (less than 1 megaohm) to a substrate providing a conductive path for heat transport, the bias current limit (hence, MCP output count rate limit) can be increased up to two orders of magnitude. Normal pulse-counting MCP operation was observed at bias currents of several mA when a curved-channel MCP (80:1) was bonded to a ceramic multianode substrate; the MCP temperature rise above ambient was less than 40 C.

  18. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.

  19. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984

  20. Half-life determination for {sup 108}Ag and {sup 110}Ag

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zahn, Guilherme S.; Genezini, Frederico A.

    2014-11-11

    In this work, the half-lives of the short-lived silver radionuclides {sup 108}Ag and {sup 110}Ag were measured by following the activity of samples after they were irradiated in the IEA-R1 reactor. The results were then fitted using a non-paralyzable dead time correction to the regular exponential decay, and the individual half-life values obtained were then analyzed using both the Normalized Residuals and the Rajeval techniques, in order to reach the most exact and precise final values. To check the validity of the dead-time correction, a second correction method was also employed by counting a long-lived {sup 60}Co radioactive source together with the samples as a live-time chronometer. The final half-life values obtained using both dead-time correction methods were in good agreement, showing that the correction was properly assessed. The results obtained are partially compatible with the literature values, but with a lower uncertainty, and allow a discussion on the values in the latest ENSDF compilations.
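
    A compact sketch of the two steps named in the abstract: a non-paralyzable dead-time correction, n = m / (1 - m·τ), applied to the measured rates, followed by an exponential-decay fit whose parameter is the half-life. The dead time, time points, and initial activity used below are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def true_rate(measured_rate, dead_time):
            """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
            return measured_rate / (1.0 - measured_rate * dead_time)

        def decay(t, a0, half_life):
            """Exponential decay written directly in terms of the half-life."""
            return a0 * np.exp(-np.log(2.0) * t / half_life)

        # illustrative values only: simulate dead-time losses, correct, then fit
        tau = 1.0e-6                                  # assumed dead time (s)
        t = np.linspace(0.0, 600.0, 30)               # measurement times (s)
        true = decay(t, 5.0e3, 140.0)                 # true rate, made-up half-life
        measured = true / (1.0 + true * tau)          # non-paralyzable counting loss
        corrected = true_rate(measured, tau)
        popt, _ = curve_fit(decay, t, corrected, p0=[corrected[0], 100.0])
        print("fitted half-life: %.1f s" % popt[1])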

  1. Static and elevated pollen traps do not provide an accurate assessment of personal pollen exposure.

    PubMed

    Penel, V; Calleja, M; Pichot, C; Charpin, D

    2017-03-01

    Background. Volumetric pollen traps are commonly used to assess pollen exposure. These traps are well suited for estimating the regional mean airborne pollen concentration but are likely not to provide an accurate index of personal exposure. In this study, we tested the hypothesis that hair sampling may provide different pollen counts from those from pollen traps, especially when the pollen exposure is diverse. Methods. We compared pollen counts in hair washes to counts provided by stationary volumetric and gravimetric pollen traps in 2 different settings: urban, with volunteers living a short distance from one another and from the static trap, and suburban, in which volunteers live in a scattered environment, quite far from the static trap. Results. Pollen counts in hair washes are in full agreement with trap counts for uniform pollen exposure. In contrast, for diverse pollen exposure, individual pollen counts in hair washes vary strongly in quantity and taxa composition between individuals and dates. These results demonstrate that the pollen counting method (hair washes vs. stationary pollen traps) may lead to different absolute and relative contributions of taxa to the total pollen count. Conclusions. In a geographic area with a high diversity of environmental exposure to pollen, static pollen traps, in contrast to hair washes, do not provide a reliable estimate of this higher diversity.

  2. Predicted performance of a PG-SPECT system using CZT primary detectors and secondary Compton-suppression anti-coincidence detectors under near-clinical settings for boron neutron capture therapy

    NASA Astrophysics Data System (ADS)

    Hales, Brian; Katabuchi, Tatsuya; Igashira, Masayuki; Terada, Kazushi; Hayashizaki, Noriyosu; Kobayashi, Tooru

    2017-12-01

    A test version of a prompt-gamma single photon emission computed tomography (PG-SPECT) system for boron neutron capture therapy (BNCT) using a CdZnTe (CZT) semiconductor detector with a secondary BGO anti-Compton suppression detector has been designed. A phantom with a healthy-tissue region of pure water and two tumor regions of 5 wt% borated polyethylene was irradiated to a fluence of 1.3 × 10^9 n/cm^2. The numbers of 478 keV foreground, background, and net counts were measured for each detector position and angle. Using only experimentally measured net counts, an image of the 478 keV production from the 10B(n,α)7Li* reaction was reconstructed. Using Monte Carlo simulation and the experimentally measured background counts, the reliability of the system under clinically accurate parameters was extrapolated. After extrapolation, it was found that the value of the maximum-value pixel in the reconstructed 478 keV γ-ray production image overestimates the simulated production by an average of 9.2%, and that the standard deviation associated with the same value is 11.4%.

  3. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under, respectively, high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)
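
    The key ingredient is a hypothesis test that exploits the Poisson statistics of the counting channels rather than empirical mean and variance estimates. The sketch below shows only a single-window version of such a test (background-only null hypothesis rejected when the observed count is improbably high); combining windows across the networked detectors, as the paper does, is not reproduced, and the rates and threshold are assumptions.

        from scipy.stats import poisson

        def poisson_alarm(observed_counts, background_rate, window_s, alpha=1e-3):
            """Reject the background-only hypothesis when the observed count in a
            time window is improbably high for a Poisson background."""
            mu = background_rate * window_s                # expected background counts
            p_value = poisson.sf(observed_counts - 1, mu)  # P(N >= observed | background)
            return p_value < alpha, p_value

        # example: 50 cps background, 2 s window, 140 counts observed
        alarm, p = poisson_alarm(140, background_rate=50.0, window_s=2.0)
        print(alarm, p)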

  4. A rapid method for the simultaneous determination of gross alpha and beta activities in water samples using a low background liquid scintillation counter.

    PubMed

    Sanchez-Cabeza, J A; Pujol, L

    1995-05-01

    The radiological examination of water requires a rapid screening technique that permits the determination of the gross alpha and beta activities of each sample in order to decide if further radiological analyses are necessary. In this work, the use of a low background liquid scintillation system (Quantulus 1220) is proposed to simultaneously determine the gross activities in water samples. Liquid scintillation is compared to more conventional techniques used in most monitoring laboratories. In order to determine the best counting configuration of the system, pulse shape discrimination was optimized for 6 scintillant/vial combinations. It was concluded that the best counting configuration was obtained with the scintillation cocktail Optiphase Hisafe 3 in Zinsser low diffusion vials. The detection limits achieved were 0.012 Bq L-1 and 0.14 Bq L-1 for gross alpha and beta activity respectively, after a 1:10 concentration process by simple evaporation and for a counting time of only 360 min. The proposed technique is rapid, gives spectral information, and is adequate to determine gross activities according to the World Health Organization (WHO) guideline values.

  5. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Treesearch

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  6. Distinguishing Identical Particles and the Correct Counting of States

    ERIC Educational Resources Information Center

    de la Torre, A. C.; Martin, H. O.

    2009-01-01

    It is shown that quantum systems of identical particles can be treated as different when they are in well-differentiated states. This simplifying assumption allows for the consideration of quantum systems isolated from the rest of the universe and justifies many intuitive statements about identical systems. However, it is shown that this…

  7. Making It Count: Strategies for Improving Mathematics Instruction for Students in Short-Term Facilities. Strategy Guide

    ERIC Educational Resources Information Center

    Leone, Peter; Wilson, Michael; Mulcahy, Candace

    2010-01-01

    This guide is designed to support the development of mathematics proficiency for youth in short-term juvenile correctional facilities. Mathematics proficiency includes mastery and fluency in foundational numeracy; an understanding of complex, grade-appropriate concepts and procedures; and application of those competencies to solve relevant,…

  8. Calculating Probabilistic Distance to Solution in a Complex Problem Solving Domain

    ERIC Educational Resources Information Center

    Sudol, Leigh Ann; Rivers, Kelly; Harris, Thomas K.

    2012-01-01

    In complex problem solving domains, correct solutions are often comprised of a combination of individual components. Students usually go through several attempts, each attempt reflecting an individual solution state that can be observed during practice. Classic metrics to measure student performance over time rely on counting the number of…

  9. The Effects of Using Different Procedures to Score Maze Measures

    ERIC Educational Resources Information Center

    Pierce, Rebecca L.; McMaster, Kristen L.; Deno, Stanley L.

    2010-01-01

    The purpose of this study was to examine how different scoring procedures affect interpretation of maze curriculum-based measurements. Fall and spring data were collected from 199 students receiving supplemental reading instruction. Maze probes were scored first by counting all correct maze choices, followed by four scoring variations designed to…

  10. A loop-counting method for covariate-corrected low-rank biclustering of gene-expression and genome-wide association study data.

    PubMed

    Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael

    2018-05-14

    A common goal in data analysis is to sift through a large data matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data matrix, and focuses on rows and columns of the data matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical and continuous covariates, and sparsity within the data. We demonstrate these practical features with two examples: the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide association study (GWAS).
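
    As a rough illustration of the loop-counting idea (not the authors' published algorithm, which additionally handles controls, covariates, and sparsity), the Python sketch below samples random 2-by-2 submatrices and credits the rows and columns that participate in approximately rank-1 loops; the sampling scheme and tolerance are assumptions.

    ```python
    import numpy as np

    def loop_scores(A, n_loops=200000, tol=0.1, seed=None):
        """Credit rows/columns of A that appear in many approximately rank-1 2x2 'loops'."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        row_score, col_score = np.zeros(m), np.zeros(n)
        for _ in range(n_loops):
            i, j = rng.choice(m, size=2, replace=False)   # two distinct rows
            k, l = rng.choice(n, size=2, replace=False)   # two distinct columns
            det = A[i, k] * A[j, l] - A[i, l] * A[j, k]
            scale = np.abs([A[i, k], A[i, l], A[j, k], A[j, l]]).max() ** 2 + 1e-12
            if abs(det) / scale < tol:                    # loop is close to rank 1
                row_score[[i, j]] += 1
                col_score[[k, l]] += 1
        return row_score, col_score

    # Rows and columns with the highest scores are candidate bicluster members.
    ```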

  11. [Experimental study and correction of the absorption and enhancement effect between Ti, V and Fe].

    PubMed

    Tuo, Xian-Guo; Mu, Ke-Liang; Li, Zhe; Wang, Hong-Hui; Luo, Hui; Yang, Jian-Bo

    2009-11-01

    The absorption and enhancement effects in X-ray fluorescence analysis of Ti, V, and Fe were studied in the present paper. Three synthetic binary systems (Ti-V, Ti-Fe, and V-Fe) were prepared and measured by X-ray fluorescence analysis with an HPGe semiconductor detector, and calibration curves relating the normalized element count rate R(K) to the element content W(K) were obtained from the experiment. Analysis of the degree of absorption and enhancement between each pair of elements showed that the effect is pronounced between Ti and V, while it is much weaker for Ti-Fe and V-Fe. An exponential-fitting correction method was then used to fit the R(K)-W(K) curves and obtain a functional relation between X-ray fluorescence count rate and content. Three groups of Ti-V binary samples were used to test the fitted model, and the relative errors for Ti and V were less than 0.2% compared with the actual values.
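
    The abstract does not give the exact exponential form used for the R(K)-W(K) calibration; a minimal SciPy sketch of fitting an assumed saturating-exponential calibration and inverting it to recover content from a measured count rate might look like the following (the calibration points are hypothetical).

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    # Hypothetical calibration points: element content W_K (wt%) vs. normalized count rate R_K.
    W_K = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
    R_K = np.array([0.10, 0.19, 0.34, 0.55, 0.68, 0.76])

    def model(w, a, b, c):
        """Assumed exponential calibration: R = a * (1 - exp(-b * W)) + c."""
        return a * (1.0 - np.exp(-b * w)) + c

    params, _ = curve_fit(model, W_K, R_K, p0=(1.0, 0.02, 0.0))

    def content_from_rate(r, lo=0.0, hi=100.0):
        """Numerically invert the fitted calibration to estimate content from a count rate."""
        return brentq(lambda w: model(w, *params) - r, lo, hi)

    print(content_from_rate(0.40))   # estimated content (wt%) for a measured R_K of 0.40
    ```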

  12. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the reactivity measured at different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of the source neutrons. The effective multiplication factor calculated by computer codes in criticality mode differs slightly from the average value obtained from measurements in the different experimental channels of the subcritical assembly after correction by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections that are not simulated in the calculational model; and (3) quantities and distributions of material impurities that are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel loadings in the fast zone: high (90%) plus medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.
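
    As a heavily simplified illustration of how such a correction is applied (the precise definition and sign convention of the Bell and Glasstone factor are not reproduced here), the sketch below assumes the per-channel correction factors have already been computed, e.g. by Monte Carlo, and that rho_corrected = rho_measured / f; all numerical values are hypothetical.

    ```python
    def corrected_keff(measured_rho, correction_factors):
        """Apply assumed per-channel spatial correction factors and convert the mean reactivity to k."""
        corrected = [rho / f for rho, f in zip(measured_rho, correction_factors)]
        rho_avg = sum(corrected) / len(corrected)
        return 1.0 / (1.0 - rho_avg)          # rho = (k - 1) / k  =>  k = 1 / (1 - rho)

    # Hypothetical reactivities (in dk/k) measured in three experimental channels:
    print(corrected_keff([-0.052, -0.048, -0.050], [1.04, 0.97, 1.00]))
    ```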

  13. Fluorescein dye improves microscopic evaluation and counting of demodex in blepharitis with cylindrical dandruff.

    PubMed

    Kheirkhah, Ahmad; Blanco, Gabriela; Casas, Victoria; Tseng, Scheffer C G

    2007-07-01

    To show whether fluorescein dye helps detect and count Demodex embedded in cylindrical dandruff (CD) of epilated eyelashes from patients with blepharitis. Two eyelashes with CD were removed from each lid of 10 consecutive patients with blepharitis and subjected to microscopic examination with and without fluorescein solution to detect and count Demodex mites. Of 80 eyelashes examined, 36 (45%) lashes retained their CD after removal. Before addition of the fluorescein solution, the mean total Demodex count per patient was 14.9 +/- 10 and the mean Demodex count per lash was 3.1 +/- 2.5 and 0.8 +/- 0.7 in epilated eyelashes with and without retained CD, respectively (P < 0.0001). After addition of the fluorescein solution, opaque and compact CD instantly expanded to reveal embedded mites in a yellowish and semitransparent background. As a result, the mean total Demodex count per patient was significantly increased to 20.2 +/- 13.8 (P = 0.003), and the mean count per lash was significantly increased to 4.4 +/- 2.8 and 1 +/- 0.8 in eyelashes with and without retained CD (P < 0.0001 and P = 0.007), respectively. This new method yielded more mites in 8 of 10 patients and allowed mites to be detected in 3 lashes with retained CD and 1 lash without retained CD that had an initial count of zero. Addition of fluorescein solution after mounting further increases the proficiency of detecting and counting mites embedded in CD of epilated eyelashes.

  14. AURORA on MEGSAT 1: a photon counting observatory for the Earth UV night-sky background and Aurora emission

    NASA Astrophysics Data System (ADS)

    Monfardini, A.; Trampus, P.; Stalio, R.; Mahne, N.; Battiston, R.; Menichelli, M.; Mazzinghi, P.

    2001-08-01

    A low-mass, low-cost photon-counting scientific payload has been developed and launched on a commercial microsatellite in order to study the near-UV night-sky background emission with a telescope nicknamed ``Notte'' and the Aurora emission with ``Alba''. AURORA, as the experiment is named, will determine, with the ``Notte'' channel, the overall night-side photon background in the 300-400 nm spectral range, together with a particular N2 second positive (2P) line (λc = 337 nm). The ``Alba'' channel, on the other hand, will study the Aurora emissions in four different spectral bands (FWHM = 8.4-9.6 nm) centered on: 367 nm (continuum evaluation), 391 nm (N2+ first negative, 1N), 535 nm (continuum evaluation), and 560 nm (OI). The instrument was launched on 26 September 2000 from the Baikonur cosmodrome on a modified SS18 Dnepr-1 ``Satan'' rocket. The satellite orbit is nearly circular (apogee altitude 648 km, e = 0.0022), and the inclination of the orbital plane is 64.56°. An overview of the techniques adopted is given in this paper.

  15. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near the PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to perform well even in cases of extended sources and crowded fields.
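
    A minimal sketch of the multiscale detection idea, using a difference-of-Gaussians stand-in for the wavelet kernel and a simple robust-scatter threshold instead of the analytically derived significance thresholds described above (all parameters are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def wavelet_detect(image, scales=(1, 2, 4, 8), nsigma=5.0):
        """Flag local maxima of a DoG-approximated wavelet transform at several scales."""
        image = image.astype(float)
        detections = []
        for s in scales:
            wt = gaussian_filter(image, s) - gaussian_filter(image, 2 * s)   # approx. Mexican-hat WT
            sigma = 1.4826 * np.median(np.abs(wt - np.median(wt)))           # robust scatter (MAD)
            peaks = (wt == maximum_filter(wt, size=2 * s + 1)) & (wt > nsigma * sigma)
            ys, xs = np.nonzero(peaks)
            detections.extend((x, y, s, wt[y, x]) for x, y in zip(xs, ys))
        return detections   # (x, y, scale, coefficient) for each candidate source
    ```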

  16. Determination of mammalian cell counts, cell size and cell health using the Moxi Z mini automated cell counter.

    PubMed

    Dittami, Gregory M; Sethi, Manju; Rabbitt, Richard D; Ayliffe, H Edward

    2012-06-21

    Particle and cell counting is used for a variety of applications including routine cell culture, hematological analysis, and industrial controls(1-5). A critical breakthrough in cell/particle counting technologies was the development of the Coulter technique by Wallace Coulter over 50 years ago. The technique involves the application of an electric field across a micron-sized aperture and hydrodynamically focusing single particles through the aperture. The resulting occlusion of the aperture by the particles yields a measurable change in electric impedance that can be directly and precisely correlated to cell size/volume. The recognition of the approach as the benchmark in cell/particle counting stems from the extraordinary precision and accuracy of its particle sizing and counts, particularly as compared to manual and imaging-based technologies (accuracies on the order of 98% for Coulter counters versus 75-80% for manual and vision-based systems). This can be attributed to the fact that, unlike imaging-based approaches to cell counting, the Coulter technique makes a true three-dimensional (3-D) measurement of cells/particles, which dramatically reduces count interference from debris and clustering by calculating precise volumetric information about the cells/particles. Overall this provides a means for enumerating and sizing cells in a more accurate, less tedious, less time-consuming, and less subjective manner than other counting techniques(6). Despite the prominence of the Coulter technique in cell counting, its widespread use in routine biological studies has been limited by the cost and size of traditional instruments. Although a less expensive Coulter-based instrument has been produced, it has limitations compared to its more expensive counterparts in the correction for "coincidence events" in which two or more cells pass through the aperture and are measured simultaneously. Another limitation of existing Coulter technologies is the lack of metrics on the overall health of cell samples. Consequently, additional techniques must often be used in conjunction with Coulter counting to assess cell viability. This extends experimental setup time and cost, since the traditional methods of viability assessment require cell staining and/or use of expensive and cumbersome equipment such as a flow cytometer. The Moxi Z mini automated cell counter, described here, is an ultra-small benchtop instrument that combines the accuracy of the Coulter Principle with a thin-film sensor technology to enable precise sizing and counting of particles ranging from 3-25 microns, depending on the cell counting cassette used. The Type M cassette can be used to count particles with average diameters of 4-25 microns (dynamic range 2-34 microns), and the Type S cassette can be used to count particles with an average diameter of 3-20 microns (dynamic range 2-26 microns). Since the system uses a volumetric measurement method, the 4-25 micron range corresponds to a cell volume range of 34-8,180 fL and the 3-20 micron range corresponds to a cell volume range of 14-4,200 fL, which is relevant when non-spherical particles are being measured. To perform mammalian cell counts using the Moxi Z, the cells to be counted are first diluted with ORFLO or similar diluent. A cell counting cassette is inserted into the instrument, and the sample is loaded into the port of the cassette. Thousands of cells are pulled, single file, through a "Cell Sensing Zone" (CSZ) in the thin-film membrane over 8-15 seconds. 
Following the run, the instrument uses proprietary curve-fitting in conjunction with a proprietary software algorithm to provide coincidence event correction along with an assessment of overall culture health by determining the ratio of the number of cells in the population of interest to the total number of particles. The total particle counts include shrunken and broken down dead cells, as well as other debris and contaminants. The results are presented in histogram format with an automatic curve fit, with gates that can be adjusted manually as needed. Ultimately, the Moxi Z enables counting with a precision and accuracy comparable to a Coulter Z2, the current gold standard, while providing additional culture health information. Furthermore it achieves these results in less time, with a smaller footprint, with significantly easier operation and maintenance, and at a fraction of the cost of comparable technologies.
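
    The quoted diameter and volume ranges follow from the equivalent-sphere relation V = (pi/6)·d^3, with 1 um^3 = 1 fL; a quick check in Python:

    ```python
    import math

    def sphere_volume_fl(diameter_um):
        """Equivalent spherical volume in femtoliters for a diameter in microns (1 um^3 = 1 fL)."""
        return math.pi / 6.0 * diameter_um ** 3

    for d in (3, 4, 20, 25):
        print(f"d = {d:2d} um -> V = {sphere_volume_fl(d):6.0f} fL")
    # d = 3 um -> ~14 fL, d = 4 um -> ~34 fL, d = 20 um -> ~4189 fL, d = 25 um -> ~8181 fL,
    # matching the 14-4,200 fL and 34-8,180 fL ranges quoted above.
    ```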

  17. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    PubMed Central

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only the GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the study-subject registration system. Results Of the 127,700 error-planted subjects, 114,464 (89.64%) could still be identified as the same individual, and the remaining 13,236 (10.36%, 13,236/127,700) were discriminated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII fields. For all identified subjects, their errors could be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of incorrect PII fields. The best situation is to pinpoint the exact incorrect PII fields, and the worst situation is to narrow the questionable scope only to a set of 13 PII fields. In the application, the proposed algorithm can give a hint of questionable PII entry and performs as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with a few PII field errors. Correct data entry, especially of the required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, questionable input of PII may be identified by applying set theory operators to the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
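
    A toy sketch of the idea, assuming hypothetical PII fields and SHA-256 as the one-way hash (the GUID system's actual field definitions, normalization rules, and matching rules are not given in the abstract): hash every combination of fields, then use set operations over matched versus mismatched combinations to narrow down which fields were probably mis-entered.

    ```python
    import hashlib
    from itertools import combinations

    # Hypothetical PII fields; the real GUID system defines its own required/optional field sets.
    FIELDS = ["first_name", "last_name", "birth_date", "sex", "city"]

    def hash_codes(record, min_size=2):
        """Return {field combination: SHA-256 hash} for every combination of PII fields."""
        codes = {}
        for r in range(min_size, len(FIELDS) + 1):
            for combo in combinations(FIELDS, r):
                payload = "|".join(record[f].strip().lower() for f in combo)
                codes[combo] = hashlib.sha256(payload.encode()).hexdigest()
        return codes

    def questionable_fields(stored_codes, entered_codes):
        """Fields that appear only in mismatched combinations are the likely entry errors."""
        matched, mismatched = set(), set()
        for combo, code in entered_codes.items():
            (matched if stored_codes.get(combo) == code else mismatched).update(combo)
        return mismatched - matched
    ```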

  18. Design and performance of an ionisation chamber for the measurement of low alpha-activities

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Hutsch, J.; Krüger, F.; Sobiella, M.; Wilsenach, H.; Zuber, K.

    2016-04-01

    A new ionisation chamber for alpha-spectroscopy has been built from radio-pure materials for the purpose of investigating long-lived alpha decays. The measurement makes use of pulse shape analysis to discriminate between signal and background events. The design and performance of the chamber are described in this paper. A background rate of (10.9 ± 0.6) counts per day in the energy region of 1-9 MeV was achieved over a run period of 30.8 days. The background is dominated by radon daughters.
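
    The quoted uncertainty is consistent with simple Poisson counting statistics over the 30.8-day run, assuming the ±0.6 is a 1-sigma statistical error:

    ```python
    import math

    def background_rate_check(rate_cpd, live_days):
        """Total counts and Poisson 1-sigma uncertainty for a rate quoted in counts per day."""
        total = rate_cpd * live_days
        return total, math.sqrt(total) / live_days

    counts, err = background_rate_check(10.9, 30.8)
    print(f"~{counts:.0f} counts -> 10.9 +/- {err:.1f} counts per day")
    # ~336 counts -> 10.9 +/- 0.6 counts per day, matching the quoted value
    ```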

  19. Simultaneous measurement of tritium and radiocarbon by ultra-low-background proportional counting

    DOE PAGES

    Mace, Emily; Aalseth, Craig; Alexander, Tom; ...

    2016-12-21

    Use of ultra-low-background capabilities at Pacific Northwest National Laboratory provides enhanced sensitivity for measurement of low-activity sources of tritium and radiocarbon using proportional counters. Tritium levels are nearly back to pre-nuclear-test backgrounds (~2-8 TU in rainwater), which can complicate their dual measurement with radiocarbon due to overlap of the beta decay spectra. In this paper, we present results of single-isotope proportional counter measurements used to analyze a dual-isotope methane sample synthesized from ~120 mg of H2O, and present sensitivity results.
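
    One common way to handle the overlap of the tritium and radiocarbon beta spectra is to decompose the measured spectrum into single-isotope reference spectra measured separately; the non-negative least-squares sketch below illustrates that approach as an assumption, not the authors' published procedure.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def unmix_spectrum(measured, tritium_ref, c14_ref, background=None):
        """Decompose a measured beta spectrum (counts per energy bin) into tritium and 14C parts."""
        measured = np.asarray(measured, dtype=float)
        if background is None:
            background = np.zeros_like(measured)
        basis = np.column_stack([tritium_ref, c14_ref])   # (n_bins, 2) design matrix
        coeffs, _ = nnls(basis, measured - background)    # non-negative scale factors
        return coeffs                                     # [tritium_scale, c14_scale]
    ```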

  20. SU-E-I-88: The Effect of System Dead Time On Real-Time Plastic and GOS Based Fiber-Optic Dosimetry Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoerner, M; Hintenlang, D

    Purpose: A methodology is presented to correct for measurement inaccuracies at high detector count rates using plastic and GOS scintillation fibers coupled to a photomultiplier tube with digital readout. This system allows temporal acquisition and manipulation of measured data. Methods: The detection system used was a plastic scintillator and a separate gadolinium scintillator, both (0.5 diameter) coupled to an optical fiber with a Hamamatsu photon counter with a built-in microcontroller and digital interface. Count rate performance of the system was evaluated using the nonparalyzable detector model. Detector response was investigated across multiple radiation sources including: an orthovoltage x-ray system, cobalt-60 gamma rays, a proton therapy beam, and a diagnostic radiography x-ray tube. The dead time parameter was calculated by measuring the count rate of the system at different exposure rates using a reference detector. Results: The system dead time was evaluated for the following sources of radiation used clinically: diagnostic energy x-rays, cobalt-60 gamma rays, orthovoltage x-rays, a proton accelerator, and megavoltage x-rays. It was found that dead time increased significantly when exposing the detector to sources capable of generating Cerenkov radiation (all of the sources except the diagnostic x-rays), with increasing prominence at higher photon energies. Percent depth dose curves generated by a dedicated ionization chamber and compared to the detection system demonstrated that correcting for dead time improves accuracy. For most sources, the nonparalyzable model provided an improved fit to the system response. Conclusion: Overall, the system dead time varied across the investigated radiation particles and energies. It was demonstrated that the system response accuracy was greatly improved by correcting for dead time effects. Cerenkov radiation plays a significant role in the increase in the system dead time through transient absorption effects attributed to electron-hole pair creation within the optical waveguide.
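
    For reference, the nonparalyzable model relates the measured rate m to the true rate n by m = n / (1 + n*tau), so the correction is n = m / (1 - m*tau); a minimal sketch follows (the dead time and rate values are hypothetical, not those measured in the study).

    ```python
    def true_rate_nonparalyzable(measured_rate, dead_time):
        """Nonparalyzable dead-time correction: n = m / (1 - m*tau)."""
        loss = measured_rate * dead_time
        if loss >= 1.0:
            raise ValueError("measured rate * dead time must be < 1")
        return measured_rate / (1.0 - loss)

    # Hypothetical example: 4.0e5 counts/s measured with a 500 ns dead time.
    print(true_rate_nonparalyzable(4.0e5, 500e-9))   # -> 5.0e5 counts/s true rate
    ```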
