Sample records for background corrected count

  1. Color quench correction for low level Cherenkov counting.

    PubMed

    Tsroya, S; Pelled, O; German, U; Marco, R; Katorza, E; Alfassi, Z B

    2009-05-01

    The Cherenkov counting efficiency varies strongly with color quenching; thus, correction curves must be used to obtain correct results. The external (152)Eu source of a Quantulus 1220 liquid scintillation counting (LSC) system was used to obtain a quench-indicative parameter based on a spectral area ratio. A color quench correction curve for aqueous samples containing (90)Sr/(90)Y was prepared. The main advantage of this method over the common spectral indicators is that it remains useful for low-level Cherenkov counting.

  2. Effect of background correction on peak detection and quantification in online comprehensive two-dimensional liquid chromatography using diode array detection.

    PubMed

    Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W

    2012-09-07

    A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
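
    The abstract contrasts plain blank subtraction with an SVD-based background reduction. The Python sketch below illustrates the generic idea of estimating dominant background spectral components from a blank LC×LC run and removing their projection from a sample data matrix; it is not the exact SVD-BC algorithm of the paper, and the number of retained components k is an assumption.

    ```python
    import numpy as np

    def svd_background_correction(sample, blank, k=2):
        """Generic SVD-based background reduction (illustrative, not the
        paper's SVD-BC): estimate the k dominant background spectra from a
        blank run and subtract their projection from the sample."""
        # sample, blank: 2-D arrays (retention time x wavelength)
        _, _, vt = np.linalg.svd(blank, full_matrices=False)
        v = vt[:k].T                      # dominant background spectra (wavelength x k)
        return sample - sample @ v @ v.T  # remove the part spanned by those spectra

    def blank_subtraction(sample, blank):
        """Simple blank subtraction, the baseline comparison in the abstract."""
        return sample - blank
    ```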

  3. Compton suppression gamma-counting: The effect of count rate

    USGS Publications Warehouse

    Millard, H.T.

    1984-01-01

    Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma-ray photopeaks situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or the variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so quantitative data can be obtained at higher count rates. © 1984.

  4. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counts (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The study was conducted at the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of the cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for the nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
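
    For context, the "commonly used correction factor formula" referred to above is usually the Abercrombie-style factor T / (T + D), where T is the section thickness and D the mean diameter of the count unit. The short Python sketch below only illustrates that formula; the diameters used are hypothetical, and the paper's own factors were derived empirically from 3-D reconstruction rather than from this expression.

    ```python
    def abercrombie_factor(section_thickness_um, count_unit_diameter_um):
        """Double-count correction factor T / (T + D): the fraction of counted
        profiles attributable to unique cells when count units of mean diameter
        D are transected in sections of thickness T."""
        T, D = section_thickness_um, count_unit_diameter_um
        return T / (T + D)

    # Hypothetical diameters for 20-um sections: a small nucleolus vs a larger nucleus
    print(abercrombie_factor(20.0, 2.0))   # ~0.91
    print(abercrombie_factor(20.0, 10.0))  # ~0.67
    ```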

  5. Low Background Counting at LBNL

    DOE PAGES

    Smith, A. R.; Thomas, K. J.; Norman, E. B.; ...

    2015-03-24

    The Low Background Facility (LBF) at Lawrence Berkeley National Laboratory in Berkeley, California provides low background gamma spectroscopy services to a wide array of experiments and projects. The analysis of samples takes place within two unique facilities: locally, within a carefully constructed low background cave, and remotely at an underground station that historically operated in Oroville, CA, but has recently been relocated to the Sanford Underground Research Facility (SURF) in Lead, SD. These facilities provide a variety of gamma spectroscopy services to low background experiments, primarily in the form of passive material screening for primordial radioisotopes (U, Th, K) or common cosmogenic/anthropogenic products, as well as active screening via Neutron Activation Analysis for specific applications. The LBF also provides hosting services for general R&D testing in low background environments on the surface or underground, such as background testing of detector systems or similar prototyping. A general overview of the facilities, services, and sensitivities is presented. Recent activities and upgrades are also presented, such as the completion of a 3π anticoincidence shield at the surface station and environmental monitoring of Fukushima fallout. The LBF is open to any users for counting services or collaboration on a wide variety of experiments and projects.

  6. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. The accuracy of the effective attenuation

  7. Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.

    PubMed

    Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min

    2018-03-01

    High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pile-up pulse effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is evolved by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to unit impulse pulse shape which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
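
    The paper's unit-impulse shaping is a hardware-level approach; as a point of reference, the simplest textbook counting-loss correction is the non-paralyzable dead-time model sketched below. This is not the method of the paper, and the rate and dead-time values are arbitrary.

    ```python
    def nonparalyzable_true_rate(measured_rate_cps, dead_time_s):
        """Textbook non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
        m, tau = measured_rate_cps, dead_time_s
        if m * tau >= 1.0:
            raise ValueError("measured rate exceeds the non-paralyzable limit 1/tau")
        return m / (1.0 - m * tau)

    print(nonparalyzable_true_rate(2.0e5, 1.0e-6))  # ~2.5e5 cps incoming for 2.0e5 cps measured
    ```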

  8. Soudan Low Background Counting Facility (SOLO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attisha, Michael; Viveiros, Luiz de; Gaitskell, Richard

    2005-09-08

    The Soudan Low Background Counting Facility (SOLO) has been in operation at the Soudan Mine, MN since March 2003. In the past two years, we have gamma-screened samples for the Majorana, CDMS and XENON experiments. With individual sample exposure times of up to two weeks we have measured sample contamination down to the 0.1 ppb level for 238U / 232Th, and down to the 0.25 ppm level for 40K.

  9. Which button will I press? Preference for correctly ordered counting sequences in 18-month-olds.

    PubMed

    Ip, Martin Ho Kwan; Imuta, Kana; Slaughter, Virginia

    2018-04-16

    Correct counting respects the stable order principle whereby the count terms are recited in a fixed order every time. The 4 experiments reported here tested whether precounting infants recognize and prefer correct stable-ordered counting. The authors introduced a novel preference paradigm in which infants could freely press two buttons to activate videos of counting events. In the "correct" counting video, number words were always recited in the canonical order ("1, 2, 3, 4, 5, 6"). The "incorrect" counting video was identical except that the number words were recited in a random order (e.g., "5, 3, 1, 6, 4, 2"). In Experiment 1, 18-month-olds (n = 21), but not 15-month-olds (n = 24), significantly preferred to press the button that activated correct counting events. Experiment 2 revealed that English-learning 18-month-olds' (n = 21) preference for stable-ordered counting disappeared when the counting was done in Japanese. By contrast, Experiment 3 showed that multilingual 18-month-olds (n = 24) preferred correct stable-ordered counting in an unfamiliar foreign language. In Experiment 4, multilingual 18-month-olds (N = 21) showed no preference for stable-ordered alphabet sequences, ruling out some alternative explanations for the Experiment 3 results. Overall these findings are consistent with the idea that implicit recognition of the stable order principle of counting is acquired by 18 months of age, and that learning more than one language may accelerate infants' understanding of abstract counting principles. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Multiplicity Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geist, William H.

    2015-12-01

    This set of slides begins by giving background and a review of neutron counting; three attributes of a verification item are discussed: 240Pu eff mass; α, the ratio of (α,n) neutrons to spontaneous fission neutrons; and leakage multiplication. It then takes up neutron detector systems – theory & concepts (coincidence counting, moderation, die-away time); detector systems – some important details (deadtime, corrections); introduction to multiplicity counting; multiplicity electronics and example distributions; singles, doubles, and triples from measured multiplicity distributions; and the point model: multiplicity mathematics.
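
    As a rough illustration of how singles, doubles, and triples are obtained from measured multiplicity distributions, the sketch below computes them from the reduced factorial moments of the correlated (R+A) and accidental (A) gate histograms, following the standard shift-register moment expressions; dead-time and gate-fraction corrections are omitted, and the variable names are illustrative rather than taken from the slides.

    ```python
    import numpy as np

    def reduced_factorial_moments(p):
        """First and second reduced factorial moments of a normalized gate
        multiplicity distribution p[n], n = 0, 1, 2, ..."""
        n = np.arange(len(p))
        return np.sum(n * p), np.sum(n * (n - 1) / 2.0 * p)

    def singles_doubles_triples(trigger_rate, p_ra, p_a):
        """Sketch of the moment-based construction of S, D, T from the (R+A)
        and A gate distributions (no dead-time or gate-fraction corrections)."""
        m1_ra, m2_ra = reduced_factorial_moments(np.asarray(p_ra, float))
        m1_a, m2_a = reduced_factorial_moments(np.asarray(p_a, float))
        S = trigger_rate
        D = S * (m1_ra - m1_a)
        T = S * (m2_ra - m2_a - m1_a * (m1_ra - m1_a))
        return S, D, T
    ```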

  11. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high-purity germanium (HPGe) detector equipped with a low background shielding system, and it has been evaluated numerically using summing peaks. It is found that the FEP efficiencies of 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) correction into account. Counts of summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve within an accuracy of 3%.

  12. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    NASA Astrophysics Data System (ADS)

    Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.

    2018-01-01

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which themselves have only recently been formulated. The current paper discusses and presents the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  13. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE PAGES

    Lockhart, M.; Henzlova, D.; Croft, S.; ...

    2017-09-20

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which themselves have only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  14. Experimental evaluation of the extended Dytlewski-style dead time correction formalism for neutron multiplicity counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lockhart, M.; Henzlova, D.; Croft, S.

    Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which themselves have only recently been formulated. Here, we discuss and present the experimental evaluation of the practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data are compared to the traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for the broad range of count rates available in practical applications.

  15. Apparatus and method for temperature correction and expanded count rate of inorganic scintillation detectors

    DOEpatents

    Ianakiev, Kiril D [Los Alamos, NM]; Hsue, Sin Tao [Santa Fe, NM]; Browne, Michael C [Los Alamos, NM]; Audia, Jeffrey M [Abiquiu, NM]

    2006-07-25

    The present invention includes an apparatus and corresponding method for temperature correction and count rate expansion of inorganic scintillation detectors. A temperature sensor is attached to an inorganic scintillation detector. The inorganic scintillation detector, due to interaction with incident radiation, creates light pulse signals. A photoreceiver converts the light pulse signals to current signals. Temperature correction circuitry uses a fast light component signal, a slow light component signal, and the temperature signal from the temperature sensor to correct the inorganic scintillation detector signal output and expand the count rate.

  16. The number counts and infrared backgrounds from infrared-bright galaxies

    NASA Technical Reports Server (NTRS)

    Hacking, P. B.; Soifer, B. T.

    1991-01-01

    Extragalactic number counts and diffuse backgrounds at 25, 60, and 100 microns are predicted using new luminosity functions and improved spectral-energy distribution density functions derived from IRAS observations of nearby galaxies. Galaxies at redshifts z less than 3 that are like those in the local universe should produce a minimum diffuse background of 0.0085, 0.038, and 0.13 MJy/sr at 25, 60, and 100 microns, respectively. Models with significant luminosity evolution predict backgrounds about a factor of 4 greater than this minimum.

  17. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global correction to compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3GBq 99mTc) with ∼10% deadtime loss containing the 37mm (uptake 3), 28 and 22mm (uptake 6) spheres were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections that were individually corrected for deadtime losses; and (2) from the original projections with losses, then correcting the reconstructed SPECT images using a scaling factor equal to the inverse of the average fractional loss for 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were also explored. The same two correction methods were carried out for all these data sets. Results: Averaging the deadtime losses in 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods showed agreement within the statistical noise. The count-loss recoveries in the two methods also agree to within >99%. Conclusion: The projection-based and the global correction yield visually indistinguishable SPECT images. The global correction based on sparse sampling of projection losses allows for accurate SPECT
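
    A minimal numerical sketch of the two compensation strategies compared above follows: per-projection correction divides each projection by its own live fraction, while the global correction scales the reconstructed image by a single factor derived from the average fractional loss of a few sampled projections. The variable names and the simple division model are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def projection_based_correction(projections, fractional_loss):
        """Correct each projection individually for its own dead-time loss."""
        return np.asarray(projections, float) / (1.0 - np.asarray(fractional_loss, float))

    def global_correction(reconstructed_image, sampled_losses):
        """Scale the reconstructed image by one factor based on the average
        fractional loss of a sparse sample of projections per detector."""
        mean_loss = float(np.mean(sampled_losses))
        return np.asarray(reconstructed_image, float) / (1.0 - mean_loss)
    ```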

  18. Aging and Visual Counting

    PubMed Central

    Li, Roger W.; MacKeben, Manfred; Chat, Sandy W.; Kumar, Maya; Ngo, Charlie; Levi, Dennis M.

    2010-01-01

    Background Much previous work on how normal aging affects visual enumeration has focused on the response time required to enumerate, with unlimited stimulus duration. There is a fundamental question, not yet addressed, of how many visual items the aging visual system can enumerate in a “single glance”, without the confounding influence of eye movements. Methodology/Principal Findings We recruited 104 observers with normal vision across the age span (age 21–85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen, and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (counting threshold, defined as the number of visual items at which there is a ≈63% correct rate on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (age 61–85: ∼5 dots) when compared with the youngest group (age 21–40: 7 dots). Surprisingly, despite the decreased counting threshold, the average counting accuracy function (defined as the mean number of dots reported for each number tested) is largely unaffected by age, reflecting that the threshold loss can be primarily attributed to increased random errors. We further expanded this interesting finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Conclusion/Significance Here we show that age reduces the ability to correctly enumerate in a glance, but the accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal or other perceptual factors, but is cortical in origin. PMID:20976149
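
    The counting threshold described above is the set size at which a fitted psychometric curve crosses roughly 63% correct. The Python sketch below shows one way such a threshold could be extracted; the functional form and the data points are made up for illustration and are not taken from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, brentq

    def p_correct_model(n, alpha, beta):
        """Decreasing Weibull-type curve for proportion correct vs set size
        (an illustrative functional form)."""
        return np.exp(-(n / alpha) ** beta)

    set_sizes = np.array([2, 3, 4, 5, 6, 7, 8, 9], float)
    p_obs = np.array([0.99, 0.97, 0.93, 0.85, 0.70, 0.52, 0.36, 0.24])  # made-up data
    (alpha, beta), _ = curve_fit(p_correct_model, set_sizes, p_obs, p0=(7.0, 4.0))
    # Counting threshold: set size where the fitted curve crosses ~63% correct.
    threshold = brentq(lambda n: p_correct_model(n, alpha, beta) - 0.63, 1.0, 20.0)
    print(threshold)
    ```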

  19. PET attenuation correction for rigid MR Tx/Rx coils from 176Lu background activity

    NASA Astrophysics Data System (ADS)

    Lerche, Christoph W.; Kaltsas, Theodoris; Caldeira, Liliana; Scheins, Jürgen; Rota Kops, Elena; Tellmann, Lutz; Pietrzyk, Uwe; Herzog, Hans; Shah, N. Jon

    2018-02-01

    One challenge for PET-MR hybrid imaging is the correction for attenuation of the 511 keV annihilation radiation by the required RF transmit and/or RF receive coils. Although there are strategies for building PET-transparent Tx/Rx coils, such optimised coils still cause significant attenuation of the annihilation radiation, leading to artefacts and biases in the reconstructed activity concentrations. We present a straightforward method to measure the attenuation of Tx/Rx coils in simultaneous MR-PET imaging based on the natural 176Lu background contained in the scintillator of the PET detector, without the requirement of an external CT scanner or a PET scanner with a transmission source. The method was evaluated on a prototype 3T MR-BrainPET produced by Siemens Healthcare GmbH, both with phantom studies and with true emission images from patient/volunteer examinations. Furthermore, the count rate stability of the PET scanner and the x-ray properties of the Tx/Rx head coil were investigated. Even without energy extrapolation from the two dominant γ energies of 176Lu to 511 keV, the presented method for attenuation correction, based on the measurement of 176Lu background attenuation, shows slightly better performance than the coil attenuation correction currently used, which is based on an external transmission scan with rotating 68Ge sources acquired on a Siemens ECAT HR+ PET scanner. However, the main advantage of the presented approach is its straightforwardness and ready availability without the need for additional accessories.

  20. Reference analysis of the signal + background model in counting experiments

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2012-01-01

    The model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.

  1. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements. § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... (40 CFR, Protection of Environment)

  2. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements. § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... (40 CFR, Protection of Environment)

  3. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements. § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... (40 CFR, Protection of Environment)

  4. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements. § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... (40 CFR, Protection of Environment)

  5. 40 CFR 1065.667 - Dilution air background emission correction.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...Air Pollution Controls, Engine-Testing Procedures, Calculations and Data Requirements. § 1065.667 Dilution air background emission correction. (a) To determine the mass of background emissions to subtract... (40 CFR, Protection of Environment)

  6. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even an ideal seeding device that generates particles which exactly follow the flow still leaves a major source of error: particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation of laser anemometer measurements in a turbulent flow was constructed. That simulator and the results of its use are discussed.
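
    Many correction schemes exist for this counting bias; one classical example is inverse-velocity (residence-time) weighting of the individual realizations, sketched below. This is only one of the schemes alluded to above, not necessarily the one favored by the paper, and it assumes there are no near-zero speeds in the sample.

    ```python
    import numpy as np

    def velocity_bias_corrected_mean(u_samples):
        """Inverse-velocity weighted mean: fast particles are seen more often,
        so each realization is down-weighted by ~1/|u| (1-D illustration)."""
        u = np.asarray(u_samples, float)
        w = 1.0 / np.abs(u)
        return np.sum(w * u) / np.sum(w)
    ```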

  7. Simple automatic strategy for background drift correction in chromatographic data analysis.

    PubMed

    Fu, Hai-Yan; Li, He-Dong; Yu, Yong-Jie; Wang, Bing; Lu, Peng; Cui, Hua-Peng; Liu, Ping-Ping; She, Yuan-Bin

    2016-06-03

    Chromatographic background drift correction, which influences peak detection and time shift alignment results, is a critical stage in chromatographic data analysis. In this study, an automatic background drift correction methodology was developed. Local minimum values in a chromatogram were initially detected and organized as a new baseline vector. Iterative optimization was then employed to recognize outliers, which belong to the chromatographic peaks, in this vector, and update the outliers in the baseline until convergence. The optimized baseline vector was finally expanded into the original chromatogram, and linear interpolation was employed to estimate background drift in the chromatogram. The principle underlying the proposed method was confirmed using a complex gas chromatographic dataset. Finally, the proposed approach was applied to eliminate background drift in liquid chromatography quadrupole time-of-flight samples used in the metabolic study of Escherichia coli samples. The proposed method was comparable with three classical techniques: morphological weighted penalized least squares, moving window minimum value strategy and background drift correction by orthogonal subspace projection. The proposed method allows almost automatic implementation of background drift correction, which is convenient for practical use. Copyright © 2016 Elsevier B.V. All rights reserved.
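
    A compact sketch of the strategy just described is given below: local minima form an initial baseline vector, minima that sit on chromatographic peaks are rejected iteratively, and the surviving points are linearly interpolated across the whole chromatogram. The rejection rule (k times the residual standard deviation against a median-filtered reference) and the parameter values are illustrative assumptions, not the paper's exact criteria.

    ```python
    import numpy as np
    from scipy.signal import argrelextrema, medfilt

    def drift_baseline(y, order=3, k=2.5, max_iter=50):
        """Estimate background drift from iteratively cleaned local minima."""
        y = np.asarray(y, float)
        x = np.arange(y.size)
        idx = argrelextrema(y, np.less_equal, order=order)[0]  # candidate baseline points
        for _ in range(max_iter):
            reference = medfilt(y[idx], kernel_size=5)          # local reference level
            resid = y[idx] - reference
            keep = resid <= k * np.std(resid)
            if keep.all():
                break                                           # converged: no outliers left
            idx = idx[keep]
        return np.interp(x, x[idx], y[idx])                     # drift over the full chromatogram

    # corrected_chromatogram = y - drift_baseline(y)
    ```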

  8. Initial Characterization of Unequal-Length, Low-Background Proportional Counters for Absolute Gas-Counting Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, Emily K.; Aalseth, Craig E.; Bonicalzi, Ricco

    Characterization of two sets of custom unequal-length proportional counters is underway at Pacific Northwest National Laboratory (PNNL). These detectors will be used in measurements to determine the absolute activity concentration of gaseous radionuclides (e.g., 37Ar). A set of three detectors has been fabricated based on previous PNNL ultra-low-background proportional counter (ULBPC) designs and now operates in PNNL's shallow underground counting laboratory. A second set of four counters has also been fabricated using clean assembly of OFHC copper components for use in an above-ground counting laboratory. Characterization of both sets of detectors is underway with measurements of background rates, gas gain, energy resolution, and shielding considerations. These results will be presented along with uncertainty estimates of future absolute gas counting measurements.

  9. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    ERIC Educational Resources Information Center

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…

  10. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of these methods have been applied in the field of LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. Experiments on the background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method after background correction. These background correction methods all acquire larger SBR values than that obtained before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
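
    As a simple illustration of the spline idea described above, the sketch below fits a cubic spline through pre-selected baseline anchor points and subtracts it; how the anchors are detected, and which emission line is used for the signal-to-background ratio, are assumptions outside this snippet.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def spline_background_correction(wavelengths, intensity, anchor_idx):
        """Cubic-spline background through anchor points (anchor_idx is assumed
        sorted and taken from baseline regions of the LIBS spectrum)."""
        spline = CubicSpline(wavelengths[anchor_idx], intensity[anchor_idx])
        background = spline(wavelengths)
        return intensity - background, background

    # Example of a signal-to-background ratio at a chosen line index `peak`:
    # corrected, background = spline_background_correction(w, y, anchors)
    # sbr = corrected[peak] / background[peak]
    ```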

  11. [Raman spectroscopy fluorescence background correction and its application in clustering analysis of medicines].

    PubMed

    Chen, Shan; Li, Xiao-ning; Liang, Yi-zeng; Zhang, Zhi-min; Liu, Zhao-xia; Zhang, Qi-ming; Ding, Li-xia; Ye, Fei

    2010-08-01

    During Raman spectroscopy analysis, organic molecules and contaminations can obscure or swamp Raman signals. The present study starts from Raman spectra of prednisone acetate tablets and glibenclamide tablets, which were acquired with a BWTek i-Raman spectrometer. The background is corrected by the R package baselineWavelet. Then principal component analysis and random forests are used to perform clustering analysis. Through analyzing the Raman spectra of the two medicines, the accuracy and validity of this background-correction algorithm are checked, and the influence of the fluorescence background on Raman spectral clustering analysis is discussed. It is concluded that it is important to correct the fluorescence background before further analysis, and an effective background correction solution is provided for clustering or other analyses.

  12. Photon counting, censor corrections, and lifetime imaging for improved detection in two-photon microscopy

    PubMed Central

    Driscoll, Jonathan D.; Shih, Andy Y.; Iyengar, Satish; Field, Jeffrey J.; White, G. Allen; Squier, Jeffrey A.; Cauwenberghs, Gert

    2011-01-01

    We present a high-speed photon counter for use with two-photon microscopy. Counting pulses of photocurrent, as opposed to analog integration, maximizes the signal-to-noise ratio so long as the uncertainty in the count does not exceed the gain-noise of the photodetector. Our system extends this improvement through an estimate of the count that corrects for the censored period after detection of an emission event. The same system can be rapidly reconfigured in software for fluorescence lifetime imaging, which we illustrate by distinguishing between two spectrally similar fluorophores in an in vivo model of microstroke. PMID:21471395

  13. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring on roads requires counting the number of vehicles passing along them; this is particularly emphasized for highway transportation management. Therefore, it is necessary to develop a system that is able to count the number of vehicles automatically, and video processing methods make this possible. This research developed a vehicle counting system for a toll road. The system includes processes of video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on grayscale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36%, whereas the lowest accuracy was in the evening, at 21.43%. The difference between the morning and evening results is caused by the different illumination, which changes the pixel values in the images.
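
    A minimal OpenCV-style sketch of such a pipeline (grayscale conversion, background subtraction, morphological cleanup, blob counting) is shown below. The video filename, the MOG2 subtractor, and the area threshold are illustrative assumptions; a practical counter would also need a counting line or tracking so that each vehicle is counted once rather than in every frame.

    ```python
    import cv2

    cap = cv2.VideoCapture("toll_road.avi")  # hypothetical input video
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

    blob_detections = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = subtractor.apply(gray)                          # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # fill holes
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blob_detections += sum(1 for c in contours if cv2.contourArea(c) > 500)
    cap.release()
    ```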

  14. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior a very large number of times.
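
    To make the counting model concrete, the sketch below evaluates a numerical posterior for the signal intensity given observed counts and a known expected background, for whatever prior the caller supplies (flat, the 1/sqrt(s+b) limiting form, or a tabulated reference prior). The grid, the observed count, and the background value are illustrative, and this is a generic Poisson computation rather than the paper's derivation.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def signal_posterior(n_obs, b, prior, s_grid):
        """Normalized posterior density for the signal s in the Poisson
        signal+background model, for a caller-supplied prior on s_grid."""
        likelihood = poisson.pmf(n_obs, s_grid + b)
        post = likelihood * prior(s_grid)
        return post / np.trapz(post, s_grid)

    s = np.linspace(0.0, 30.0, 2001)
    flat_prior = lambda s: np.ones_like(s)
    limiting_prior = lambda s: 1.0 / np.sqrt(s + 3.0)   # 1/sqrt(s+b) form for b = 3 (illustrative)
    post_flat = signal_posterior(n_obs=7, b=3.0, prior=flat_prior, s_grid=s)
    post_lim = signal_posterior(n_obs=7, b=3.0, prior=limiting_prior, s_grid=s)
    ```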

  15. General relativistic corrections in density-shear correlations

    NASA Astrophysics Data System (ADS)

    Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena

    2018-06-01

    We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue here that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations.

  16. Fluorescence decay data analysis correcting for detector pulse pile-up at very high count rates

    NASA Astrophysics Data System (ADS)

    Patting, Matthias; Reisch, Paja; Sackrow, Marcus; Dowler, Rhys; Koenig, Marcelle; Wahl, Michael

    2018-03-01

    Using time-correlated single photon counting for the purpose of fluorescence lifetime measurements is usually limited in speed due to pile-up. With modern instrumentation, this limitation can be lifted significantly, but some artifacts due to frequent merging of closely spaced detector pulses (detector pulse pile-up) remain an issue to be addressed. We propose a data analysis method correcting for this type of artifact and the resulting systematic errors. It physically models the photon losses due to detector pulse pile-up and incorporates the loss in the decay fit model employed to obtain fluorescence lifetimes and relative amplitudes of the decay components. Comparison of results with and without this correction shows a significant reduction of systematic errors at count rates approaching the excitation rate. This allows quantitatively accurate fluorescence lifetime imaging at very high frame rates.

  17. Energy-correction photon counting pixel for photon energy extraction under pulse pile-up

    NASA Astrophysics Data System (ADS)

    Lee, Daehee; Park, Kyungjin; Lim, Kyung Taek; Cho, Gyuseong

    2017-06-01

    A photon counting detector (PCD) has been proposed as an alternative to the energy-integrating detector (EID) in the medical imaging field due to its high resolution, high efficiency, and low noise. The PCD has expanded into a variety of fields such as spectral CT, k-edge imaging, and material decomposition owing to its capability to count incident photons and measure their energies. Nonetheless, pulse pile-up, which is a superimposition of pulses at the output of a charge sensitive amplifier (CSA) in each PC pixel, occurs frequently as the X-ray flux increases due to the finite pulse processing time (PPT) in CSAs. Pulse pile-up induces not only a count loss but also distortion in the measured X-ray spectrum from each PC pixel, and thus it is a main constraint on the use of PCDs in high-flux X-ray applications. To minimize these effects, an energy-correction PC (ECPC) pixel is proposed to resolve pulse pile-up without cutting off the PPT, by adding an energy correction logic (ECL) via a cross detection method (CDM). The ECPC pixel, with a size of 200×200 μm2, was fabricated using a 6-metal 1-poly 0.18 μm CMOS process with a static power consumption of 7.2 μW/pixel. The maximum count rate of the ECPC pixel was extended to approximately three times higher than that of a conventional PC pixel with a PPT of 500 nsec. The X-ray spectrum of 90 kVp, filtered by a 3 mm Al filter, was measured as the X-ray current was increased using the CdTe and the ECPC pixel. As a result, the ECPC pixel dramatically reduced the energy spectrum distortion at 2 Mphotons/pixel/s when compared to that of the ERCP pixel with the same 500 nsec PPT.

  18. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    NASA Astrophysics Data System (ADS)

    Croft, Stephen; Favalli, Andrea

    2017-10-01

    Neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates, singles, doubles and triples, are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  19. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates, singles, doubles and triples, are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  20. Extension of the Dytlewski-style dead time correction formalism for neutron multiplicity counting to any order

    DOE PAGES

    Croft, Stephen; Favalli, Andrea

    2017-07-16

    Here, neutron multiplicity counting using shift-register calculus is an established technique in the science of international nuclear safeguards for the identification, verification, and assay of special nuclear materials. Typically, passive counting is used for Pu and mixed Pu-U items and active methods are used for U materials. Three counting rates, singles, doubles and triples, are measured and, in combination with a simple analytical point-model, are used to calculate characteristics of the measurement item in terms of known detector and nuclear parameters. However, the measurement problem usually involves more than three quantities of interest, but even in cases where the next higher order count rate, quads, is statistically viable, it is not quantitatively applied because corrections for dead time losses are currently not available in the predominant analysis paradigm. In this work we overcome this limitation by extending the commonly used dead time correction method, developed by Dytlewski, to quads. We also give results for pents, which may be of interest for certain special investigations. Extension to still higher orders may be accomplished by inspection, based on the sequence presented. We discuss the foundations of the Dytlewski method, give limiting cases, and highlight the opportunities and implications that these new results expose. In particular, there exist a number of ways in which the new results may be combined with other approaches to extract the correlated rates, and this leads to various practical implementations.

  1. Background characterization of an ultra-low background liquid scintillation counter

    DOE PAGES

    Erchinger, J. L.; Orrell, John L.; Aalseth, C. E.; ...

    2017-01-26

    The Ultra-Low Background Liquid Scintillation Counter developed by Pacific Northwest National Laboratory will expand the application of liquid scintillation counting by enabling lower detection limits and smaller sample volumes. By reducing the overall count rate of the background environment to approximately 2 orders of magnitude below that of commercially available systems, backgrounds on the order of tens of counts per day over an energy range of ~3–3600 keV can be realized. Initial tests of the ULB LSC show promising results for ultra-low-background detection with liquid scintillation counting.

  2. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
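
    The core mechanism mentioned above, estimating bias from observation-minus-background departures, can be caricatured as a regression of O-B onto bias predictors. The stand-alone least-squares sketch below is only illustrative: real variational bias correction estimates the coefficients inside the assimilation cost function, and the predictor set here is an assumption.

    ```python
    import numpy as np

    def estimate_bias_coefficients(omb, predictors):
        """Regress O-B departures onto a constant plus bias predictors
        (e.g., scan angle, layer thickness); returns the coefficients."""
        X = np.column_stack([np.ones_like(omb)] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, omb, rcond=None)
        return beta
    ```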

  3. Initial characterization of unequal-length, low-background proportional counters for absolute gas-counting applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mace, E. K.; Aalseth, C. E.; Bonicalzi, R.

    Characterization of two sets of custom unequal-length proportional counters is underway at Pacific Northwest National Laboratory (PNNL). These detectors will be used in measurements to determine the absolute activity concentration of gaseous radionuclides (e.g., 37Ar). A set of three detectors has been fabricated based on previous PNNL ultra-low-background proportional counter designs and now operates in PNNL's shallow underground counting laboratory. A second set of four counters has also been fabricated using clean assembly of Oxygen-Free High-Conductivity copper components for use in a shielded above-ground counting laboratory. Characterization of both sets of detectors is underway with measurements of background rates, gas gain, and energy resolution. These results will be presented along with a shielding study for the above-ground cave.

  4. ELLIPTICAL WEIGHTED HOLICs FOR WEAK LENSING SHEAR MEASUREMENT. III. THE EFFECT OF RANDOM COUNT NOISE ON IMAGE MOMENTS IN WEAK LENSING ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp

    This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise introduces additional moments and a centroid shift error; those first-order effects cancel on averaging, but the second-order effects do not cancel. We derive formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.

  5. Background correction in separation techniques hyphenated to high-resolution mass spectrometry - Thorough correction with mass spectrometry scans recorded as profile spectra.

    PubMed

    Erny, Guillaume L; Acunha, Tanize; Simó, Carolina; Cifuentes, Alejandro; Alves, Arminda

    2017-04-07

    Separation techniques hyphenated with high-resolution mass spectrometry have been a true revolution in analytical separation techniques. Such instruments not only provide unmatched resolution, but they also allow measuring the peaks' accurate masses, which permits identification of monoisotopic formulae. However, data files can be large, with a major contribution from background noise and background ions. Such unnecessary contributions to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially for minor features. Thus, noise and baseline correction can be a valuable pre-processing step. The methodology described here, unlike any other approach, corrects the original dataset with the MS scans recorded as profile spectra. Using urine metabolic studies as examples, we demonstrate that this thorough correction reduces the data complexity by more than 90%. Such correction not only permits improved visualisation of secondary peaks in the chromatographic domain, but it also facilitates the complete assignment of each MS scan, which is invaluable for detecting possible comigrating/coeluting species. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Pile-up corrections for high-precision superallowed β decay half-life measurements via γ-ray photopeak counting

    NASA Astrophysics Data System (ADS)

    Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hyland, B.; Kulp, W. D.; Leach, K. G.; Leslie, J. R.; Morton, A. C.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Williams, S. J.; Wong, J.; Wood, J. L.; Zganjar, E. F.

    2007-09-01

    A general technique that corrects γ-ray gated β decay-curve data for detector pulse pile-up is presented. The method includes corrections for non-zero time-resolution and energy-threshold effects in addition to a special treatment of saturating events due to cosmic rays. This technique is verified through a Monte Carlo simulation and experimental data using radioactive beams of ²⁶Na implanted at the center of the 8π γ-ray spectrometer at the ISAC facility at TRIUMF in Vancouver, Canada. The β-decay half-life of ²⁶Na obtained from counting 1809-keV γ-ray photopeaks emitted by the daughter ²⁶Mg was determined to be T1/2 = 1.07167 ± 0.00055 s following a 27σ correction for detector pulse pile-up. This result is in excellent agreement with the result of a previous measurement that employed direct β counting and demonstrates the feasibility of high-precision β-decay half-life measurements through the use of high-purity germanium γ-ray detectors. The technique presented here, while motivated by superallowed-Fermi β decay studies, is general and can be used for all half-life determinations (e.g. α-, β-, X-ray, fission) in which a γ-ray photopeak is used to select the decays of a particular isotope.

  7. Continuous Glucose Monitoring in Subjects with Type 1 Diabetes: Improvement in Accuracy by Correcting for Background Current

    PubMed Central

    Youssef, Joseph El; Engle, Julia M.; Massoud, Ryan G.; Ward, W. Kenneth

    2010-01-01

    Background: A cause of suboptimal accuracy in amperometric glucose sensors is the presence of a background current (current produced in the absence of glucose) that is not accounted for. We hypothesized that a mathematical correction for the estimated background current of a commercially available sensor would lead to greater accuracy compared to a situation in which we assumed the background current to be zero. We also tested whether increasing the frequency of sensor calibration would improve sensor accuracy. Methods: This report includes analysis of 20 sensor datasets from seven human subjects with type 1 diabetes. Data were divided into a training set for algorithm development and a validation set on which the algorithm was tested. A range of potential background currents was tested. Results: Use of the background current correction of 4 nA led to a substantial improvement in accuracy (improvement of absolute relative difference or absolute difference of 3.5–5.5 units). An increase in calibration frequency led to a modest accuracy improvement, with an optimum at every 4 h. Conclusions: Compared to no correction, a correction for the estimated background current of a commercially available glucose sensor led to greater accuracy and better detection of hypoglycemia and hyperglycemia. The accuracy-optimizing scheme presented here can be implemented in real time. PMID:20879968
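
    As a back-of-the-envelope illustration of the correction described above, the sketch below converts a raw sensor current to glucose with and without subtracting a background current. The 4 nA value comes from the abstract; the calibration numbers and the function name are hypothetical.

      def glucose_from_current(i_raw_nA, i_cal_nA, glucose_ref_mgdl, i_bg_nA=0.0):
          """Convert sensor current to glucose using one calibration pair.

          Sensitivity is derived from a single reference sample; i_bg_nA is the
          assumed background current (current present with zero glucose).
          """
          sensitivity = (i_cal_nA - i_bg_nA) / glucose_ref_mgdl   # nA per mg/dL
          return (i_raw_nA - i_bg_nA) / sensitivity

      # Hypothetical numbers: calibration at 150 mg/dL gave 34 nA; a later reading is 14 nA.
      uncorrected = glucose_from_current(14.0, 34.0, 150.0, i_bg_nA=0.0)
      corrected = glucose_from_current(14.0, 34.0, 150.0, i_bg_nA=4.0)
      print(f"no correction: {uncorrected:.0f} mg/dL, 4 nA correction: {corrected:.0f} mg/dL")
      # The uncorrected estimate reads high at low glucose, which is why the
      # background term matters most for hypoglycemia detection.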

  8. [Corrected count].

    PubMed

    1991-11-27

    The data of the 1991 census indicated that the population count of Brazil fell short of a former estimate by 3 million people. The population reached 150 million people with an annual increase of 2%, while projections in the previous decade expected an increase of 2.48% to 153 million people. This reduction indicates more widespread use of family planning (FP) and control of fertility among families of lower social status as more information is being provided to them. However, the Ministry of Health ordered an investigation of foreign family planning organizations because it was suspected that women were forced to undergo tubal ligation during vaccination campaigns. A strange alliance of left wing politicians and the Roman Catholic Church alleges a conspiracy of international FP organizations receiving foreign funds. The FP strategies of Bemfam and Pro-Pater offer women who have little alternative the opportunity to undergo tubal ligation or to receive oral contraceptives to control fertility. The ongoing government program of distributing booklets on FP is feeble and is not backed up by an education campaign. Charges of foreign interference are leveled while the government hypocritically ignores the grave problem of 4 million abortions a year. The population is expected to continue to grow until the year 2040 and then to stabilize at a low growth rate of .4%. In 1980, the number of children per woman was 4.4 whereas the 1991 census figures indicate this has dropped to 3.5. The excess population is associated with poverty and a forsaken caste in the interior. The population actually has decreased in the interior and in cities with 15,000 people. The phenomenon of the drop of fertility associated with rural exodus is contrasted with cities and villages where the population is 20% less than expected.

  9. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  10. Learning to count begins in infancy: evidence from 18 month olds' visual preferences.

    PubMed

    Slaughter, Virginia; Itakura, Shoji; Kutsuki, Aya; Siegal, Michael

    2011-10-07

    We used a preferential looking paradigm to evaluate infants' preferences for correct versus incorrect counting. Infants viewed a video depicting six fish. In the correct counting sequence, a hand pointed to each fish in turn, accompanied by verbal counting up to six. In the incorrect counting sequence, the hand moved between two of the six fish while there was still verbal counting to six, thereby violating the one-to-one correspondence principle of correct counting. Experiment 1 showed that Australian 18 month olds, but not 15 month olds, significantly preferred to watch the correct counting sequence. In experiment 2, Australian infants' preference for correct counting disappeared when the count words were replaced by beeps or by Japanese count words. In experiment 3, Japanese 18 month olds significantly preferred the correct counting video only when counting was in Japanese. These results show that infants start to acquire the abstract principles governing correct counting prior to producing any counting behaviour.

  11. Learning to count begins in infancy: evidence from 18 month olds' visual preferences

    PubMed Central

    Slaughter, Virginia; Itakura, Shoji; Kutsuki, Aya; Siegal, Michael

    2011-01-01

    We used a preferential looking paradigm to evaluate infants' preferences for correct versus incorrect counting. Infants viewed a video depicting six fish. In the correct counting sequence, a hand pointed to each fish in turn, accompanied by verbal counting up to six. In the incorrect counting sequence, the hand moved between two of the six fish while there was still verbal counting to six, thereby violating the one-to-one correspondence principle of correct counting. Experiment 1 showed that Australian 18 month olds, but not 15 month olds, significantly preferred to watch the correct counting sequence. In experiment 2, Australian infants' preference for correct counting disappeared when the count words were replaced by beeps or by Japanese count words. In experiment 3, Japanese 18 month olds significantly preferred the correct counting video only when counting was in Japanese. These results show that infants start to acquire the abstract principles governing correct counting prior to producing any counting behaviour. PMID:21325331

  12. Finite-size effects in transcript sequencing count distribution: its power-law correction necessarily precedes downstream normalization and comparative analysis.

    PubMed

    Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank

    2018-02-12

    Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as being almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates to bring about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity to invoke an instant increase in
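
    The curvature described above can be seen by comparing the local log-log slope of a rank-abundance curve near its head and its tail. The sketch below is only a diagnostic of that deviation, not the authors' correction, and the simulated counts are an assumption.

      import numpy as np

      def local_loglog_slope(counts, rank_lo, rank_hi):
          """Slope of log10(count) vs. log10(rank) over a rank window (1-indexed)."""
          ranked = np.sort(np.asarray(counts))[::-1]
          ranks = np.arange(1, len(ranked) + 1)
          sel = (ranks >= rank_lo) & (ranks <= rank_hi) & (ranked > 0)
          slope, _ = np.polyfit(np.log10(ranks[sel]), np.log10(ranked[sel]), 1)
          return slope

      # Synthetic example: a finite library sampled from an ideal Zipf-like population.
      rng = np.random.default_rng(1)
      n_genes = 5000
      probs = 1.0 / np.arange(1, n_genes + 1)      # ideal power law, exponent -1
      probs /= probs.sum()
      counts = rng.multinomial(200_000, probs)     # finite sequencing depth

      print("slope near head (ranks 1-50):     %.2f" % local_loglog_slope(counts, 1, 50))
      print("slope near tail (ranks 500-5000): %.2f" % local_loglog_slope(counts, 500, 5000))
      # A single power law would give the same slope in both windows; the finite
      # sample steepens the tail, which is the curvature discussed above.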

  13. Direct reconstruction of parametric images for brain PET with event-by-event motion correction: evaluation in two tracers across count levels

    NASA Astrophysics Data System (ADS)

    Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.

    2017-07-01

    Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, 'direct reconstruction' incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C
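
    For readers unfamiliar with the kinetic model named above, the sketch below simulates a one-tissue-compartment time activity curve and recovers K1, k2 and VT = K1/k2 by the indirect route (fitting after reconstruction). The input function, timing grid and parameter values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.0, 90.0, 181)                  # minutes, 0.5-min grid
      dt = t[1] - t[0]
      Cp = 8.0 * t * np.exp(-t / 2.0)                  # assumed plasma input (arbitrary units)

      def one_tissue(t, K1, k2):
          """C_T(t) = K1 * integral of Cp(tau) * exp(-k2*(t - tau)) dtau (discrete convolution)."""
          kernel = np.exp(-k2 * t)
          return K1 * np.convolve(Cp, kernel)[:len(t)] * dt

      # Simulate a noisy voxel TAC with "true" K1 = 0.5 mL/cm^3/min and k2 = 0.05 /min.
      rng = np.random.default_rng(2)
      tac = one_tissue(t, 0.5, 0.05) + rng.normal(0.0, 0.5, len(t))

      (K1_hat, k2_hat), _ = curve_fit(one_tissue, t, tac, p0=[0.3, 0.1])
      print(f"K1 = {K1_hat:.3f}, k2 = {k2_hat:.3f}, VT = K1/k2 = {K1_hat / k2_hat:.1f}")
      # Direct reconstruction would instead embed this model in the image
      # reconstruction itself, fitting to projection data rather than to a
      # reconstructed TAC.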

  14. Poisson mixture model for measurements using counting.

    PubMed

    Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz

    2010-03-01

    Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included. The uncorrelated background is estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
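
    A gross-count alarm point of the kind mentioned above can be illustrated with plain Poisson statistics, ignoring the lognormal over-dispersion treated in the paper; the background mean and false-positive rate below are assumed values.

      from scipy.stats import poisson

      def gross_count_alarm(mean_background, false_positive_rate):
          """Smallest gross count k such that P(N >= k | background only) <= alpha."""
          k = 0
          # poisson.sf(k - 1, mu) = P(N >= k); raise k until the tail is small enough.
          while poisson.sf(k - 1, mean_background) > false_positive_rate:
              k += 1
          return k

      # Assumed example: 12.4 expected background counts, 5% false-positive rate.
      print("alarm at >=", gross_count_alarm(12.4, 0.05), "gross counts")
      # With over-dispersion (e.g., the lognormal background treated in the paper)
      # the true tail is heavier, so this plain-Poisson level would alarm too often.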

  15. A square-wave wavelength modulation system for automatic background correction in carbon furnace atomic emission spectrometry

    NASA Astrophysics Data System (ADS)

    Bezur, L.; Marshall, J.; Ottaway, J. M.

    A square-wave wavelength modulation system, based on a rotating quartz chopper with four quadrants of different thicknesses, has been developed and evaluated as a method for automatic background correction in carbon furnace atomic emission spectrometry. Accurate background correction is achieved for the residual black body radiation (Rayleigh scatter) from the tube wall and Mie scatter from particles generated by a sample matrix and formed by condensation of atoms in the optical path. Intensity modulation caused by overlap at the edges of the quartz plates and by the divergence of the optical beam at the position of the modulation chopper has been investigated and is likely to be small.

  16. Cosmic ray neutron background reduction using localized coincidence veto neutron counting

    DOEpatents

    Menlove, Howard O.; Bourret, Steven C.; Krick, Merlyn S.

    2002-01-01

    This invention relates to both the apparatus and method for increasing the sensitivity of measuring the amount of radioactive material in waste by reducing the interference caused by cosmic-ray-generated neutrons. The apparatus includes: (a) a plurality of neutron detectors, each of the detectors including means for generating a pulse in response to the detection of a neutron; and (b) means, coupled to each of the neutron detectors, for counting only some of the pulses from each of the detectors, whether cosmic ray or fission generated. The means for counting includes a means that, after counting one of the pulses, vetoes the counting of additional pulses for a prescribed period of time. The prescribed period of time is between 50 and 200 μs. In the preferred embodiment the prescribed period of time is 128 μs. The veto means can be an electronic circuit which includes a leading-edge pulse generator which passes a pulse but blocks any subsequent pulse for a period of between 50 and 200 μs. Alternatively, the veto means is a software program which includes means for tagging each of the pulses from each of the detectors for both time and position, means for counting one of the pulses from a particular position, and means for rejecting those of the pulses which originate from the particular position and in a time interval on the order of the neutron die-away time in polyethylene or other shield material. The neutron detectors are grouped in pods, preferably at least 10. The apparatus also includes means for vetoing the counting of coincidence pulses from all of the detectors included in each of the pods which are adjacent to the pod which includes the detector which produced the pulse which was counted.
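
    The software form of the veto described above amounts to a single pass over time- and position-tagged pulses. The sketch below implements that localized veto under an assumed data structure (a list of (time, pod) tuples); it is an illustration, not the patented circuit.

      def localized_veto_count(pulses, veto_window_us=128.0):
          """Count pulses, vetoing later pulses from the same pod within the window.

          `pulses` is an iterable of (time_us, pod_id) tuples, not necessarily sorted.
          Returns the number of accepted (counted) pulses.
          """
          accepted = 0
          last_counted = {}                 # pod_id -> time of last counted pulse
          for t, pod in sorted(pulses):
              if pod in last_counted and (t - last_counted[pod]) < veto_window_us:
                  continue                  # vetoed: same pod, within the window
              accepted += 1
              last_counted[pod] = t
          return accepted

      # Assumed example: a cosmic-ray burst puts three pulses in pod 3 within 60 us;
      # only the first is counted, so the burst contributes 1 count instead of 3.
      pulses = [(10.0, 3), (40.0, 3), (70.0, 3), (500.0, 1), (900.0, 3)]
      print(localized_veto_count(pulses))   # -> 3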

  17. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
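
    To make the last step concrete, the sketch below integrates S·dN/dS for a broken power law with the break and indexes quoted above and reports the flux resolved into sources above the analysis sensitivity. The normalization is a placeholder, so only the structure of the calculation should be taken from this.

      from scipy.integrate import quad

      # Broken power law dN/dS with the break and slopes quoted in the abstract.
      S_break = 3.5e-11                 # ph cm^-2 s^-1
      gamma_hi, gamma_lo = 2.09, 1.07
      A = 1.0e-13                       # normalization at the break (placeholder)

      def dN_dS(S):
          if S >= S_break:
              return A * (S / S_break) ** (-gamma_hi)
          return A * (S / S_break) ** (-gamma_lo)

      # Flux resolved into sources above the analysis sensitivity.
      S_min, S_max = 7.5e-12, 1.0e-8
      resolved_flux, _ = quad(lambda S: S * dN_dS(S), S_min, S_max, points=[S_break])
      print(f"flux resolved into sources above S_min: {resolved_flux:.2e} ph cm^-2 s^-1")
      # Dividing this by the measured total extragalactic background intensity
      # (not reproduced here) is what yields a resolved fraction like the ~42% quoted above.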

  18. Pile-up correction algorithm based on successive integration for high count rate medical imaging and radiation spectroscopy

    NASA Astrophysics Data System (ADS)

    Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar

    2018-07-01

    In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.

  19. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  20. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  1. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.

  2. Complete NLO corrections to W+W+ scattering and its irreducible background at the LHC

    NASA Astrophysics Data System (ADS)

    Biedermann, Benedikt; Denner, Ansgar; Pellen, Mathieu

    2017-10-01

    The process pp → μ+νμe+νejj receives several contributions of different orders in the strong and electroweak coupling constants. Using appropriate event selections, this process is dominated by vector-boson scattering (VBS) and has recently been measured at the LHC. It is thus of prime importance to estimate precisely each contribution. In this article we compute for the first time the full NLO QCD and electroweak corrections to VBS and its irreducible background processes with realistic experimental cuts. We do not rely on approximations but use complete amplitudes involving two different orders at tree level and three different orders at one-loop level. Since we take into account all interferences, at NLO level the corrections to the VBS process and to the QCD-induced irreducible background process contribute at the same orders. Hence the two processes cannot be unambiguously distinguished, and all contributions to the μ+νμe+νejj final state should preferably be measured together.

  3. Determination of serum aluminum by electrothermal atomic absorption spectrometry: A comparison between Zeeman and continuum background correction systems

    NASA Astrophysics Data System (ADS)

    Kruger, Pamela C.; Parsons, Patrick J.

    2007-03-01

    Excessive exposure to aluminum (Al) can produce serious health consequences in people with impaired renal function, especially those undergoing hemodialysis. Al can accumulate in the brain and in bone, causing dialysis-related encephalopathy and renal osteodystrophy. Thus, dialysis patients are routinely monitored for Al overload, through measurement of their serum Al. Electrothermal atomic absorption spectrometry (ETAAS) is widely used for serum Al determination. Here, we assess the analytical performances of three ETAAS instruments, equipped with different background correction systems and heating arrangements, for the determination of serum Al. Specifically, we compare (1) a Perkin Elmer (PE) Model 3110 AAS, equipped with a longitudinally (end) heated graphite atomizer (HGA) and continuum-source (deuterium) background correction, with (2) a PE Model 4100ZL AAS equipped with a transversely heated graphite atomizer (THGA) and longitudinal Zeeman background correction, and (3) a PE Model Z5100 AAS equipped with a HGA and transverse Zeeman background correction. We were able to transfer the method for serum Al previously established for the Z5100 and 4100ZL instruments to the 3110, with only minor modifications. As with the Zeeman instruments, matrix-matched calibration was not required for the 3110 and, thus, aqueous calibration standards were used. However, the 309.3-nm line was chosen for analysis on the 3110 due to failure of the continuum background correction system at the 396.2-nm line. A small, seemingly insignificant overcorrection error was observed in the background channel on the 3110 instrument at the 309.3-nm line. On the 4100ZL, signal oscillation was observed in the atomization profile. The sensitivity, or characteristic mass ( m0), for Al at the 309.3-nm line on the 3110 AAS was found to be 12.1 ± 0.6 pg, compared to 16.1 ± 0.7 pg for the Z5100, and 23.3 ± 1.3 pg for the 4100ZL at the 396.2-nm line. However, the instrumental detection limits (3

  4. The NuSTAR Extragalactic Surveys: The Number Counts Of Active Galactic Nuclei And The Resolved Fraction Of The Cosmic X-ray Background

    NASA Technical Reports Server (NTRS)

    Harrison, F. A.; Aird, J.; Civano, F.; Lansbury, G.; Mullaney, J. R.; Ballentyne, D. R.; Alexander, D. M.; Stern, D.; Ajello, M.; Barret, D.; hide

    2016-01-01

    We present the 3-8 keV and 8-24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%-39% of the X-ray background in the 8-24 keV band, directly identifying AGNs with obscuring columns up to ~10²⁵ cm⁻². In the softer 3-8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 × 10⁻¹⁵ ≲ S(3-8 keV)/erg s⁻¹ cm⁻² ≲ 10⁻¹² probed by NuSTAR. In the hard 8-24 keV band NuSTAR probes fluxes over the range 2 × 10⁻¹⁴ ≲ S(8-24 keV)/erg s⁻¹ cm⁻² ≲ 10⁻¹², a factor ~100 fainter than previous measurements. The 8-24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15-55 keV number counts measured at higher fluxes (S(15-55 keV) ≳ 10⁻¹¹ erg s⁻¹ cm⁻²), reflecting the evolution of the AGN population between the Swift/BAT local (z < 0.1) sample and NuSTAR's z ~ 1 sample. CXB (Cosmic X-ray Background) synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.

  5. Inventory count strategies.

    PubMed

    Springer, W H

    1996-02-01

    An important principle of accounting is that asset inventory needs to be correctly valued to ensure that the financial statements of the institution are accurate. Errors in recording the value of ending inventory in one fiscal year result in errors in the published financial statements for that year as well as the subsequent fiscal year. Therefore, it is important that accurate physical counts be taken periodically. It is equally important that any system being used to generate inventory valuation, reordering or management reports be based on consistently accurate on-hand balances. At the foundation of conducting an accurate physical count of an inventory is a comprehensive understanding of the process coupled with a written plan. This article presents a guideline of the physical count processes involved in a traditional double-count approach.

  6. Background compensation for a radiation level monitor

    DOEpatents

    Keefe, D.J.

    1975-12-01

    Background compensation in a device such as a hand and foot monitor is provided by digital means using a scaler. With no radiation level test initiated, a scaler is down-counted from zero according to the background measured. With a radiation level test initiated, the scaler is up-counted from the previous down-count position according to the radiation emitted from the monitored object, and an alarm is generated if, with the scaler having crossed zero in the positive-going direction, a particular number is exceeded in a specific time period after initiation of the test. If the test is initiated while the scaler is down-counting, the background count from the previous down-count stored in a memory is used as the initial starting point for the up-count.
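
    The down-count/up-count logic reads naturally as a signed counter. The sketch below mimics it in software with assumed counts and an assumed alarm threshold, purely to show how the background subtraction falls out of the sign convention.

      def monitor_alarm(background_counts, test_counts, alarm_number):
          """Digital background compensation via a signed scaler.

          The scaler is down-counted by the background measured before the test and
          up-counted by the counts observed during the test; an alarm requires the
          scaler to cross zero and then exceed `alarm_number`.
          """
          scaler = -background_counts        # down-count from zero
          scaler += test_counts              # up-count during the test period
          return scaler > 0 and scaler > alarm_number

      # Assumed numbers: 180 background counts expected over the test interval.
      print(monitor_alarm(180, 190, alarm_number=40))   # clean object  -> False
      print(monitor_alarm(180, 400, alarm_number=40))   # contaminated  -> True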

  7. A comprehensive numerical analysis of background phase correction with V-SHARP.

    PubMed

    Özbay, Pinar Senay; Deistung, Andreas; Feng, Xiang; Nanz, Daniel; Reichenbach, Jürgen Rainer; Schweser, Ferdinand

    2017-04-01

    Sophisticated harmonic artifact reduction for phase data (SHARP) is a method to remove background field contributions in MRI phase images, which is an essential processing step for quantitative susceptibility mapping (QSM). To perform SHARP, a spherical kernel radius and a regularization parameter need to be defined. In this study, we carried out an extensive analysis of the effect of these two parameters on the corrected phase images and on the reconstructed susceptibility maps. As a result of the dependence of the parameters on acquisition and processing characteristics, we propose a new SHARP scheme with generalized parameters. The new SHARP scheme uses a high-pass filtering approach to define the regularization parameter. We employed the variable-kernel SHARP (V-SHARP) approach, using different maximum radii (Rm) between 1 and 15 mm and varying regularization parameters (f) in a numerical brain model. The local root-mean-square error (RMSE) between the ground-truth, background-corrected field map and the results from SHARP decreased towards the center of the brain. RMSE of susceptibility maps calculated with a spatial domain algorithm was smallest for Rm between 6 and 10 mm and f between 0 and 0.01 mm⁻¹, and for maps calculated with a Fourier domain algorithm for Rm between 10 and 15 mm and f between 0 and 0.0091 mm⁻¹. We demonstrated and confirmed the new parameter scheme in vivo. The novel regularization scheme allows the use of the same regularization parameter irrespective of other imaging parameters, such as image resolution. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Monitoring trends in bird populations: addressing background levels of annual variability in counts

    Treesearch

    Jared Verner; Kathryn L. Purcell; Jennifer G. Turner

    1996-01-01

    Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...

  9. GafChromic EBT film dosimetry with flatbed CCD scanner: a novel background correction method and full dose uncertainty analysis.

    PubMed

    Saur, Sigrun; Frengen, Jomar

    2008-07-01

    Film dosimetry using radiochromic EBT film in combination with a flatbed charge coupled device scanner is a useful method both for two-dimensional verification of intensity-modulated radiation treatment plans and for general quality assurance of treatment planning systems and linear accelerators. Unfortunately, the response over the scanner area is nonuniform, and when not corrected for, this results in a systematic error in the measured dose which is both dose and position dependent. In this study a novel method for background correction is presented. The method is based on the subtraction of a correction matrix, a matrix that is based on scans of films that are irradiated to nine dose levels in the range 0.08-2.93 Gy. Because the response of the film is dependent on the film's orientation with respect to the scanner, correction matrices for both landscape oriented and portrait oriented scans were made. In addition to the background correction method, a full dose uncertainty analysis of the film dosimetry procedure was performed. This analysis takes into account the fit uncertainty of the calibration curve, the variation in response for different film sheets, the nonuniformity after background correction, and the noise in the scanned films. The film analysis was performed for film pieces of size 16 x 16 cm, all with the same lot number, and all irradiations were done perpendicular onto the films. The results show that the 2-sigma dose uncertainty at 2 Gy is about 5% and 3.5% for landscape and portrait scans, respectively. The uncertainty gradually increases as the dose decreases, but at 1 Gy the 2-sigma dose uncertainty is still as good as 6% and 4% for landscape and portrait scans, respectively. The study shows that film dosimetry using GafChromic EBT film, an Epson Expression 1680 Professional scanner and a dedicated background correction technique gives precise and accurate results. For the purpose of dosimetric verification, the calculated dose distribution

  10. Spitzer deep and wide legacy mid- and far-infrared number counts and lower limits of cosmic infrared background

    NASA Astrophysics Data System (ADS)

    Béthermin, M.; Dole, H.; Beelen, A.; Aussel, H.

    2010-03-01

    Aims: We aim to place stronger lower limits on the cosmic infrared background (CIB) brightness at 24 μm, 70 μm and 160 μm and measure the extragalactic number counts at these wavelengths in a homogeneous way from various surveys. Methods: Using Spitzer legacy data over 53.6 deg² of various depths, we build catalogs with the same extraction method at each wavelength. Completeness and photometric accuracy are estimated with Monte-Carlo simulations. Number count uncertainties are estimated with a counts-in-cells moment method to take galaxy clustering into account. Furthermore, we use a stacking analysis to estimate number counts of sources not detected at 70 μm and 160 μm. This method is validated by simulations. The integration of the number counts gives new CIB lower limits. Results: Number counts reach 35 μJy, 3.5 mJy and 40 mJy at 24 μm, 70 μm, and 160 μm, respectively. We reach deeper flux densities of 0.38 mJy at 70 μm and 3.1 mJy at 160 μm with a stacking analysis. We confirm the number count turnover at 24 μm and 70 μm, and observe it for the first time at 160 μm at about 20 mJy, together with a power-law behavior below 10 mJy. These mid- and far-infrared counts: 1) are homogeneously built by combining fields of different depths and sizes, providing a legacy over about three orders of magnitude in flux density; 2) are the deepest to date at 70 μm and 160 μm; 3) agree with previously published results in the common measured flux density range; 4) globally agree with the Lagache et al. (2004) model, except at 160 μm, where the model slightly overestimates the counts around 20 and 200 mJy. Conclusions: These counts are integrated to estimate new CIB firm lower limits of 2.29 +0.09/-0.09 nW m⁻² sr⁻¹, 5.4 +0.4/-0.4 nW m⁻² sr⁻¹, and 8.9 +1.1/-1.1 nW m⁻² sr⁻¹ at 24 μm, 70 μm, and 160 μm, respectively, and extrapolated to give new estimates of the CIB due to galaxies of 2.86 +0.19/-0.16 nW m⁻² sr⁻¹, 6.6 +0.7/-0.6 nW m⁻² sr⁻¹, and 14.6 +7.1/-2.9 nW m⁻² sr⁻¹
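
    Stacking, as used above to push below the individual detection limit, is conceptually just averaging map cutouts at the positions of known sources. The sketch below shows that operation on a synthetic map; the map size, source fluxes and the plain mean (rather than the paper's full pipeline) are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      npix = 512
      noise_per_pix = 5.0                  # mJy per pixel, assumed
      image = rng.normal(0.0, noise_per_pix, (npix, npix))

      # Inject 400 faint point sources (single-pixel PSF for simplicity) of 1 mJy,
      # individually far below the detection limit of this toy map.
      n_src = 400
      ys = rng.integers(20, npix - 20, n_src)
      xs = rng.integers(20, npix - 20, n_src)
      np.add.at(image, (ys, xs), 1.0)

      # Stack: average small cutouts centred on the known (e.g. 24 micron) positions.
      half = 5
      cutouts = np.array([image[y - half:y + half + 1, x - half:x + half + 1]
                          for y, x in zip(ys, xs)])
      stack = cutouts.mean(axis=0)
      print(f"central pixel of stack: {stack[half, half]:.2f} mJy "
            f"(per-source noise ~{noise_per_pix:.1f} mJy, stack noise ~{noise_per_pix / np.sqrt(n_src):.2f} mJy)")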

  11. Automatic vehicle counting system for traffic monitoring

    NASA Astrophysics Data System (ADS)

    Crouzil, Alain; Khoudour, Louahdi; Valiere, Paul; Truong Cong, Dung Nghy

    2016-09-01

    The article is dedicated to the presentation of a vision-based system for road vehicle counting and classification. The system is able to achieve counting with very good accuracy even in difficult scenarios linked to occlusions and/or the presence of shadows. The principle of the system is to use cameras already installed in road networks without any additional calibration procedure. We propose a robust segmentation algorithm that detects foreground pixels corresponding to moving vehicles. First, the approach models each pixel of the background with an adaptive Gaussian distribution. This model is coupled with a motion detection procedure, which allows moving vehicles to be correctly located in space and time. The nature of the trials carried out, including peak periods and various vehicle types, leads to an increase of occlusions between cars and between cars and trucks. A specific method for severe occlusion detection, based on the notion of solidity, has been developed and tested. Furthermore, the method developed in this work is capable of managing shadows with high resolution. The related algorithm has been tested and compared to a classical method. Experimental results based on four large datasets show that our method can count and classify vehicles in real time with a high level of performance (>98%) under different environmental situations, thus performing better than conventional inductive loop detectors.
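
    A per-pixel adaptive Gaussian background model of the kind named above can be written in a few lines. The sketch below uses a running mean and variance with an assumed learning rate and threshold; it is a stand-in for the authors' full segmentation pipeline, not a reproduction of it.

      import numpy as np

      class GaussianBackground:
          """Single adaptive Gaussian per pixel: foreground if |I - mu| > k * sigma."""

          def __init__(self, first_frame, alpha=0.02, k=2.5):
              self.mu = first_frame.astype(np.float64)
              self.var = np.full_like(self.mu, 15.0 ** 2)   # assumed initial variance
              self.alpha, self.k = alpha, k

          def apply(self, frame):
              diff = frame.astype(np.float64) - self.mu
              foreground = np.abs(diff) > self.k * np.sqrt(self.var)
              # Update the model only where the pixel looks like background,
              # so moving objects are not absorbed into the model.
              upd = ~foreground
              self.mu[upd] += self.alpha * diff[upd]
              self.var[upd] += self.alpha * (diff[upd] ** 2 - self.var[upd])
              return foreground

      # Toy usage on synthetic grayscale frames: a static scene, then a bright "vehicle".
      rng = np.random.default_rng(4)
      frames = rng.normal(120.0, 3.0, (50, 60, 80))
      frames[40:, 20:30, 30:45] += 60.0
      bg = GaussianBackground(frames[0])
      for f in frames:
          mask = bg.apply(f)
      print("foreground pixels in last frame:", int(mask.sum()))   # roughly the 10 x 15 vehicle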

  12. The NuSTAR Extragalactic Surveys: The Number Counts of Active Galactic Nuclei and The Resolved Fraction of The Cosmic X-Ray Background

    DOE PAGES

    Harrison, F. A.; Aird, J.; Civano, F.; ...

    2016-11-07

    Here, we present the 3-8 keV and 8-24 keV number counts of active galactic nuclei (AGNs) identified in the Nuclear Spectroscopic Telescope Array (NuSTAR) extragalactic surveys. NuSTAR has now resolved 33%-39% of the X-ray background in the 8-24 keV band, directly identifying AGNs with obscuring columns up to ~10²⁵ cm⁻². In the softer 3-8 keV band the number counts are in general agreement with those measured by XMM-Newton and Chandra over the flux range 5 × 10⁻¹⁵ ≲ S(3-8 keV)/erg s⁻¹ cm⁻² ≲ 10⁻¹² probed by NuSTAR. In the hard 8-24 keV band NuSTAR probes fluxes over the range 2 × 10⁻¹⁴ ≲ S(8-24 keV)/erg s⁻¹ cm⁻² ≲ 10⁻¹², a factor ~100 fainter than previous measurements. The 8-24 keV number counts match predictions from AGN population synthesis models, directly confirming the existence of a population of obscured and/or hard X-ray sources inferred from the shape of the integrated cosmic X-ray background. The measured NuSTAR counts lie significantly above a simple extrapolation with a Euclidean slope to low flux of the Swift/BAT 15-55 keV number counts measured at higher fluxes (S(15-55 keV) ≳ 10⁻¹¹ erg s⁻¹ cm⁻²), reflecting the evolution of the AGN population between the Swift/BAT local (z < 0.1) sample and NuSTAR's z ~ 1 sample. CXB synthesis models, which account for AGN evolution, lie above the Swift/BAT measurements, suggesting that they do not fully capture the evolution of obscured AGNs at low redshifts.

  13. The 2-24 μm source counts from the AKARI North Ecliptic Pole survey

    NASA Astrophysics Data System (ADS)

    Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.

    2014-11-01

    We present herein galaxy number counts of the nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg², respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and the difference between observed and intrinsic magnitudes were corrected by Monte Carlo simulation. Stellar counts were subtracted by using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit: 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with a Euclidean universe, while on the faint side, the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the waveband.

  14. Enhanced identification and biological validation of differential gene expression via Illumina whole-genome expression arrays through the use of the model-based background correction methodology

    PubMed Central

    Ding, Liang-Hao; Xie, Yang; Park, Seongmi; Xiao, Guanghua; Story, Michael D.

    2008-01-01

    Despite the tremendous growth of microarray usage in scientific studies, there is a lack of standards for background correction methodologies, especially in single-color microarray platforms. Traditional background subtraction methods often generate negative signals and thus cause large amounts of data loss. Hence, some researchers prefer to avoid background corrections, which typically result in the underestimation of differential expression. Here, by utilizing nonspecific negative control features integrated into Illumina whole genome expression arrays, we have developed a method of model-based background correction for BeadArrays (MBCB). We compared the MBCB with a method adapted from the Affymetrix robust multi-array analysis algorithm and with no background subtraction, using a mouse acute myeloid leukemia (AML) dataset. We demonstrated that differential expression ratios obtained by using the MBCB had the best correlation with quantitative RT–PCR. MBCB also achieved better sensitivity in detecting differentially expressed genes with biological significance. For example, we demonstrated that the differential regulation of Tnfr2, Ikk and NF-kappaB, the death receptor pathway, in the AML samples, could only be detected by using data after MBCB implementation. We conclude that MBCB is a robust background correction method that will lead to more precise determination of gene expression and better biological interpretation of Illumina BeadArray data. PMID:18450815

  15. Techniques for the correction of topographical effects in scanning Auger electron microscopy

    NASA Technical Reports Server (NTRS)

    Prutton, M.; Larson, L. A.; Poppa, H.

    1983-01-01

    A number of ratioing methods for correcting Auger images and linescans for topographical contrast are tested using anisotropically etched silicon substrates covered with Au or Ag. Thirteen well-defined angles of incidence are present on each polyhedron produced on the Si by this etching. If N1 electrons are counted at the energy of an Auger peak and N2 are counted in the background above the peak, then N1, N1 - N2, (N1 - N2)/(N1 + N2) are measured and compared as methods of eliminating topographical contrast. The latter method gives the best compensation but can be further improved by using a measurement of the sample absorption current. Various other improvements are discussed.
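
    The point of the (N1 - N2)/(N1 + N2) ratio is that a purely topographic factor multiplying both the peak and the background signal cancels. The short check below uses made-up count values to show that arithmetic.

      # Hypothetical counts: a facet tilted away from the analyser collects only 40%
      # of the electrons collected by a facet facing it, for the same Au coverage.
      def signals(topographic_factor, peak_true=1000.0, background_true=600.0):
          n1 = topographic_factor * peak_true          # counts at the Auger peak energy
          n2 = topographic_factor * background_true    # counts in the background above the peak
          return n1, n2

      for label, g in [("facing facet", 1.0), ("tilted facet", 0.4)]:
          n1, n2 = signals(g)
          print(f"{label}: N1={n1:.0f}  N1-N2={n1 - n2:.0f}  "
                f"(N1-N2)/(N1+N2)={(n1 - n2) / (n1 + n2):.3f}")
      # N1 and N1-N2 still vary with topography, while the normalised ratio is the
      # same for both facets -- the behaviour reported above.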

  16. Multiparameter linear least-squares fitting to Poisson data one count at a time

    NASA Technical Reports Server (NTRS)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number n_i of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights w_i in the weighted LLSQ method when √n_i instead of √n̄_i is used to approximate the uncertainties σ_i in the data, where n̄_i = E(n_i), the expected value of n_i. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs. Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately 1.2 ke
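
    The data-weight anticorrelation described above is easy to reproduce numerically: estimating a constant Poisson rate by weighted least squares with weights 1/n_i (i.e. σ_i ≈ √n_i) biases the answer low, while the maximum-likelihood estimate (the sample mean in this one-parameter case) does not. The simulation below illustrates that statement; it is not the paper's spectroscopy pipeline.

      import numpy as np

      rng = np.random.default_rng(5)
      true_rate = 3.0                    # mean counts per bin (small, where the effect is large)
      n_bins, n_trials = 50, 2000

      wls_estimates, ml_estimates = [], []
      for _ in range(n_trials):
          n = rng.poisson(true_rate, n_bins)
          # Conventional weighted LSQ with sigma_i ~ sqrt(n_i); zero-count bins are
          # given weight 1 (a common ad hoc fix that does not remove the bias).
          w = 1.0 / np.maximum(n, 1)
          wls_estimates.append(np.sum(w * n) / np.sum(w))
          # Poisson maximum likelihood for a constant rate is the sample mean,
          # equivalent to LLSQ with the correct weights 1/E(n_i).
          ml_estimates.append(n.mean())

      print(f"true rate {true_rate}, sqrt(n)-weighted LSQ {np.mean(wls_estimates):.2f}, "
            f"maximum likelihood {np.mean(ml_estimates):.2f}")
      # The sqrt(n_i)-weighted estimate comes out well below 3.0 because bins that
      # fluctuate low receive larger weights.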

  17. Evaluation of a standardized procedure for [corrected] microscopic cell counts [corrected] in body fluids.

    PubMed

    Emerson, Jane F; Emerson, Scott S

    2005-01-01

    A standardized urinalysis and manual microscopic cell counting system was evaluated for its potential to reduce intra- and interoperator variability in urine and cerebrospinal fluid (CSF) cell counts. Replicate aliquots of pooled specimens were submitted blindly to technologists who were instructed to use either the Kova system with the disposable Glasstic slide (Hycor Biomedical, Inc., Garden Grove, CA) or the standard operating procedure of the University of California-Irvine (UCI), which uses plain glass slides for urine sediments and hemacytometers for CSF. The Hycor system provides a mechanical means of obtaining a fixed volume of fluid in which to resuspend the sediment, and fixes the volume of specimen to be microscopically examined by using capillary filling of a chamber containing in-plane counting grids. Ninety aliquots of pooled specimens of each type of body fluid were used to assess the inter- and intraoperator reproducibility of the measurements. The variability of replicate Hycor measurements made on a single specimen by the same or different observers was compared with that predicted by a Poisson distribution. The Hycor methods generally resulted in test statistics that were slightly lower than those obtained with the laboratory standard methods, indicating a trend toward decreasing the effects of various sources of variability. For 15 paired aliquots of each body fluid, tests for systematically higher or lower measurements with the Hycor methods were performed using the Wilcoxon signed-rank test. Also examined was the average difference between the Hycor and current laboratory standard measurements, along with a 95% confidence interval (CI) for the true average difference. Without increasing labor or the requirement for attention to detail, the Hycor method provides slightly better interrater comparisons than the current method used at UCI. Copyright 2005 Wiley-Liss, Inc.

  18. Evaluation of Normalization Methods on GeLC-MS/MS Label-Free Spectral Counting Data to Correct for Variation during Proteomic Workflows

    NASA Astrophysics Data System (ADS)

    Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.

    2011-12-01

    Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP), were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS-PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed that the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence levels and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with
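
    The two normalizations that performed well above have simple closed forms: TSpC rescales each run to a common total count, and NSAF divides each protein's length-normalized count by the run's sum of length-normalized counts. The sketch below applies both to assumed toy data; it follows the standard definitions rather than code from the study.

      import numpy as np

      # Toy data: spectral counts for 4 proteins in 3 runs (rows = proteins, columns = runs),
      # with assumed protein lengths in amino acids.
      spc = np.array([[120.0,  90.0, 150.0],
                      [ 40.0,  30.0,  55.0],
                      [ 10.0,   8.0,  12.0],
                      [  5.0,   3.0,   7.0]])
      lengths = np.array([450.0, 300.0, 900.0, 150.0])

      # TSpC: scale every run so its total equals the mean total across runs.
      run_totals = spc.sum(axis=0)
      tspc = spc * (run_totals.mean() / run_totals)

      # NSAF: (SpC / length) divided by the sum of SpC / length within the same run.
      saf = spc / lengths[:, None]
      nsaf = saf / saf.sum(axis=0)

      print("TSpC-normalized counts:\n", np.round(tspc, 1))
      print("NSAF values (each column sums to 1):\n", np.round(nsaf, 3))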

  19. Dying dyons don't count

    NASA Astrophysics Data System (ADS)

    Cheng, Miranda C. N.; Verlinde, Erik P.

    2007-09-01

    The dyonic 1/4-BPS states in 4D string theory with N = 4 spacetime supersymmetry are counted by a Siegel modular form. The pole structure of the modular form leads to a contour dependence in the counting formula, obscuring its duality invariance. We exhibit the relation between this ambiguity and the (dis-)appearance of bound states of 1/2-BPS configurations. Using this insight we propose a precise moduli-dependent contour prescription for the counting formula. We then show that the degeneracies are duality-invariant and are correctly adjusted at the walls of marginal stability to account for the (dis-)appearance of the two-centered bound states. In particular, for large black holes none of these bound states exists at the attractor point and none of these ambiguous poles contributes to the counting formula. Using this fact we also propose a second, moduli-independent contour which counts the "immortal dyons" that are stable everywhere.

  20. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGES

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; ...

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  1. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. Furthermore, the reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  2. Dynamic time-correlated single-photon counting laser ranging

    NASA Astrophysics Data System (ADS)

    Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang

    2018-03-01

    We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improves the counting rate to more than 4×10^7 cps, which makes distance measurement possible even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot easily extract the signal when fast-moving targets are submerged in a strong background. We propose a dynamic TCSPC method for fast-moving target measurement that varies the coincidence window in real time. In the experiment, we show that targets with a velocity of 5 km/s can be detected with this method at an echo rate of 20% and background counts of more than 1.2×10^7 cps.
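    A minimal sketch of the idea of a coincidence window that tracks a moving target is given below; the function name, its arguments and the linear range model are assumptions made for illustration, not the authors' implementation.

        import numpy as np

        C = 3.0e8   # speed of light (m/s)

        def dynamic_tcspc(tof, pulse_idx, prf, r0, v, window):
            """Keep photon events whose time of flight lies inside a coincidence window
            centred on the round-trip time predicted for a target moving radially at
            velocity v (r0 = assumed initial range, prf = pulse repetition frequency)."""
            t_emit = pulse_idx / prf                     # emission time of each laser pulse
            expected_tof = 2.0 * (r0 + v * t_emit) / C   # predicted round-trip time drifts with range
            keep = np.abs(tof - expected_tof) < window / 2.0
            return tof[keep], pulse_idx[keep]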

  3. Bunch mode specific rate corrections for PILATUS3 detectors

    DOE PAGES

    Trueb, P.; Dejoie, C.; Kobas, M.; ...

    2015-04-09

    PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
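    The flavour of such a counting-mechanism simulation can be conveyed with a toy Monte Carlo of a simple non-paralysable channel, shown below; the dead-time value and rates are illustrative, and the real PILATUS3 simulation additionally models the instant-retrigger behaviour and the bunch structure.

        import numpy as np

        rng = np.random.default_rng(0)

        def measured_rate(true_rate, dead_time, duration):
            """Toy Monte Carlo: Poisson arrivals counted by a non-paralysable channel."""
            n = rng.poisson(true_rate * duration)
            arrivals = np.sort(rng.uniform(0.0, duration, n))
            counted, last = 0, -np.inf
            for t in arrivals:
                if t - last >= dead_time:     # event registered only if the channel is live
                    counted += 1
                    last = t
            return counted / duration

        for rate in [1e5, 1e6, 5e6, 1e7]:     # true photon rates per pixel (illustrative)
            print(rate, measured_rate(rate, 120e-9, 1e-3))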

  4. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    PubMed

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of peak overlap to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during a storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
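    The core of the peak-detection step (robust noise estimation combined with the first-order derivative) can be sketched as follows; the threshold factor and the MAD-based noise estimate are assumptions for illustration, and the published strategy additionally performs local curve fitting for the background drift correction.

        import numpy as np

        def detect_peaks(signal, k=3.0):
            """Flag peak apexes where the derivative first rises well above the noise
            level and then turns negative."""
            deriv = np.gradient(np.asarray(signal, dtype=float))
            noise = 1.4826 * np.median(np.abs(deriv - np.median(deriv)))  # robust (MAD) noise level
            rising = deriv > k * noise
            return [i for i in range(1, len(signal) - 1) if rising[i - 1] and deriv[i] <= 0]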

  5. Dead time corrections for in-beam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. In consequence, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between the incoming events (Δt) obeys an exponential distribution with a single parameter - the average of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used for correcting our experimental cross sections.
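    For reference, the textbook relations that follow from these exponential inter-arrival statistics are (these are the standard dead-time models, not the specific correction derived in the paper):

        p(\Delta t) = \lambda\, e^{-\lambda \Delta t}, \qquad
        P(\Delta t < \tau) = 1 - e^{-\lambda \tau},

    so that, for a dead time \tau, the measured rate m and the true rate \lambda are related by

        m = \frac{\lambda}{1 + \lambda\tau} \;\Rightarrow\; \lambda = \frac{m}{1 - m\tau} \quad\text{(non-paralysable)},
        \qquad
        m = \lambda\, e^{-\lambda\tau} \quad\text{(paralysable)}.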

  6. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
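    The background correction itself is the conditional expectation of the exponential signal given the observed intensity; a sketch of the widely quoted closed form is given below. The parameter estimates mu, sigma and alpha are exactly the quantities whose calculation the paper examines, and the expression should be checked against the paper before use.

        import numpy as np
        from scipy.stats import norm

        def expo_normal_correct(o, mu, sigma, alpha):
            """E[S | O = o] for O = S + B with signal S ~ Exp(alpha) and background B ~ N(mu, sigma^2)."""
            a = o - mu - alpha * sigma**2
            b = sigma
            num = norm.pdf(a / b) - norm.pdf((o - a) / b)
            den = norm.cdf(a / b) + norm.cdf((o - a) / b) - 1.0
            return a + b * num / den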

  7. AURORA on MEGSAT 1: a photon counting observatory for the Earth UV night-sky background and Aurora emission

    NASA Astrophysics Data System (ADS)

    Monfardini, A.; Trampus, P.; Stalio, R.; Mahne, N.; Battiston, R.; Menichelli, M.; Mazzinghi, P.

    2001-08-01

    A low-mass, low-cost photon-counting scientific payload has been developed and launched on a commercial microsatellite in order to study the near-UV night-sky background emission with a telescope nicknamed "Notte" and the Aurora emission with "Alba". AURORA, the name of the experiment, will determine, with the "Notte" channel, the overall night-side photon background in the 300-400 nm spectral range, together with a particular N2 second positive (2+) line (λc = 337 nm). The "Alba" channel, on the other hand, will study the Aurora emissions in four different spectral bands (FWHM = 8.4-9.6 nm) centered on 367 nm (continuum evaluation), 391 nm (N2+ first negative, 1-), 535 nm (continuum evaluation), and 560 nm (OI). The instrument was launched on 26 September 2000 from the Baikonur cosmodrome on a modified SS18 Dnepr-1 "Satan" rocket. The satellite orbit is nearly circular (apogee height 648 km, e = 0.0022), and the inclination of the orbital plane is 64.56°. An overview of the techniques adopted is given in this paper.

  8. Quantitative basis for component factors of gas flow proportional counting efficiencies

    NASA Astrophysics Data System (ADS)

    Nichols, Michael C.

    This dissertation investigates the counting efficiency calibration of a gas flow proportional counter with beta-particle emitters in order to (1) determine by measurements and simulation the values of the component factors of beta-particle counting efficiency for a proportional counter, (2) compare the simulation results and measured counting efficiencies, and (3) determine the uncertainty of the simulation and measurements. Monte Carlo simulation results by the MCNP5 code were compared with measured counting efficiencies as a function of sample thickness for 14C, 89Sr, 90Sr, and 90Y. The Monte Carlo model simulated strontium carbonate with areal thicknesses from 0.1 to 35 mg cm-2. The samples were precipitated as strontium carbonate with areal thicknesses from 3 to 33 mg cm-2, mounted on membrane filters, and counted on a low background gas flow proportional counter. The estimated fractional standard deviation was 2-4% (except 6% for 14C) for efficiency measurements of the radionuclides. The Monte Carlo simulations have uncertainties estimated to be 5 to 6 percent for carbon-14 and 2.4 percent for strontium-89, strontium-90, and yttrium-90. The curves of simulated counting efficiency vs. sample areal thickness agreed within 3% of the curves of best fit drawn through the 25-49 measured points for each of the four radionuclides. Contributions from this research include development of uncertainty budgets for the analytical processes; evaluation of alternative methods for determining chemical yield critical to the measurement process; correcting a bias found in the MCNP normalization of beta spectra histogram; clarifying the interpretation of the commonly used ICRU beta-particle spectra for use by MCNP; and evaluation of instrument parameters as applied to the simulation model to obtain estimates of the counting efficiency from simulated pulse height tallies.

  9. A high dynamic range pulse counting detection system for mass spectrometry.

    PubMed

    Collings, Bruce A; Dima, Martian D; Ivosev, Gordana; Zhong, Feng

    2014-01-30

    A high dynamic range pulse counting system has been developed that demonstrates an ability to operate at up to 2e8 counts per second (cps) on a triple quadrupole mass spectrometer. Previous pulse counting detection systems have typically been limited to about 1e7 cps at the upper end of the system's dynamic range. Modifications to the detection electronics and dead time correction algorithm are described in this paper. A high gain transimpedance amplifier is employed that allows a multi-channel electron multiplier to be operated at a significantly lower bias potential than in previous pulse counting systems. The system utilises a high-energy conversion dynode, a multi-channel electron multiplier, a high gain transimpedance amplifier, non-paralysing detection electronics and a modified dead time correction algorithm. Modification of the dead time correction algorithm is necessary due to a characteristic of the pulse counting electronics. A pulse counting detection system with the capability to count at ion arrival rates of up to 2e8 cps is described. This is shown to provide a linear dynamic range of nearly five orders of magnitude for a sample of alprazolam with concentrations ranging from 0.0006970 ng/mL to 3333 ng/mL while monitoring the m/z 309.1 → m/z 205.2 transition. This represents an upward extension of the detector's linear dynamic range of about two orders of magnitude. A new high dynamic range pulse counting system has been developed demonstrating the ability to operate at up to 2e8 cps on a triple quadrupole mass spectrometer. This provides an upward extension of the detector's linear dynamic range by about two orders of magnitude over previous pulse counting systems. Copyright © 2013 John Wiley & Sons, Ltd.
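    The standard non-paralysing correction that such algorithms start from is shown below; the effective dead-time value is illustrative, and the paper describes a modification of this idea required by the particular pulse counting electronics.

        def deadtime_correct(measured_cps, tau):
            """Classical non-paralysing dead-time correction: true rate = m / (1 - m*tau)."""
            return measured_cps / (1.0 - measured_cps * tau)

        # e.g. at 2e8 observed cps even a 1 ns effective dead time is a 25% correction:
        print(deadtime_correct(2.0e8, 1.0e-9))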

  10. A neural network-based method for spectral distortion correction in photon counting x-ray CT

    NASA Astrophysics Data System (ADS)

    Touch, Mengheng; Clark, Darin P.; Barber, William; Badea, Cristian T.

    2016-08-01

    Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables both 4-energy-bin acquisition and a full-spectrum mode in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and can be very noisy due to photon starvation in narrow energy bins. To address spectral distortions, we propose and demonstrate a novel artificial neural network (ANN)-based spectral distortion correction mechanism, which learns to undo the distortion in spectral CT, resulting in improved material decomposition accuracy. To address noise, post-reconstruction denoising based on bilateral filtration, which jointly enforces intensity gradient sparsity between spectral samples, is used to further improve the robustness of ANN training and material decomposition accuracy. Our ANN-based distortion correction method is calibrated using 3D-printed phantoms and a model of our spectral CT system. To enable realistic simulations and validation of our method, we first modeled the spectral distortions using experimental data acquired from 109Cd and 133Ba radioactive sources measured with our PCXD. Next, we trained an ANN to learn the relationship between the distorted spectral CT projections and the ideal, distortion-free projections in a calibration step. This required knowledge of the ground truth, distortion-free spectral CT projections, which were obtained by simulating a spectral CT scan of the digital version of a 3D-printed phantom. Once the training was completed, the trained ANN was used to perform

  11. Advantages and challenges in automated apatite fission track counting

    NASA Astrophysics Data System (ADS)

    Enkelmann, E.; Ehlers, T. A.

    2012-04-01

    Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track method, interest emerged in developing an automated counting procedure to replace the time-consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example used in this study is the commercial automated fission track counting procedure from Autoscan Systems Pty that has been highlighted through several venues. We conducted experiments that are designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm large apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites in different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the measured fission track densities compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due to surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples. As a result of these differences, a manual correction of the fully automated fission track counts is necessary for

  12. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klumpp, John

    We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
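    A heavily simplified sketch of the 'learning mode' / 'detection mode' split is given below: it builds an empirical distribution of background counts per interval and reports how unusual a new measurement is. The class and its methods are invented for illustration; the abstract describes a full Bayesian, multi-channel, multi-detector treatment.

        import numpy as np

        class BackgroundModel:
            def __init__(self):
                self.samples = []

            def learn(self, counts):          # learning mode: accumulate background counts per interval
                self.samples.extend(np.asarray(counts).ravel())

            def p_value(self, observed):      # detection mode: fraction of background intervals >= observed
                s = np.asarray(self.samples)
                return float((s >= observed).mean())

        rng = np.random.default_rng(1)
        bg = BackgroundModel()
        bg.learn(rng.poisson(12.0, size=5000))    # simulated background measurements
        print(bg.p_value(25))                     # small value suggests a source above background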

  13. Double counting in the density functional plus dynamical mean-field theory of transition metal oxides

    NASA Astrophysics Data System (ADS)

    Dang, Hung

    2015-03-01

    Recently, the combination of density functional theory (DFT) and dynamical mean-field theory (DMFT) has become a widely used beyond-mean-field approach for strongly correlated materials. However, because correlation is treated not only in DMFT but also, to some extent, in DFT, a problem arises: the correlation is counted twice in the DFT+DMFT framework. The correction for this problem is still not well understood. To gain more understanding of this "double counting" problem, I provide a detailed study of the metal-insulator transition in transition metal oxides in the subspace of oxygen p and transition metal correlated d orbitals using DFT+DMFT. I will show that fully charge self-consistent DFT+DMFT calculations with the standard "fully-localized limit" (FLL) double counting correction fail to correctly predict materials such as LaTiO3, LaVO3, YTiO3 and SrMnO3 as insulators. Investigations over a wide range of the p-d splitting, the d occupancy, the lattice structure and the double counting correction itself will be presented to understand the reason behind this failure. I will also show that if the double counting correction is chosen to reproduce the p-d splitting consistent with experimental data, the DFT+DMFT approach can still give reasonable results in comparison with experiments.
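    For reference, the fully-localized-limit double counting energy referred to above is usually written (in its standard form, which the abstract does not spell out) as

        E_{\mathrm{DC}}^{\mathrm{FLL}} \;=\; \frac{\bar{U}}{2}\,N_d\left(N_d-1\right)
        \;-\; \frac{\bar{J}}{2}\sum_{\sigma} N_{d\sigma}\left(N_{d\sigma}-1\right),

    where N_d is the total occupancy of the correlated d shell and N_{dσ} its spin-resolved occupancy; the corresponding potential shift subtracted from the d levels is the derivative of this energy with respect to N_{dσ}.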

  14. Characterization of spectrometric photon-counting X-ray detectors at different pitches

    NASA Astrophysics Data System (ADS)

    Jurdit, M.; Brambilla, A.; Moulin, V.; Ouvrier-Buffet, P.; Radisson, P.; Verger, L.

    2017-09-01

    There is growing interest in energy-sensitive photon-counting detectors for high-flux X-ray imaging. Their potential applications include medical imaging, non-destructive testing and security. Innovative detectors of this type will need to count individual photons and sort them into selected energy bins, at several million counts per second and per mm^2. Cd(Zn)Te detector grade materials with a thickness of 1.5 to 3 mm and pitches from 800 μm down to 200 μm were assembled onto interposer boards. These devices were tested using in-house-developed full-digital fast readout electronics. The 16-channel demonstrators, with 256 energy bins, were experimentally characterized by determining spectral resolution, count rate, and charge sharing, which becomes challenging at low pitch. Charge sharing correction was found to efficiently correct X-ray spectra up to 40 × 10^6 incident photons s^-1 mm^-2.

  15. Correcting X-ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, K.; Hughes, J. P.; Kellogg, E. M.; Nguyen, D.; Zombek, M.; Joy, M.; Kolodziejezak, J.

    1993-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.

  16. Correcting x ray spectra obtained from the AXAF VETA-I mirror calibration for pileup, continuum, background and deadtime

    NASA Technical Reports Server (NTRS)

    Chartas, G.; Flanagan, Kathy; Hughes, John P.; Kellogg, Edwin M.; Nguyen, D.; Zombeck, M.; Joy, M.; Kolodziejezak, J.

    1992-01-01

    The VETA-I mirror was calibrated with the use of a collimated soft X-ray source produced by electron bombardment of various anode materials. The FWHM, effective area and encircled energy were measured with the use of proportional counters that were scanned with a set of circular apertures. The pulses from the proportional counters were sent through a multichannel analyzer that produced a pulse height spectrum. In order to characterize the properties of the mirror at different discrete photon energies one desires to extract from the pulse height distribution only those photons that originated from the characteristic line emission of the X-ray target source. We have developed a code that fits a modeled spectrum to the observed X-ray data, extracts the counts that originated from the line emission, and estimates the error in these counts. The function that is fitted to the X-ray spectra includes a Prescott function for the resolution of the detector, a second Prescott function for a pileup peak, and an X-ray continuum function. The continuum component is determined by calculating the absorption of the target Bremsstrahlung through various filters, correcting for the reflectivity of the mirror and convolving with the detector response.

  17. The coincidence counting technique for orders of magnitude background reduction in data obtained with the magnetic recoil spectrometer at OMEGA and the NIF.

    PubMed

    Casey, D T; Frenje, J A; Séguin, F H; Li, C K; Rosenberg, M J; Rinderknecht, H; Manuel, M J-E; Gatu Johnson, M; Schaeffer, J C; Frankel, R; Sinenian, N; Childs, R A; Petrasso, R D; Glebov, V Yu; Sangster, T C; Burke, M; Roberts, S

    2011-07-01

    A magnetic recoil spectrometer (MRS) has been built and successfully used at OMEGA for measurements of down-scattered neutrons (DS-n), from which areal densities in both warm-capsule and cryogenic-DT implosions have been inferred. Another MRS is currently being commissioned on the National Ignition Facility (NIF) for diagnosing low-yield tritium-hydrogen-deuterium implosions and high-yield DT implosions. As CR-39 detectors are used in the MRS, the principal sources of background are neutron-induced tracks and intrinsic tracks (defects in the CR-39). The coincidence counting technique was developed to reduce these types of background tracks to the required level for the DS-n measurements at OMEGA and the NIF. Using this technique, it has been demonstrated that the number of background tracks is reduced by a couple of orders of magnitude, which exceeds the requirement for the DS-n measurements at both facilities.

  18. Material screening with HPGe counting station for PandaX experiment

    NASA Astrophysics Data System (ADS)

    Wang, X.; Chen, X.; Fu, C.; Ji, X.; Liu, X.; Mao, Y.; Wang, H.; Wang, S.; Xie, P.; Zhang, T.

    2016-12-01

    A gamma counting station based on a high-purity germanium (HPGe) detector was set up for material screening for the PandaX dark matter experiments in the China Jinping Underground Laboratory. A low background gamma rate of 2.6 counts/min within the energy range of 20 to 2700 keV is achieved thanks to the well-designed passive shield. The sensitivities of the HPGe detector reach the mBq/kg level for isotopes such as K, U and Th, and are even better for Co and Cs, resulting from the low background rate and the high relative detection efficiency of 175%. The structure and performance of the counting station are described in this article. Detailed counting results for the radioactivity in materials used by the PandaX dark-matter experiment are presented. The upgrading plan of the counting station is also discussed.

  19. A 1.5k x 1.5k class photon counting HgCdTe linear avalanche photo-diode array for low background space astronomy in the 1-5 micron infrared

    NASA Astrophysics Data System (ADS)

    Hall, Donald

    Under a current award, NASA NNX 13AC13G "EXTENDING THE ASTRONOMICAL APPLICATION OF PHOTON COUNTING HgCdTe LINEAR AVALANCHE PHOTODIODE ARRAYS TO LOW BACKGROUND SPACE OBSERVATIONS" UH has used Selex SAPHIRA 320 x 256 MOVPE L-APD HgCdTe arrays developed for Adaptive Optics (AO) wavefront (WF) sensing to investigate the potential of this technology for low background space astronomy applications. After suppressing readout integrated circuit (ROIC) glow, we have placed upper limits on gain normalized dark current of 0.01 e-/sec at up to 8 volts avalanche bias, corresponding to avalanche gain of 5, and have operated with avalanche gains of up to several hundred at higher bias. We have also demonstrated detection of individual photon events. The proposed investigation would scale the format to 1536 x 1536 at 12um (the largest achievable in a standard reticule without requiring stitching) while incorporating reference pixels required at these low dark current levels. The primary objective is to develop, produce and characterize a 1.5k x 1.5k at 12um pitch MOVPE HgCdTe L-APD array, with nearly 30 times the pixel count of the 320 x 256 SAPHIRA, optimized for low background space astronomy. This will involve: 1) Selex design of a 1.5k x 1.5k at 12um pitch ROIC optimized for low background operation, silicon wafer fabrication at the German XFab foundry in 0.35 um 3V3 process and dicing/test at Selex, 2) provision by GL Scientific of a 3-side close-buttable carrier building from the heritage of the HAWAII xRG family, 3) Selex development and fabrication of 1.5k x 1.5k at 12 um pitch MOVPE HgCdTe L-APD detector arrays optimized for low background applications, 4) hybridization, packaging into a sensor chip assembly (SCA) with initial characterization by Selex and, 5) comprehensive characterization of low background performance, both in the laboratory and at ground based telescopes, by UH. The ultimate goal is to produce and eventually market a large format array, the L

  20. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra.

    PubMed

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen

    2017-07-27

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.

  1. Determination of confidence limits for experiments with low numbers of counts. [Poisson-distributed photon counts from astrophysical sources]

    NASA Technical Reports Server (NTRS)

    Kraft, Ralph P.; Burrows, David N.; Nousek, John A.

    1991-01-01

    Two different methods, classical and Bayesian, for determining confidence intervals involving Poisson-distributed data are compared. Particular consideration is given to cases where the number of counts observed is small and is comparable to the mean number of background counts. Reasons for preferring the Bayesian over the classical method are given. Tables of confidence limits calculated by the Bayesian method are provided for quick reference.
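    The Bayesian posterior referred to here (for N observed counts and a known mean background B, with a flat prior on the source intensity S) is the standard result

        f_N(S) \;=\; C\,\frac{e^{-(S+B)}\,(S+B)^{N}}{N!},
        \qquad
        C^{-1} \;=\; \sum_{n=0}^{N} \frac{e^{-B} B^{n}}{n!},

    and the tabulated confidence limits S_min and S_max are chosen so that the integral of f_N between them equals the desired confidence level, with f_N(S_min) = f_N(S_max).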

  2. Background correction in forensic photography. II. Photography of blood under conditions of non-uniform illumination or variable substrate color--practical aspects and limitations.

    PubMed

    Wagner, John H; Miskelly, Gordon M

    2003-05-01

    The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by division of a linear photographic image taken at or near 415 nm with an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
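    A minimal numpy sketch of the three-wavelength correction described above (division of the linear 415 nm image by the average of the 395 and 435 nm images) is given below; the array names are assumptions.

        import numpy as np

        def blood_contrast(im_395, im_415, im_435, eps=1e-6):
            """Ratio image: values noticeably below 1 indicate extra absorption near
            415 nm, i.e. a putative bloodstain. For nonlinear images, subtraction
            would replace the division."""
            background = 0.5 * (im_395.astype(float) + im_435.astype(float))
            return im_415.astype(float) / (background + eps)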

  3. State of the art of D&D Instrumentation Technology: Alpha counting in the presence of high background

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickerman, C.E.

    1995-08-01

    Discrimination of alpha activity in the presence of a high radiation background has been identified as an area of concern to be studied for D&D applications. Upon evaluating the range of alpha detection needs for D&D operations, we have expanded this study to address the operational concern of greatly expediting alpha counting of rough surfaces and rubble. Note that the term "rough surfaces" includes a wide range of practical cases, including contaminated equipment and work surfaces. We have developed provisional applications requirements for instrumentation of this type; and we also have generated the scope of a program of instrument evaluation and testing, with emphasis on practical implementation. In order to obtain the full operational benefit of alpha discrimination in the presence of strong beta-gamma radiation background, the detection system must be capable of some form of remote or semi-remote operation in order to reduce operator exposure. We have identified a highly promising technique, the long-range alpha detector (LRAD), for alpha discrimination in the presence of high radiation background. This technique operates upon the principle of transporting alpha-ionized air to an ionization detector. A transport time within a few seconds is adequate. Neither the provisional requirements nor the evaluation and testing scope were expressly tailored to force the selection of a LRAD technology, and they could be used as a basis for studies of other promising technologies. However, a technology that remotely detects alpha-ionized air (e.g., LRAD) is a natural fit to the key requirements of rejection of high background at the survey location and operator protection. Also, LRAD appears to be valuable for D&D applications as a means of greatly expediting surface alpha-activity surveys that otherwise would require performing time-consuming scans over surfaces of interest with alpha detector probes, and even more labor-intensive surface wipe surveys.

  4. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
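    One correction step in the spirit of the scheme described above might look like the sketch below (low-pass filtering of the intermediate 3-D reconstruction in the DCT domain followed by L1 soft-thresholding); the keep fraction and threshold are invented parameters, not values from the paper.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_l1_step(x, keep_fraction=0.1, threshold=0.01):
            """x is the 3-D intermediate reconstruction from one Tikhonov iteration."""
            coeffs = dctn(x, norm='ortho')
            mask = np.zeros_like(coeffs)
            cut = tuple(max(1, int(s * keep_fraction)) for s in coeffs.shape)
            mask[:cut[0], :cut[1], :cut[2]] = 1.0          # keep only low-frequency DCT coefficients
            filtered = idctn(coeffs * mask, norm='ortho')
            # soft thresholding = proximal operator of the L1 sparsity constraint
            return np.sign(filtered) * np.maximum(np.abs(filtered) - threshold, 0.0)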

  5. Application of point-to-point matching algorithms for background correction in on-line liquid chromatography-Fourier transform infrared spectrometry (LC-FTIR).

    PubMed

    Kuligowski, J; Quintás, G; Garrigues, S; de la Guardia, M

    2010-03-15

    A new background correction method for the on-line coupling of gradient liquid chromatography and Fourier transform infrared spectrometry has been developed. It is based on the use of a point-to-point matching algorithm that compares the absorption spectra of the sample data set with those of a previously recorded reference data set in order to select an appropriate reference spectrum. The spectral range used for the point-to-point comparison is selected with minimal user interaction, thus considerably facilitating the application of the whole method. The background correction method has been successfully tested on a chromatographic separation of four nitrophenols running acetonitrile (0.08%, v/v TFA):water (0.08%, v/v TFA) gradients with compositions ranging from 35 to 85% (v/v) acetonitrile, giving accurate results for both baseline-resolved and overlapped peaks. Copyright (c) 2009 Elsevier B.V. All rights reserved.
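    The reference-selection step can be sketched as below: each candidate background spectrum from a previously recorded (blank gradient) data set is compared point-to-point with the sample spectrum over a chosen spectral range, and the closest one is subtracted. The distance metric and names are assumptions for illustration.

        import numpy as np

        def pick_reference(sample_spectrum, reference_set, window):
            """window = (lo, hi) indices of the spectral range used for the comparison;
            reference_set holds one reference spectrum per row."""
            lo, hi = window
            diffs = np.abs(reference_set[:, lo:hi] - sample_spectrum[lo:hi]).sum(axis=1)
            best = int(np.argmin(diffs))
            return best, sample_spectrum - reference_set[best]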

  6. Evaluation of Shifted Excitation Raman Difference Spectroscopy and Comparison to Computational Background Correction Methods Applied to Biochemical Raman Spectra

    PubMed Central

    Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W.; Popp, Jürgen

    2017-01-01

    Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC. PMID:28749450

  7. Logistic regression for dichotomized counts.

    PubMed

    Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W

    2016-12-01

    Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.

  8. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction, was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ = 0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ = 0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. This indicates that deconvolution is a promising technique for septal penetration artefact reduction that
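    The basic Richardson-Lucy update underlying the algorithm is sketched below for a 2-D planar image and a measured PSF; the damping parameter λ used in the paper modifies this update and is not reproduced here.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=15):
            """Multiplicative Richardson-Lucy deconvolution (undamped)."""
            estimate = np.full(image.shape, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode='same')
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_mirror, mode='same')
            return estimate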

  9. Neutron counting with cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rate up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is next confronted with the first experimental results. (authors)

  10. Recursive least squares background prediction of univariate syndromic surveillance data

    PubMed Central

    2009-01-01

    Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the
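    A minimal adaptive recursive-least-squares predictor of the kind used for the background estimate is sketched below; the model order, forgetting factor and initialisation are assumptions, and the published method adds a day-of-week transformation and predicts seven days ahead.

        import numpy as np

        class RLSPredictor:
            def __init__(self, order=7, forget=0.98):
                self.w = np.zeros(order)             # filter weights
                self.P = np.eye(order) * 1e3         # inverse correlation matrix
                self.forget = forget

            def update(self, x, y):
                """x = vector of recent daily counts, y = the count actually observed."""
                x = np.asarray(x, dtype=float)
                y_hat = self.w @ x                   # background prediction
                k = self.P @ x / (self.forget + x @ self.P @ x)
                self.w += k * (y - y_hat)
                self.P = (self.P - np.outer(k, x @ self.P)) / self.forget
                return y_hat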

  11. Laser line illumination scheme allowing the reduction of background signal and the correction of absorption heterogeneities effects for fluorescence reflectance imaging.

    PubMed

    Fantoni, Frédéric; Hervé, Lionel; Poher, Vincent; Gioux, Sylvain; Mars, Jérôme I; Dinten, Jean-Marc

    2015-10-01

    Intraoperative fluorescence imaging in reflectance geometry is an attractive imaging modality as it allows to noninvasively monitor the fluorescence targeted tumors located below the tissue surface. Some drawbacks of this technique are the background fluorescence decreasing the contrast and absorption heterogeneities leading to misinterpretations concerning fluorescence concentrations. We propose a correction technique based on a laser line scanning illumination scheme. We scan the medium with the laser line and acquire, at each position of the line, both fluorescence and excitation images. We then use the finding that there is a relationship between the excitation intensity profile and the background fluorescence one to predict the amount of signal to subtract from the fluorescence images to get a better contrast. As the light absorption information is contained both in fluorescence and excitation images, this method also permits us to correct the effects of absorption heterogeneities. This technique has been validated on simulations and experimentally. Fluorescent inclusions are observed in several configurations at depths ranging from 1 mm to 1 cm. Results obtained with this technique are compared with those obtained with a classical wide-field detection scheme for contrast enhancement and with the fluorescence by an excitation ratio approach for absorption correction.

  12. Effective estimation of correct platelet counts in pseudothrombocytopenia using an alternative anticoagulant based on magnesium salt

    PubMed Central

    Schuff-Werner, Peter; Steiner, Michael; Fenger, Sebastian; Gross, Hans-Jürgen; Bierlich, Alexa; Dreissiger, Katrin; Mannuß, Steffen; Siegert, Gabriele; Bachem, Maximilian; Kohlschein, Peter

    2013-01-01

    Pseudothrombocytopenia remains a challenge in the haematological laboratory. The pre-analytical problem that platelets tend to easily aggregate in vitro, giving rise to lower platelet counts, has been known since ethylenediaminetetraacetic acid (EDTA) and automated platelet counting procedures were introduced in the haematological laboratory. Different approaches to avoid the time and temperature dependent in vitro aggregation of platelets in the presence of EDTA were tested, but none of them proved optimal for routine purposes. Patients with unexpectedly low platelet counts, or those flagged for suspected aggregates, were selected and smears were examined for platelet aggregates. In these cases patients were asked to consent to the drawing of an additional sample of blood anti-coagulated with a magnesium additive. Magnesium was used at the beginning of the last century as an anticoagulant for microscopic platelet counts. Using this approach, we documented 44 patients with pseudothrombocytopenia. In all cases, platelet counts were markedly higher in samples anti-coagulated with the magnesium-containing anticoagulant when compared to EDTA-anticoagulated blood samples. We conclude that in patients with known or suspected pseudothrombocytopenia, magnesium-anticoagulated blood samples may be recommended for platelet counting. PMID:23808903

  13. Pill counts and pill rental: unintended entrepreneurial opportunities.

    PubMed

    Viscomi, Christopher M; Covington, Melissa; Christenson, Catherine

    2013-07-01

    Prescription opioid diversion and abuse are becoming increasingly prevalent in many regions of the world, particularly the United States. One method advocated to assess compliance with opioid prescriptions is occasional "pill counts." Shortly before a scheduled appointment, a patient is notified that they must bring in the unused portion of their opioid prescription. It has been assumed that if a patient has the correct number and strength of pills that should be present at that point in a prescription interval, they are unlikely to be selling or abusing their opioids. Two cases are presented where patients describe short-term rental of opioids from illicit opioid dealers in order to circumvent pill counts. Pill renting appears to be an established method of circumventing pill counts. Pill counts do not assure non-diversion of opioids and provide additional cash flow to illicit opioid dealers.

  14. Real-Time Microfluidic Blood-Counting System for PET and SPECT Preclinical Pharmacokinetic Studies.

    PubMed

    Convert, Laurence; Lebel, Réjean; Gascon, Suzanne; Fontaine, Réjean; Pratte, Jean-François; Charette, Paul; Aimez, Vincent; Lecomte, Roger

    2016-09-01

    Small-animal nuclear imaging modalities have become essential tools in the development process of new drugs, diagnostic procedures, and therapies. Quantification of metabolic or physiologic parameters is based on pharmacokinetic modeling of radiotracer biodistribution, which requires the blood input function in addition to tissue images. Such measurements are challenging in small animals because of their small blood volume. In this work, we propose a microfluidic counting system to monitor rodent blood radioactivity in real time, with high efficiency and small detection volume (∼1 μL). A microfluidic channel is built directly above unpackaged p-i-n photodiodes to detect β-particles with maximum efficiency. The device is embedded in a compact system comprising dedicated electronics, shielding, and pumping unit controlled by custom firmware to enable measurements next to small-animal scanners. Data corrections required to use the input function in pharmacokinetic models were established using calibrated solutions of the most common PET and SPECT radiotracers. Sensitivity, dead time, propagation delay, dispersion, background sensitivity, and the effect of sample temperature were characterized. The system was tested for pharmacokinetic studies in mice by quantifying myocardial perfusion and oxygen consumption with 11C-acetate (PET) and by measuring the arterial input function using 99mTcO4- (SPECT). Sensitivity for PET isotopes reached 20%-47%, a 2- to 10-fold improvement relative to conventional catheter-based geometries. Furthermore, the system detected 99mTc-based SPECT tracers with an efficiency of 4%, an outcome not possible through a catheter. Correction for dead time was found to be unnecessary for small-animal experiments, whereas propagation delay and dispersion within the microfluidic channel were accurately corrected. Background activity and sample temperature were shown to have no influence on measurements. Finally, the system was successfully

  15. Evaluation of counting methods for oceanic radium-228

    NASA Astrophysics Data System (ADS)

    Orr, James C.

    1988-07-01

    Measurement of open ocean 228Ra is difficult, typically requiring at least 200 L of seawater. The burden of collecting and processing these large-volume samples severely limits the widespread use of this promising tracer. To use smaller-volume samples, a more sensitive means of analysis is required. To seek out new and improved counting method(s), conventional 228Ra counting methods have been compared with some promising techniques which are currently used for other radionuclides. Of the conventional methods, α spectrometry possesses the highest efficiency (3-9%) and lowest background (0.0015 cpm), but it suffers from the need for complex chemical processing after sampling and the need to allow about 1 year for adequate ingrowth of the 228Th granddaughter. The other two conventional counting methods measure the short-lived 228Ac daughter while it remains supported by 228Ra, thereby avoiding the complex sample processing and the long delay before counting. The first of these, high-resolution γ spectrometry, offers the simplest processing and an efficiency (4.8%) comparable to α spectrometry; yet its high background (0.16 cpm) and substantial equipment cost (~30,000) limit its widespread use. The second no-wait method, β-γ coincidence spectrometry, also offers comparable efficiency (5.3%), but it possesses both lower background (0.0054 cpm) and lower initial cost (~12,000). Three new (i.e., untried for 228Ra) techniques all seem to promise about a fivefold increase in efficiency over conventional methods. By employing liquid scintillation methods, both α spectrometry and β-γ coincidence spectrometry can improve their counter efficiency while retaining low background. The third new 228Ra counting method could be adapted from a technique which measures 224Ra by 220Rn emanation. After allowing for ingrowth and then counting the 224Ra great-granddaughter, 228Ra could be back-calculated, thereby yielding a method with high efficiency, where no sample processing
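
    The abstract quotes efficiency and background separately for each method; a conventional way to fold both into a single detection-limit ranking (not used in the abstract itself) is the counting figure of merit E^2/B. The Python sketch below applies it to the values quoted above, taking the midpoint of the 3-9% efficiency range quoted for α spectrometry as an illustrative assumption.

      # Figure of merit E^2/B for the counting methods quoted above (illustrative only).
      methods = {
          "alpha spectrometry":          (0.06,  0.0015),  # midpoint of the quoted 3-9% efficiency
          "high-resolution gamma spec.": (0.048, 0.16),
          "beta-gamma coincidence":      (0.053, 0.0054),
      }

      for name, (efficiency, background_cpm) in methods.items():
          fom = efficiency ** 2 / background_cpm   # higher is better: shorter count time for a given detection limit
          print(f"{name:30s} E^2/B = {fom:.3f}")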

  16. Publisher Correction: Cluster richness-mass calibration with cosmic microwave background lensing

    NASA Astrophysics Data System (ADS)

    Geach, James E.; Peacock, John A.

    2018-03-01

    Owing to a technical error, the `Additional information' section of the originally published PDF version of this Letter incorrectly gave J.A.P. as the corresponding author; it should have read J.E.G. This has now been corrected. The HTML version is correct.

  17. Characterization of 176Lu background in LSO-based PET scanners

    NASA Astrophysics Data System (ADS)

    Conti, Maurizio; Eriksson, Lars; Rothfuss, Harold; Sjoeholm, Therese; Townsend, David; Rosenqvist, Göran; Carlier, Thomas

    2017-05-01

    LSO and LYSO are today the most common scintillators used in positron emission tomography. Lutetium contains traces of 176Lu, a radioactive isotope that decays β− with a cascade of γ photons in coincidence. Therefore, lutetium-based scintillators are characterized by a small natural radiation background. In this paper, we investigate and characterize the 176Lu radiation background via experiments performed on LSO-based PET scanners. LSO background was measured at different energy windows and different time coincidence windows, and by using shields to alter the original spectrum. The effect of radiation background in particularly count-starved applications, such as 90Y imaging, is analysed and discussed. Depending on the size of the PET scanner, between 500 and 1000 total random counts per second and between 3 and 5 total true coincidences per second were measured in standard coincidence mode. The LSO background counts in a Siemens mCT in the standard PET energy and time windows are in general negligible in terms of trues, and are comparable to those measured in a BGO scanner of similar size.

  18. Contribution to the G0 parity-violation experiment: calculation and simulation of radiative corrections and study of the background noise; Contribution a l'experience G0 de violation de la parite : calcul et simulation des corrections radiatives et etude du bruit de fond (in French)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guler, Hayg

    2003-12-17

    In the framework of quantum chromodynamics, the nucleon is made of three valence quarks surrounded by a sea of gluons and quark-antiquark pairs. Only the lightest quarks (u, d and s) contribute significantly to the nucleon properties. G0 uses the parity-violating property of the weak interaction to determine separately the contributions of the three types of quarks to the nucleon form factors. The experiment, which takes place at the Thomas Jefferson laboratory (USA), aims at measuring the parity-violation asymmetry in electron-proton scattering. Several measurements at different squared momenta of the exchanged photon and at different kinematics (forward angle, where the proton is detected, and backward angle, where the electron is detected) will permit the strange-quark electric and magnetic contributions to the nucleon form factors to be determined separately. To extract an asymmetry with small errors, it is necessary to correct for all the beam parameters and to have high enough counting rates in the detectors. A special electronics was developed to treat the information coming from the 16 scintillator pairs of each of the 8 sectors of the G0 spectrometer. A complete calculation of radiative corrections has been done, and Monte Carlo simulations with the GEANT program have permitted the shape of the experimental spectra, including the inelastic background, to be determined. This work will allow a comparison between experimental data and theoretical calculations based on the Standard Model.

  19. Recursive least squares background prediction of univariate syndromic surveillance data.

    PubMed

    Najmi, Amir-Homayoon; Burkom, Howard

    2009-01-16

    Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold detection algorithm from the residuals of the RLS forecasts. We
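
    As a rough illustration of the filtering step described above, the Python sketch below implements a generic one-step-ahead adaptive recursive least squares (RLS) predictor for a daily count series; it omits the paper's day-of-week transformation and seven-day-ahead horizon, and the parameter values (order, forgetting factor, initialization) are illustrative assumptions rather than the authors' settings.

      import numpy as np

      def rls_background(counts, order=7, lam=0.98, delta=100.0):
          """One-step-ahead RLS prediction of a daily count series.

          counts : 1-D array of daily syndromic counts
          order  : number of past days used as regressors
          lam    : forgetting factor (< 1 adapts to non-stationarity)
          delta  : initial scale of the inverse correlation matrix
          """
          n = len(counts)
          w = np.zeros(order)              # filter weights
          P = np.eye(order) * delta        # inverse correlation matrix estimate
          pred = np.full(n, np.nan)
          for t in range(order, n):
              x = counts[t - order:t][::-1].astype(float)   # most recent day first
              pred[t] = w @ x                                # background forecast for day t
              e = counts[t] - pred[t]                        # prediction residual
              k = P @ x / (lam + x @ P @ x)                  # gain vector
              w = w + k * e                                  # weight update
              P = (P - np.outer(k, x) @ P) / lam             # covariance update
          return pred, counts - pred                         # forecasts and residuals

    The residuals returned by such a filter would then feed a threshold detection stage of the kind described in the abstract.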

  20. Modeling zero-modified count and semicontinuous data in health services research Part 1: background and overview.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    Health services data often contain a high proportion of zeros. In studies examining patient hospitalization rates, for instance, many patients will have no hospitalizations, resulting in a count of zero. When the number of zeros is greater than or less than expected under a standard count model, the data are said to be zero modified relative to the standard model. A similar phenomenon arises with semicontinuous data, which are characterized by a spike at zero followed by a continuous distribution with positive support. When analyzing zero-modified count and semicontinuous data, flexible mixture distributions are often needed to accommodate both the excess zeros and the typically skewed distribution of nonzero values. Various models have been introduced over the past three decades to accommodate such data, including hurdle models, zero-inflated models, and two-part semicontinuous models. This tutorial describes recent modeling strategies for zero-modified count and semicontinuous data and highlights their role in health services research studies. Part 1 of the tutorial, presented here, provides a general overview of the topic. Part 2, appearing as a companion piece in this issue of Statistics in Medicine, discusses three case studies illustrating applications of the methods to health services research. Copyright © 2016 John Wiley & Sons, Ltd.
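
    As a concrete illustration of one of the model families named above, the Python sketch below fits a zero-inflated Poisson model with statsmodels to simulated data; the data-generating values and the intercept-only inflation component are illustrative assumptions, not an example from the tutorial itself.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(0)
      n = 2000
      x = rng.normal(size=n)
      lam = np.exp(0.5 + 0.8 * x)                              # Poisson mean for the count component
      y = np.where(rng.random(n) < 0.3, 0, rng.poisson(lam))   # 30% structural (excess) zeros

      X = sm.add_constant(x)
      # exog_infl models the probability of a structural zero (here: intercept only)
      model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation='logit')
      result = model.fit(method='bfgs', maxiter=500, disp=False)
      print(result.summary())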

  1. Counting on fine motor skills: links between preschool finger dexterity and numerical skills.

    PubMed

    Fischer, Ursula; Suggate, Sebastian P; Schmirl, Judith; Stoeger, Heidrun

    2017-10-26

    Finger counting is widely considered an important step in children's early mathematical development. Presumably, children's ability to move their fingers during early counting experiences to aid number representation depends in part on their early fine motor skills (FMS). Specifically, FMS should link to children's procedural counting skills through consistent repetition of finger-counting procedures. Accordingly, we hypothesized that (a) FMS are linked to early counting skills, and (b) greater FMS relate to conceptual counting knowledge (e.g., cardinality, abstraction, order irrelevance) via procedural counting skills (i.e., one-one correspondence and correctness of verbal counting). Preschool children (N = 177) were administered measures of procedural counting skills, conceptual counting knowledge, FMS, and general cognitive skills along with parent questionnaires on home mathematics and fine motor environment. FMS correlated with procedural counting skills and conceptual counting knowledge after controlling for cognitive skills, chronological age, home mathematics and FMS environments. Moreover, the relationship between FMS and conceptual counting knowledge was mediated by procedural counting skills. Findings suggest that FMS play a role in early counting and therewith conceptual counting knowledge. © 2017 John Wiley & Sons Ltd.

  2. Significance of Maternal and Cord Blood Nucleated Red Blood Cell Count in Pregnancies Complicated by Preeclampsia

    PubMed Central

    Misha, Mehak; Rai, Lavanya

    2014-01-01

    Objectives. To evaluate the effect of preeclampsia on the cord blood and maternal NRBC count and to correlate NRBC count and neonatal outcome in preeclampsia and control groups. Study Design. This is a prospective case control observational study. Patients and Methods. Maternal and cord blood NRBC counts were studied in 50 preeclamptic women and 50 healthy pregnant women. Using an automated cell counter, the total leucocyte count was obtained, and a peripheral smear was prepared to obtain the NRBC count. Corrected WBC count and NRBC count/100 leucocytes in maternal venous blood and in cord blood were compared between the 2 groups. Results. No significant differences were found in corrected WBC count in maternal and cord blood in cases and controls. Significant differences were found in mean cord blood NRBC count between the preeclampsia and control groups (40.0 ± 85.1 and 5.9 ± 6.3, P = 0.006). The mean maternal NRBC count in the two groups was 2.4 ± 9.0 and 0.8 ± 1.5, respectively (P = 0.214). A cord blood NRBC count cut-off value of ≤13 could rule out an adverse neonatal outcome with a sensitivity of 63% and a specificity of 89%. Conclusion. Cord blood NRBC counts are significantly raised in preeclampsia. Neonates with elevated cord blood NRBC counts are more likely to have IUGR, low birth weight, neonatal ICU admission, respiratory distress syndrome, and assisted ventilation. Below a count of 13/100 leucocytes, an adverse neonatal outcome is much less likely. PMID:24734183

  3. Counting in Lattices: Combinatorial Problems from Statistical Mechanics.

    NASA Astrophysics Data System (ADS)

    Randall, Dana Jill

    In this thesis we consider two classical combinatorial problems arising in statistical mechanics: counting matchings and self-avoiding walks in lattice graphs. The first problem arises in the study of the thermodynamical properties of monomers and dimers (diatomic molecules) in crystals. Fisher, Kasteleyn and Temperley discovered an elegant technique to exactly count the number of perfect matchings in two dimensional lattices, but it is not applicable for matchings of arbitrary size, or in higher dimensional lattices. We present the first efficient approximation algorithm for computing the number of matchings of any size in any periodic lattice in arbitrary dimension. The algorithm is based on Monte Carlo simulation of a suitable Markov chain and has rigorously derived performance guarantees that do not rely on any assumptions. In addition, we show that these results generalize to counting matchings in any graph which is the Cayley graph of a finite group. The second problem is counting self-avoiding walks in lattices. This problem arises in the study of the thermodynamics of long polymer chains in dilute solution. While there are a number of Monte Carlo algorithms used to count self -avoiding walks in practice, these are heuristic and their correctness relies on unproven conjectures. In contrast, we present an efficient algorithm which relies on a single, widely-believed conjecture that is simpler than preceding assumptions and, more importantly, is one which the algorithm itself can test. Thus our algorithm is reliable, in the sense that it either outputs answers that are guaranteed, with high probability, to be correct, or finds a counterexample to the conjecture. In either case we know we can trust our results and the algorithm is guaranteed to run in polynomial time. This is the first algorithm for counting self-avoiding walks in which the error bounds are rigorously controlled. This work was supported in part by an AT&T graduate fellowship, a University of

  4. Fingerprint Ridge Count: A Polygenic Trait Useful in Classroom Instruction.

    ERIC Educational Resources Information Center

    Mendenhall, Gordon; And Others

    1989-01-01

    Describes the use of the polygenic trait of total fingerprint ridge count in the classroom as a laboratory investigation. Presents information on background of topic, fingerprint patterns which are classified into three major groups, ridge count, the inheritance model, and activities. Includes an example data sheet format for fingerprints. (RT)

  5. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of Hybrid Photon Counting pixel (HPC) detectors. This approach is based on the Photon Transfer Curve (PTC), corresponding to the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors in flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be usable for identifying the settings that give the best image quality from a commercial or R&D detector.
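
    A minimal Python sketch of the underlying idea is given below, assuming stacks of flat-field frames taken at several flux levels; it estimates the fixed-pattern (PRNU) term from the spatial spread of frame-averaged flats, which is a simplification of the full PTC fit described in the abstract, and all array names are placeholders.

      import numpy as np

      def ptc_prnu(flat_stacks):
          """Estimate PRNU from sets of flat-field frames at increasing flux.

          flat_stacks : list of 3-D arrays (n_frames, ny, nx), one stack per flux level.
          Returns mean signals, fixed-pattern noise values, and a PRNU estimate from
          the slope of the fixed-pattern term (sigma_FPN ~ PRNU * mean signal).
          """
          means, sigmas_fpn = [], []
          for stack in flat_stacks:
              mean_frame = stack.mean(axis=0)   # averaging frames suppresses temporal (shot) noise
              means.append(mean_frame.mean())
              sigmas_fpn.append(mean_frame.std())  # spatial spread of the averaged frame ~ fixed pattern noise
          means = np.asarray(means)
          sigmas_fpn = np.asarray(sigmas_fpn)
          # PRNU is the slope of sigma_FPN versus mean signal (fit forced through the origin)
          prnu = np.sum(sigmas_fpn * means) / np.sum(means ** 2)
          return means, sigmas_fpn, prnu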

  6. Development of a stained cell nuclei counting system

    NASA Astrophysics Data System (ADS)

    Timilsina, Niranjan; Moffatt, Christopher; Okada, Kazunori

    2011-03-01

    This paper presents a novel cell counting system which exploits the Fast Radial Symmetry Transformation (FRST) algorithm [1]. The driving force behind our system is research on neurogenesis in the intact nervous system of Manduca sexta, the tobacco hornworm, which was being studied to assess the impact of age, food and environment on neurogenesis. The varying thickness of the intact nervous system in this species often yields images with an inhomogeneous background and inconsistencies such as varying illumination, variable contrast, and irregular cell size. For automated counting, such inhomogeneity and inconsistencies must be addressed, which no existing work has done successfully. Thus, our goal is to devise a new cell counting algorithm for images with a non-uniform background. Our solution adapts FRST: a computer vision algorithm which is designed to detect points of interest in circular regions such as human eyes. This algorithm enhances the occurrences of the stained-cell nuclei in 2D digital images and negates the problems caused by their inhomogeneity. Besides FRST, our algorithm employs standard image processing methods, such as mathematical morphology and connected component analysis. We have evaluated the developed cell counting system with fourteen digital images of the tobacco hornworm's nervous system collected for this study, with ground-truth cell counts by biology experts. Experimental results show that our system has a minimum error of 1.41% and a mean error of 16.68%, which is at least forty-four percent better than the algorithm without FRST.

  7. Tidal radii of the globular clusters M 5, M 12, M 13, M 15, M 53, NGC 5053 and NGC 5466 from automated star counts.

    NASA Astrophysics Data System (ADS)

    Lehmann, I.; Scholz, R.-D.

    1997-04-01

    We present new tidal radii for seven Galactic globular clusters using the method of automated star counts on Schmidt plates of the Tautenburg, Palomar and UK telescopes. The plates were fully scanned with the APM system in Cambridge (UK). Special attention was given to reliable background subtraction and to the correction of crowding effects in the central cluster region. For the latter we used a new kind of crowding correction based on a statistical approach to the distribution of stellar images and the luminosity function of the cluster stars in the uncrowded area. The star counts were correlated with surface brightness profiles of different authors to obtain complete projected density profiles of the globular clusters. Fitting an empirical density law (King 1962), we derived the following structural parameters: tidal radius r_t, core radius r_c and concentration parameter c. In the cases of NGC 5466, M 5, M 12, M 13 and M 15 we found an indication of a tidal tail around these objects (cf. Grillmair et al. 1995).
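
    The King (1962) law referred to above has the standard form f(r) = k [ (1 + (r/r_c)^2)^(-1/2) - (1 + (r_t/r_c)^2)^(-1/2) ]^2 for r < r_t and zero beyond. A minimal Python sketch of fitting it to a background-subtracted radial density profile is shown below; the input arrays and initial guesses are placeholders, not values from the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      def king_profile(r, k, r_c, r_t):
          """King (1962) empirical surface-density law (stars per unit area)."""
          term = 1.0 / np.sqrt(1.0 + (r / r_c) ** 2) - 1.0 / np.sqrt(1.0 + (r_t / r_c) ** 2)
          return np.where(r < r_t, k * term ** 2, 0.0)

      def fit_king(r, density, density_err):
          """r: annulus radii; density: background-subtracted star counts per unit area."""
          p0 = (density.max(), 1.0, 20.0)            # illustrative initial guesses for k, r_c, r_t
          popt, pcov = curve_fit(king_profile, r, density, p0=p0,
                                 sigma=density_err, absolute_sigma=True)
          k, r_c, r_t = popt
          c = np.log10(r_t / r_c)                    # concentration parameter
          return popt, np.sqrt(np.diag(pcov)), c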

  8. Estimating the Effects of Detection Heterogeneity and Overdispersion on Trends Estimated from Avian Point Counts

    EPA Science Inventory

    Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...

  9. The ALMA Spectroscopic Survey in the Hubble Ultra Deep Field: Continuum Number Counts, Resolved 1.2 mm Extragalactic Background, and Properties of the Faintest Dusty Star-forming Galaxies

    NASA Astrophysics Data System (ADS)

    Aravena, M.; Decarli, R.; Walter, F.; Da Cunha, E.; Bauer, F. E.; Carilli, C. L.; Daddi, E.; Elbaz, D.; Ivison, R. J.; Riechers, D. A.; Smail, I.; Swinbank, A. M.; Weiss, A.; Anguita, T.; Assef, R. J.; Bell, E.; Bertoldi, F.; Bacon, R.; Bouwens, R.; Cortes, P.; Cox, P.; González-López, J.; Hodge, J.; Ibar, E.; Inami, H.; Infante, L.; Karim, A.; Le Fèvre, O.; Magnelli, B.; Ota, K.; Popping, G.; Sheth, K.; van der Werf, P.; Wagg, J.

    2016-12-01

    We present an analysis of a deep (1σ = 13 μJy) cosmological 1.2 mm continuum map based on ASPECS, the ALMA Spectroscopic Survey in the Hubble Ultra Deep Field. In the 1 arcmin^2 covered by ASPECS we detect nine sources at >3.5σ significance at 1.2 mm. Our ALMA-selected sample has a median redshift of z = 1.6 ± 0.4, with only one galaxy detected at z > 2 within the survey area. This value is significantly lower than that found in millimeter samples selected at a higher flux density cutoff and similar frequencies. Most galaxies have specific star formation rates (SFRs) similar to that of main-sequence galaxies at the same epoch, and we find median values of stellar mass and SFRs of 4.0 × 10^10 M⊙ and ~40 M⊙ yr^-1, respectively. Using the dust emission as a tracer for the interstellar medium (ISM) mass, we derive depletion times that are typically longer than 300 Myr, and we find molecular gas fractions ranging from ~0.1 to 1.0. As noted by previous studies, these values are lower than those using CO-based ISM estimates by a factor of ~2. The 1 mm number counts (corrected for fidelity and completeness) are in agreement with previous studies that were typically restricted to brighter sources. With our individual detections only, we recover 55% ± 4% of the extragalactic background light (EBL) at 1.2 mm measured by the Planck satellite, and we recover 80% ± 7% of this EBL if we include the bright end of the number counts and additional detections from stacking. The stacked contribution is dominated by galaxies at z ~ 1-2, with stellar masses of (1-3) × 10^10 M⊙. For the first time, we are able to characterize the population of galaxies that dominate the EBL at 1.2 mm.

  10. Bayesian Correction for Misclassification in Multilevel Count Data Models.

    PubMed

    Nelson, Tyler; Song, Joon Jin; Chin, Yoo-Mi; Stamey, James D

    2018-01-01

    Covariate misclassification is well known to yield biased estimates in single level regression models. The impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed. Models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show the ability of the proposed model to appropriately account for the misclassification by reducing bias and improving performance of interval estimators. A real data example further demonstrated the consequences of ignoring the misclassification. Ignoring misclassification yielded a model that indicated there was a significant, positive impact on the number of children of females who observed spousal abuse between their parents. When the misclassification was accounted for, the relationship switched to negative, but not significant. Ignoring misclassification in standard linear and generalized linear models is well known to lead to biased results. We provide an approach to extend misclassification modeling to the important area of hierarchical generalized linear models.

  11. Automated vehicle counting using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Meany, Sean; Eskew, Edward; Martinez-Castro, Rosana; Jang, Shinae

    2017-04-01

    Vehicle counting is used by the government to improve roadways and the flow of traffic, and by private businesses for purposes such as determining the value of locating a new store in an area. A vehicle count can be performed manually or automatically. Manual counting requires an individual to be on-site and tally the traffic electronically or by hand; however, this can lead to miscounts due to factors such as human error. A common form of automatic counting involves pneumatic tubes, but pneumatic tubes disrupt traffic during installation and removal, and can be damaged by passing vehicles. Vehicle counting can also be performed with a camera at the count site recording video of the traffic, with counting performed manually post-recording or using automatic algorithms. This paper presents a low-cost procedure to perform automatic vehicle counting using remote video cameras with an automatic counting algorithm. The procedure utilizes a Raspberry Pi micro-computer to detect when a car is in a lane and generate an accurate count of vehicle movements. The method uses background subtraction to process the images and a machine learning algorithm to provide the count. This method avoids the fatigue issues encountered in manual video counting and prevents the disruption of roadways that occurs when installing pneumatic tubes.
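
    A minimal Python/OpenCV sketch of the background-subtraction stage described above is given below; it only flags frames in which a vehicle-sized foreground blob occupies a lane region of interest. The ROI, area threshold, and the use of OpenCV's MOG2 subtractor are illustrative assumptions, and the paper's machine-learning counting stage is not shown.

      import cv2

      def count_lane_occupancy(video_path, lane_roi, min_area=1500):
          """Count frames where a vehicle-sized foreground blob occupies the lane ROI.

          lane_roi : (x, y, w, h) rectangle covering one lane (placeholder values).
          min_area : minimum blob area in pixels to be treated as a vehicle.
          """
          cap = cv2.VideoCapture(video_path)
          subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
          occupied_frames = 0
          x, y, w, h = lane_roi
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              roi = frame[y:y + h, x:x + w]
              mask = subtractor.apply(roi)                   # adaptive background model
              mask = cv2.medianBlur(mask, 5)                 # suppress speckle noise
              _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels (value 127)
              # OpenCV 4 return signature: (contours, hierarchy)
              contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
              if any(cv2.contourArea(c) > min_area for c in contours):
                  occupied_frames += 1
          cap.release()
          return occupied_frames

    Turning occupied frames into a vehicle count requires tracking blobs across frames so that the same vehicle is not counted repeatedly, which is where the machine-learning stage mentioned in the abstract would come in.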

  12. Building and Activating Students' Background Knowledge: It's What They Already Know That Counts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy; Lapp, Diane

    2012-01-01

    Students enter the middle grades with varying amounts of background knowledge. Teachers must assess student background knowledge for gaps or misconceptions and then provide instruction to build on that base. This article discusses effective strategies for assessing and developing students' background knowledge so they can become independent…

  13. Approach for counting vehicles in congested traffic flow

    NASA Astrophysics Data System (ADS)

    Tan, Xiaojun; Li, Jun; Liu, Wei

    2005-02-01

    More and more image sensors are used in intelligent transportation systems. In practice, occlusion is always a problem when counting vehicles in congested traffic. This paper presents an approach to address the problem. The proposed approach consists of three main procedures. First, a new background-subtraction algorithm is applied, with the aim of segmenting moving objects from an illumination-variant background. Second, object tracking is performed using the CONDENSATION algorithm, which avoids the problem of matching vehicles across successive frames. Third, an inspection procedure is executed to count the vehicles: when a bus initially occludes a car and then moves away a few frames later, the car appears in the scene, and the inspection procedure should find the "new" car and add it as a tracked object.

  14. Characterization of the Photon Counting CHASE Jr., Chip Built in a 40-nm CMOS Process With a Charge Sharing Correction Algorithm Using a Collimated X-Ray Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krzyżanowska, A.; Deptuch, G. W.; Maj, P.

    This paper presents the detailed characterization of a single photon counting chip, named CHASE Jr., built in a 40-nm CMOS process and operating with synchrotron radiation. The chip utilizes an on-chip implementation of the C8P1 algorithm. The algorithm eliminates the charge-sharing-related uncertainties, namely the dependence of the number of registered photons on the discriminator threshold set for monochromatic irradiation, and errors in the assignment of an event to a certain pixel. The article presents a short description of the algorithm as well as the architecture of the CHASE Jr. chip. The analog and digital functionalities allowing for proper operation of the C8P1 algorithm are described, namely an offset correction for the two discriminators independently, a two-stage gain correction, and different operation modes of the digital blocks. The results of tests of the C8P1 operation are presented for the chip bump-bonded to a silicon sensor and exposed to a 3.5-μm-wide pencil beam of 8-keV photons of synchrotron radiation. The sensitivity of the algorithm's performance to the chip settings, as well as to the uniformity of the parameters of the analog front-end blocks, was studied. The presented results prove that the C8P1 algorithm enables counting all photons hitting the detector in between readout channels and retrieving the actual photon energy.

  15. Does Learning to Count Involve a Semantic Induction?

    ERIC Educational Resources Information Center

    Davidson, Kathryn; Eng, Kortney; Barner, David

    2012-01-01

    We tested the hypothesis that, when children learn to correctly count sets, they make a semantic induction about the meanings of their number words. We tested the logical understanding of number words in 84 children that were classified as "cardinal-principle knowers" by the criteria set forth by Wynn (1992). Results show that these children often…

  16. Power counting to better jet observables

    NASA Astrophysics Data System (ADS)

    Larkoski, Andrew J.; Moult, Ian; Neill, Duff

    2014-12-01

    Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia 8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.

  17. The robust corrective action priority-an improved approach for selecting competing corrective actions in FMEA based on principle of robust design

    NASA Astrophysics Data System (ADS)

    Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan

    2017-11-01

    Despite being an integral part of risk-based quality improvement efforts, studies improving the quality of corrective action prioritization with the FMEA technique are still limited in the literature, and none of the existing work considers robustness and risk when selecting among competing improvement initiatives. This study proposes a theoretical model for selecting among risk-based competing corrective actions by considering their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.

  18. {sup 90}Y -PET imaging: Exploring limitations and accuracy under conditions of low counts and high random fraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene

    Purpose: {sup 90}Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with {sup 90}Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging {sup 90}Y, and to compare experimental results with clinical data from two types of commercially available {sup 90}Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (non-TOF)] and Biograph mCT (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of {sup 90}Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a {sup 90}Y solution in an 8:1 contrast ratio, and four 30 min acquisitions were performed over a one-week period. Finally, 32 patient data sets (8 treated with Therasphere{sup ®} and 24 with SIR-Spheres{sup ®}) were retrospectively reconstructed, and activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at

  19. Montana Kids Count 1996 Data Book.

    ERIC Educational Resources Information Center

    Healthy Mothers, Healthy Babies--The Montana Coalition, Helena.

    This 1996 KIDS COUNT data book presents comparative data on child well-being for each county in Montana and for the state as a whole. Data in the county profiles, which comprise the bulk of the report, are grouped into: background facts (demographic, mental health, education, security, and income support information); charts showing changes in…

  20. Software electron counting for low-dose scanning transmission electron microscopy.

    PubMed

    Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C

    2018-05-01

    The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning; however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  1. A loop-counting method for covariate-corrected low-rank biclustering of gene-expression and genome-wide association study data.

    PubMed

    Rangan, Aaditya V; McGrouther, Caroline C; Kelsoe, John; Schork, Nicholas; Stahl, Eli; Zhu, Qian; Krishnan, Arjun; Yao, Vicky; Troyanskaya, Olga; Bilaloglu, Seda; Raghavan, Preeti; Bergen, Sarah; Jureus, Anders; Landen, Mikael

    2018-05-14

    A common goal in data-analysis is to sift through a large data-matrix and detect any significant submatrices (i.e., biclusters) that have a low numerical rank. We present a simple algorithm for tackling this biclustering problem. Our algorithm accumulates information about 2-by-2 submatrices (i.e., 'loops') within the data-matrix, and focuses on rows and columns of the data-matrix that participate in an abundance of low-rank loops. We demonstrate, through analysis and numerical-experiments, that this loop-counting method performs well in a variety of scenarios, outperforming simple spectral methods in many situations of interest. Another important feature of our method is that it can easily be modified to account for aspects of experimental design which commonly arise in practice. For example, our algorithm can be modified to correct for controls, categorical- and continuous-covariates, as well as sparsity within the data. We demonstrate these practical features with two examples; the first drawn from gene-expression analysis and the second drawn from a much larger genome-wide-association-study (GWAS).

  2. Low-background Gamma Spectroscopy at Sanford Underground Laboratory

    NASA Astrophysics Data System (ADS)

    Chiller, Christopher; Alanson, Angela; Mei, Dongming

    2014-03-01

    Rare-event physics experiments require the use of material with unprecedented radio-purity. Low background counting assay capabilities and detectors are critical for determining the sensitivity of the planned ultra-low background experiments. A low-background counting, LBC, facility has been built at the 4850-Level Davis Campus of the Sanford Underground Research Facility to perform screening of material and detector parts. Like many rare event physics experiments, our LBC uses lead shielding to mitigate background radiation. Corrosion of lead brick shielding in subterranean installations creates radon plate-out potential as well as human risks of ingestible or respirable lead compounds. Our LBC facilities employ an exposed lead shield requiring clean smooth surfaces. A cleaning process of low-activity silica sand blasting and borated paraffin hot coating preservation was employed to guard against corrosion due to chemical and biological exposures. The resulting lead shield maintains low background contribution integrity while fully encapsulating the lead surface. We report the performance of the current LBC and a plan to develop a large germanium well detector for PMT screening. Support provided by Sd governors research center-CUBED, NSF PHY-0758120 and Sanford Lab.

  3. The Significance of an Excess in a Counting Experiment: Assessing the Impact of Systematic Uncertainties and the Case with a Gaussian Background

    NASA Astrophysics Data System (ADS)

    Vianello, Giacomo

    2018-05-01

    Several experiments in high-energy physics and astrophysics can be treated as on/off measurements, where an observation potentially containing a new source or effect (“on” measurement) is contrasted with a background-only observation free of the effect (“off” measurement). In counting experiments, the significance of the new source or effect can be estimated with a widely used formula from Li & Ma, which assumes that both measurements are Poisson random variables. In this paper we study three other cases: (i) the ideal case where the background measurement has no uncertainty, which can be used to study the maximum sensitivity that an instrument can achieve, (ii) the case where the background estimate b in the off measurement has an additional systematic uncertainty, and (iii) the case where b is a Gaussian random variable instead of a Poisson random variable. The latter case applies when b comes from a model fitted on archival or ancillary data, or from the interpolation of a function fitted on data surrounding the candidate new source/effect. Practitioners typically use a formula that is only valid when b is large and when its uncertainty is very small, while we derive a general formula that can be applied in all regimes. We also develop simple methods that can be used to assess how much an estimate of significance is sensitive to systematic uncertainties on the efficiency or on the background. Examples of applications include the detection of short gamma-ray bursts and of new X-ray or γ-ray sources. All the techniques presented in this paper are made available in a Python code that is ready to use.
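
    For reference, the widely used Li & Ma significance for the Poisson on/off case mentioned above can be written in a few lines of Python; the example numbers below are illustrative, not taken from the paper.

      import numpy as np

      def li_ma_significance(n_on, n_off, alpha):
          """Li & Ma (1983, Eq. 17) significance for an on/off counting measurement.

          n_on  : counts in the on-source measurement
          n_off : counts in the off-source (background) measurement
          alpha : ratio of on-source to off-source exposure
          """
          n_tot = n_on + n_off
          term_on = n_on * np.log((1.0 + alpha) / alpha * (n_on / n_tot))
          term_off = n_off * np.log((1.0 + alpha) * (n_off / n_tot))
          return np.sqrt(2.0 * (term_on + term_off))

      # Example: 120 counts on source in 1 h, 300 counts off source in 3 h (alpha = 1/3)
      print(li_ma_significance(120, 300, 1.0 / 3.0))   # ~1.7 sigma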

  4. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
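
    A minimal Python sketch of the probability mass function of the net count OC = G - IRR*B described above, obtained by summing over the possible blank counts, is given below; the parameter names, example values, and truncation limit are illustrative assumptions.

      import numpy as np
      from scipy.stats import poisson

      def net_count_pmf(n, mu_sample, mu_blank, irr, j_max=500):
          """P(OC = n) for OC = G - irr*B.

          B ~ Poisson(mu_blank) is the blank count in the blank count time, and
          G ~ Poisson(mu_sample + irr*mu_blank) is the gross count in the sample
          count time, which is irr times longer.  j_max must be large relative
          to mu_blank for the truncated sum to be accurate.
          """
          j = np.arange(j_max)              # possible blank counts
          g = n + irr * j                   # gross counts consistent with OC = n
          ok = g >= 0
          return float(np.sum(poisson.pmf(g[ok], mu_sample + irr * mu_blank)
                              * poisson.pmf(j[ok], mu_blank)))

      # Example: expected blank of 3 counts, expected sample contribution of 10 counts,
      # sample counted 4 times longer than the blank; the PMF should sum to ~1.
      print(sum(net_count_pmf(n, 10.0, 3.0, 4) for n in range(-80, 150)))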

  5. Quantum error correction of continuous-variable states against Gaussian noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ralph, T. C.

    2011-08-15

    We describe a continuous-variable error correction protocol that can correct the Gaussian noise induced by linear loss on Gaussian states. The protocol can be implemented using linear optics and photon counting. We explore the theoretical bounds of the protocol as well as the expected performance given current knowledge and technology.

  6. Breast tissue decomposition with spectral distortion correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee

    2014-01-01

    Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953

  7. Different binarization processes validated against manual counts of fluorescent bacterial cells.

    PubMed

    Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W

    2016-09-01

    State-of-the-art software methods (such as fixed-value or statistical approaches) for creating a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process, the threshold and background-subtraction values are incremented until the number of particles smaller than a typical bacterial cell is less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure as well as the exposure time. The biological significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (Propidium Iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.

  8. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties, Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, T.; Takeuchi, T. T.

    2001-12-01

    A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that the mass-scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big-grain dust temperature T_dust is calculated from a physical consideration of energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. We then make predictions for faint infrared counts (at 15, 60, 90, 170, 450, and 850 μm) and the CBR with this model. We found considerably different results from most previous works based on the empirical L_IR-T_dust relation; in particular, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in the galaxies making up the FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on the MIR CBR from TeV gamma-ray observations and the COBE detections of the FIR CBR. The authors thank the Japan Society for the Promotion of Science for financial support.

  9. WBC count

    MedlinePlus

    Leukocyte count; White blood cell count; White blood cell differential; WBC differential; Infection - WBC count; Cancer - WBC count ... called leukopenia. A count less than 4,500 cells per microliter (4.5 × 10^9/L) is ...

  10. Deep galaxy counts in the K band with the Keck telescope

    NASA Technical Reports Server (NTRS)

    Djorgovski, S.; Soifer, B. T.; Pahre, M. A.; Larkin, J. E.; Smith, J. D.; Neugebauer, G.; Smail, I.; Matthews, K.; Hogg, D. W.; Blandford, R. D.

    1995-01-01

    We present deep galaxy counts in the K (lambda 2.2 micrometer) band, obtained at the W. M. Keck 10 m telescope. The data reach limiting magnitudes K approximately 24 mag, about 5 times deeper than the deepest published K-band images to date. The counts are performed in three small (approximately 1 arcmin), widely separated high-latitude fields. Extensive Monte Carlo tests were used to derive the completeness corrections and minimize photometric biases. The counts continue to rise, with no sign of a turnover, down to the limits of our data, with a logarithmic slope of d log N/dm = 0.315 ± 0.02 between K = 20 and 24 mag. This implies a cumulative surface density of approximately 5 × 10^5 galaxies/sq deg, or approximately 2 × 10^10 over the entire sky, down to K = 24 mag. Our counts are in good agreement with, although slightly lower than, those from the Hawaii Deep Survey by Cowie and collaborators; the discrepancies may be due to the small differences in the aperture corrections. We compare our counts with some of the available theoretical predictions. The data do not require models with a high value of Omega_0, but can be well fitted by models with no (or little) evolution and cosmologies with a low value of Omega_0. Given the uncertainties in the models, it may be premature to put useful constraints on the value of Omega_0 from the counts alone. Optical-to-IR colors are computed using CCD data obtained previously at Palomar. We find a few red galaxies with (r-K) approximately greater than 5 mag, or (i-K) approximately greater than 5 mag; these may be ellipticals at z approximately 1. While the redshift distribution of galaxies in our counts is still unknown, the flux limits reached would allow us to detect unobscured L* galaxies out to substantial redshifts (z greater than 3?).

  11. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to the limited spatial resolution, partial volume effect has been a major degrading factor on quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM), pGTM followed by multi-target correction (MTC), pGTM with known concentration in blood pool, the former followed by MTC and our newly proposed methods, which perform the MTC method iteratively, where the mean values in all regions are estimated and updated by the MTC-corrected images each time in the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of PVC methods in both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed our proposed iterative methods provide superior performance than other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity even at low count levels. The animal study results indicated the effectiveness of PVC to correct the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low

  12. Predictions of CD4 lymphocytes’ count in HIV patients from complete blood count

    PubMed Central

    2013-01-01

    Background HIV diagnosis, prognostic and treatment requires T CD4 lymphocytes’ number from flow cytometry, an expensive technique often not available to people in developing countries. The aim of this work is to apply a previous developed methodology that predicts T CD4 lymphocytes’ value based on total white blood cell (WBC) count and lymphocytes count applying sets theory, from information taken from the Complete Blood Count (CBC). Methods Sets theory was used to classify into groups named A, B, C and D the number of leucocytes/mm3, lymphocytes/mm3, and CD4/μL3 subpopulation per flow cytometry of 800 HIV diagnosed patients. Union between sets A and C, and B and D were assessed, and intersection between both unions was described in order to establish the belonging percentage to these sets. Results were classified into eight ranges taken by 1000 leucocytes/mm3, calculating the belonging percentage of each range with respect to the whole sample. Results Intersection (A ∪ C) ∩ (B ∪ D) showed an effectiveness in the prediction of 81.44% for the range between 4000 and 4999 leukocytes, 91.89% for the range between 3000 and 3999, and 100% for the range below 3000. Conclusions Usefulness and clinical applicability of a methodology based on sets theory were confirmed to predict the T CD4 lymphocytes’ value, beginning with WBC and lymphocytes’ count from CBC. This methodology is new, objective, and has lower costs than the flow cytometry which is currently considered as Gold Standard. PMID:24034560

  13. Chromatographic background drift correction coupled with parallel factor analysis to resolve coelution problems in three-dimensional chromatographic data: quantification of eleven antibiotics in tap water samples by high-performance liquid chromatography coupled with a diode array detector.

    PubMed

    Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin

    2013-08-09

    Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. fatalityCMR: capture-recapture software to correct raw counts of wildlife fatalities using trial experiments for carcass detection probability and persistence time

    USGS Publications Warehouse

    Peron, Guillaume; Hines, James E.

    2014-01-01

    Many industrial and agricultural activities involve wildlife fatalities by collision, poisoning or other involuntary harvest: wind turbines, highway network, utility network, tall structures, pesticides, etc. Impacted wildlife may benefit from official protection, including the requirement to monitor the impact. Carcass counts can often be conducted to quantify the number of fatalities, but they need to be corrected for carcass persistence time (removal by scavengers and decay) and detection probability (searcher efficiency). In this article we introduce a new piece of software that fits a superpopulation capture-recapture model to raw count data. It uses trial data to estimate detection and daily persistence probabilities. A recurrent issue is that fatalities of rare, protected species are infrequent, in which case the software offers the option to switch to an ‘evidence of absence’ mode, i.e., estimate the number of carcasses that may have been missed by field crews. The software allows distinguishing between different turbine types (e.g. different vegetation cover under turbines, or different technical properties), as well between two carcass age-classes or states, with transition between those classes (e.g, fresh and dry). There is a data simulation capacity that may be used at the planning stage to optimize sampling design. Resulting mortality estimates can be used 1) to quantify the required amount of compensation, 2) inform mortality projections for proposed development sites, and 3) inform decisions about management of existing sites.

  15. Background Conditions for the October 29, 2003 Solar Flare by the AVS-F Apparatus Data

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Lyapin, A. R.; Troitskaya, E. V.

    The background model for the AVS-F apparatus onboard the CORONAS-F satellite for the October 29, 2003 X10-class solar flare is discussed in the present work. The model was developed for the AVS-F count rates in the low- and high-energy spectral ranges, both in individual channels and summed. Count rates were approximated by high-order polynomials taking into account the mean count rate in the geomagnetic equatorial region at different parts of the orbits and the Kp-index averaged over 5 bins in the time interval from -24 to -12 hours before the time of geomagnetic equator passage. The observed averaged count rates at the equator in the region of geomagnetic latitude ±5° and the estimated minimum count rate values coincide within statistical errors for all selected orbit parts used for background modeling. This model will be used to refine the estimated energy of the spectral features registered during the solar flare and for detailed analysis of their temporal profiles, both in the corresponding energy bands and in the summed energy range.
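
    For illustration only, a smooth background of the kind described above can be represented by a high-order polynomial fitted to equatorial count rates; the times, rates and polynomial degree below are hypothetical.

```python
import numpy as np

# Hypothetical equatorial background count rates (t in hours, rate in counts/s),
# approximated by a high-order polynomial as a stand-in for the smooth background
# model described in the abstract.
t = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5, 12.0])
rate = np.array([118.0, 121.0, 119.0, 124.0, 130.0, 127.0, 125.0, 122.0, 120.0])

background = np.poly1d(np.polyfit(t, rate, deg=4))

# The modeled background can then be subtracted from the flare-time count rate.
print(background(5.2))
```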

  16. Deep 3 GHz number counts from a P(D) fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.

    2014-05-01

    Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam-1, and a radio background temperature ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
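
    The paper's P(D) machinery is analytic and fitted with MCMC; the sketch below only illustrates, by brute-force simulation, how a power-law source count plus a Gaussian beam and instrumental noise produce a P(D) histogram. The count slope, flux limits, beam size and noise level are made-up values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

# Toy survey parameters (all invented): power-law differential counts
# dN/dS ~ S**-gamma between s_min and s_max, a Gaussian beam, Gaussian noise.
npix, pixels_per_beam, noise_jy = 2048, 10.0, 1.0e-6
s_min, s_max, gamma, n_src = 5e-8, 1e-3, 1.7, 200_000

# Inverse-CDF sampling of source flux densities from the power law.
u = rng.random(n_src)
flux = (s_min**(1 - gamma) + u * (s_max**(1 - gamma) - s_min**(1 - gamma)))**(1.0 / (1 - gamma))

# Drop the sources onto a pixel grid and convolve with the beam (Jy/beam units).
sky = np.zeros((npix, npix))
ij = rng.integers(0, npix, size=(2, n_src))
np.add.at(sky, (ij[0], ij[1]), flux)
beam_sigma = np.sqrt(pixels_per_beam / (2.0 * np.pi))
image = gaussian_filter(sky, beam_sigma) * pixels_per_beam   # restores peak flux per beam
image += rng.normal(0.0, noise_jy, image.shape)

# The histogram of map values is the P(D) distribution probed by the paper.
hist, edges = np.histogram(image, bins=200)
print(edges[np.argmax(hist)])
```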

  17. YALINA-Booster subcritical assembly pulsed-neutron experiments: detector dead time and spatial corrections.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Gohar, Y.; Nuclear Engineering Division

    In almost every detector counting system, a minimal dead time is required to record two successive events as two separate pulses. Due to the random nature of neutron interactions in the subcritical assembly, there is always some probability that a true neutron event will not be recorded because it occurs too close to the preceding event. These losses may become rather severe for counting systems with high counting rates, and should be corrected before any utilization of the experimental data. This report examines the dead time effects for the pulsed neutron experiments of the YALINA-Booster subcritical assembly. The nonparalyzable model is utilized to correct the experimental data for dead time. Overall, the reactivity values are increased by 0.19$ and 0.32$ after the spatial corrections for the YALINA-Booster 36% and 21% configurations respectively. The differences of the reactivities obtained with He-3 long or short detectors at the same detector channel diminish after the dead time corrections of the experimental data for the 36% YALINA-Booster configuration. In addition, better agreements between reactivities obtained from different experimental data sets are also observed after the dead time corrections for the 21% YALINA-Booster configuration.
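
    The non-paralyzable correction referred to above is the textbook formula n = m/(1 - m·τ); a one-line sketch with made-up numbers:

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# Example: 5e4 counts/s measured with a 2 microsecond dead time
# -> about 5.56e4 counts/s true rate (roughly 10% of events were lost).
print(true_rate_nonparalyzable(5.0e4, 2.0e-6))
```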

  18. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ~30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
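
    A hedged sketch of the three standard CRP correction factors (barometric pressure, water vapour, incoming neutron intensity): the functional forms follow the usual cosmic-ray neutron sensing conventions, and the coefficients are typical literature values rather than the CosmOz calibration.

```python
import math

def correct_neutron_counts(n_raw, pressure, pressure_ref, vapour, vapour_ref,
                           incoming, incoming_ref, beta=0.0076, a=0.0054):
    """Apply the three standard CRP corrections (sketch only; beta in hPa^-1 and
    a in m^3 g^-1 are typical literature values, not the CosmOz calibration).

    pressure : station air pressure (hPa); vapour : absolute humidity (g m^-3);
    incoming : neutron-monitor intensity for the same period.
    """
    f_pressure = math.exp(beta * (pressure - pressure_ref))   # barometric correction
    f_vapour = 1.0 + a * (vapour - vapour_ref)                # water-vapour correction
    f_incoming = incoming_ref / incoming                      # incoming-intensity correction
    return n_raw * f_pressure * f_vapour * f_incoming

# Hypothetical hourly count of 2400 under slightly high pressure and humidity.
print(round(correct_neutron_counts(2400, 1015.0, 1010.0, 12.0, 10.0, 155.0, 150.0)))
```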

  19. Development of an automated asbestos counting software based on fluorescence microscopy.

    PubMed

    Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio

    2015-01-01

    An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop a fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for the samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in accuracy of fiber identification, the developed software could already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.

  20. How Fred Hoyle Reconciled Radio Source Counts and the Steady State Cosmology

    NASA Astrophysics Data System (ADS)

    Ekers, Ron

    2012-09-01

    In 1969 Fred Hoyle invited me to his Institute of Theoretical Astronomy (IOTA) in Cambridge to work with him on the interpretation of the radio source counts. This was a period of extreme tension with Ryle just across the road using the steep slope of the radio source counts to argue that the radio source population was evolving and Hoyle maintaining that the counts were consistent with the steady state cosmology. Both of these great men had made some correct deductions but they had also both made mistakes. The universe was evolving, but the source counts alone could tell us very little about cosmology. I will try to give some indication of the atmosphere and the issues at the time and look at what we can learn from this saga. I will conclude by briefly summarising the exponential growth of the size of the radio source counts since the early days and ask whether our understanding has grown at the same rate.

  1. Simultaneous measurement of tritium and radiocarbon by ultra-low-background proportional counting.

    PubMed

    Mace, Emily; Aalseth, Craig; Alexander, Tom; Back, Henning; Day, Anthony; Hoppe, Eric; Keillor, Martin; Moran, Jim; Overman, Cory; Panisko, Mark; Seifert, Allen

    2017-08-01

    Use of ultra-low-background capabilities at Pacific Northwest National Laboratory provides enhanced sensitivity for measurement of low-activity sources of tritium and radiocarbon using proportional counters. Tritium levels are nearly back to pre-nuclear-test backgrounds (~2-8 TU in rainwater), which can complicate their dual measurement with radiocarbon due to overlap in the beta decay spectra. We present results of single-isotope proportional counter measurements used to analyze a dual-isotope methane sample synthesized from ~120 mg of H2O and present sensitivity results. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Simultaneous measurement of tritium and radiocarbon by ultra-low-background proportional counting

    DOE PAGES

    Mace, Emily; Aalseth, Craig; Alexander, Tom; ...

    2016-12-21

    Use of ultra-low-background capabilities at Pacific Northwest National Laboratory provides enhanced sensitivity for measurement of low-activity sources of tritium and radiocarbon using proportional counters. Tritium levels are nearly back to pre-nuclear-test backgrounds (~2-8 TU in rainwater), which can complicate their dual measurement with radiocarbon due to overlap in the beta decay spectra. In this paper, we present results of single-isotope proportional counter measurements used to analyze a dual-isotope methane sample synthesized from ~120 mg of H2O and present sensitivity results.

  3. Corrections Officer Candidate Information Booklet and User's Manual. Standards and Training for Corrections Program.

    ERIC Educational Resources Information Center

    California State Board of Corrections, Sacramento.

    This package consists of an information booklet for job candidates preparing to take California's Corrections Officer Examination and a user's manual intended for those who will administer the examination. The candidate information booklet provides background information about the development of the Corrections Officer Examination, describes its…

  4. Manners of Speaking: Linguistic Capital and the Rhetoric of Correctness in Late-Nineteenth-Century America

    ERIC Educational Resources Information Center

    Herring, William Rodney, Jr.

    2009-01-01

    A number of arguments appeared in the late-nineteenth-century United States about "correctness" in language, arguments for and against enforcing a standard of correctness and arguments about what should count as correct in language. Insofar as knowledge about and facility with "correct" linguistic usage could affect one's standing in the social…

  5. Estimation and correction of visibility bias in aerial surveys of wintering ducks

    USGS Publications Warehouse

    Pearse, A.T.; Gerard, P.D.; Dinsmore, S.J.; Kaminski, R.M.; Reinecke, K.J.

    2008-01-01

    Incomplete detection of all individuals leading to negative bias in abundance estimates is a pervasive source of error in aerial surveys of wildlife, and correcting that bias is a critical step in improving surveys. We conducted experiments using duck decoys as surrogates for live ducks to estimate bias associated with surveys of wintering ducks in Mississippi, USA. We found detection of decoy groups was related to wetland cover type (open vs. forested), group size (1-100 decoys), and interaction of these variables. Observers who detected decoy groups reported counts that averaged 78% of the decoys actually present, and this counting bias was not influenced by either covariate cited above. We integrated this sightability model into estimation procedures for our sample surveys with weight adjustments derived from probabilities of group detection (estimated by logistic regression) and count bias. To estimate variances of abundance estimates, we used bootstrap resampling of transects included in aerial surveys and data from the bias-correction experiment. When we implemented bias correction procedures on data from a field survey conducted in January 2004, we found bias-corrected estimates of abundance increased 36-42%, and associated standard errors increased 38-55%, depending on species or group estimated. We deemed our method successful for integrating correction of visibility bias in an existing sample survey design for wintering ducks in Mississippi, and we believe this procedure could be implemented in a variety of sampling problems for other locations and species.
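
    The weighting scheme can be illustrated with a toy Horvitz-Thompson style estimator: each detected group's count is divided by the product of its modelled detection probability and the 78% counting bias. The logistic coefficients below are stand-ins, not the published model.

```python
import math

def estimate_abundance(detected_groups, count_bias=0.78):
    """Horvitz-Thompson style estimate for one survey (illustrative sketch only).

    detected_groups : list of dicts with the reported count and the covariates of
                      a logistic detection model; coefficients below are stand-ins.
    """
    b0, b_forest, b_logsize = 0.4, -1.1, 0.9   # hypothetical logistic coefficients
    total = 0.0
    for g in detected_groups:
        logit = b0 + b_forest * g["forested"] + b_logsize * math.log(g["count"])
        p_detect = 1.0 / (1.0 + math.exp(-logit))
        # Weight each reported count by 1 / (detection probability x counting bias).
        total += g["count"] / (p_detect * count_bias)
    return total

print(round(estimate_abundance([{"count": 40, "forested": 0},
                                {"count": 12, "forested": 1}])))   # ~71 ducks
```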

  6. Comparison of gene expression microarray data with count-based RNA measurements informs microarray interpretation.

    PubMed

    Richard, Arianne C; Lyons, Paul A; Peters, James E; Biasci, Daniele; Flint, Shaun M; Lee, James C; McKinney, Eoin F; Siegel, Richard M; Smith, Kenneth G C

    2014-08-04

    Although numerous investigations have compared gene expression microarray platforms, preprocessing methods and batch correction algorithms using constructed spike-in or dilution datasets, there remains a paucity of studies examining the properties of microarray data using diverse biological samples. Most microarray experiments seek to identify subtle differences between samples with variable background noise, a scenario poorly represented by constructed datasets. Thus, microarray users lack important information regarding the complexities introduced in real-world experimental settings. The recent development of a multiplexed, digital technology for nucleic acid measurement enables counting of individual RNA molecules without amplification and, for the first time, permits such a study. Using a set of human leukocyte subset RNA samples, we compared previously acquired microarray expression values with RNA molecule counts determined by the nCounter Analysis System (NanoString Technologies) in selected genes. We found that gene measurements across samples correlated well between the two platforms, particularly for high-variance genes, while genes deemed unexpressed by the nCounter generally had both low expression and low variance on the microarray. Confirming previous findings from spike-in and dilution datasets, this "gold-standard" comparison demonstrated signal compression that varied dramatically by expression level and, to a lesser extent, by dataset. Most importantly, examination of three different cell types revealed that noise levels differed across tissues. Microarray measurements generally correlate with relative RNA molecule counts within optimal ranges but suffer from expression-dependent accuracy bias and precision that varies across datasets. We urge microarray users to consider expression-level effects in signal interpretation and to evaluate noise properties in each dataset independently.

  7. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, the video-based ones allow for more versatile functionalities and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are: robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and static object changes, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed, in the HSV color space, to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
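
    The adaptive background update and automatic thresholding can be sketched as follows (a generic scheme, not the authors' Matlab code); the learning rates and noise level are assumptions.

```python
import numpy as np

def update_background(background, frame, motion_mask,
                      alpha_static=0.05, alpha_moving=0.001):
    """Adaptive running-average background: pixels flagged as moving adapt very
    slowly, static pixels track illumination changes faster (generic scheme)."""
    alpha = np.where(motion_mask, alpha_moving, alpha_static)
    return (1.0 - alpha) * background + alpha * frame

def segment_foreground(background, frame, k=2.5, noise_sigma=4.0):
    """Threshold the absolute background difference; k * noise_sigma plays the
    role of the automatic threshold mentioned in the abstract."""
    diff = np.abs(frame.astype(float) - background)
    return diff > k * noise_sigma
```

    In a full system the foreground mask would additionally be cleaned of shadows in HSV space and the resulting blobs tracked with a Kalman filter, as the abstract describes.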

  8. High-rate dead-time corrections in a general purpose digital pulse processing system

    PubMed Central

    Abbene, Leonardo; Gerardi, Gaetano

    2015-01-01

    Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula of a series of two types of dead-times is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error in ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270

  9. HIGH-RESOLUTION IMAGING OF THE ATLBS REGIONS: THE RADIO SOURCE COUNTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thorat, K.; Subrahmanyan, R.; Saripalli, L.

    2013-01-01

    The Australia Telescope Low-brightness Survey (ATLBS) regions have been mosaic imaged at a radio frequency of 1.4 GHz with 6'' angular resolution and 72 μJy beam-1 rms noise. The images (centered at R.A. 00h35m00s, decl. -67°00'00'' and R.A. 00h59m17s, decl. -67°00'00'', J2000 epoch) cover 8.42 deg^2 of sky area and have no artifacts or imaging errors above the image thermal noise. Multi-resolution radio and optical r-band images (made using the 4 m CTIO Blanco telescope) were used to recognize multi-component sources and prepare a source list; the detection threshold was 0.38 mJy in a low-resolution radio image made with a beam FWHM of 50''. Radio source counts in the flux density range 0.4-8.7 mJy are estimated, with corrections applied for noise bias, effective area, and resolution bias. The resolution bias is mitigated using low-resolution radio images, while effects of source confusion are removed by using high-resolution images for identifying blended sources. Below 1 mJy the ATLBS counts are systematically lower than the previous estimates. Showing no evidence for an upturn down to 0.4 mJy, they do not require any changes in the radio source population down to the limit of the survey. The work suggests that automated image analysis for counts may be dependent on the ability of the imaging to reproduce connecting emission with low surface brightness and on the ability of the algorithm to recognize sources, which may require that source finding algorithms effectively work with multi-resolution and multi-wavelength data. The work underscores the importance of using source lists, as opposed to component lists, and correcting for the noise bias in order to precisely estimate counts close to the image noise and determine the upturn at sub-mJy flux density.

  10. The effect of blood cell count on coronary flow in patients with coronary slow flow phenomenon

    PubMed Central

    Soylu, Korhan; Gulel, Okan; Yucel, Huriye; Yuksel, Serkan; Aksan, Gokhan; Soylu, Ayşegül İdil; Demircan, Sabri; Yılmaz, Özcan; Sahin, Mahmut

    2014-01-01

    Background and Objective: The coronary slow flow phenomenon (CSFP) is a coronary artery disease with a benign course, but its pathological mechanisms are not yet fully understood. The purpose of this controlled study was to investigate the cellular content of blood in patients diagnosed with CSFP and its relationship with coronary flow rates. Methods: Selective coronary angiographies of 3368 patients were analyzed to assess Thrombolysis in Myocardial Infarction (TIMI) frame count (TFC) values. Seventy-eight of them had CSFP, and their demographic and laboratory findings were compared with 61 patients with normal coronary flow. Results: Patients’ demographic characteristics were similar in both groups. Mean corrected TFC (cTFC) values were significantly elevated in CSFP patients (p<0.001). Furthermore, hematocrit and hemoglobin values, and eosinophil and basophil counts of the CSFP patients were significantly elevated compared to the values obtained in the control group (p=0.005, p=0.047, p=0.001 and p=0.002, respectively). The increase observed in hematocrit and eosinophil levels showed significant correlations with increased TFC values (r=0.288 and r=0.217, respectively). Conclusion: Significant changes have been observed in the cellular composition of blood in patients diagnosed with CSFP as compared to the patients with normal coronary blood flow. The increases in hematocrit levels and in the eosinophil and basophil counts may have direct or indirect effects on the rate of coronary blood flow. PMID:25225502

  11. Low Background Signal Readout Electronics for the MAJORANA DEMONSTRATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guinn, I.; Abgrall, N.; Arnquist, Isaac J.

    2015-03-18

    The Majorana Demonstrator (MJD)[1] is an array of p-type point contact (PPC) high-purity germanium (HPGe) detectors intended to search for neutrinoless double beta decay (0νββ decay) in 76Ge. MJD will consist of 40 kg of detectors, 30 kg of which will be isotopically enriched to 87% 76Ge. The array will consist of 14 strings of four or five detectors placed in two separate cryostats. One of the main goals of the experiment is to demonstrate the feasibility of building a tonne-scale array of detectors to search for 0νββ decay with a much higher sensitivity. This involves achieving backgrounds in the 4 keV region of interest (ROI) around the 2039 keV Q-value of the ββ decay of less than 1 count/ROI-t-y. Because many backgrounds will not directly scale with detector mass, the specific background goal of MJD is less than 3 counts/ROI-t-y.

  12. The faint galaxy contribution to the diffuse extragalactic background light

    NASA Technical Reports Server (NTRS)

    Cole, Shaun; Treyer, Marie-Agnes; Silk, Joseph

    1992-01-01

    Models of the faint galaxy contribution to the diffuse extragalactic background light are presented, which are consistent with current data on faint galaxy number counts and redshifts. The autocorrelation function of surface brightness fluctuations in the extragalactic diffuse light is predicted, and the way in which these predictions depend on the cosmological model and assumptions of biasing is determined. It is confirmed that the recent deep infrared number counts are most compatible with a high density universe (Omega-0 is approximately equal to 1) and that the steep blue counts then require an extra population of rapidly evolving blue galaxies. The faintest presently detectable galaxies produce an interesting contribution to the extragalactic diffuse light, and still fainter galaxies may also produce a significant contribution. These faint galaxies still only produce a small fraction of the total optical diffuse background light, but on scales of a few arcminutes to a few degrees, they produce a substantial fraction of the fluctuations in the diffuse light.

  13. Observation-Corrected Precipitation Estimates in GEOS-5

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; Liu, Qing

    2014-01-01

    Several GEOS-5 applications, including the GEOS-5 seasonal forecasting system and the MERRA-Land data product, rely on global precipitation data that have been corrected with satellite- and/or gauge-based precipitation observations. This document describes the methodology used to generate the corrected precipitation estimates and their use in GEOS-5 applications. The corrected precipitation estimates are derived by disaggregating publicly available, observationally based, global precipitation products from daily or pentad totals to hourly accumulations using background precipitation estimates from the GEOS-5 atmospheric data assimilation system. Depending on the specific combination of the observational precipitation product and the GEOS-5 background estimates, the observational product may also be downscaled in space. The resulting corrected precipitation data product is at the finer temporal and spatial resolution of the GEOS-5 background and matches the observed precipitation at the coarser scale of the observational product, separately for each day (or pentad) and each grid cell.
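
    A minimal sketch of the disaggregation step: hourly background accumulations provide the temporal weights, and the observed daily total fixes the sum. This ignores the spatial downscaling and is not the GEOS-5 code.

```python
import numpy as np

def disaggregate_daily(obs_daily_mm, background_hourly_mm):
    """Disaggregate an observed daily total to hourly values using the temporal
    pattern of the model background (sketch of the general idea only)."""
    bg = np.asarray(background_hourly_mm, dtype=float)
    weights = bg / bg.sum() if bg.sum() > 0 else np.full(bg.size, 1.0 / bg.size)
    # Hourly values keep the background's timing but sum to the observed total.
    return obs_daily_mm * weights

hourly = disaggregate_daily(12.0, [0] * 6 + [1, 3, 4, 2] + [0] * 14)
print(hourly.sum())   # 12.0
```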

  14. Aerial population estimates of wild horses (Equus caballus) in the adobe town and salt wells creek herd management areas using an integrated simultaneous double-count and sightability bias correction technique

    USGS Publications Warehouse

    Lubow, Bruce C.; Ransom, Jason I.

    2007-01-01

    An aerial survey technique combining simultaneous double-count and sightability bias correction methodologies was used to estimate the population of wild horses inhabiting Adobe Town and Salt Wells Creek Herd Management Areas, Wyoming. Based on 5 surveys over 4 years, we conclude that the technique produced estimates consistent with the known number of horses removed between surveys and an annual population growth rate of 16.2 percent per year. Therefore, evidence from this series of surveys supports the validity of this survey method. Our results also indicate that the ability of aerial observers to see horse groups is very strongly dependent on skill of the individual observer, size of the horse group, and vegetation cover. It is also more modestly dependent on the ruggedness of the terrain and the position of the sun relative to the observer. We further conclude that censuses, or uncorrected raw counts, are inadequate estimates of population size for this herd. Such uncorrected counts were all undercounts in our trials, and varied in magnitude from year to year and observer to observer. As of April 2007, we estimate that the population of the Adobe Town /Salt Wells Creek complex is 906 horses with a 95 percent confidence interval ranging from 857 to 981 horses.
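
    For intuition, the double-count part alone reduces to a two-observer mark-recapture estimate (Chapman form of the Lincoln-Petersen estimator); the integrated model in the study goes further by adding sightability covariates. The numbers below are hypothetical.

```python
def double_count_estimate(n_front, n_rear, n_both):
    """Chapman form of the Lincoln-Petersen estimator for a simultaneous
    double-observer survey (the study combines this idea with a sightability
    model rather than using it alone)."""
    return (n_front + 1) * (n_rear + 1) / (n_both + 1) - 1

# Hypothetical: 52 and 47 groups seen by front and rear observers, 40 by both
# -> about 61 groups estimated present.
print(round(double_count_estimate(52, 47, 40)))
```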

  15. Correction of beam-beam effects in luminosity measurement in the forward region at CLIC

    NASA Astrophysics Data System (ADS)

    Lukić, S.; Božović-Jelisavčić, I.; Pandurović, M.; Smiljanić, I.

    2013-05-01

    Procedures for correcting the beam-beam effects in luminosity measurements at CLIC at 3 TeV center-of-mass energy are described and tested using Monte Carlo simulations. The angular counting loss due to the combined Beamstrahlung and initial-state radiation effects is corrected based on the reconstructed velocity of the collision frame of the Bhabha scattering. The distortion of the luminosity spectrum due to the initial-state radiation is corrected by deconvolution. At the end, the counting bias due to the finite calorimeter energy resolution is numerically corrected. To test the procedures, BHLUMI Bhabha event generator, and Guinea-Pig beam-beam simulation were used to generate the outgoing momenta of Bhabha particles in the bunch collisions at CLIC. The systematic effects of the beam-beam interaction on the luminosity measurement are corrected with precision of 1.4 permille in the upper 5% of the energy, and 2.7 permille in the range between 80 and 90% of the nominal center-of-mass energy.

  16. Leucocyte count in young adults with first-ever ischaemic stroke: associated factors and association on prognosis.

    PubMed

    Heikinheimo, Terttu; Putaala, Jukka; Haapaniemi, Elena; Kaste, Markku; Tatlisumak, Turgut

    2015-02-01

    Limited data exist on the associated factors and correlation of leucocyte count to outcome in young adults with first-ever ischaemic stroke. Our objectives were to investigate factors associated with elevated leucocyte count and whether there is correlation between leucocyte count and short- and long-term outcomes. Of our database of 1008 consecutive patients aged 15 to 49, we included those with leucocyte count measured within the first two days from stroke onset. Outcomes were three-month and long-term disability, death, and vascular events. Linear regression was used to explore baseline variables associated with leucocyte count. Logistic regression and Cox proportional models studied the association between leucocyte count and clinical outcomes. In our study cohort of 781 patients (61.7% males; mean age 41.4 years), mean leucocyte count was high: 8.8 ± 3.1 × 10(9) cells/L (Reference range: 3.4-8.2 × 10(9) cells/L). Higher leucocyte levels were associated with dyslipidaemia, smoking, peripheral arterial disease, stroke severity, and lesion size. After adjustment for age, gender, relevant risk factors, both continuous leucocyte count and the highest quartile of leucocyte count were independently associated with unfavourable three-month outcome. Regarding events in the long-term (follow-up 8.1 ± 4.2 years in survivors), no association between leucocyte count and the event risks appeared. Among young stroke patients, high leucocyte count was a common finding. It was associated with vascular disease and its risk factors as well as severity of stroke, but it was also independently associated with unfavourable three-month outcome in these patients. There was no association with the long-term outcome. [Correction added on 31 October 2013 after first online publication: In the Results section of the Abstract, the cohort of 797 patients in this study was corrected to 781 patients.]. © 2013 The Authors. International Journal of Stroke © 2013 World Stroke Organization.

  17. Potential errors in body composition as estimated by whole body scintillation counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.

    Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of 40K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent 40K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting 214Bi, a progeny of radon. 40K and 214Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in 40K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from 214Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by 40K counting.

  18. Potential errors in body composition as estimated by whole body scintillation counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.

    Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of 40K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent 40K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting 214Bi, a progeny of radon. 40K and 214Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in 40K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from 214Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by 40K counting.

  19. XUV Photometer System (XPS): New Dark-Count Corrections Model and Improved Data Products

    NASA Astrophysics Data System (ADS)

    Elliott, J. P.; Vanier, B.; Woods, T. N.

    2017-12-01

    We present newly updated dark-count calibrations for the SORCE XUV Photometer System (XPS) and the resultant improved data products released in March of 2017. The SORCE mission has provided a 14-year solar spectral irradiance record, and the XPS contributes to this record in the 0.1 nm to 40 nm range. The SORCE spacecraft has been operating in what is known as Day-Only Operations (DO-Op) mode since February of 2014. In this mode it is not possible to collect data, including dark counts, when the spacecraft is in eclipse as we did prior to DO-Op. Instead, we take advantage of the XPS filter wheel and collect these data when the wheel is in a "dark" position. Further, in this mode dark data are not always available for all observations, requiring an extrapolation in order to calibrate data at these times. To extrapolate, we model the dark counts with a piece-wise 2D nonlinear least squares surface fit in the time and temperature dimensions. Our model allows us to calibrate XPS data into the DO-Op phase of the mission by extrapolating along this surface. The XPS version 11 data product release benefits from this new calibration. We present comparisons of the previous and current calibration methods in addition to planned future upgrades of our data products.
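
    A toy version of such a surface fit, using an ordinary least-squares surface in time and temperature via scipy (the real calibration is piecewise and the data below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def dark_surface(xy, c0, ct, cT, ctT):
    """Smooth surface in mission time t and detector temperature T; a simple
    stand-in for the piecewise 2-D nonlinear fit described in the abstract."""
    t, T = xy
    return c0 + ct * t + cT * T + ctT * t * T

# Invented dark-count data: t in years since launch, T in deg C, rate in counts/s.
t = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
T = np.array([18.0, 19.0, 21.0, 20.0, 22.0, 23.0, 22.0, 24.0])
dark = np.array([0.92, 0.95, 1.02, 0.99, 1.07, 1.11, 1.08, 1.15])

popt, _ = curve_fit(dark_surface, (t, T), dark)
# Extrapolate the dark-count rate to an epoch/temperature with no dark data.
print(dark_surface((4.0, 23.5), *popt))
```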

  20. An algorithm for determining the rotation count of pulsars

    NASA Astrophysics Data System (ADS)

    Freire, Paulo C. C.; Ridolfi, Alessandro

    2018-06-01

    We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar, an essential step for the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotation count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.

  1. Statistical tests to compare motif count exceptionalities

    PubMed Central

    Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent

    2007-01-01

    Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
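
    The exact binomial test can be sketched as follows: with independent Poisson counts of expected values e1 and e2, conditionally on the total n1 + n2 the count n1 is binomial with success probability e1/(e1 + e2) under the null hypothesis of equal exceptionality. The observed and expected counts below are hypothetical.

```python
from scipy.stats import binomtest

def compare_motif_exceptionality(n1, n2, expected1, expected2):
    """Exact binomial test (sketch): with independent Poisson counts of means
    expected1 and expected2, conditionally on n1 + n2 the count n1 is
    Binomial(n1 + n2, p0) with p0 = expected1 / (expected1 + expected2) under
    the null hypothesis of equal exceptionality in the two sequences."""
    p0 = expected1 / (expected1 + expected2)
    return binomtest(n1, n1 + n2, p0, alternative="two-sided").pvalue

# Hypothetical: 30 vs 12 occurrences where 20 and 18 were expected.
print(compare_motif_exceptionality(30, 12, 20.0, 18.0))
```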

  2. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods, and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
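
    The three schemes compared in the comment can be reproduced in a few lines (numpy rather than Matlab); binomial thinning of each pixel with p = 0.5 is the Poisson resampling method, while the redrawing methods draw fresh Poisson or Gaussian values with half the original mean.

```python
import numpy as np

rng = np.random.default_rng(0)
full = rng.poisson(50.0, size=(128, 128))        # synthetic full-count image

# Poisson resampling (binomial thinning): keep each recorded count with p = 0.5;
# if the full image is Poisson(lambda) the result is exactly Poisson(lambda / 2).
half_thinned = rng.binomial(full, 0.5)

# Direct redrawing alternatives discussed in the comment.
half_poisson = rng.poisson(full / 2.0)                     # Poisson redrawing
half_gauss = rng.normal(full / 2.0, np.sqrt(full / 2.0))   # Gaussian redrawing

for name, img in [("thinning", half_thinned), ("poisson", half_poisson),
                  ("gaussian", half_gauss)]:
    print(name, img.mean(), img.var())
```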

  3. Dead time corrections using the backward extrapolation method

    NASA Astrophysics Data System (ADS)

    Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.

    2017-05-01

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biases in physical experiments as the power grows above a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on extrapolating back to zero the losses created by artificially imposing increasingly long dead times on the data. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
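
    A self-contained sketch of the backward extrapolation idea: impose increasingly long artificial (non-paralyzing) dead times on the event time stamps, record the surviving count rate, and extrapolate the trend back to zero imposed dead time. The event stream below is synthetic.

```python
import numpy as np

def apply_dead_time(timestamps, tau):
    """Count the time stamps surviving a non-paralyzing dead time tau (seconds)."""
    kept, last = 0, -np.inf
    for t in timestamps:
        if t - last >= tau:
            kept += 1
            last = t
    return kept

# Synthetic Poisson event stream standing in for the measured detection times.
rng = np.random.default_rng(1)
duration, true_rate = 20.0, 5.0e3
times = np.sort(rng.uniform(0.0, duration, rng.poisson(true_rate * duration)))

# Impose increasingly long artificial dead times and record the surviving CPS.
taus = np.linspace(5e-6, 50e-6, 10)
cps = [apply_dead_time(times, tau) / duration for tau in taus]

# Extrapolate the losses back to zero imposed dead time.
fit = np.polyfit(taus, cps, 2)
print(np.polyval(fit, 0.0))   # should be close to true_rate
```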

  4. Perturbative corrections to B → D form factors in QCD

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ming; Wei, Yan-Bing; Shen, Yue-Long; Lü, Cai-Dian

    2017-06-01

    We compute perturbative QCD corrections to B → D form factors at leading power in Λ/m_b, at large hadronic recoil, from the light-cone sum rules (LCSR) with B-meson distribution amplitudes in HQET. QCD factorization for the vacuum-to-B-meson correlation function with an interpolating current for the D-meson is demonstrated explicitly at one loop with the power counting scheme m_c ~ O(√(Λ m_b)). The jet functions encoding information of the hard-collinear dynamics in the above-mentioned correlation function are complicated by the appearance of an additional hard-collinear scale m_c, compared to the counterparts entering the factorization formula of the vacuum-to-B-meson correlation function for the construction of B → π form factors. Inspecting the next-to-leading-logarithmic sum rules for the form factors of B → Dℓν indicates that perturbative corrections to the hard-collinear functions are more profound than those for the hard functions, with the default theory inputs, in the physical kinematic region. We further compute the subleading power correction induced by the three-particle quark-gluon distribution amplitudes of the B-meson at tree level employing the background gluon field approach. The LCSR predictions for the semileptonic B → Dℓν form factors are then extrapolated to the entire kinematic region with the z-series parametrization. Phenomenological implications of our determinations for the form factors f^{BD}_{+,0}(q^2) are explored by investigating the (differential) branching fractions and the R(D) ratio of B → Dℓν and by determining the CKM matrix element |V_cb| from the total decay rate of B → Dμν_μ.

  5. Examples of Mesh and NURBS modelling for in vivo lung counting studies.

    PubMed

    Farah, Jad; Broggio, David; Franck, Didier

    2011-03-01

    Realistic calibration coefficients for in vivo counting installations are assessed using voxel phantoms and Monte Carlo calculations. However, voxel phantom construction is time consuming and its flexibility extremely limited. This paper uses the more flexible Mesh and non-uniform rational B-splines (NURBS) graphical formats to optimise the calibration of in vivo counting installations. Two studies validating the use of such phantoms and involving geometry deformation and modelling were carried out to study the morphologic effect on lung counting efficiency. The created 3D models matched the reference ones, with volumetric differences of <5%. Moreover, it was found that counting efficiency varies with the inverse of the lung volume and that this volume dominates over chest wall thickness. Finally, a series of different thoracic female phantoms of various cup sizes, chest girths and internal organ volumes were created starting from the International Commission on Radiological Protection (ICRP) adult female reference computational phantom to give correction factors for the lung monitoring of female workers.

  6. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
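
    As a baseline for the models discussed above, a bare-bones zero-inflated Poisson likelihood (no covariates and no random effects, hence none of the paper's heterogeneous covariance structure) can be maximized directly:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson with inflation
    probability pi (logit link) and mean lam (log link)."""
    pi = 1.0 / (1.0 + np.exp(-params[0]))
    lam = np.exp(params[1])
    ll_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
    ll_pos = np.log(1.0 - pi) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

# Simulated counts with ~30% structural zeros on top of a Poisson(4) process.
rng = np.random.default_rng(2)
y = np.where(rng.random(500) < 0.3, 0, rng.poisson(4.0, 500))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,))
print(1.0 / (1.0 + np.exp(-res.x[0])), np.exp(res.x[1]))   # roughly 0.3 and 4.0
```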

  7. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm-2 to 1,000 mg cm-2.
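
    The flavour of the correction can be conveyed with a toy linear efficiency-versus-dust-loading fit; the efficiencies below are invented and only the bookkeeping mirrors the approach described.

```python
import numpy as np

# Invented calibration: measured alpha counting efficiency versus dust loading
# (mg cm-2), summarized by the kind of linear fit mentioned in the abstract.
dust_loading = np.array([2.0, 100.0, 250.0, 500.0, 750.0, 1000.0])
efficiency = np.array([0.32, 0.29, 0.25, 0.20, 0.15, 0.10])
slope, intercept = np.polyfit(dust_loading, efficiency, 1)

def corrected_alpha_activity(raw_cps, sample_dust_loading):
    """Correct a raw alpha count rate for attenuation in the dust layer using
    the fitted efficiency curve (illustrative numbers only)."""
    return raw_cps / (intercept + slope * sample_dust_loading)

print(corrected_alpha_activity(1.5, 600.0))
```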

  8. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE PAGES

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...

    2018-03-03

    Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles is difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data is not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors on the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and analytical laboratory results of digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loading over a range of about 2 mg cm-2 to 1,000 mg cm-2.

  9. Surgical correction of pectus arcuatum

    PubMed Central

    Ershova, Ksenia; Adamyan, Ruben

    2016-01-01

    Background Pectus arcuatum is a rare congenital chest wall deformity and methods of surgical correction are debated. Methods Surgical correction of pectus arcuatum always includes one or more horizontal sternal osteotomies, resection of deformed rib cartilages and, finally, anterior chest wall stabilization. The study was approved by the institutional ethical committee and informed consent was obtained from every patient. Results In this video we show our modification of pectus arcuatum correction with only a partial sternal osteotomy and further stabilization by vertical parallel titanium plates. Conclusions The reported method is a feasible option for surgical correction of pectus arcuatum. PMID:29078483

  10. Gauge backgrounds and zero-mode counting in F-theory

    NASA Astrophysics Data System (ADS)

    Bies, Martin; Mayrhofer, Christoph; Weigand, Timo

    2017-11-01

    Computing the exact spectrum of charged massless matter is a crucial step towards understanding the effective field theory describing F-theory vacua in four dimensions. In this work we further develop a coherent framework to determine the charged massless matter in F-theory compactified on elliptic fourfolds, and demonstrate its application in a concrete example. The gauge background is represented, via duality with M-theory, by algebraic cycles modulo rational equivalence. Intersection theory within the Chow ring allows us to extract coherent sheaves on the base of the elliptic fibration whose cohomology groups encode the charged zero-mode spectrum. The dimensions of these cohomology groups are computed with the help of modern techniques from algebraic geometry, which we implement in the software gap. We exemplify this approach in models with an Abelian and non-Abelian gauge group and observe jumps in the exact massless spectrum as the complex structure moduli are varied. An extended mathematical appendix gives a self-contained introduction to the algebro-geometric concepts underlying our framework.

  11. Evaluation of an improved fiberoptics luminescence skin monitor with background correction.

    PubMed

    Vo-Dinh, T

    1987-06-01

    In this work, an improved version of a fiberoptics luminescence monitor, the prototype luminoscope II, is evaluated for in situ quantitative measurements. The instrument was developed to detect traces of luminescing organic contaminants on skin. An electronic background-nulling system was designed and incorporated into the instrument to compensate for various skin background emissions. A dose-response curve for a coal liquid spotted on mouse skin was established. The results illustrated the usefulness of the instrument for in vivo detection of organic materials on laboratory mouse skin.

  12. The origin and reduction of spurious extrahepatic counts observed in 90Y non-TOF PET imaging post radioembolization

    NASA Astrophysics Data System (ADS)

    Walrand, Stephan; Hesse, Michel; Jamar, François; Lhommel, Renaud

    2018-04-01

    Our literature survey revealed a physical effect unknown to the nuclear medicine community, i.e. internal bremsstrahlung emission, and also the existence of long energy resolution tails in crystal scintillation. Neither of these effects has ever been modelled in PET Monte Carlo (MC) simulations. This study investigates whether these two effects could be at the origin of two unexplained observations in 90Y imaging by PET: the increasing tails in the radial profile of true coincidences, and the presence of spurious extrahepatic counts post radioembolization in non-TOF PET and their absence in TOF PET. These spurious extrahepatic counts hamper the microsphere delivery check in liver radioembolization. An acquisition of a 32P vial was performed on a GSO PET system. This is the ideal setup to study the impact of bremsstrahlung x-rays on the true coincidence rate when no positron emission and no crystal radioactivity are present. An MC simulation of the acquisition was performed using Gate-Geant4. MC simulations of non-TOF PET and TOF-PET imaging of a synthetic 90Y human liver radioembolization phantom were also performed. Including internal bremsstrahlung and long energy resolution tails in the MC simulations quantitatively predicts the increasing tails in the radial profile. In addition, internal bremsstrahlung explains the discrepancy previously observed in bremsstrahlung SPECT between the measurement of the 90Y bremsstrahlung spectrum and its simulation with Gate-Geant4. However, the spurious extrahepatic counts in non-TOF PET mainly result from the failure of conventional random correction methods in such low count rate studies and their poor robustness versus emission-transmission inconsistency. A novel proposed random correction method succeeds in cleaning the spurious extrahepatic counts in non-TOF PET. Two physical effects not considered up to now in nuclear medicine were identified to be at the origin of the unusual 90Y true coincidences radial profile. TOF reconstruction removing

  13. A Bridge from Optical to Infrared Galaxies: Explaining Local Properties and Predicting Galaxy Counts and the Cosmic Background Radiation

    NASA Astrophysics Data System (ADS)

    Totani, Tomonori; Takeuchi, Tsutomu T.

    2002-05-01

    We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma

  14. Spectroscopic limits to an extragalactic far-ultraviolet background.

    PubMed

    Martin, C; Hurwitz, M; Bowyer, S

    1991-10-01

    We use a spectrum of the lowest intensity diffuse far-ultraviolet background obtained from a series of observations in a number of celestial view directions to constrain the properties of the extragalactic FUV background. The mean continuum level, IEG = 280 +/- 35 photons cm-2 s-1 angstrom-1 sr-1, was obtained in a direction with very low H I column density, and this represents a firm upper limit to any extragalactic background in the 1400-1900 angstroms band. Previous work has demonstrated that the far-ultraviolet background includes (depending on a view direction) contributions from dust-scattered Galactic light, high-ionization emission lines, two-photon emission from H II, H2 fluorescence, and the integrated light of spiral galaxies. We find no evidence in the spectrum of line or continuum features that would signify additional extragalactic components. Motivated by the observation of steep BJ and U number count distributions, we have made a detailed comparison of galaxy evolution models to optical and UV data. We find that the observations are difficult to reconcile with a dominant contribution from unclustered, starburst galaxies at low redshifts. Our measurement rules out large ionizing fluxes at z = 0, but cannot strongly constrain the QSO background light, which is expected to be 0.5%-4% of IEG. We present improved limits on radiative lifetimes of massive neutrinos. We demonstrated with a simple model that IGM radiation is unlikely to make a significant contribution to IEG. Since dust scattering could produce a significant part of the continuum in this lowest intensity spectrum, we carried out a series of tests to evaluate this possibility. We find that the spectrum of a nearby target with higher NH I, when corrected for H2 fluorescence, is very similar to the spectrum obtained in the low H I view direction. This is evidence that the majority of the continuum observed at low NH I is also dust reflection, indicating either the existence of a hitherto

  15. Retrospective determination of the contamination in the HML's counting chambers.

    PubMed

    Kramer, Gary H; Hauck, Barry; Capello, Kevin; Phan, Quoc

    2008-09-01

    The original documentation surrounding the purchase of the Human Monitoring Laboratory's (HML) counting chambers clearly showed that the steel contained low levels of radioactivity, presumably as a result of A-bomb fallout or perhaps due to the inadvertent mixing of radioactive sources with scrap steel. Monte Carlo simulations have been combined with experimental measurements to estimate the level of contamination in the steel of the HML's whole body counting chamber. A 24-h empty chamber background count showed the presence of 137Cs and 60Co. The estimated activity of 137Cs in the 51 tons of steel was 2.7 kBq in 2007 (51.3 microBq g(-1) steel), which would have been 8 kBq at the time of manufacture. The 60Co that was found in the background spectrum is postulated to be contained in the bed-frame. The estimated amount in 2007 was 5 Bq and its origin is likely to be contaminated scrap metal entering the steel production cycle sometime in the past. The estimated activities are 10 to 25 times higher than the estimated minimum detectable activity for this measurement. These amounts have no impact on the usefulness of the whole body counter.
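
    The quoted 137Cs activities are consistent with a simple decay back-calculation. As an illustration only (using the numbers quoted above together with the standard 30.17-year half-life of 137Cs; this check is not part of the original study), the implied age of the steel is:

```python
# Hypothetical decay-correction check of the 137Cs activities quoted above.
import math

T_HALF_CS137 = 30.17            # years, standard 137Cs half-life
activity_2007 = 2.7e3           # Bq, measured in 2007 (2.7 kBq)
activity_manufacture = 8.0e3    # Bq, quoted value at the time of manufacture (8 kBq)

# A(t) = A0 * exp(-ln(2) * t / T_half)  =>  t = T_half * ln(A0 / A(t)) / ln(2)
elapsed_years = T_HALF_CS137 * math.log(activity_manufacture / activity_2007) / math.log(2)
print(f"implied age of the steel in 2007: ~{elapsed_years:.0f} years "
      f"(manufacture around {2007 - elapsed_years:.0f})")
```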

  16. The Herschel-ATLAS: Extragalactic Number Counts from 250 to 500 Microns

    NASA Technical Reports Server (NTRS)

    Clements, D. L.; Rigby, E.; Maddox, S.; Dunne, L.; Mortier, A.; Amblard, A.; Auld, R.; Bonfield, D.; Cooray, A.; Dariush, A.; hide

    2010-01-01

    Aims. The Herschel-ATLAS survey (H-ATLAS) will be the largest area survey to be undertaken by the Herschel Space Observatory. It will cover 550 sq. deg. of extragalactic sky at wavelengths of 100, 160, 250, 350 and 500 microns when completed, reaching flux limits (5σ) from 32 to 145 mJy. We here present galaxy number counts obtained for SPIRE observations of the first ~14 sq. deg. observed at 250, 350 and 500 μm. Methods. Number counts are a fundamental tool in constraining models of galaxy evolution. We use source catalogs extracted from the H-ATLAS maps as the basis for such an analysis. Correction factors for completeness and flux boosting are derived by applying our extraction method to model catalogs and then applied to the raw observational counts. Results. We find a steep rise in the number counts at flux levels of 100-200 mJy in all three SPIRE bands, consistent with results from BLAST. The counts are compared to a range of galaxy evolution models. None of the current models is an ideal fit to the data but all ascribe the steep rise to a population of luminous, rapidly evolving dusty galaxies at moderate to high redshift.
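
    The correction step described in the Methods, dividing the raw counts in each flux bin by a completeness fraction derived from injecting model sources, can be sketched as follows. All numbers below are illustrative placeholders, not H-ATLAS values.

```python
# Sketch of completeness-corrected differential number counts dN/dS.
# Bin edges, raw counts, completeness fractions and survey area are toy values.
import numpy as np

area_sr = 14.0 * (np.pi / 180.0) ** 2                          # ~14 sq. deg. in steradians
bin_edges_mjy = np.array([35.0, 70.0, 140.0, 280.0, 560.0])    # flux bins (mJy)
raw_counts = np.array([900, 350, 90, 15])                      # detected sources per bin
completeness = np.array([0.55, 0.80, 0.95, 1.00])              # from injected-source tests

corrected = raw_counts / completeness                    # completeness-corrected counts
dS = np.diff(bin_edges_mjy)                              # bin widths (mJy)
dn_ds = corrected / dS / area_sr                         # dN/dS per mJy per steradian
err = np.sqrt(raw_counts) / completeness / dS / area_sr  # Poisson error, same correction

for lo, hi, n, e in zip(bin_edges_mjy[:-1], bin_edges_mjy[1:], dn_ds, err):
    print(f"{lo:5.0f}-{hi:5.0f} mJy: dN/dS = {n:10.3g} +/- {e:.2g}  mJy^-1 sr^-1")
```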

  17. People counting in classroom based on video surveillance

    NASA Astrophysics Data System (ADS)

    Zhang, Quanbin; Huang, Xiang; Su, Juan

    2014-11-01

    Currently, the lights and other electronic devices in a classroom are mainly controlled manually; as a result, many lights are left on while no one, or only a few people, are in the classroom. It is important to change this situation and control the electronic devices intelligently according to the number and distribution of the students in the classroom, so as to reduce the considerable waste of electrical resources. This paper studies the problem of people counting in classrooms based on video surveillance. As the camera in the classroom cannot capture the full body contours or clear facial features, most classical algorithms, such as pedestrian detection based on HOG (histogram of oriented gradients) features and face detection based on machine learning, are unable to obtain a satisfactory result. A new dual background updating model based on sparse and low-rank matrix decomposition is proposed in this paper, exploiting the fact that most students in the classroom are nearly stationary, with only occasional body movement. First, the frame difference is combined with the sparse and low-rank matrix decomposition to predict the moving areas, and the background model is updated with different parameters according to the positional relationship between the pixels of the current video frame and the predicted motion regions. Second, the regions of moving objects are determined from the updated background using background subtraction. Finally, operations including binarization, median filtering, morphological processing, and connected-component detection are performed on the regions acquired by background subtraction, in order to reduce the effects of noise and obtain the number of people in the classroom. The experimental results show the validity of the people-counting algorithm.
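
    A minimal sketch of the final counting stage described above (background subtraction, binarization, median filtering, morphological processing and connected-component counting) is given below using OpenCV. The paper's dual sparse/low-rank background model is replaced here by a fixed background frame, and the thresholds are illustrative assumptions.

```python
# Sketch of the people-counting stage: background subtraction followed by
# binarization, median filtering, morphology and connected-component counting.
import cv2
import numpy as np

def count_people(frame_gray: np.ndarray, background_gray: np.ndarray,
                 diff_thresh: int = 30, min_area: int = 500) -> int:
    # Foreground mask from the (here: fixed) background model.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Suppress noise and fill small holes.
    mask = cv2.medianBlur(mask, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Count connected components, discarding blobs too small to be a person.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)
```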

  18. Low gamma counting for measuring NORM/TENORM with a radon reducing system

    NASA Astrophysics Data System (ADS)

    Paschoa, Anselmo S.

    2001-06-01

    A detection system for counting low levels of gamma radiation was built by upgrading an existing rectangular chamber made of 18 metric tonnes of steel fabricated before World War II. The internal walls, the ceiling, and the floor of the chamber are covered with copper sheets. The new detection system consists of a stainless steel hollow cylinder with variable circular apertures in the cylindrical wall and in the base, to allow introduction of a NaI(Tl) crystal or, alternatively, a HPGe detector in its interior. This counting system is mounted inside the larger chamber, which in turn is located in a subsurface air-conditioned room. Access to the subsurface room is from a larger entrance room through a tunnel plus a glass anteroom to decrease the air-exchange rate. Both sample and detector are housed inside the stainless steel cylinder. This cylinder is filled with hyper-pure nitrogen gas before counting a sample, to prevent radon coming into contact with the detector surface. As a consequence, the contribution of the 214Bi photopeaks to the background gamma spectra is minimized. The reduction of the gamma radiation background near the detector facilitates measurement of naturally occurring radioactive materials (NORM) and/or technologically enhanced NORM (TENORM), which are usually at concentration levels only slightly higher than those typically found in the natural radioactive background.

  19. Background considerations in the analysis of PIXE spectra by Artificial Neural Systems.

    NASA Astrophysics Data System (ADS)

    Correa, R.; Morales, J. R.; Requena, I.; Miranda, J.; Barrera, V. A.

    2016-05-01

    In order to study the importance of background in PIXE spectra used to determine elemental concentrations in atmospheric aerosols with artificial neural systems (ANS), two independently trained ANS were constructed: one taking as input the net number of counts in each peak, and another which included the background. In the training and validation phases, thirty-eight spectra of aerosols collected in Santiago, Chile, were used. In both cases the elemental concentration values were similar. This is due to the intrinsic characteristic of an ANS operating on normalized values of the net and total number of counts under the peaks, something that was verified in the analysis of 172 spectra obtained from aerosols collected in Mexico City. Therefore, networks operating in the mode that includes the background can reduce time and cost when dealing with a large number of samples.

  20. On-Orbit Sky Background Measurements with the FOS

    NASA Technical Reports Server (NTRS)

    Lyons, R. W.; Baity, W. A.; Beaver, E. A.; Cohen, R. D.; Junkkarinen, V. T.; Linsky, J. B.; Bohlin, R. C.

    1993-01-01

    Observations of the sky background obtained with the Faint Object Spectrograph during 1991-1992 are discussed. Sky light can be an important contributor to the observed count rate in several of the instrument configurations especially when large apertures are used. In general, the sky background is consistent with the pre-launch expectations and showed the expected effects of zodiacal light and diffuse galactic light. In addition to these sources, there is, particularly during the daytime, a highly variable airglow component which includes a number of emission lines. The sky background will have an impact on the reduction and possibly the interpretation of some spectra.

  1. Photon-Counting Kinetic Inductance Detectors for the Origins Space Telescope

    NASA Astrophysics Data System (ADS)

    Noroozian, Omid

    We propose to develop photon-counting Kinetic Inductance Detectors (KIDs) for the Origins Space Telescope (OST) and any predecessor missions, with the goal of producing background-limited photon-counting sensitivity, and with a preliminary technology demonstration in time to inform the Decadal Survey planning process. The OST, a mid- to far-infrared observatory concept, is being developed as a major NASA mission to be considered by the next Decadal Survey with support from NASA Headquarters. The objective of such a facility is to allow rapid spectroscopic surveys of the high redshift universe at 420-800 μm, using arrays of integrated spectrometers with moderate resolutions (R = λ/Δλ ≈ 1000), to create a powerful new data set for exploring galaxy evolution and the growth of structure in the Universe. A second objective of OST is to perform higher resolution (R ≈ 10,000-100,000) spectroscopic surveys at 20-300 μm, a uniquely powerful tool for exploring the evolution of protoplanetary disks into fledgling solar systems. Finally, the OST aims to obtain sensitive mid-infrared (5-40 μm) spectroscopy of thermal emission from rocky planets in the habitable zone using the transit method. These OST science objectives are very exciting and represent a well-organized community agreement. However, they are all impossible to reach without new detector technology, and the OST cannot be recommended or approved if suitable detectors do not exist. In all of the above instrument concepts, photon-counting direct detectors are mission-enabling and essential for reaching the sensitivity permitted by the cryogenic Origins Space Telescope and the performance required for its important science programs. Our group has developed an innovative design for an optically-coupled KID that can reach the photon-counting sensitivity required by the ambitious science goals of the OST mission. A KID is a planar microwave resonator patterned from a superconducting thin film, which

  2. Choral Counting

    ERIC Educational Resources Information Center

    Turrou, Angela Chan; Franke, Megan L.; Johnson, Nicholas

    2017-01-01

    The students in Ms. Moscoso's second-grade class gather on the rug after recess, ready for one of their favorite math warm-ups: Choral Counting. Counting is an important part of doing mathematics throughout the school; students count collections (Schwerdtfeger and Chan 2007) and solve problems using a variety of strategies, many of which are…

  3. Low-background gamma-ray spectrometry for the international monitoring system

    DOE PAGES

    Greenwood, L. R.; Cantaloub, M. G.; Burnett, J. L.; ...

    2016-12-28

    PNNL has developed two low-background gamma-ray spectrometers in a new shallow underground laboratory, thereby significantly improving its ability to detect low levels of gamma-ray emitting fission or activation products in airborne particulate in samples from the IMS (International Monitoring System). Furthermore, the combination of cosmic veto panels, dry nitrogen gas to reduce radon and low background shielding results in a reduction of the background count rate by about a factor of 100 compared to detectors operating above ground at our laboratory.

  4. Low-background germanium radioassay for the MAJORANA Collaboration

    NASA Astrophysics Data System (ADS)

    Trimble, James E., Jr.

    The focus of the MAJORANA Collaboration is the search for nuclear neutrinoless double beta decay. If discovered, this process would prove that the neutrino is its own anti-particle, or a Majorana particle. Being constructed at the Sanford Underground Research Facility, the MAJORANA DEMONSTRATOR aims to show that a background rate of 3 counts per region of interest (ROI) per tonne per year in the 4 keV ROI surrounding the 2039-keV Q-value energy of 76Ge is achievable and to demonstrate the technological feasibility of building a tonne-scale Ge-based experiment. Because of the rare nature of this process, detectors in the system must be isolated from ionizing radiation backgrounds as much as possible. This involved building the system with materials containing very low levels of naturally-occurring and anthropogenic radioactive isotopes at a deep underground site. In order to measure the levels of radioactive contamination in some components, the Majorana Demonstrator uses a low background counting facility managed by the Experimental Nuclear and Astroparticle Physics (ENAP) group at UNC. The UNC low background counting (LBC) facility is located at the Kimballton Underground Research Facility (KURF) in Ripplemead, VA. The facility was used for a neutron activation analysis of samples of polytetrafluoroethylene (PTFE) and fluorinated ethylene propylene (FEP) tubing intended for use in the Demonstrator. Calculated initial activity limits (90% C.L.) of 238U and 232Th in the 0.002-in PTFE samples were 7.6 ppt and 5.1 ppt, respectively. The same limits in the FEP tubing sample were 150 ppt and 45 ppt, respectively. The UNC LBC was also used to gamma-assay a modified stainless steel flange to be used as a vacuum feedthrough. Trace activities of both 238U and 232Th were found in the sample, but all were orders of magnitude below the acceptable threshold for the Majorana experiment. Also discussed is a proposed next generation ultra-low background system designed

  5. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem for high count-rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach and the spectrum distortion caused by pile-up rejection can be increased as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. The Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.

  6. Blade counting tool with a 3D borescope for turbine applications

    NASA Astrophysics Data System (ADS)

    Harding, Kevin G.; Gu, Jiajun; Tao, Li; Song, Guiju; Han, Jie

    2014-07-01

    Video borescopes are widely used for turbine and aviation engine inspection to guarantee the health of blades and prevent blade failure during running. When the moving components of a turbine engine are inspected with a video borescope, the operator must view every blade in a given stage. The blade counting tool is video interpretation software that runs simultaneously in the background during inspection. It identifies moving turbine blades in a video stream, tracks and counts the blades as they move across the screen. This approach includes blade detection to identify blades in different inspection scenarios and blade tracking to perceive blade movement even in hand-turning engine inspections. The software is able to label each blade by comparing counting results to a known blade count for the engine type and stage. On-screen indications show the borescope user labels for each blade and how many blades have been viewed as the turbine is rotated.

  7. A burst-mode photon counting receiver with automatic channel estimation and bit rate detection

    NASA Astrophysics Data System (ADS)

    Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.

    2016-04-01

    We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.

  8. Kids Count in Delaware, Families Count in Delaware: Fact Book, 2003.

    ERIC Educational Resources Information Center

    Delaware Univ., Newark. Kids Count in Delaware.

    This Kids Count Fact Book is combined with the Families Count Fact Book to provide information on statewide trends affecting children and families in Delaware. The Kids Count and Families Count indicators have been combined into four new categories: health and health behaviors, educational involvement and achievement, family environment and…

  9. Optimal measurement counting time and statistics in gamma spectrometry analysis: The time balance

    NASA Astrophysics Data System (ADS)

    Joel, Guembou Shouop Cebastien; Penabei, Samafou; Maurice, Ndontchueng Moyo; Gregoire, Chene; Jilbert, Nguelem Mekontso Eric; Didier, Takoukam Serge; Werner, Volker; David, Strivay

    2017-01-01

    The optimal measurement counting time for gamma-ray spectrometry analysis using HPGe detectors was determined in our laboratory by comparing a twelve-hour measurement made during the day with a twelve-hour measurement made at night. The day spectrum does not fully match the night spectrum for the same sample; the perturbation is attributed to sunlight. Several investigations made it clear that, in order to remove all external radiation effects (from the earth, the sun, and the wider universe) on our system, the background must be measured for 24, 48 or 72 hours. In the same way, samples have to be measured for 24, 48 or 72 hours so that day and night contributions are balanced and the measurement is effectively purified. Likewise, a background spectrum acquired in winter should not be reused in summer. Depending on the energy of the radionuclide sought, it is clear that the most important steps of a gamma spectrometry measurement are the preparation of the sample and the calibration of the detector.

  10. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (CsI) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  11. A physics investigation of deadtime losses in neutron counting at low rates with Cf252

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Louise G; Croft, Stephen

    2009-01-01

    252Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally-correlated neutron counting using 252Cf suffers a deadtime effect, meaning that, in contrast to counting a random neutron source (e.g. AmLi to a close approximation), DT losses do not vanish in the low rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts', and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, τ). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication, which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency 3He neutron counter with a short die-away time, representing an ideal 3He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
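
    The intrinsic (within-burst) loss mechanism described above can be illustrated with a toy pulse-train simulation under an updating (paralyzable) dead time; the rate, multiplicity, die-away time and dead time below are placeholders chosen for illustration, not the values used in the paper.

```python
# Toy pulse-train Monte Carlo: correlated fission "bursts" plus an updating
# (paralyzable) dead time, showing that losses do not vanish at low rates.
import numpy as np

rng = np.random.default_rng(1)
fission_rate = 1.0e3       # fissions per second (a low rate)
mean_det_mult = 2.0        # mean detected neutrons per fission (efficiency folded in)
die_away = 50e-6           # system die-away time tau (s)
dead_time = 2e-6           # updating dead time (s)
T = 10.0                   # measurement time (s)

# Fission times follow a homogeneous Poisson process.
fission_times = rng.uniform(0.0, T, rng.poisson(fission_rate * T))
# Detected multiplicity per fission (Poisson, for simplicity), each neutron
# delayed by an exponential die-away in the counter.
mult = rng.poisson(mean_det_mult, fission_times.size)
pulses = np.repeat(fission_times, mult) + rng.exponential(die_away, mult.sum())
pulses.sort()

# Updating dead time: a pulse is recorded only if the preceding pulse
# (recorded or not) arrived more than dead_time earlier.
recorded = 1 + np.count_nonzero(np.diff(pulses) > dead_time)
print(f"true pulses: {pulses.size}, recorded: {recorded}, "
      f"fractional loss: {1 - recorded / pulses.size:.4f}")
```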

  12. A New Statistics-Based Online Baseline Restorer for a High Count-Rate Fully Digital System.

    PubMed

    Li, Hongdi; Wang, Chao; Baghaei, Hossain; Zhang, Yuxuan; Ramirez, Rocio; Liu, Shitao; An, Shaohui; Wong, Wai-Hoi

    2010-04-01

    The goal of this work is to develop a novel, accurate, real-time digital baseline restorer using online statistical processing for a high count-rate digital system such as positron emission tomography (PET). In high count-rate nuclear instrumentation applications, analog signals are DC-coupled for better performance. However, the detectors, pre-amplifiers and other front-end electronics cause a signal baseline drift in a DC-coupled system, which degrades the energy resolution and positioning accuracy. Event pileup normally exists in a high count-rate system, and the baseline drift will create errors in the event pileup correction. Hence, a baseline restorer (BLR) is required in a high count-rate system to remove the DC drift ahead of the pileup correction. Many BLR methods have been reported, from classic analog methods to digital filter solutions. However, a single-channel analog BLR can only work below a 500 kcps count rate, and an analog front-end application-specific integrated circuit (ASIC) is normally required for applications involving hundreds of BLRs, such as a PET camera. We have developed a simple statistics-based online baseline restorer (SOBLR) for a high count-rate fully digital system. In this method, we acquire additional samples, excluding the real gamma pulses, from the existing free-running ADC in the digital system, and perform online statistical processing to generate a baseline value. This baseline value is subtracted from the digitized waveform to retrieve the original pulse with zero baseline drift. This method can self-track the baseline without involving a microcontroller. The circuit consists of two digital counter/timers, one comparator, one register and one subtraction unit. Simulations show that a single channel works at a 30 Mcps count rate under pileup conditions. 336 baseline restorer circuits have been implemented in 12 field-programmable gate arrays (FPGAs) for our new fully digital PET system.
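
    A highly simplified software analogue of the idea, estimating the baseline only from samples that do not belong to a gamma pulse and subtracting it from the waveform, might look like the sketch below. The update rule and thresholds are illustrative assumptions, not the SOBLR circuit itself.

```python
# Sketch of a statistics-based baseline restorer: quiescent (non-pulse) samples
# update a running baseline estimate, which is subtracted from the waveform.
import numpy as np

def restore_baseline(waveform: np.ndarray, pulse_thresh: float,
                     window: int = 1024) -> np.ndarray:
    out = np.empty(waveform.shape, dtype=float)
    baseline = float(np.median(waveform[:window]))      # initial estimate
    for i, sample in enumerate(waveform):
        # Only samples close to the current baseline (i.e. not on a pulse)
        # contribute to the running estimate.
        if abs(sample - baseline) < pulse_thresh:
            baseline += (sample - baseline) / window    # slow exponential update
        out[i] = sample - baseline
    return out
```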

  13. Time-of-day Corrections to Aircraft Noise Metrics

    NASA Technical Reports Server (NTRS)

    Clevenson, S. (Editor); Shepherd, W. T. (Editor)

    1980-01-01

    The historical and background aspects of time-of-day corrections, as well as the evidence supporting these corrections, are discussed. Health, welfare, and economic impacts; needs and criteria; and government policy and regulation are also reported.

  14. Pulse pileup statistics for energy discriminating photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Adam S.; Harrison, Daniel; Lobastov, Vladimir

    Purpose: Energy discriminating photon counting x-ray detectors can be subject to a wide range of flux rates if applied in clinical settings. Even when the incident rate is a small fraction of the detector's maximum periodic rate N_0, pulse pileup leads to count rate losses and spectral distortion. Although the deterministic effects can be corrected, the detrimental effect of pileup on image noise is not well understood and may limit the performance of photon counting systems. Therefore, the authors devise a method to determine the detector count statistics and imaging performance. Methods: The detector count statistics are derived analytically for an idealized pileup model with delta pulses of a nonparalyzable detector. These statistics are then used to compute the performance (e.g., contrast-to-noise ratio) for both single material and material decomposition contrast detection tasks via the Cramer-Rao lower bound (CRLB) as a function of the detector input count rate. With more realistic unipolar and bipolar pulse pileup models of a nonparalyzable detector, the imaging task performance is determined by Monte Carlo simulations and also approximated by a multinomial method based solely on the mean detected output spectrum. Photon counting performance at different count rates is compared with ideal energy integration, which is unaffected by count rate. Results: The authors found that an ideal photon counting detector with perfect energy resolution outperforms energy integration for our contrast detection tasks, but when the input count rate exceeds 20% of N_0, many of these benefits disappear. The benefit with iodine contrast falls rapidly with increased count rate while water contrast is not as sensitive to count rates. The performance with a delta pulse model is overoptimistic when compared to the more realistic bipolar pulse model. The multinomial approximation predicts imaging performance very close to the prediction from Monte Carlo simulations. The
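
    For reference, the textbook relation for an idealized nonparalyzable detector with maximum periodic rate N_0 (dead time τ = 1/N_0) connects the recorded rate m to the incident rate n as below; this is the standard result quoted for orientation, not necessarily the exact count statistics derived by the authors.

```latex
m = \frac{n}{1 + n\tau} = \frac{n}{1 + n/N_0},
\qquad
\left.\frac{m}{n}\right|_{\,n = 0.2\,N_0} = \frac{1}{1.2} \approx 0.83 .
```

    At the 20% of N_0 operating point mentioned in the Results, roughly one photon in six is therefore already lost to pileup even in this idealized picture.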

  15. Evaluation of bias and variance in low-count OSEM list mode reconstruction

    NASA Astrophysics Data System (ADS)

    Jian, Y.; Planeta, B.; Carson, R. E.

    2015-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.

  16. Tower counts

    USGS Publications Warehouse

    Woody, Carol Ann; Johnson, D.H.; Shrier, Brianna M.; O'Neal, Jennifer S.; Knutzen, John A.; Augerot, Xanthippe; O'Neal, Thomas A.; Pearsons, Todd N.

    2007-01-01

    Counting towers provide an accurate, low-cost, low-maintenance, low-technology, and easily mobilized escapement estimation program compared to other methods (e.g., weirs, hydroacoustics, mark-recapture, and aerial surveys) (Thompson 1962; Siebel 1967; Cousens et al. 1982; Symons and Waldichuk 1984; Anderson 2000; Alaska Department of Fish and Game 2003). Counting tower data have been found to be consistent with digital video counts (Edwards 2005). Counting towers do not interfere with natural fish migration patterns, nor are fish handled or stressed; however, their use is generally limited to clear rivers that meet specific site selection criteria. The data provided by counting tower sampling allow fishery managers to determine reproductive population size, estimate total return (escapement + catch) and its uncertainty, evaluate population productivity and trends, set harvest rates, determine spawning escapement goals, and forecast future returns (Alaska Department of Fish and Game 1974-2000 and 1975-2004). The number of spawning fish is determined by subtracting the subsistence harvest, sport-caught fish, and prespawn mortality from the total estimated escapement. The methods outlined in this protocol for tower counts can be used to provide reasonable estimates (±6%-10%) of reproductive salmon population size and run timing in clear rivers.

  17. Noise suppressed partial volume correction for cardiac SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu

    Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher's prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang's method) was then applied on the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructed using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a 99mTc-tetrofosmin myocardial perfusion scan and a 99mTc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple pinhole SPECT/CT at both high and low count levels. The authors then applied the proposed method on a canine equilibrium blood pool study following injection with 99mTc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods, including Yang's method and multitarget correction (MTC) applied on the MLEM reconstruction, and AMAP reconstruction without PVC. Results: The results showed that Yang's method improved quantification but yielded increased noise and reduced reproducibility in the regions with higher activity. MTC corrected for PVE on high count data with amplified noise, although it yielded the worst performance among all the

  18. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
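
    The model-comparison step, fitting candidate count distributions and ranking them by an information criterion, can be sketched with statsmodels as below; the counts and the intercept-only design are toy placeholders, and the zero-inflated variants are omitted for brevity.

```python
# Sketch of count-model selection by AIC: Poisson vs. negative binomial GLM.
# The data and the intercept-only model are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

counts = np.array([0, 0, 3, 0, 12, 0, 1, 0, 0, 7, 0, 2, 25, 0, 0, 4])
X = np.ones((counts.size, 1))        # intercept-only design matrix

poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
negbin_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

print(f"Poisson AIC:           {poisson_fit.aic:.1f}")
print(f"Negative binomial AIC: {negbin_fit.aic:.1f}")
# The lower AIC wins; overdispersed, zero-heavy counts such as these typically
# favour the negative binomial (or a zero-inflated) model over the Poisson.
```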

  19. Effect of Non-Alignment/Alignment of Attenuation Map Without/With Emission Motion Correction in Cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Dey, Joyoni; Segars, W. Paul; Pretorius, P. Hendrik; King, Michael A.

    2015-08-01

    Purpose: We investigate the differences without/with respiratory motion correction in apparent imaging agent localization induced in reconstructed emission images when the attenuation maps used for attenuation correction (from CT) are misaligned with the patient anatomy during emission imaging due to differences in respiratory state. Methods: We investigated use of attenuation maps acquired at different states of a 2 cm amplitude respiratory cycle (at end-expiration, at end-inspiration, the center map, the average transmission map, and a large breath-hold beyond range of respiration during emission imaging) to correct for attenuation in MLEM reconstruction for several anatomical variants of the NCAT phantom which included both with and without non-rigid motion between heart and sub-diaphragmatic regions (such as liver, kidneys etc). We tested these cases with and without emission motion correction and attenuation map alignment/non-alignment. Results: For the NCAT default male anatomy the false count-reduction due to breathing was largely removed upon emission motion correction for the large majority of the cases. Exceptions (for the default male) were for the cases when using the large-breathhold end-inspiration map (TI_EXT), when we used the end-expiration (TE) map, and to a smaller extent, the end-inspiration map (TI). However moving the attenuation maps rigidly to align the heart region, reduced the remaining count-reduction artifacts. For the female patient count-reduction remained post motion correction using rigid map-alignment due to the breast soft-tissue misalignment. Quantitatively, after the transmission (rigid) alignment correction, the polar-map 17-segment RMS error with respect to the reference (motion-less case) reduced by 46.5% on average for the extreme breathhold case. The reductions were 40.8% for end-expiration map and 31.9% for end-inspiration cases on the average, comparable to the semi-ideal case where each state uses its own attenuation map

  20. Elementary review of electron microprobe techniques and correction requirements

    NASA Technical Reports Server (NTRS)

    Hart, R. K.

    1968-01-01

    The report contains requirements for correcting instrument data on the chemical composition of a specimen obtained by electron microprobe analysis. A condensed review of electron microprobe techniques is presented, including background material on obtaining X-ray intensity data and on the absorption, atomic number, and fluorescence corrections.
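
    The three corrections named here are conventionally combined in the "ZAF" matrix-correction form; schematically (a standard textbook expression quoted for orientation, not taken from the report):

```latex
k = \frac{I_{\mathrm{specimen}}}{I_{\mathrm{standard}}},
\qquad
C_{\mathrm{specimen}} \simeq k \, C_{\mathrm{standard}} \, [Z]\,[A]\,[F],
```

    where [Z], [A] and [F] are the atomic-number, absorption and fluorescence correction factors applied to the measured X-ray intensity ratio k.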

  1. Power counting and modes in SCET

    NASA Astrophysics Data System (ADS)

    Goerke, Raymond; Luke, Michael

    2018-02-01

    We present a formulation of soft-collinear effective theory (SCET) in the two-jet sector as a theory of decoupled sectors of QCD coupled to Wilson lines. The formulation is manifestly boost-invariant, does not require the introduction of ultrasoft modes at the hard matching scale Q, and has manifest power counting in inverse powers of Q. The spurious infrared divergences which arise in SCET when ultrasoft modes are not included in loops disappear when the overlap between the sectors is correctly subtracted, in a manner similar to the familiar zero-bin subtraction of SCET. We illustrate this approach by analyzing deep inelastic scattering in the endpoint region in SCET and comment on other applications.

  2. Clustering method for counting passengers getting in a bus with single camera

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Zhang, Yanning; Shao, Dapei; Li, Ying

    2010-03-01

    Automatic counting of passengers is very important for both business and security applications. We present a single-camera-based vision system that is able to count passengers in a highly crowded situation at the entrance of a traffic bus. The unique characteristics of the proposed system include the following. First, a novel feature-point-tracking and online-clustering-based passenger counting framework, which performs much better than background-modeling and foreground-blob-tracking-based methods. Second, a simple and highly accurate clustering algorithm is developed that projects the high-dimensional feature point trajectories into a 2-D feature space by their appearance and disappearance times and counts the number of people through online clustering. Finally, all test video sequences in the experiment are captured from a real traffic bus in Shanghai, China. The results show that the system can process two 320×240 video sequences at a frame rate of 25 fps simultaneously, and can count passengers reliably in various difficult scenarios with complex interaction and occlusion among people. The method achieves high accuracy rates up to 96.5%.
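
    The core counting idea, reducing each tracked feature point to its appearance and disappearance times and counting clusters in that 2-D space, can be sketched as follows. DBSCAN stands in for the paper's own online clustering algorithm, and the parameters are illustrative assumptions.

```python
# Sketch of trajectory clustering in the (appearance time, disappearance time)
# plane; each cluster of feature-point trajectories is counted as one passenger.
import numpy as np
from sklearn.cluster import DBSCAN

def count_passengers(trajectories):
    """trajectories: iterable of (t_appear, t_disappear) pairs, in seconds."""
    feats = np.asarray(list(trajectories), dtype=float)
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(feats)
    return len(set(labels) - {-1})          # label -1 is DBSCAN noise

# Toy example: two bursts of feature points, i.e. roughly two passengers.
rng = np.random.default_rng(0)
appear = np.concatenate([rng.normal(2.0, 0.1, 30), rng.normal(5.0, 0.1, 30)])
trajs = [(a, a + 1.0 + rng.normal(0, 0.1)) for a in appear]
print(count_passengers(trajs))              # -> 2
```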

  3. Minor Distortions with Major Consequences: Correcting Distortions in Imaging Spectrographs

    PubMed Central

    Esmonde-White, Francis W. L.; Esmonde-White, Karen A.; Morris, Michael D.

    2010-01-01

    Projective transformation is a mathematical correction (implemented in software) used in the remote imaging field to produce distortion-free images. We present the application of projective transformation to correct minor alignment and astigmatism distortions that are inherent in dispersive spectrographs. Patterned white-light images and neon emission spectra were used to produce registration points for the transformation. Raman transects collected on microscopy and fiber-optic systems were corrected using established methods and compared with the same transects corrected using the projective transformation. Even minor distortions have a significant effect on reproducibility and apparent fluorescence background complexity. Simulated Raman spectra were used to optimize the projective transformation algorithm. We demonstrate that the projective transformation reduced the apparent fluorescent background complexity and improved reproducibility of measured parameters of Raman spectra. Distortion correction using a projective transformation provides a major advantage in reducing the background fluorescence complexity even in instrumentation where slit-image distortions and camera rotation were minimized using manual or mechanical means. We expect these advantages should be readily applicable to other spectroscopic modalities using dispersive imaging spectrographs. PMID:21211158
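
    In practice a projective (homography) correction of this kind can be applied with standard image-processing tools; the sketch below assumes the registration points have already been extracted from the patterned white-light and neon calibration data, and is an illustration rather than the authors' implementation.

```python
# Sketch of a projective-transformation correction for a distorted
# imaging-spectrograph frame, given matched registration points.
import cv2
import numpy as np

def correct_distortion(image: np.ndarray,
                       measured_pts: np.ndarray,
                       ideal_pts: np.ndarray) -> np.ndarray:
    """Warp `image` so measured_pts (Nx2 calibration features) land on ideal_pts."""
    H, _ = cv2.findHomography(measured_pts.astype(np.float32),
                              ideal_pts.astype(np.float32), method=0)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H, (w, h), flags=cv2.INTER_LINEAR)
```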

  4. Reducing the Child Poverty Rate. KIDS COUNT Indicator Brief

    ERIC Educational Resources Information Center

    Shore, Rima; Shore, Barbara

    2009-01-01

    In 2007, nearly one in five or 18 percent of children in the U.S. lived in poverty (KIDS COUNT Data Center, 2009). Many of these children come from minority backgrounds. African American (35 percent), American Indian (33 percent) and Latino (27 percent) children are more likely to live in poverty than their white (11 percent) and Asian (12…

  5. Infant Maltreatment-Related Mortality in Alaska: Correcting the Count and Using Birth Certificates to Predict Mortality

    ERIC Educational Resources Information Center

    Parrish, Jared W.; Gessner, Bradford D.

    2010-01-01

    Objectives: To accurately count the number of infant maltreatment-related fatalities and to use information from the birth certificates to predict infant maltreatment-related deaths. Methods: A population-based retrospective cohort study of infants born in Alaska for the years 1992 through 2005 was conducted. Risk factor variables were ascertained…

  6. Pixel-based CTE Correction of ACS/WFC: New Constraints from Short Darks

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; ACS Team

    2012-01-01

    The original Anderson & Bedin (2010) pixel-based correction for imperfect charge-transfer efficiency (CTE) in HST's ACS was based on a study of Warm Pixels (WPs) in a series of 1000s dark exposures. WPs with more than about 25 electrons were sufficiently isolated in these images that we could examine and model their trails. However, WPs with fewer electrons than this were more plentiful and suffered from significant crowding. To remedy this, we have taken a series of shorter dark exposures: 30s, 100s, and 339s. These supplemental exposures have two benefits. The first is that in the shorter exposures, 10 electron WPs are more sparse and their trails can be measured in isolation. The second benefit is that we can now get a handle on the absolute CTE losses, since the long-dark exposures can be used to accurately predict how many counts the WPs in the short-dark exposures should see. Any missing counts are a reflection of imperfect CTE. This new absolute handle on the CTE losses allows us to probe CTE even for very low charge packets. We find that CTE losses reach a nearly pathological level for charge packets with fewer than 20 electrons. Most ACS observations have backgrounds that are higher than this, so this does not have a large impact on science. Nevertheless, understanding CTE losses at all charge-packet levels is still important, as biases and darks often have low backgrounds. We note that these WP-based approaches to understanding CTE losses could be used in laboratory studies, as well. At present, many laboratory studies focus on Iron-55 sources, which all have 1620 electrons. Astronomical sources of interest are often fainter than this. By varying the dark exposure time, a wide diversity of WP intensities can be generated and cross-checked.

  7. The isotropic radio background revisited

    NASA Astrophysics Data System (ADS)

    Fornengo, Nicolao; Lineros, Roberto A.; Regis, Marco; Taoso, Marco

    2014-04-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.

  8. Association of BPD and IVH with early neutrophil and white counts in VLBW neonates with gestational age <32 weeks

    PubMed Central

    Palta, Mari; Sadek-Badawi, Mona; Carlton, David P

    2008-01-01

    Objectives To investigate associations between early low neutrophil count from routine blood samples, white blood count (WBC), pregnancy complications and neonatal outcomes for very low birth weight infants (VLBW, ≤1500 g) with gestational age <32 weeks. Patients and Methods Information was abstracted on all infants admitted to level III NICUs in Wisconsin in 2003-2004. 1002 (78%) had differential and corrected total white counts within 2 ½ hours of birth. Data analyses included frequency tables and binary logistic, ordinal logistic and ordinary regression. Results Low neutrophil count (<1000/μL) was strongly associated with low WBC, pregnancy complications and antenatal steroids. Low neutrophil count predicted bronchopulmonary dysplasia severity level (BPD) (OR: 1.7, 95% CI: 1.1-2.7) and intraventricular hemorrhage (IVH) grade (OR: 2.2, 95% CI: 1.3-3.8). Conclusions Early neutrophil counts may have multiple causes interfering with their routine use as an inflammatory marker. Nonetheless, low neutrophil count has consistent independent associations with outcomes. PMID:18563166

  9. Film Vetoes for Alpha Background Rejection in Bolometer Detectors

    NASA Astrophysics Data System (ADS)

    Deporzio, Nicholas; Bucci, Carlo; Canonica, Lucia; Divacri, Marialaura; Cuore Collaboration; Absurd Team

    2015-04-01

    This study characterizes the effectiveness of encasing bolometer detectors in scintillator, metal ionization, and more exotic films to veto alpha radiation background. Bolometers are highly susceptible to alpha background, and a successful veto should boost the statistical strength, speed, and signal-background ratio of bolometer particle searches. Plastic scintillator films are cooled to bolometer temperatures and bombarded with 1.4 MeV to 6.0 MeV alpha particles representative of detector conditions. Photomultipliers detect the keV-range scintillation light and produce a veto signal. Also, layered films of a primary metal, dielectric, and secondary metal, such as gold-polyethylene-gold films, are cooled to milli-kelvin temperatures and biased with 0.1 V to 100 V to produce a current signal when incident 1.4 MeV to 6.0 MeV alpha particles ionize conduction paths through the film. Veto signals are characterized by their effect on bolometer detection of 865 keV target signals. Similar methods are applied to more exotic films. Early results show scintillator films raise the target signal count rate and suppress counts above the target energy by at least a factor of 10. This indicates scintillation vetoes are effective and that metal ionization and other films under study will also be effective.

  10. Holographic corrections to the Veneziano amplitude

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-08-01

    We propose a holographic computation of the 2 → 2 meson scattering in a curved string background, dual to a QCD-like theory. We recover the Veneziano amplitude and compute a perturbative correction due to the background curvature. The result implies a small deviation from a linear trajectory, which is a requirement of the UV regime of QCD.

  11. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well-established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improvement in optical penetration of infrared light compared with linear excitation, due to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a Spatial Light Modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics, implemented by means of Zernike polynomials, is used to correct for system-induced aberrations. Here we present results with significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024 pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
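
    For orientation, the basic (unweighted, two-dimensional) Gerchberg-Saxton iteration used to compute a phase-only SLM pattern for a desired focal-plane intensity can be sketched as follows; the weighted 3D variant used in the paper is more involved.

```python
# Minimal 2-D Gerchberg-Saxton sketch: find an SLM phase mask whose far field
# approximates a target intensity pattern (e.g. a grid of foci).
import numpy as np

def gerchberg_saxton(target_intensity: np.ndarray, n_iter: int = 50) -> np.ndarray:
    target_amp = np.sqrt(target_intensity)
    phase = np.random.default_rng(0).uniform(0.0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        pupil = np.exp(1j * phase)                           # phase-only SLM, unit amplitude
        focal = np.fft.fft2(pupil)                           # propagate to the focal plane
        focal = target_amp * np.exp(1j * np.angle(focal))    # impose the target amplitude
        phase = np.angle(np.fft.ifft2(focal))                # back-propagate, keep phase only
    return phase                                             # phase mask for the SLM
```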

  12. Fast radio burst event rate counts - I. Interpreting the observations

    NASA Astrophysics Data System (ADS)

    Macquart, J.-P.; Ekers, R. D.

    2018-02-01

    The fluence distribution of the fast radio burst (FRB) population (the 'source count' distribution, N(>F) ∝ F^α) is a crucial diagnostic of its distance distribution, and hence the progenitor evolutionary history. We critically reanalyse current estimates of the FRB source count distribution. We demonstrate that the Lorimer burst (FRB 010724) is subject to discovery bias, and should be excluded from all statistical studies of the population. We re-examine the evidence for flat, α > -1, source count estimates based on the ratio of single-beam to multiple-beam detections with the Parkes multibeam receiver, and show that current data imply only a very weak constraint of α ≲ -1.3. A maximum-likelihood analysis applied to the portion of the Parkes FRB population detected above the observational completeness fluence of 2 Jy ms yields α = -2.6 (+0.7, -1.3). Uncertainties in the location of each FRB within the Parkes beam render estimates of the Parkes event rate uncertain in both the normalizing survey area and the estimated post-beam-corrected completeness fluence; this uncertainty needs to be accounted for when comparing the event rate against event rates measured at other telescopes.
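
    The maximum-likelihood slope estimate above the completeness fluence can be reproduced with the standard power-law estimator; the sketch below uses toy fluences rather than the Parkes sample.

```python
# Maximum-likelihood estimate of the cumulative source-count slope alpha,
# where N(>F) is proportional to F**alpha for fluences F >= f_min.
import numpy as np

def ml_source_count_slope(fluences, f_min=2.0):
    """Return (alpha_hat, sigma) using events with fluence >= f_min (Jy ms)."""
    f = np.asarray(fluences, dtype=float)
    f = f[f >= f_min]
    n = f.size
    alpha_hat = -n / np.sum(np.log(f / f_min))
    sigma = abs(alpha_hat) / np.sqrt(n)          # asymptotic 1-sigma uncertainty
    return alpha_hat, sigma

# Toy check: fluences drawn from N(>F) proportional to F^-1.5 above 2 Jy ms.
rng = np.random.default_rng(3)
sample = 2.0 * rng.uniform(size=200) ** (-1.0 / 1.5)
print(ml_source_count_slope(sample))             # close to (-1.5, ~0.1)
```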

  13. Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction

    PubMed Central

    Jian, Y; Planeta, B; Carson, R E

    2016-01-01

    Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254

  14. Rain-induced increase in background radiation detected by Radiation Portal Monitors.

    PubMed

    Livesay, R J; Blessinger, C S; Guzzardo, T F; Hausladen, P A

    2014-11-01

    A complete understanding of both the steady state and transient background measured by Radiation Portal Monitors (RPMs) is essential to predictable system performance, as well as maximization of detection sensitivity. To facilitate this understanding, a test bed for the study of natural background in RPMs has been established at the Oak Ridge National Laboratory. This work was performed in support of the Second Line of Defense Program's mission to enhance partner country capability to deter, detect, and interdict the illicit movement of special nuclear material. In the present work, transient increases in gamma-ray counting rates in RPMs due to rain are investigated. The increase in background activity associated with rain, which has been well documented in the field of environmental radioactivity, originates primarily from the wet-deposition of two radioactive daughters of (222)Rn, namely, (214)Pb and (214)Bi. In this study, rainfall rates recorded by a co-located weather station are compared with RPM count rates and high-purity germanium spectra. The data verify that these radionuclides are responsible for the largest environmental background fluctuations in RPMs. Analytical expressions for the detector response function in Poly-Vinyl Toluene have been derived. Effects on system performance and potential mitigation strategies are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. The edge artifact in the point-spread function-based PET reconstruction at different sphere-to-background ratios of radioactivity.

    PubMed

    Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki

    2016-02-01

    The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with 37, 28, 22, 17, 13 and 10 mm in inner diameter. The background was filled with (18)F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm, and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses were performed by a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ % counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images in low SBR. These edge artifacts were clearly observed in relation to the increase of the SBR. The overestimation of the CRC was observed in 13 mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ % counts increased with an increasing SBR. The Δ % counts increased to 91 % in the 10-mm sphere at the SBR of 16. The edge artifacts in the PET images reconstructed using the PSF algorithm increased with an increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of spheres and could result in overestimation.

  16. Systematic measurement of fast neutron background fluctuations in an urban area using a mobile detection system

    DOE PAGES

    Iyengar, Anagha; Beach, Matthew; Newby, Robert J.; ...

    2015-11-12

    Neutron background measurements using a mobile trailer-based system were conducted in Knoxville, Tennessee. The 0.5 m2 system consisting of 8 EJ-301 liquid scintillation detectors was used to collect neutron background measurements in order to better understand the systematic background variations that depend solely on the street-level measurement position in a local, downtown area. Data was collected along 5 different streets in the downtown Knoxville area, and the measurements were found to be repeatable. Using 10-min measurements, fractional uncertainty in each measured data point was <2%. Compared with fast neutron background count rates measured away from downtown Knoxville, a reduction in background count rates ranging from 10-50% was observed in the downtown area, sometimes varying substantially over distances of tens of meters. These reductions are attributed to the shielding of adjacent buildings, quantified in part here by the metric angle-of-open-sky. The adjacent buildings may serve to shield cosmic ray neutron flux.

  17. Systematic measurement of fast neutron background fluctuations in an urban area using a mobile detection system

    NASA Astrophysics Data System (ADS)

    Iyengar, A.; Beach, M.; Newby, R. J.; Fabris, L.; Heilbronn, L. H.; Hayward, J. P.

    2015-02-01

    Neutron background measurements using a mobile trailer-based system were conducted in Knoxville, Tennessee, USA. The 0.5 m2 system, consisting of eight EJ-301 liquid scintillation detectors, was used to collect neutron background measurements in order to better understand the systematic variations in background that depend solely on the street-level measurement position in a downtown area. Data was collected along 5 different streets, and the measurements were found to be repeatable. Using 10-min measurements, the fractional uncertainty in each measured data point was <2%. Compared with fast neutron background count rates measured away from downtown Knoxville, a reduction in background count rates ranging from 10% to 50% was observed in the downtown area, sometimes varying substantially over distances of tens of meters. These reductions are attributed to the net shielding of the cosmic ray neutron flux by adjacent buildings. For reference, the building structure as observed at street level is quantified in part here by a measured angle-of-open-sky metric.

  18. Lensing corrections to features in the angular two-point correlation function and power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam

    2008-01-15

    It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ) and the angular power spectrum C_l, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift, and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_l occur for measurements at high redshifts (z ≳ 1.5) and low multipole moments (l ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ≈ 1.5, and by z ≈ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲ 1% and the width is typically increased by ~10%. Shifts of ≳ 0.5% and broadening ≳ 20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data are fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.
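
    For reference, the leading-order magnification-bias relation underlying this kind of analysis can be written as follows (a standard textbook form, not an equation reproduced from the paper); the (5s-2) weighting is the origin of the (5s-2)/b combination quoted above:

        \delta_{\mathrm{obs}}(\hat{\mathbf{n}}) \simeq b\,\delta_m(\hat{\mathbf{n}}) + (5s-2)\,\kappa(\hat{\mathbf{n}}),
        \qquad
        w_{\mathrm{obs}}(\theta) \simeq b^2 w_{mm}(\theta) + 2b\,(5s-2)\,w_{m\kappa}(\theta) + (5s-2)^2 w_{\kappa\kappa}(\theta),

    where κ is the lensing convergence integrated over the source selection function, b is the galaxy bias, and s is the number count slope; the last two terms are the magnification corrections discussed above.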

  19. Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.

    PubMed

    Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien

    2017-01-01

    Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction and motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.

  20. Photon-counting intensified random-access charge injection device

    NASA Astrophysics Data System (ADS)

    Norton, Timothy J.; Morrissey, Patrick F.; Haas, Patrick; Payne, Leslie J.; Carbone, Joseph; Kimble, Randy A.

    1999-11-01

    At NASA GSFC we are developing a high resolution solar-blind photon counting detector system for UV space based astronomy. The detector comprises a high gain MCP intensifier fiber-optically coupled to a charge injection device (CID). The detector system utilizes an FPGA-based centroiding system to locate the center of photon events from the intensifier to high accuracy. The photon event addresses are passed via a PCI interface, with a GPS-derived time stamp inserted per frame, to an integrating memory. Here we present imaging performance data which show resolution of the MCP tube pore structure at an MCP pore diameter of 8 micrometers. These data validate the ICID concept for intensified photon counting readout. We also discuss correction techniques used in the removal of fixed pattern noise effects inherent in the centroiding algorithms used, and present data which show the local dynamic range of the device. Progress towards development of a true random access CID (RACID 810) is also discussed, and astronomical data taken with the ICID detector system demonstrating the photon event time-tagging mode of the system are also presented.

  1. Visits, Hits, Caching and Counting on the World Wide Web: Old Wine in New Bottles?

    ERIC Educational Resources Information Center

    Berthon, Pierre; Pitt, Leyland; Prendergast, Gerard

    1997-01-01

    Although web browser caching speeds up retrieval, reduces network traffic, and decreases the load on servers and browser's computers, an unintended consequence for marketing research is that Web servers undercount hits. This article explores counting problems, caching, proxy servers, trawler software and presents a series of correction factors…

  2. Probing cluster potentials through gravitational lensing of background X-ray sources

    NASA Technical Reports Server (NTRS)

    Refregier, A.; Loeb, A.

    1996-01-01

    The gravitational lensing effect of a foreground galaxy cluster, on the number count statistics of background X-ray sources, was examined. The lensing produces a deficit in the number of resolved sources in a ring close to the critical radius of the cluster. The cluster lens can be used as a natural telescope to study the faint end of the (log N)-(log S) relation for the sources which account for the X-ray background.

  3. A unified genetic association test robust to latent population structure for a count phenotype.

    PubMed

    Song, Minsun

    2018-06-04

    Confounding caused by latent population structure in genome-wide association studies has been a big concern despite the success of genome-wide association studies at identifying genetic variants associated with complex diseases. In particular, because of the growing interest in association mapping using count phenotype data, it would be interesting to develop a testing framework for genetic associations that is immune to population structure when phenotype data consist of count measurements. Here, I propose a solution for testing associations between single nucleotide polymorphisms and a count phenotype in the presence of an arbitrary population structure. I consider a classical range of models for count phenotype data. Under these models, a unified test for genetic associations that protects against confounding was derived. An algorithm was developed to efficiently estimate the parameters that are required to fit the proposed model. I illustrate the proposed approach using simulation studies and an empirical study. Both simulated and real-data examples suggest that the proposed method successfully corrects population structure. Copyright © 2018 John Wiley & Sons, Ltd.

  4. 76 FR 56949 - Biomass Crop Assistance Program; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-15

    .... ACTION: Interim rule; correction. SUMMARY: The Commodity Credit Corporation (CCC) is amending the Biomass... funds in favor of the ``project area'' portion of BCAP. CCC is also correcting errors in the regulation... INFORMATION: Background CCC published a final rule on October 27, 2010 (75 FR 66202-66243) implementing BCAP...

  5. Pile-up corrections in laser-driven pulsed X-ray sources

    NASA Astrophysics Data System (ADS)

    Hernández, G.; Fernández, F.

    2018-06-01

    A formalism for treating the pile-up produced in solid-state detectors by laser-driven pulsed X-ray sources has been developed. It allows the direct use of X-ray spectroscopy without artificially decreasing the number of counts in the detector, assuming the duration of a pulse is much shorter than the detector response time and the loss of counts from the energy window of the detector can be modeled or neglected. Experimental application shows that accepting a small amount of pile-up and correcting for it afterwards improves the signal-to-noise ratio, which is more beneficial than the strict single-hit condition usually imposed on these detectors.
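
    A minimal sketch of the pile-up composition idea described above (not the authors' formalism): if the number of photons per pulse is Poisson distributed and the pulse is much shorter than the detector response, each recorded event is the sum of all photons in the pulse, so the recorded spectrum is a Poisson-weighted sum of self-convolutions of the single-photon spectrum. Function and parameter names below are illustrative assumptions.

        import numpy as np
        from math import exp, factorial

        def piled_up_spectrum(single_spectrum, mean_photons_per_pulse, max_order=5):
            # Recorded per-pulse spectrum when every photon arriving within one pulse
            # sums into a single detector event (pulse duration << response time).
            lam = mean_photons_per_pulse
            spec_k = np.asarray(single_spectrum, dtype=float)   # k-fold convolution, k = 1
            out = np.zeros(len(single_spectrum) * max_order)
            for k in range(1, max_order + 1):
                weight = exp(-lam) * lam**k / factorial(k)      # Poisson probability of k photons
                out[:len(spec_k)] += weight * spec_k
                spec_k = np.convolve(spec_k, single_spectrum)   # prepare (k+1)-fold convolution
            return out / out.sum()

        # Illustrative single-photon spectrum (two lines on a flat continuum)
        spec = np.array([0.1, 0.1, 1.0, 0.1, 0.5, 0.1])
        print(piled_up_spectrum(spec, mean_photons_per_pulse=0.3)[:12])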

  6. Endothelial Progenitor Cells (EPC) Count by Multicolor Flow Cytometry in Healthy Individuals and Diabetes Mellitus (DM) Patients.

    PubMed

    Falay, Mesude; Aktas, Server

    2016-11-01

    The present study aimed to determine circulating Endothelial Progenitor Cell (EPC) counts by multicolor flow cytometry in healthy individuals and diabetic subjects by establishing an analysis procedure using a combination of monoclonal antibodies (moAbs) that correctly detects the circulating EPC count. The circulating EPC count was determined in 40 healthy individuals (20 female, 20 male; age range: 26 - 50 years) and 30 Diabetes Mellitus (DM) patients (15 female, 15 male; age range: 42 - 55 years) by multicolor flow cytometry (FCM) in a single-tube panel consisting of CD45/CD31/CD34/CD309 monoclonal antibodies and SYTO® 16. The circulating EPC count was 11.33 (7.89 - 15.25) cells/µL in the healthy control group and 4.80 (0.70 - 10.85) cells/µL in the DM group. EPC counts were significantly lower in DM cases that developed coronary artery disease (53.3%) as compared to those that did not (p < 0.001). In the present study, we describe a method that identifies circulating EPC counts by multicolor flow cytometry in a single tube and determines the circulating EPC count in healthy individuals. This is the first study conducted on EPC counts in the Turkish population. We think that the EPC count found in the present study will be a guide for future studies.

  7. Short communication: Repeatability of differential goat bulk milk culture and associations with somatic cell count, total bacterial count, and standard plate count.

    PubMed

    Koop, G; Dik, N; Nielen, M; Lipman, L J A

    2010-06-01

    The aims of this study were to assess how different bacterial groups in bulk milk are related to bulk milk somatic cell count (SCC), bulk milk total bacterial count (TBC), and bulk milk standard plate count (SPC) and to measure the repeatability of bulk milk culturing. On 53 Dutch dairy goat farms, 3 bulk milk samples were collected at intervals of 2 wk. The samples were cultured for SPC, coliform count, and staphylococcal count and for the presence of Staphylococcus aureus. Furthermore, SCC (Fossomatic 5000, Foss, Hillerød, Denmark) and TBC (BactoScan FC 150, Foss) were measured. Staphylococcal count was correlated to SCC (r=0.40), TBC (r=0.51), and SPC (r=0.53). Coliform count was correlated to TBC (r=0.33), but not to any of the other variables. Staphylococcus aureus did not correlate to SCC. The contribution of the staphylococcal count to the SPC was 31%, whereas the coliform count comprised only 1% of the SPC. The agreement of the repeated measurements was low. This study indicates that staphylococci in goat bulk milk are related to SCC and make a significant contribution to SPC. Because of the high variation in bacterial counts, repeated sampling is necessary to draw valid conclusions from bulk milk culturing. 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.

    PubMed

    Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer

    2015-11-01

    Counting objects is a fundamental image processing primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, which combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal, in that they ask as few questions as possible to humans (under certain intuitively reasonable assumptions that we justify in our paper experimentally); (ii) they operate under stand-alone or hybrid modes, in that they can either work independently of computer vision algorithms, or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.

  9. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
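
    The correction described above lends itself to a very small sketch: estimate the scatter component by smoothing the attenuation-corrected image with a scatter function and scaling by an image-based scatter fraction, then subtract. The Gaussian kernel and the uniform fraction below are placeholders, not the published scatter function or fraction.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def ibsc_correct(image_ac, scatter_sigma_px=8.0, scatter_fraction=0.3):
            # scatter estimate = (attenuation-corrected image convolved with a scatter kernel)
            #                    * image-based scatter fraction; corrected = original - estimate
            scatter_estimate = gaussian_filter(image_ac, sigma=scatter_sigma_px) * scatter_fraction
            return image_ac - scatter_estimate

        # Toy usage on a synthetic image
        corrected = ibsc_correct(np.random.poisson(50.0, size=(128, 128)).astype(float))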

  10. Analysis techniques for background rejection at the Majorana Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cuestra, Clara; Rielage, Keith Robert; Elliott, Steven Ray

    2015-06-11

    The MAJORANA Collaboration is constructing the MAJORANA DEMONSTRATOR, an ultra-low background, 40-kg modular HPGe detector array to search for neutrinoless double beta decay in 76Ge. In view of the next generation of tonne-scale Ge-based 0νββ-decay searches that will probe the neutrino mass scale in the inverted-hierarchy region, a major goal of the MAJORANA DEMONSTRATOR is to demonstrate a path forward to achieving a background rate at or below 1 count/tonne/year in the 4 keV region of interest around the Q-value at 2039 keV. The background rejection techniques to be applied to the data include cuts based on data reduction, pulse shape analysis, event coincidences, and time correlations. The Point Contact design of the DEMONSTRATOR's germanium detectors allows for significant reduction of gamma background.

  11. Electroweak Corrections to pp→μ^{+}μ^{-}e^{+}e^{-}+X at the LHC: A Higgs Boson Background Study.

    PubMed

    Biedermann, B; Denner, A; Dittmaier, S; Hofer, L; Jäger, B

    2016-04-22

    The first complete calculation of the next-to-leading-order electroweak corrections to four-lepton production at the LHC is presented, where all off-shell effects of intermediate Z bosons and photons are taken into account. Focusing on the mixed final state μ^{+}μ^{-}e^{+}e^{-}, we study differential cross sections that are particularly interesting for Higgs boson analyses. The electroweak corrections are divided into photonic and purely weak corrections. The former exhibit patterns familiar from similar W- or Z-boson production processes with very large radiative tails near resonances and kinematical shoulders. The weak corrections are of the generic size of 5% and show interesting variations, in particular, a sign change between the regions of resonant Z-pair production and the Higgs signal.

  12. Kids Count in Delaware, Families Count in Delaware: Fact Book, 2002.

    ERIC Educational Resources Information Center

    Delaware Univ., Newark. Kids Count in Delaware.

    This Kids Count Fact Book is combined with the Families Count Fact Book to provide information on statewide trends affecting children and families in Delaware. The Kids Count statistical profile is based on 11 main indicators of child well-being: (1) births to teens 15-17 years; (2) births to teens 10 to 14 years; (3) low birth weight babies; (3)…

  13. High-energy electrons from the muon decay in orbit: Radiative corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szafron, Robert; Czarnecki, Andrzej

    2015-12-07

    We determine the O(α) correction to the energy spectrum of electrons produced in the decay of muons bound in atoms. We focus on the high-energy end of the spectrum that constitutes a background for the muon-electron conversion and will be precisely measured by the upcoming experiments Mu2e and COMET. The correction suppresses the background by about 20%.

  14. Prioritizing CD4 Count Monitoring in Response to ART in Resource-Constrained Settings: A Retrospective Application of Prediction-Based Classification

    PubMed Central

    Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.

    2012-01-01

    Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation

  15. Nutsedge Counts Predict Meloidogyne incognita Juvenile Counts in an Integrated Management System.

    PubMed

    Ou, Zhining; Murray, Leigh; Thomas, Stephen H; Schroeder, Jill; Libbin, James

    2008-06-01

    The southern root-knot nematode (Meloidogyne incognita), yellow nutsedge (Cyperus esculentus) and purple nutsedge (Cyperus rotundus) are important pests in crops grown in the southern US. Management of the individual pests rather than the pest complex is often unsuccessful due to mutually beneficial pest interactions. In an integrated pest management scheme using alfalfa to suppress nutsedges and M. incognita, we evaluated quadratic polynomial regression models for prediction of the number of M. incognita J2 in soil samples as a function of yellow and purple nutsedge plant counts, squares of nutsedge counts and the cross-product between nutsedge counts. In May 2005, purple nutsedge plant count was a significant predictor of M. incognita count. In July and September 2005, counts of both nutsedges and the cross-product were significant predictors. In 2006, the second year of the alfalfa rotation, counts of all three species were reduced. As a likely consequence, the predictive relationship between nutsedges and M. incognita was not significant for May and July. In September 2006, purple nutsedge was a significant predictor of M. incognita. These results lead us to conclude that nutsedge plant counts in a field infested with the M. incognita-nutsedge pest complex can be used as a visual predictor of M. incognita J2 populations, unless the numbers of nutsedge plants and M. incognita are all very low.
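
    The regression model described above (counts of the two nutsedges, their squares, and their cross-product as predictors of M. incognita J2 counts) can be sketched as an ordinary least squares fit of the quadratic polynomial form; the counts below are invented purely to show the model structure.

        import numpy as np

        def design_matrix(yellow, purple):
            # J2 ~ b0 + b1*Y + b2*P + b3*Y^2 + b4*P^2 + b5*Y*P
            y, p = np.asarray(yellow, float), np.asarray(purple, float)
            return np.column_stack([np.ones_like(y), y, p, y**2, p**2, y * p])

        # Hypothetical plot-level counts, for illustration only
        yellow = np.array([0, 2, 5, 8, 12, 20, 25])
        purple = np.array([1, 3, 4, 9, 15, 18, 22])
        j2     = np.array([5, 12, 20, 55, 130, 260, 340])
        coeffs, *_ = np.linalg.lstsq(design_matrix(yellow, purple), j2, rcond=None)
        predicted_j2 = design_matrix(yellow, purple) @ coeffs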

  16. Nutsedge Counts Predict Meloidogyne incognita Juvenile Counts in an Integrated Management System

    PubMed Central

    Ou, Zhining; Murray, Leigh; Thomas, Stephen H.; Schroeder, Jill; Libbin, James

    2008-01-01

    The southern root-knot nematode (Meloidogyne incognita), yellow nutsedge (Cyperus esculentus) and purple nutsedge (Cyperus rotundus) are important pests in crops grown in the southern US. Management of the individual pests rather than the pest complex is often unsuccessful due to mutually beneficial pest interactions. In an integrated pest management scheme using alfalfa to suppress nutsedges and M. incognita, we evaluated quadratic polynomial regression models for prediction of the number of M. incognita J2 in soil samples as a function of yellow and purple nutsedge plant counts, squares of nutsedge counts and the cross-product between nutsedge counts. In May 2005, purple nutsedge plant count was a significant predictor of M. incognita count. In July and September 2005, counts of both nutsedges and the cross-product were significant predictors. In 2006, the second year of the alfalfa rotation, counts of all three species were reduced. As a likely consequence, the predictive relationship between nutsedges and M. incognita was not significant for May and July. In September 2006, purple nutsedge was a significant predictor of M. incognita. These results lead us to conclude that nutsedge plant counts in a field infested with the M. incognita-nutsedge pest complex can be used as a visual predictor of M. incognita J2 populations, unless the numbers of nutsedge plants and M. incognita are all very low. PMID:19259526

  17. Language and counting: Some recent results

    NASA Astrophysics Data System (ADS)

    Bell, Garry

    1990-02-01

    It has long been recognised that the language of mathematics is an important variable in the learning of mathematics, and there has been useful work in isolating and describing the linkage. Steffe and his co-workers at Georgia, for example, (Steffe, von Glasersfeld, Richardson and Cobb, 1983) have suggested that young children may construct verbal countable items to count objects which are hidden from their view. Although there has been a surge of research interest in counting and early childhood mathematics, and in cultural differences in mathematics attainment, there has been little work reported on the linkage between culture as exemplified by language, and initial concepts of numeration. This paper reports on some recent clinical research with kindergarten children of European and Asian background in Australia and America. The research examines the influence that number naming grammar appears to have on young children's understandings of two-digit numbers and place value. It appears that Transparent Standard Number Word Sequences such as Japanese, Chinese and Vietnamese which follow the numerical representation pattern by naming tens and units in order ("two tens three"), may be associated with distinctive place value concepts which may support sophisticated mental algorithms.

  18. White blood cell counting analysis of blood smear images using various segmentation strategies

    NASA Astrophysics Data System (ADS)

    Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza

    2017-09-01

    In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting, segmentation is the crucial step for ensuring the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to find the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using a combination of color channel subtractions drawn from the RGB, CMYK and HSV color spaces, followed by Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Finally, the Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBCs, including those within clumped regions. From the experiment, it is found that the G-S combination yields the best performance.
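
    A hedged OpenCV sketch of the pipeline summarized above (channel subtraction, Otsu thresholding, morphological/CCL cleanup, and a circle Hough transform). The channel pair follows the reported G-S combination, interpreted here as the RGB green channel minus the HSV saturation channel, which is an assumption; the thresholds, kernel sizes, and Hough parameters are illustrative guesses rather than the paper's tuned values.

        import cv2
        import numpy as np

        def count_wbc(bgr_image):
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
            g = bgr_image[:, :, 1]          # G channel from the RGB analysis
            s = hsv[:, :, 1]                # S channel from the HSV analysis
            diff = cv2.subtract(g, s)       # "G-S" channel subtraction
            _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove small noise
            n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
            kept = [i for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] > 200]  # CCL filter
            circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=2, minDist=25,
                                       param1=100, param2=30, minRadius=10, maxRadius=60)
            n_circles = 0 if circles is None else circles.shape[1]
            return len(kept), n_circles     # component count and CHT count (handles clumped cells)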

  19. On the use of positron counting for radio-Assay in nuclear pharmaceutical production.

    PubMed

    Maneuski, D; Giacomelli, F; Lemaire, C; Pimlott, S; Plenevaux, A; Owens, J; O'Shea, V; Luxen, A

    2017-07-01

    Current techniques for the measurement of radioactivity at various points during PET radiopharmaceutical production and R&D are based on the detection of the annihilation gamma rays from the radionuclide in the labelled compound. The detection systems to measure these gamma rays are usually variations of NaI or CsF scintillation based systems requiring costly and heavy lead shielding to reduce background noise. These detectors inherently suffer from low detection efficiency, high background noise and very poor linearity. They are also unable to provide any reasonably useful position information. A novel positron counting technique is proposed for the radioactivity assay during radiopharmaceutical manufacturing that overcomes these limitations. Detection of positrons instead of gammas offers an unprecedented level of position resolution of the radiation source (down to sub-mm) thanks to the nature of the positron interaction with matter. Counting capability instead of charge integration in the detector brings the sensitivity down to the statistical limits at the same time as offering very high dynamic range and linearity from zero to any arbitrarily high activity. This paper reports on a quantitative comparison between conventional detector systems and the proposed positron counting detector. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  1. Review of approaches to the recording of background lesions in toxicologic pathology studies in rats.

    PubMed

    McInnes, E F; Scudamore, C L

    2014-08-17

    Pathological evaluation of lesions caused directly by xenobiotic treatment must always take into account the recognition of background (incidental) findings. Background lesions can be congenital or hereditary, histological variations, changes related to trauma or normal aging and physiologic or hormonal changes. This review focuses on the importance and correct approach to recording of background changes and includes discussion on sources of variability in background changes, the correct use of terminology, the concept of thresholds, historical control data, diagnostic drift, blind reading of slides, scoring and artifacts. The review is illustrated with background lesions in Sprague Dawley and Wistar rats. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Novel Photon-Counting Detectors for Free-Space Communication

    NASA Technical Reports Server (NTRS)

    Krainak, M. A.; Yang, G.; Sun, X.; Lu, W.; Merritt, S.; Beck, J.

    2016-01-01

    We present performance data for novel photon-counting detectors for free space optical communication. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We present and compare dark count, photon-detection efficiency, wavelength response and communication performance data for these detectors. We successfully measured real-time communication performance using both the 2 detected-photon threshold and AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects. The HgCdTe APD array routinely demonstrated photon detection efficiencies of greater than 50% across 5 arrays, with one array reaching a maximum PDE of 70%. We performed high-resolution pixel-surface spot scans and measured the junction diameters of its diodes. We found that decreasing the junction diameter from 31 micrometers to 25 micrometers more than doubled the e-APD gain, from 470 for an array produced in the year 2010 to 1100 on an array delivered to NASA GSFC recently. The mean single-photon SNR was over 12 and the excess noise factor measurements were 1.2-1.3. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output.

  3. Disk-based k-mer counting on a PC

    PubMed Central

    2013-01-01

    Background The k-mer counting problem, which is to build the histogram of occurrences of every k-symbol long substring in a given text, is important for many bioinformatics applications. They include developing de Bruijn graph genome assemblers, fast multiple sequence alignment and repeat detection. Results We propose a simple, yet efficient, parallel disk-based algorithm for counting k-mers. Experiments show that it usually offers the fastest solution to the considered problem, while demanding a relatively small amount of memory. In particular, it is capable of counting the statistics for short-read human genome data, in input gzipped FASTQ file, in less than 40 minutes on a PC with 16 GB of RAM and 6 CPU cores, and for long-read human genome data in less than 70 minutes. On a more powerful machine, using 32 GB of RAM and 32 CPU cores, the tasks are accomplished in less than half the time. No other algorithm for most tested settings of this problem and mammalian-size data can accomplish this task in comparable time. Our solution also belongs to memory-frugal ones; most competitive algorithms cannot efficiently work on a PC with 16 GB of memory for such massive data. Conclusions By making use of cheap disk space and exploiting CPU and I/O parallelism we propose a very competitive k-mer counting procedure, called KMC. Our results suggest that judicious resource management may allow to solve at least some bioinformatics problems with massive data on a commodity personal computer. PMID:23679007
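
    For readers unfamiliar with the problem statement, the following minimal Python sketch builds the k-mer histogram in memory; KMC's contribution, as described above, is doing the same out-of-core and in parallel for read sets far too large for such a naive approach.

        from collections import Counter

        def kmer_histogram(sequence, k):
            # Count every k-symbol substring of the input text
            return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

        print(kmer_histogram("ACGTACGTAC", 3))
        # Counter({'ACG': 2, 'CGT': 2, 'GTA': 2, 'TAC': 2})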

  4. A normalization strategy for comparing tag count data

    PubMed Central

    2012-01-01

    Background High-throughput sequencing, such as ribonucleic acid sequencing (RNA-seq) and chromatin immunoprecipitation sequencing (ChIP-seq) analyses, enables various features of organisms to be compared through tag counts. Recent studies have demonstrated that the normalization step for RNA-seq data is critical for a more accurate subsequent analysis of differential gene expression. Development of a more robust normalization method is desirable for identifying the true difference in tag count data. Results We describe a strategy for normalizing tag count data, focusing on RNA-seq. The key concept is to remove data assigned as potential differentially expressed genes (DEGs) before calculating the normalization factor. Several R packages for identifying DEGs are currently available, and each package uses its own normalization method and gene ranking algorithm. We compared a total of eight package combinations: four R packages (edgeR, DESeq, baySeq, and NBPSeq) with their default normalization settings and with our normalization strategy. Many synthetic datasets under various scenarios were evaluated on the basis of the area under the curve (AUC) as a measure for both sensitivity and specificity. We found that packages using our strategy in the data normalization step overall performed well. This result was also observed for a real experimental dataset. Conclusion Our results showed that the elimination of potential DEGs is essential for more accurate normalization of RNA-seq data. The concept of this normalization strategy can widely be applied to other types of tag count data and to microarray data. PMID:22475125
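
    The key concept above (remove putative DEGs before computing normalization factors) can be sketched in a few lines; the DEG flags would come from a first-pass test, and the simple library-size factors below stand in for whichever normalization method the chosen R package actually uses.

        import numpy as np

        def size_factors_without_degs(counts, potential_deg_mask):
            # counts: genes x samples array; potential_deg_mask: boolean vector over genes
            kept = counts[~potential_deg_mask]          # drop rows flagged as potential DEGs
            lib_sizes = kept.sum(axis=0).astype(float)  # per-sample totals over the remainder
            return lib_sizes / lib_sizes.mean()         # relative scaling factors

        counts = np.array([[100, 210], [50, 40], [900, 80], [30, 65]])   # toy data
        flagged = np.array([False, False, True, False])                  # gene 3 looks differential
        print(size_factors_without_degs(counts, flagged))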

  5. Simulation of Rate-Related (Dead-Time) Losses In Passive Neutron Multiplicity Counting Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, L.G.; Norman, P.I.; Leadbeater, T.W.

    Passive Neutron Multiplicity Counting (PNMC) based on Multiplicity Shift Register (MSR) electronics (a form of time correlation analysis) is a widely used non-destructive assay technique for quantifying spontaneously fissile materials such as Pu. At high event rates, dead-time losses perturb the count rates, with the Singles, Doubles and Triples being increasingly affected. Without correction these perturbations are a major source of inaccuracy in the measured count rates and assay values derived from them. This paper presents the simulation of dead-time losses and investigates the effect of applying different dead-time models on the observed MSR data. Monte Carlo methods have been used to simulate neutron pulse trains for a variety of source intensities and with ideal detection geometry, providing an event by event record of the time distribution of neutron captures within the detection system. The action of the MSR electronics was modelled in software to analyse these pulse trains. Stored pulse trains were perturbed in software to apply the effects of dead-time according to the chosen physical process; for example, the ideal paralysable (extending) and non-paralysable models with an arbitrary dead-time parameter. Results of the simulations demonstrate the change in the observed MSR data when the system dead-time parameter is varied. In addition, the paralysable and non-paralysable models of dead-time are compared. These results form part of a larger study to evaluate existing dead-time corrections and to extend their application to correlated sources. (authors)
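
    A minimal sketch of the two ideal dead-time models mentioned above, applied to a simulated Poisson pulse train; the rate, measurement time, and dead-time parameter are arbitrary illustrative values, not those of the study.

        import numpy as np

        def apply_dead_time(event_times, tau, paralysable):
            # Non-paralysable: an event is lost if it falls within tau of the last ACCEPTED event.
            # Paralysable (extending): every event, accepted or not, extends the dead period.
            accepted, last = [], -np.inf
            for t in event_times:
                if t - last >= tau:
                    accepted.append(t)
                    last = t
                elif paralysable:
                    last = t
            return np.asarray(accepted)

        rng = np.random.default_rng(1)
        rate, t_max, tau = 5.0e4, 1.0, 2.0e-6           # counts/s, s, s (illustrative)
        times = np.cumsum(rng.exponential(1.0 / rate, size=int(1.2 * rate * t_max)))
        times = times[times < t_max]
        for model in (False, True):
            print("paralysable" if model else "non-paralysable",
                  len(apply_dead_time(times, tau, model)))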

  6. Reducing background contributions in fluorescence fluctuation time-traces for single-molecule measurements in solution.

    PubMed

    Földes-Papp, Zeno; Liao, Shih-Chu Jeff; You, Tiefeng; Barbieri, Beniamino

    2009-08-01

    We first report on the development of new microscope means that reduce background contributions in fluorescence fluctuation methods: i) excitation shutter, ii) electronic switches, and iii) early and late time-gating. The elements allow for measuring molecules at low analyte concentrations. We first found conditions of early and late time-gating with time-correlated single-photon counting that made the fluorescence signal as bright as possible compared with the fluctuations in the background count rate in a diffraction-limited optical set-up. We measured about a 140-fold increase in the amplitude of autocorrelated fluorescence fluctuations at the lowest analyte concentration of about 15 pM, which gave a signal-to-background advantage of more than two-orders of magnitude. The results of this original article pave the way for single-molecule detection in solution and in live cells without immobilization or hydrodynamic/electrokinetic focusing at longer observation times than are currently available.

  7. A new approach to counting measurements: Addressing the problems with ISO-11929

    NASA Astrophysics Data System (ADS)

    Klumpp, John; Miller, Guthrie; Poudel, Deepesh

    2018-06-01

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: "what is the probability distribution of the true amount in the sample, given the data?" The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the "measurement strength" that depends only on measurement-stage count quantities. We show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an "action threshold" on the measurement strength, which is similar to the decision threshold recommended by the current standard. We further recommend that the measurement lab perform large background studies in order to characterize the non-constancy of the background, including possible time correlation of background.
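
    The abstract does not give the analytical form of the measurement strength, so the sketch below simply treats it as an input and applies the stated relation that the posterior odds of the true value exceeding ε equal the measurement strength times the prior odds; the numbers are illustrative only.

        def posterior_probability_elevated(measurement_strength, prior_odds):
            # posterior odds = measurement strength * prior odds (independent of epsilon
            # when true sources are rare, as stated above); converted here to a probability
            post_odds = measurement_strength * prior_odds
            return post_odds / (1.0 + post_odds)

        # Illustrative: prior odds of 1:1000 that a sample truly contains activity
        print(posterior_probability_elevated(measurement_strength=50.0, prior_odds=1e-3))  # ~0.048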

  8. Delegation in Correctional Nursing Practice.

    PubMed

    Tompkins, Frances

    2016-07-01

    Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. © The Author(s) 2016.

  9. Design Study of an Incinerator Ash Conveyor Counting System - 13323

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaederstroem, Henrik; Bronson, Frazier

    A design study has been performed for a system that should measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to consist of both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels are ranging from 8000 to 400000 Bq/kg and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty which depends on the counting statistics and themore » uncertainty in the efficiency of the geometry. For the low activity decision it is necessary to know the efficiency to be able to determine if the signal from the Cs-137 is above the minimum detectable activity and that it generates enough counts to reach the desired precision. For the higher activity decision the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to be able to determine if the detector will be able operate at the count rate at the highest expected activity. The design study that is presented in this paper describes how the objectives of the monitoring systems were obtained, the choice of detector was made and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash was important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density and relative efficiency of the detector. (authors)« less

  10. Prognostic value of mitotic counts in breast cancer of Saudi Arabian patients.

    PubMed

    Buhmeida, Abdelbaset; Al-Maghrabi, Jaudah; Merdad, Adnan; Al-Thubaity, Fatima; Chaudhary, Adeel; Gari, Mamdooh; Abuzenadah, Adel; Collan, Yrjö; Syrjänen, Kari; Al-Qahtani, Mohammed

    2011-01-01

    Quantitative methods in combination with other objective prognostic criteria can improve the evaluation of a cancer patient's prognosis, and possibly predict response to therapy. One of the important prognostic and predictive markers is the mitotic count, which has proven valuable in many aspects. In this study, the prognostic value of the mitotic count was assessed in breast cancer (BC) patients in Saudi Arabia. The study comprised a series of 87 patients diagnosed and treated for breast cancer at the Departments of Surgery and Oncology, King Abdul-Aziz University Hospital, between 2000 and 2008. Mitotic counts were carried out using a standard laboratory microscope (objective, × 40; field diameter, 420 μm). The number of mitotic figures in 10 consecutive high-power fields (hpf) from the most cellular area of the sample gave the mitotic activity index (MAI, mitotic figures/10 hpf). The standardized mitotic index (SMI) recorded the mitotic count as the number of mitotic figures by area of the neoplastic tissue in the microscopic field, thus the number of mitoses in 10 consecutive fields was corrected for the volume fraction and field size (mitotic figures/mm²). The means of MAI and SMI of the tumors in the entire series of 87 patients were 15 mitotic figures/10 hpf (range 4-45) and 4 mitotic figures/mm² (range 1-9), respectively. The mitotic counts were higher in advanced stages than in early cancer (p < 0.04). The mitotic counts were significantly larger in patients with high-grade tumor (p < 0.004) and in cases with tumor metastasis (p < 0.004). The mitotic counts were also significantly larger in the recurrent cases than in non-recurrent ones (p < 0.02). The quantitatively measurable mitotic counts of cancer cell nuclei are of significant prognostic value in invasive ductal carcinoma of the breast in Saudi Arabia and the mean cut-off values of MAI and SMI can be applied as objective (quantitative) criteria to distinguish breast cancer patients into groups

  11. A video-based real-time adaptive vehicle-counting system for urban roads.

    PubMed

    Liu, Fei; Zeng, Zhiyuan; Jiang, Rong

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
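
    A minimal OpenCV sketch of the two ingredients described above, an adaptive background update and a virtual detection line. The exponential running-average update, thresholds, and trigger level are common defaults standing in for the paper's own algorithm, not a reproduction of it.

        import cv2
        import numpy as np

        def update_background(background_f32, frame_bgr, alpha=0.02):
            # Exponential running average; background must be a float32 image
            return cv2.addWeighted(frame_bgr.astype(np.float32), alpha,
                                   background_f32, 1.0 - alpha, 0.0)

        def foreground_mask(background_f32, frame_bgr, thresh=30):
            diff = cv2.absdiff(frame_bgr.astype(np.float32), background_f32)
            gray = cv2.cvtColor(diff.astype(np.uint8), cv2.COLOR_BGR2GRAY)
            _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
            return mask

        def detection_line_active(mask, row, trigger=0.3):
            # A vehicle is counted when the foreground fraction along the virtual
            # detection line rises above the trigger level and then falls back below it.
            return (mask[row, :] > 0).mean() > trigger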

  12. A video-based real-time adaptive vehicle-counting system for urban roads

    PubMed Central

    2017-01-01

    In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of a road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984

  13. Relationship between platelet count and hemodialysis membranes

    PubMed Central

    Nasr, Rabih; Saifan, Chadi; Barakat, Iskandar; Azzi, Yorg Al; Naboush, Ali; Saad, Marc; Sayegh, Suzanne El

    2013-01-01

    Background One factor associated with poor outcomes in hemodialysis patients is exposure to a foreign membrane. Older membranes are very bioincompatible and increase complement activation, cause leukocytosis by activating circulating factors, which sequesters leukocytes in the lungs, and activates platelets. Recently, newer membranes have been developed that were designed to be more biocompatible. We tested if the different “optiflux” hemodialysis membranes had different effects on platelet levels. Methods Ninety-nine maintenance hemodialysis patients with no known systemic or hematologic diseases affecting their platelets had blood drawn immediately prior to, 90 minutes into, and immediately following their first hemodialysis session of the week. All patients were dialyzed using a Fresenius Medical Care Optiflux polysulfone membrane F160, F180, or F200 (polysulfone synthetic dialyzer membranes, 1.6 m2, 1.8 m2, and 2.0 m2 surface area, respectively, electron beam sterilized). Platelet counts were measured from each sample by analysis using a CBC analyzer. Results The average age of the patients was 62.7 years; 36 were female and 63 were male. The mean platelet count pre, mid, and post dialysis was 193 (standard deviation ±74.86), 191 (standard deviation ±74.67), and 197 (standard deviation ±79.34) thousand/mm3, respectively, with no statistical differences. Conclusion Newer membranes have no significant effect on platelet count. This suggests that they are, in fact, more biocompatible than their predecessors and may explain their association with increased survival. PMID:23983482

  14. How to normalize metatranscriptomic count data for differential expression analysis.

    PubMed

    Klingenberg, Heiner; Meinicke, Peter

    2017-01-01

    Differential expression analysis on the basis of RNA-Seq count data has become a standard tool in transcriptomics. Several studies have shown that prior normalization of the data is crucial for a reliable detection of transcriptional differences. Until now it has not been clear whether and how the transcriptomic approach can be used for differential expression analysis in metatranscriptomics. We propose a model for differential expression in metatranscriptomics that explicitly accounts for variations in the taxonomic composition of transcripts across different samples. As a main consequence the correct normalization of metatranscriptomic count data under this model requires the taxonomic separation of the data into organism-specific bins. Then the taxon-specific scaling of organism profiles yields a valid normalization and allows us to recombine the scaled profiles into a metatranscriptomic count matrix. This matrix can then be analyzed with statistical tools for transcriptomic count data. For taxon-specific scaling and recombination of scaled counts we provide a simple R script. When applying transcriptomic tools for differential expression analysis directly to metatranscriptomic data with an organism-independent (global) scaling of counts the resulting differences may be difficult to interpret. The differences may correspond to changing functional profiles of the contributing organisms but may also result from a variation of taxonomic abundances. Taxon-specific scaling eliminates this variation and therefore the resulting differences actually reflect a different behavior of organisms under changing conditions. In simulation studies we show that the divergence between results from global and taxon-specific scaling can be drastic. In particular, the variation of organism abundances can imply a considerable increase of significant differences with global scaling. Also, on real metatranscriptomic data, the predictions from taxon-specific and global scaling can differ
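
    A small pandas sketch of the taxon-specific scaling idea (the authors provide an R script; this is an assumed Python rendering, not their code): genes are binned by organism, each bin is scaled by its own factors obtained from any transcriptomic normalization, and the scaled profiles are recombined into one count matrix.

        import pandas as pd

        def taxon_specific_scaling(counts, taxon_of_gene, scaling_factors):
            # counts: genes x samples DataFrame
            # taxon_of_gene: Series mapping each gene to its organism bin
            # scaling_factors: taxa x samples DataFrame of per-bin normalization factors
            factors = scaling_factors.loc[taxon_of_gene.values]   # broadcast factors to genes
            factors.index = counts.index
            return counts / factors                               # recombined, normalized matrix

        counts = pd.DataFrame({"s1": [10, 0, 5, 8], "s2": [20, 2, 1, 16]},
                              index=["geneA", "geneB", "geneC", "geneD"])
        taxon = pd.Series(["orgX", "orgX", "orgY", "orgY"], index=counts.index)
        factors = pd.DataFrame({"s1": [1.0, 0.5], "s2": [2.0, 1.0]}, index=["orgX", "orgY"])
        print(taxon_specific_scaling(counts, taxon, factors))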

  15. An Ensemble Method for Spelling Correction in Consumer Health Questions

    PubMed Central

    Kilicoglu, Halil; Fiszman, Marcelo; Roberts, Kirk; Demner-Fushman, Dina

    2015-01-01

    Orthographic and grammatical errors are a common feature of informal texts written by lay people. Health-related questions asked by consumers are a case in point. Automatic interpretation of consumer health questions is hampered by such errors. In this paper, we propose a method that combines techniques based on edit distance and frequency counts with a contextual similarity-based method for detecting and correcting orthographic errors, including misspellings, word breaks, and punctuation errors. We evaluate our method on a set of spell-corrected questions extracted from the NLM collection of consumer health questions. Our method achieves an F1 score of 0.61, compared to an informed baseline of 0.29, achieved using ESpell, a spelling correction system developed for biomedical queries. Our results show that orthographic similarity is most relevant in spelling error correction in consumer health questions and that frequency and contextual information are complementary to orthographic features. PMID:26958208
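
    A toy sketch of the edit-distance-plus-frequency component of such an ensemble (the contextual-similarity component is omitted); the vocabulary and frequency table are placeholders, and the function is not the authors' implementation.

        import difflib

        def correct_token(token, vocabulary, frequencies):
            # Generate orthographically close candidates, then rank them by corpus frequency
            candidates = difflib.get_close_matches(token, vocabulary, n=5, cutoff=0.8)
            if not candidates:
                return token                      # no plausible correction found
            return max(candidates, key=lambda w: frequencies.get(w, 0))

        vocab = ["diabetes", "diagnosis", "diarrhea", "medicine"]
        freq = {"diabetes": 900, "diagnosis": 450, "diarrhea": 300, "medicine": 1200}
        print(correct_token("diabetis", vocab, freq))   # -> "diabetes"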

  16. Fluorescein dye improves microscopic evaluation and counting of demodex in blepharitis with cylindrical dandruff.

    PubMed

    Kheirkhah, Ahmad; Blanco, Gabriela; Casas, Victoria; Tseng, Scheffer C G

    2007-07-01

    To show whether fluorescein dye helps detect and count Demodex embedded in cylindrical dandruff (CD) of epilated eyelashes from patients with blepharitis. Two eyelashes with CD were removed from each lid of 10 consecutive patients with blepharitis and subjected to microscopic examination with and without fluorescein solution to detect and count Demodex mites. Of 80 eyelashes examined, 36 (45%) lashes retained their CD after removal. Before addition of the fluorescein solution, the mean total Demodex count per patient was 14.9 +/- 10 and the mean Demodex count per lash was 3.1 +/- 2.5 and 0.8 +/- 0.7 in epilated eyelashes with and without retained CD, respectively (P < 0.0001). After addition of the fluorescein solution, opaque and compact CD instantly expanded to reveal embedded mites in a yellowish and semitransparent background. As a result, the mean total Demodex count per patient was significantly increased to 20.2 +/- 13.8 (P = 0.003), and the mean count per lash was significantly increased to 4.4 +/- 2.8 and 1 +/- 0.8 in eyelashes with and without retained CD (P < 0.0001 and P = 0.007), respectively. This new method yielded more mites in 8 of 10 patients and allowed mites to be detected in 3 lashes with retained CD and 1 lash without retained CD that had an initial count of zero. Addition of fluorescein solution after mounting further increases the proficiency of detecting and counting mites embedded in CD of epilated eyelashes.

  17. Uncertainties in internal gas counting

    NASA Astrophysics Data System (ADS)

    Unterweger, M.; Johansson, L.; Karam, L.; Rodrigues, M.; Yunoki, A.

    2015-06-01

    The uncertainties in internal gas counting will be broken down into counting uncertainties and gas handling uncertainties. Counting statistics, spectrum analysis, and electronic uncertainties will be discussed with respect to the actual counting of the activity. The effects of the gas handling and quantities of counting and sample gases on the uncertainty in the determination of the activity will be included when describing the uncertainties arising in the sample preparation.

  18. Low background signal readout electronics for the Majorana Demonstrator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guinn, I.; Abgrall, N.; Avignone, F. T.

    The Majorana Demonstrator is a planned 40 kg array of Germanium detectors intended to demonstrate the feasibility of constructing a tonne-scale experiment that will seek neutrinoless double beta decay (0νββ) in 76Ge. Such an experiment requires backgrounds of less than 1 count/tonne-year in the 4 keV region of interest around the 2039 keV Q-value of the ββ decay. Designing low-noise electronics, which must be placed in close proximity to the detectors, presents a challenge to reaching this background target. This paper discusses the Majorana collaboration's solutions to some of these challenges.

  19. Low background signal readout electronics for the Majorana Demonstrator

    DOE PAGES

    Guinn, I.; Abgrall, N.; Avignone, F. T.; ...

    2015-05-01

    The Majorana Demonstrator is a planned 40 kg array of Germanium detectors intended to demonstrate the feasibility of constructing a tonne-scale experiment that will seek neutrinoless double beta decay (0νββ) in 76Ge. Such an experiment requires backgrounds of less than 1 count/tonne-year in the 4 keV region of interest around the 2039 keV Q-value of the ββ decay. Designing low-noise electronics, which must be placed in close proximity to the detectors, presents a challenge to reaching this background target. This paper discusses the Majorana collaboration's solutions to some of these challenges.

  20. Low Background Signal Readout Electronics for the MAJORANA DEMONSTRATOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guinn, I.; Abgrall, N.; Avignone, III, F. T.

    The MAJORANA DEMONSTRATOR is a planned 40 kg array of Germanium detectors intended to demonstrate the feasibility of constructing a tonne-scale experiment that will seek neutrinoless double beta decay (0 nu beta beta) in Ge-76. Such an experiment would require backgrounds of less than 1 count/tonne-year in the 4 keV region of interest around the 2039 keV Q-value of the beta beta decay. Designing low-noise electronics, which must be placed in close proximity to the detectors, presents a challenge to reaching this background target. This paper will discuss the MAJORANA collaboration's solutions to some of these challenges.

  1. Radon Detection and Counting

    NASA Astrophysics Data System (ADS)

    Peterson, David

    2004-11-01

    One of the daughter products of the naturally occurring U 238 decay chain is the colorless, odorless, inert gas radon. The daughter products of the radon, from Po 218 through Po 214, can remain in the lungs after breathing radon that has diffused into the atmosphere. Radon testing of homes before sale or purchase is necessary in many parts of the U.S. Testing can be accomplished by the simple procedure of exposing a canister of activated charcoal to the ambient air. Radon atoms in the air are adsorbed onto the surface of the charcoal, which is then sealed in the canister. Gamma rays of the daughter products of the radon, in particular Pb 214 and Bi 214, can then be detected in a low-background counting system. Radon remediation procedures are encouraged for radon activities in the air greater than 4 pCi/L.

  2. The State of the World's Children 2014 in Numbers: Every Child Counts. Revealing Disparities, Advancing Children's Rights

    ERIC Educational Resources Information Center

    Aslam, Abid; Grojec, Anna; Little, Céline; Maloney, Ticiana; Tamagni, Jordan

    2014-01-01

    "The State of the World's Children 2014 In Numbers: Every Child Counts" highlights the critical role data and monitoring play in realizing children's rights. Credible data, disseminated effectively and used correctly, make it possible to target interventions that help right the wrong of exclusion. Data do not, of themselves, change the…

  3. Quantum corrections for spinning particles in de Sitter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Verdaguer, Enric, E-mail: mbf503@york.ac.uk, E-mail: enric.verdaguer@ub.edu

    We compute the one-loop quantum corrections to the gravitational potentials of a spinning point particle in a de Sitter background, due to the vacuum polarisation induced by conformal fields in an effective field theory approach. We consider arbitrary conformal field theories, assuming only that the theory contains a large number N of fields in order to separate their contribution from the one induced by virtual gravitons. The corrections are described in a gauge-invariant way, classifying the induced metric perturbations around the de Sitter background according to their behaviour under transformations on equal-time hypersurfaces. There are six gauge-invariant modes: two scalar Bardeen potentials, one transverse vector and one transverse traceless tensor, of which one scalar and the vector couple to the spinning particle. The quantum corrections consist of three different parts: a generalisation of the flat-space correction, which is only significant at distances of the order of the Planck length; a constant correction depending on the undetermined parameters of the renormalised effective action; and a term which grows logarithmically with the distance from the particle. This last term is the most interesting, and when resummed gives a modified power law, enhancing the gravitational force at large distances. As a check on the accuracy of our calculation, we recover the linearised Kerr-de Sitter metric in the classical limit and the flat-space quantum correction in the limit of vanishing Hubble constant.

  4. Dropout Count Procedural Handbook.

    ERIC Educational Resources Information Center

    Nevada State Dept. of Education, Carson City. Planning, Research and Evaluation Branch.

    This manual outlines the procedure for counting dropouts from the Nevada schools. The State Department of Education instituted a new dropout counting procedure to its student accounting system in January 1988 as part of its response to recommendations of a task force on at-risk youth. The count is taken from each secondary school and includes…

  5. Spectroscopic micro-tomography of metallic-organic composites by means of photon-counting detectors

    NASA Astrophysics Data System (ADS)

    Pichotka, M.; Jakubek, J.; Vavrik, D.

    2015-12-01

    The presumed capabilities of photon counting detectors have aroused major expectations in several fields of research. In the field of nuclear imaging, ample benefits over standard detectors are to be expected from photon counting devices, first of all a very high contrast, as has by now been verified in numerous experiments. The spectroscopic capabilities of photon counting detectors further allow material decomposition in computed tomography and therefore inherently adequate beam hardening correction. For these reasons, measurement setups featuring standard X-ray tubes combined with photon counting detectors constitute a possible replacement for the much more cost-intensive tomographic setups at synchrotron light sources. The actual application of photon counting detectors in radiographic setups in recent years has been impeded by a number of practical issues, above all by restrictions in detector size. Currently two tomographic setups in the Czech Republic feature photon counting large-area detectors (LAD) fabricated in Prague. The employed large-area hybrid pixel-detector assemblies [1], consisting of 10×10 / 10×5 Timepix devices, have surface areas of 143×143 mm2 and 143×71.5 mm2, respectively, suitable for micro-tomographic applications. In the near future, LAD devices featuring the Medipix3 readout chip as well as heavy sensors (CdTe, GaAs) will become available. Data analysis is performed by a number of in-house software tools, including iterative multi-energy volume reconstruction. In this paper, tomographic analysis of metallic-organic composites is employed to illustrate the capabilities of our technology. In addition to successful material decomposition by spectroscopic tomography, we present a method to suppress metal artefacts under certain conditions.

  6. Sensitivity analysis of pulse pileup model parameter in photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2017-03-01

    Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, the accuracy of the parameter values becomes more crucial. In this work, the sensitivity of the pileup model of Taguchi et al. to its parameter accuracies is analyzed. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing to the end of the pulse are more important than most other parameters, and they matter more with increasing count rate. This result can help facilitate further work on parameter calibrations.

  7. A new approach to counting measurements: Addressing the problems with ISO-11929

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.
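
    Restating the key relation from the abstract in notation chosen here (S denotes the measurement strength computed from measurement-stage count quantities alone, and A the true amount in the sample):

        \[
        \underbrace{\frac{P(A > \varepsilon \mid \text{data})}{P(A \le \varepsilon \mid \text{data})}}_{\text{posterior odds}}
        \;=\; S \times
        \underbrace{\frac{P(A > \varepsilon)}{P(A \le \varepsilon)}}_{\text{prior odds}},
        \]

    with the relation holding, per the authors, when sources are rare and independently of ε, the prior odds, and the distribution of the calibration coefficient.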

  8. A new approach to counting measurements: Addressing the problems with ISO-11929

    DOE PAGES

    Klumpp, John Allan; Poudel, Deepesh; Miller, Guthrie

    2017-12-23

    We present an alternative approach to making counting measurements of radioactivity which offers probabilistic interpretations of the measurements. Unlike the approach in the current international standard (ISO-11929), our approach, which uses an assumed prior probability distribution of the true amount in the sample, is able to answer the question of interest for most users of the standard: “what is the probability distribution of the true amount in the sample, given the data?” The final interpretation of the measurement requires information not necessarily available at the measurement stage. However, we provide an analytical formula for what we term the “measurement strength” that depends only on measurement-stage count quantities. Here, we show that, when the sources are rare, the posterior odds that the sample true value exceeds ε are the measurement strength times the prior odds, independently of ε, the prior odds, and the distribution of the calibration coefficient. We recommend that the measurement lab immediately follow up on unusually high samples using an “action threshold” on the measurement strength which is similar to the decision threshold recommended by the current standard. Finally, we further recommend that the measurement lab perform large background studies in order to characterize non-constancy of background, including possible time correlation of background.

  9. Exposed and Embedded Corrections in Aphasia Therapy: Issues of Voice and Identity

    ERIC Educational Resources Information Center

    Simmons-Mackie, Nina; Damico, Jack S.

    2008-01-01

    Background: Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially…

  10. Kids Count [and] Families Count in Delaware: Fact Book, 1998.

    ERIC Educational Resources Information Center

    Nelson, Carl, Ed.; Wilson, Nancy, Ed.

    This Kids Count report is combined with Families Count, and provides information on statewide trends affecting children and families in Delaware. The first statistical profile is based on 10 main indicators of child well-being: (1) births to teens; (2) low birth weight babies; (3) infant mortality; (4) child deaths; (5) teen deaths; (6) juvenile…

  11. Youth Count: Exploring How KIDS COUNT Grantees Address Youth Issues

    ERIC Educational Resources Information Center

    Wilson-Ahlstrom, Alicia; Gaines, Elizabeth; Ferber, Thaddeus; Yohalem, Nicole

    2005-01-01

    Inspired by the 2004 Kids Count Databook essay, "Moving Youth From Risk to Opportunity," this new report highlights the history of data collection, challenges and innovative strategies of 12 Annie E. Casey Foundation KIDS COUNT grantees in their work to serve the needs of older youth. (Contains 3 figures, 2 tables, and 9 notes.)

  12. AUTOMATIC COUNTING APPARATUS

    DOEpatents

    Howell, W.D.

    1957-08-20

    An apparatus for automatically recording the results of counting operations on trains of electrical pulses is described. The disadvantages of prior devices utilizing the two common methods of obtaining the count rate are overcome by this apparatus; in the case of time controlled operation, the disclosed system automatically records any information stored by the scaler but not transferred to the printer at the end of the predetermined time controlled operations and, in the case of count controlled operation, provision is made to prevent a weak sample from occupying the apparatus for an excessively long period of time.

  13. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.
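
    For readers who want the shape of the likelihood-ratio detector described here, a standard idealization (not necessarily the thesis' exact notation) treats the photoelectron count n_k in each small surface element k as Poisson with mean b_k under background only and b_k + s_k when the Gaussian-profile image is also present, giving

        \[
        \Lambda(\{n_k\}) \;=\; \prod_k \frac{e^{-(b_k+s_k)}\,(b_k+s_k)^{n_k}/n_k!}{e^{-b_k}\,b_k^{n_k}/n_k!}
        \;=\; e^{-\sum_k s_k}\prod_k\left(1+\frac{s_k}{b_k}\right)^{n_k},
        \]

    with detection declared when log Λ exceeds a threshold. The simple detector based on the total count corresponds to replacing the weighted sum Σ_k n_k log(1 + s_k/b_k) by the unweighted total Σ_k n_k.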

  14. Graviton propagator from background-independent quantum gravity.

    PubMed

    Rovelli, Carlo

    2006-10-13

    We study the graviton propagator in Euclidean loop quantum gravity. We use spin foam, boundary-amplitude, and group-field-theory techniques. We compute a component of the propagator to first order, under some approximations, obtaining the correct large-distance behavior. This indicates a way for deriving conventional spacetime quantities from a background-independent theory.

  15. Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis

    PubMed Central

    Hagen, Nils T.

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
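
    The harmonic allocation described above assigns the author at byline position i of N a share proportional to 1/i, normalized so the shares sum to one publication credit; a minimal Python sketch (further byline decoding, e.g. shared first authorship, is not covered):

        def harmonic_credit(rank, n_authors):
            """Harmonic share of one publication credit for the author at byline
            position `rank` (1 = first author) among `n_authors` coauthors:
            credit_i = (1/i) / (1 + 1/2 + ... + 1/N)."""
            return (1.0 / rank) / sum(1.0 / k for k in range(1, n_authors + 1))

        # a four-author paper: the shares sum to exactly one credit
        print([round(harmonic_credit(i, 4), 2) for i in range(1, 5)])  # [0.48, 0.24, 0.16, 0.12]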

  16. Influence of electrolytes in the QCM response: discrimination and quantification of the interference to correct microgravimetric data.

    PubMed

    Encarnação, João M; Stallinga, Peter; Ferreira, Guilherme N M

    2007-02-15

    In this work we demonstrate that the presence of electrolytes in solution generates desorption-like transients when the resonance frequency is measured. Using impedance spectroscopy analysis and Butterworth-Van Dyke (BVD) equivalent electrical circuit modeling we demonstrate that non-Kanazawa responses are obtained in the presence of electrolytes mainly due to the formation of a diffuse electric double layer (DDL) at the sensor surface, which also causes a capacitor like signal. We extend the BVD equivalent circuit by including additional parallel capacitances in order to account for such capacitor like signal. Interfering signals from electrolytes and DDL perturbations were this way discriminated. We further quantified as 8.0+/-0.5 Hz pF-1 the influence of electrolytes to the sensor resonance frequency and we used this factor to correct the data obtained by frequency counting measurements. The applicability of this approach is demonstrated by the detection of oligonucleotide sequences. After applying the corrective factor to the frequency counting data, the mass contribution to the sensor signal yields identical values when estimated by impedance analysis and frequency counting.

  17. Optimization of simultaneous tritium–radiocarbon internal gas proportional counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonicalzi, R. M.; Aalseth, C. E.; Day, A. R.

    Specific environmental applications can benefit from dual tritium and radiocarbon measurements in a single compound. Assuming typical environmental levels, it is often the low tritium activity relative to the higher radiocarbon activity that limits the dual measurement. In this paper, we explore the parameter space for a combined tritium and radiocarbon measurement using a methane sample mixed with an argon fill gas in low-background proportional counters of a specific design. We present an optimized methane percentage, detector fill pressure, and analysis energy windows to maximize measurement sensitivity while minimizing count time. The final optimized method uses a 9-atm fill of P35 (35% methane, 65% argon), and a tritium analysis window from 1.5 to 10.3 keV, which stops short of the tritium beta decay endpoint energy of 18.6 keV. This method optimizes tritium counting efficiency while minimizing radiocarbon beta decay interference.

  18. Attenuation correction strategies for multi-energy photon emitters using SPECT

    NASA Astrophysics Data System (ADS)

    Pretorius, P. H.; King, M. A.; Pan, T.-S.; Hutton, B. F.

    1997-06-01

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation maximization (ML-OS) reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: 1) the 93 keV attenuation map for attenuation correction, 2) the 185 keV attenuation map for attenuation correction, 3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and 4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 (in proximity to the liver, spleen and backbone) were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable
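
    Strategy 3 above amounts to a voxel-wise weighted mean of the two energy-specific attenuation maps; a one-function Python sketch (the 0.65/0.35 weights are placeholders, not the weighting actually used in the study):

        import numpy as np

        def combined_attenuation_map(mu_93, mu_185, w_93=0.65, w_185=0.35):
            """Voxel-wise weighted mean of the 93 keV and 185 keV attenuation maps;
            the weights shown are illustrative placeholders."""
            return w_93 * np.asarray(mu_93) + w_185 * np.asarray(mu_185)

        # toy 2x2 maps (cm^-1)
        print(combined_attenuation_map([[0.17, 0.15], [0.16, 0.0]],
                                       [[0.14, 0.13], [0.13, 0.0]]))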

  19. Attenuation correction strategies for multi-energy photon emitters using SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pretorius, P.H.; King, M.A.; Pan, T.S.

    1996-12-31

    The aim of this study was to investigate whether the photopeak window projections from different energy photons can be combined into a single window for reconstruction or if it is better to not combine the projections due to differences in the attenuation maps required for each photon energy. The mathematical cardiac torso (MCAT) phantom was modified to simulate the uptake of Ga-67 in the human body. Four spherical hot tumors were placed in locations which challenged attenuation correction. An analytical 3D projector with attenuation and detector response included was used to generate projection sets. Data were reconstructed using filtered backprojection (FBP) reconstruction with Butterworth filtering in conjunction with one iteration of Chang attenuation correction, and with 5 and 10 iterations of ordered-subset maximum-likelihood expectation-maximization reconstruction. To serve as a standard for comparison, the projection sets obtained from the two energies were first reconstructed separately using their own attenuation maps. The emission data obtained from both energies were added and reconstructed using the following attenuation strategies: (1) the 93 keV attenuation map for attenuation correction, (2) the 185 keV attenuation map for attenuation correction, (3) using a weighted mean obtained from combining the 93 keV and 185 keV maps, and (4) an ordered subset approach which combines both energies. The central count ratio (CCR) and total count ratio (TCR) were used to compare the performance of the different strategies. Compared to the standard method, results indicate an over-estimation with strategy 1, an under-estimation with strategy 2 and comparable results with strategies 3 and 4. In all strategies, the CCRs of sphere 4 were under-estimated, although TCRs were comparable to that of the other locations. The weighted mean and ordered subset strategies for attenuation correction were of comparable accuracy to reconstruction of the windows

  20. Investigation of background radiation levels and geologic unit profiles in Durango, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Triplett, G.H.; Foutz, W.L.; Lesperance, L.R.

    1989-11-01

    As part of the Uranium Mill Tailings Remedial Action (UMTRA) Project, Oak Ridge National Laboratory (ORNL) has performed radiological surveys on 435 vicinity properties (VPs) in the Durango area. This study was undertaken to establish the background radiation levels and geologic unit profiles in the Durango VP area. During the months of May through June, 1986, extensive radiometric measurements and surface soil samples were collected in the Durango VP area by personnel from ORNL's Grand Junction Office. A majority of the Durango VP surveys were conducted at sites underlain by Quaternary alluvium, older Quaternary gravels, and Cretaceous Lewis and Mancos shales. These four geologic units were selected to be evaluated. The data indicated no formation anomalies and established regional background radiation levels. Durango background radionuclide concentrations in surface soil were determined to be 20.3 ± 3.4 pCi/g for 40K, 1.6 ± 0.5 pCi/g for 226Ra, and 1.2 ± 0.3 pCi/g for 232Th. The Durango background gamma exposure rate was found to be 16.5 ± 1.3 µR/h. Average gamma spectral count rate measurements for 40K, 226Ra and 232Th were determined to be 553, 150, and 98 counts per minute (cpm), respectively. Geologic unit profiles and Durango background radiation measurements are presented and compared with other areas. 19 refs., 15 figs., 5 tabs.

  1. Avian leucocyte counting using the hemocytometer

    USGS Publications Warehouse

    Dein, F.J.; Wilson, A.; Fischer, D.; Langenberg, P.

    1994-01-01

    Automated methods for counting leucocytes in avian blood are not available because of the presence of nucleated erythrocytes and thrombocytes. Therefore, total white blood cell counts are performed by hand using a hemocytometer. The Natt and Herrick and the Unopette methods are the most common stain and diluent preparations for this procedure. Replicate hemocytometer counts using these two methods were performed on blood from four birds of different species. Cells present in each square of the hemocytometer were counted. Counting cells in the corner, side, or center hemocytometer squares produced statistically equivalent results; counting four squares per chamber provided a result similar to that obtained by counting nine squares; and the Unopette method was more precise for hemocytometer counting than was the Natt and Herrick method. The Unopette method is easier to learn and perform but is an indirect process, utilizing the differential count from a stained smear. The Natt and Herrick method is a direct total count, but cell identification is more difficult.

  2. LWIR pupil imaging and prospects for background compensation

    NASA Astrophysics Data System (ADS)

    LeVan, Paul; Sakoglu, Ünal; Stegall, Mark; Pierce, Greg

    2015-08-01

    A previous paper described LWIR Pupil Imaging with a sensitive, low-flux focal plane array, and behavior of this type of system for higher flux operations as understood at the time. We continue this investigation, and report on a more detailed characterization of the system over a broad range of pixel fluxes. This characterization is then shown to enable non-uniformity correction over the flux range, using a standard approach. Since many commercial tracking platforms include a "guider port" that accepts pulse width modulation (PWM) error signals, we have also investigated a variation on the use of this port to "dither" the tracking platform in synchronization with the continuous collection of infrared images. The resulting capability has a broad range of applications that extend from generating scene motion in the laboratory for quantifying performance of "realtime, scene-based non-uniformity correction" approaches, to effectuating subtraction of bright backgrounds by alternating viewing aspect between a point source and adjacent, source-free backgrounds.

  3. Background of SAM atom-fraction profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernst, Frank

    Atom-fraction profiles acquired by SAM (scanning Auger microprobe) have important applications, e.g. in the context of alloy surface engineering by infusion of carbon or nitrogen through the alloy surface. However, such profiles often exhibit an artifact in the form of a background with a level that anti-correlates with the local atom fraction. This article presents a theory explaining this phenomenon as a consequence of the way in which random noise in the spectrum propagates into the discretized differentiated spectrum that is used for quantification. The resulting model of “energy channel statistics” leads to a useful semi-quantitative background reduction procedure, which is validated by applying it to simulated data. Subsequently, the procedure is applied to an example of experimental SAM data. The analysis leads to conclusions regarding optimum experimental acquisition conditions. The proposed method of background reduction is based on general principles and should be useful for a broad variety of applications. - Highlights: • Atom-fraction–depth profiles of carbon measured by scanning Auger microprobe • Strong background, varies with local carbon concentration. • Needs correction e.g. for quantitative comparison with simulations • Quantitative theory explains background. • Provides background removal strategy and practical advice for acquisition.

  4. Correction of bias in belt transect studies of immotile objects

    USGS Publications Warehouse

    Anderson, D.R.; Pospahala, R.S.

    1970-01-01

    Unless a correction is made, population estimates derived from a sample of belt transects will be biased if a fraction of the individuals on the sample transects is not counted. An approach, useful for correcting this bias when sampling immotile populations using transects of a fixed width, is presented. The method assumes that a searcher's ability to find objects near the center of the transect is nearly perfect. The method utilizes a mathematical equation, estimated from the data, to represent the searcher's inability to find all objects at increasing distances from the center of the transect. An example of the analysis of data, formation of the equation, and application is presented using waterfowl nesting data collected in Colorado.
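
    A hedged Python sketch of one way to implement the idea (not the authors' exact equation): fit a half-normal detection curve to the perpendicular detection distances, assume detection is essentially perfect on the centerline, and scale the raw count by the implied mean detection probability across the fixed-width strip.

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import norm

        def fit_half_normal_sigma(distances, w):
            """Maximum-likelihood scale of a half-normal detection function
            g(x) = exp(-x^2 / (2 sigma^2)), using perpendicular detection
            distances truncated at the strip half-width w."""
            d2 = np.asarray(distances) ** 2

            def nll(sigma):
                # log-density of a half-normal truncated to [0, w], summed over detections
                log_f = (np.log(2) - np.log(sigma) - 0.5 * np.log(2 * np.pi)
                         - d2 / (2 * sigma**2)
                         - np.log(2 * norm.cdf(w / sigma) - 1))
                return -np.sum(log_f)

            return minimize_scalar(nll, bounds=(1e-3, 10 * w), method="bounded").x

        def corrected_count(distances, w):
            """Scale the raw count up by the mean detection probability implied
            by the fitted curve (detection assumed perfect at the centerline)."""
            sigma = fit_half_normal_sigma(distances, w)
            mu = sigma * np.sqrt(2 * np.pi) * (norm.cdf(w / sigma) - 0.5)  # effective half-width
            p_bar = mu / w                      # mean detection probability across the strip
            return len(distances) / p_bar, p_bar

        # toy usage: nests detected out to 20 m either side of the transect line
        rng = np.random.default_rng(1)
        dists = np.abs(rng.normal(0, 8, size=40))
        dists = dists[dists <= 20]
        n_hat, p_bar = corrected_count(dists, w=20.0)
        print(f"raw count {len(dists)}, estimated true count in strip {n_hat:.1f} (mean p = {p_bar:.2f})")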

  5. Detection of bremsstrahlung radiation of 90Sr-90Y for emergency lung counting.

    PubMed

    Ho, A; Hakmana Witharana, S S; Jonkmans, G; Li, L; Surette, R A; Dubeau, J; Dai, X

    2012-09-01

    This study explores the possibility of developing a field-deployable (90)Sr detector for rapid lung counting in emergency situations. The detection of beta-emitters (90)Sr and its daughter (90)Y inside the human lung via bremsstrahlung radiation was performed using a 3″ × 3″ NaI(Tl) crystal detector and a polyethylene-encapsulated source to emulate human lung tissue. The simulation results show that this method is a viable technique for detecting (90)Sr with a minimum detectable activity (MDA) of 1.07 × 10(4) Bq, using a realistic dual-shielded detector system in a 0.25-µGy h(-1) background field for a 100-s scan. The MDA is sufficiently sensitive to meet the requirement for emergency lung counting of Type S (90)Sr intake. The experimental data were verified using Monte Carlo calculations, including an estimate for internal bremsstrahlung, and an optimisation of the detector geometry was performed. Optimisations in background reduction techniques and in the electronic acquisition systems are suggested.

  6. MicroCT with energy-resolved photon-counting detectors

    PubMed Central

    Wang, X; Meier, D; Mikkelsen, S; Maehlum, G E; Wagenaar, D J; Tsui, BMW; Patt, B E; Frey, E C

    2011-01-01

    The goal of this paper was to investigate the benefits that could be realistically achieved on a microCT imaging system with an energy-resolved photon-counting x-ray detector. To this end, we built and evaluated a prototype microCT system based on such a detector. The detector is based on cadmium telluride (CdTe) radiation sensors and application-specific integrated circuit (ASIC) readouts. Each detector pixel can simultaneously count x-ray photons above six energy thresholds, providing the capability for energy-selective x-ray imaging. We tested the spectroscopic performance of the system using polychromatic x-ray radiation and various filtering materials with K-absorption edges. Tomographic images were then acquired of a cylindrical PMMA phantom containing holes filled with various materials. Results were also compared with those acquired using an intensity-integrating x-ray detector and single-energy (i.e. non-energy-selective) CT. This paper describes the functionality and performance of the system, and presents preliminary spectroscopic and tomographic results. The spectroscopic experiments showed that the energy-resolved photon-counting detector was capable of measuring energy spectra from polychromatic sources like a standard x-ray tube, and resolving absorption edges present in the energy range used for imaging. However, the spectral quality was degraded by spectral distortions resulting from degrading factors, including finite energy resolution and charge sharing. We developed a simple charge-sharing model to reproduce these distortions. The tomographic experiments showed that the availability of multiple energy thresholds in the photon-counting detector allowed us to simultaneously measure target-to-background contrasts in different energy ranges. Compared with single-energy CT with an integrating detector, this feature was especially useful to improve differentiation of materials with different attenuation coefficient energy dependences. PMID:21464527

  7. MicroCT with energy-resolved photon-counting detectors.

    PubMed

    Wang, X; Meier, D; Mikkelsen, S; Maehlum, G E; Wagenaar, D J; Tsui, B M W; Patt, B E; Frey, E C

    2011-05-07

    The goal of this paper was to investigate the benefits that could be realistically achieved on a microCT imaging system with an energy-resolved photon-counting x-ray detector. To this end, we built and evaluated a prototype microCT system based on such a detector. The detector is based on cadmium telluride (CdTe) radiation sensors and application-specific integrated circuit (ASIC) readouts. Each detector pixel can simultaneously count x-ray photons above six energy thresholds, providing the capability for energy-selective x-ray imaging. We tested the spectroscopic performance of the system using polychromatic x-ray radiation and various filtering materials with K-absorption edges. Tomographic images were then acquired of a cylindrical PMMA phantom containing holes filled with various materials. Results were also compared with those acquired using an intensity-integrating x-ray detector and single-energy (i.e. non-energy-selective) CT. This paper describes the functionality and performance of the system, and presents preliminary spectroscopic and tomographic results. The spectroscopic experiments showed that the energy-resolved photon-counting detector was capable of measuring energy spectra from polychromatic sources like a standard x-ray tube, and resolving absorption edges present in the energy range used for imaging. However, the spectral quality was degraded by spectral distortions resulting from degrading factors, including finite energy resolution and charge sharing. We developed a simple charge-sharing model to reproduce these distortions. The tomographic experiments showed that the availability of multiple energy thresholds in the photon-counting detector allowed us to simultaneously measure target-to-background contrasts in different energy ranges. Compared with single-energy CT with an integrating detector, this feature was especially useful to improve differentiation of materials with different attenuation coefficient energy dependences.

  8. Calculation of background effects on the VESUVIO eV neutron spectrometer

    NASA Astrophysics Data System (ADS)

    Mayers, J.

    2011-01-01

    The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP)-doped γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide a much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and data corrected. Calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.

  9. Isospectral discrete and quantum graphs with the same flip counts and nodal counts

    NASA Astrophysics Data System (ADS)

    Juul, Jonas S.; Joyner, Christopher H.

    2018-06-01

    The existence of non-isomorphic graphs which share the same Laplace spectrum (to be referred to as isospectral graphs) leads naturally to the following question: what additional information is required in order to resolve isospectral graphs? It was suggested by Band, Shapira and Smilansky that this might be achieved by either counting the number of nodal domains or the number of times the eigenfunctions change sign (the so-called flip count) (Band et al 2006 J. Phys. A: Math. Gen. 39 13999–4014; Band and Smilansky 2007 Eur. Phys. J. Spec. Top. 145 171–9). Recent examples of (discrete) isospectral graphs with the same flip count and nodal count have been constructed by Ammann by utilising Godsil–McKay switching (Ammann private communication). Here, we provide a simple alternative mechanism that produces systematic examples of both discrete and quantum isospectral graphs with the same flip and nodal counts.

  10. Detector response function of an energy-resolved CdTe single photon counting detector.

    PubMed

    Liu, Xin; Lee, Hyoung Koo

    2014-01-01

    While spectral CT using single photon counting detectors has shown a number of advantages in diagnostic imaging, knowledge of the detector response function of an energy-resolved detector is needed to correct the signal bias and reconstruct the image more accurately. The objective of this paper is to study the photon counting detector response function using laboratory sources, and to investigate the signal bias correction method. Our approach is to model the detector response function over the entire diagnostic energy range (from 20 keV upward); a method to correct the resulting spectrum distortion is also proposed. In spectral and fluorescence CT, the spectrum distortion caused by the detector response function poses a problem and cannot be ignored in any quantitative analysis. The detector response function of a CdTe detector can be obtained by a semi-analytical method.
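
    To make the role of the response function concrete, here is a toy numerical sketch (the 3x3 response matrix and counts are invented, not the CdTe response characterized in the paper): the measured spectrum is modeled as the response matrix applied to the incident spectrum, and the bias is corrected by inverting that linear model.

        import numpy as np

        # Toy response matrix R[i, j]: probability that a photon with true energy in
        # bin j is recorded in measured bin i (illustrative values only).
        R = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.75, 0.20],
                      [0.02, 0.05, 0.70]])

        measured = np.array([5200.0, 6100.0, 2300.0])   # distorted spectrum (counts/bin)

        # bias correction: recover the incident spectrum from measured ≈ R @ true
        true_est, *_ = np.linalg.lstsq(R, measured, rcond=None)
        print(np.round(true_est, 1))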

  11. Modeling zero-modified count and semicontinuous data in health services research part 2: case studies.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    This article is the second installment of a two-part tutorial on the analysis of zero-modified count and semicontinuous data. Part 1, which appears as a companion piece in this issue of Statistics in Medicine, provides a general background and overview of the topic, with particular emphasis on applications to health services research. Here, we present three case studies highlighting various approaches for the analysis of zero-modified data. The first case study describes methods for analyzing zero-inflated longitudinal count data. Case study 2 considers the use of hurdle models for the analysis of spatiotemporal count data. The third case study discusses an application of marginalized two-part models to the analysis of semicontinuous health expenditure data. Copyright © 2016 John Wiley & Sons, Ltd.
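
    As a minimal illustration of the kind of zero-modified count model the tutorial discusses, the sketch below fits a zero-inflated Poisson by maximum likelihood to simulated data; it is a bare-bones example, not the case-study models (which add covariates, longitudinal and spatial structure, and marginalized two-part forms).

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import gammaln, expit

        def zip_negloglik(params, y):
            """Negative log-likelihood of a zero-inflated Poisson: with probability pi
            the count is a structural zero, otherwise it is Poisson(lam)."""
            pi, lam = expit(params[0]), np.exp(params[1])   # map to (0,1) and (0,inf)
            pois_logpmf = -lam + y * np.log(lam) - gammaln(y + 1)
            ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))  # zeros: structural or sampling
            ll_pos = np.log(1 - pi) + pois_logpmf           # positive counts
            return -np.sum(np.where(y == 0, ll_zero, ll_pos))

        rng = np.random.default_rng(0)
        y = np.where(rng.random(500) < 0.3, 0, rng.poisson(2.5, 500))   # simulated ZIP data
        fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
        pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
        print(f"estimated zero-inflation {pi_hat:.2f}, Poisson mean {lam_hat:.2f}")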

  12. Holographic corrections to meson scattering amplitudes

    NASA Astrophysics Data System (ADS)

    Armoni, Adi; Ireson, Edwin

    2017-06-01

    We compute meson scattering amplitudes using the holographic duality between confining gauge theories and string theory, in order to consider holographic corrections to the Veneziano amplitude and associated higher-point functions. The generic nature of such computations is explained, thanks to the well-understood nature of confining string backgrounds, and two different examples of the calculation in given backgrounds are used to illustrate the details. The effect we discover, whilst only qualitative, is re-obtainable in many such examples, in four-point but also higher point amplitudes.

  13. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting.

    PubMed

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D W

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for experiments

  14. QuantiFly: Robust Trainable Software for Automated Drosophila Egg Counting

    PubMed Central

    Waithe, Dominic; Rennert, Peter; Brostow, Gabriel; Piper, Matthew D. W.

    2015-01-01

    We report the development and testing of software called QuantiFly: an automated tool to quantify Drosophila egg laying. Many laboratories count Drosophila eggs as a marker of fitness. The existing method requires laboratory researchers to count eggs manually while looking down a microscope. This technique is both time-consuming and tedious, especially when experiments require daily counts of hundreds of vials. The basis of the QuantiFly software is an algorithm which applies and improves upon an existing advanced pattern recognition and machine-learning routine. The accuracy of the baseline algorithm is additionally increased in this study through correction of bias observed in the algorithm output. The QuantiFly software, which includes the refined algorithm, has been designed to be immediately accessible to scientists through an intuitive and responsive user-friendly graphical interface. The software is also open-source, self-contained, has no dependencies and is easily installed (https://github.com/dwaithe/quantifly). Compared to manual egg counts made from digital images, QuantiFly achieved average accuracies of 94% and 85% for eggs laid on transparent (defined) and opaque (yeast-based) fly media. Thus, the software is capable of detecting experimental differences in most experimental situations. Significantly, the advanced feature recognition capabilities of the software proved to be robust to food surface artefacts like bubbles and crevices. The user experience involves image acquisition, algorithm training by labelling a subset of eggs in images of some of the vials, followed by a batch analysis mode in which new images are automatically assessed for egg numbers. Initial training typically requires approximately 10 minutes, while subsequent image evaluation by the software is performed in just a few seconds. Given the average time per vial for manual counting is approximately 40 seconds, our software introduces a timesaving advantage for experiments

  15. Counting It Twice.

    ERIC Educational Resources Information Center

    Schattschneider, Doris

    1991-01-01

    Provided are examples from many domains of mathematics that illustrate the Fubini Principle in its discrete version: the value of a summation over a rectangular array is independent of the order of summation. Included are: counting using partitions as in proof by pictures, combinatorial arguments, indirect counting as in the inclusion-exclusion…
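
    A classic instance of the discrete Fubini Principle mentioned above, counting the pairs (j, k) with j dividing k and k ≤ n in two different orders:

        \[
        \sum_{k=1}^{n} d(k) \;=\; \#\{(j,k) : j \mid k,\ k \le n\} \;=\; \sum_{j=1}^{n} \left\lfloor \frac{n}{j} \right\rfloor,
        \]

    where d(k) is the number of divisors of k: the left side counts the pairs row by row (for each k, its divisors), the right side column by column (for each j, its multiples up to n).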

  16. The projected background for the CUORE experiment

    DOE PAGES

    Alduino, C.; Alfonso, K.; Artusa, D. R.; ...

    2017-08-14

    The Cryogenic Underground Observatory for Rare Events (CUORE) is designed to search for neutrinoless double beta decay of 130Te with an array of 988 TeO2 bolometers operating at temperatures around 10 mK. The experiment is currently being commissioned in Hall A of Laboratori Nazionali del Gran Sasso, Italy. The goal of CUORE is to reach a 90% C.L. exclusion sensitivity on the 130Te decay half-life of 9 × 10^25 years after 5 years of data taking. The main issue to be addressed to accomplish this aim is the rate of background events in the region of interest, which must not be higher than 10^-2 counts/keV/kg/year. We developed a detailed Monte Carlo simulation, based on results from a campaign of material screening, radioassays, and bolometric measurements, to evaluate the expected background. This was used over the years to guide the construction strategies of the experiment and we use it here to project a background model for CUORE. In this paper we report the results of our study and our expectations for the background rate in the energy region where the peak signature of neutrinoless double beta decay of 130Te is expected.

  17. The projected background for the CUORE experiment

    NASA Astrophysics Data System (ADS)

    Alduino, C.; Alfonso, K.; Artusa, D. R.; Avignone, F. T.; Azzolini, O.; Banks, T. I.; Bari, G.; Beeman, J. W.; Bellini, F.; Benato, G.; Bersani, A.; Biassoni, M.; Branca, A.; Brofferio, C.; Bucci, C.; Camacho, A.; Caminata, A.; Canonica, L.; Cao, X. G.; Capelli, S.; Cappelli, L.; Carbone, L.; Cardani, L.; Carniti, P.; Casali, N.; Cassina, L.; Chiesa, D.; Chott, N.; Clemenza, M.; Copello, S.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Cushman, J. S.; D'Addabbo, A.; Dafinei, I.; Davis, C. J.; Dell'Oro, S.; Deninno, M. M.; Di Domizio, S.; Di Vacri, M. L.; Drobizhev, A.; Fang, D. Q.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Gladstone, L.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Hansen, E.; Heeger, K. M.; Hennings-Yeomans, R.; Hickerson, K. P.; Huang, H. Z.; Kadel, R.; Keppel, G.; Kolomensky, Yu. G.; Leder, A.; Ligi, C.; Lim, K. E.; Ma, Y. G.; Maino, M.; Marini, L.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Mosteiro, P. J.; Napolitano, T.; Nastasi, M.; Nones, C.; Norman, E. B.; Novati, V.; Nucciotti, A.; O'Donnell, T.; Ouellet, J. L.; Pagliarone, C. E.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pessina, G.; Pettinacci, V.; Piperno, G.; Pira, C.; Pirro, S.; Pozzi, S.; Previtali, E.; Rosenfeld, C.; Rusconi, C.; Sakai, M.; Sangiorgio, S.; Santone, D.; Schmidt, B.; Schmidt, J.; Scielzo, N. D.; Singh, V.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tomei, C.; Trentalange, S.; Vignati, M.; Wagaarachchi, S. L.; Wang, B. S.; Wang, H. W.; Welliver, B.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zhang, G. Q.; Zhu, B. X.; Zimmermann, S.; Zucchelli, S.; Laubenstein, M.

    2017-08-01

    The Cryogenic Underground Observatory for Rare Events (CUORE) is designed to search for neutrinoless double beta decay of ^{130}Te with an array of 988 TeO_2 bolometers operating at temperatures around 10 mK. The experiment is currently being commissioned in Hall A of Laboratori Nazionali del Gran Sasso, Italy. The goal of CUORE is to reach a 90% C.L. exclusion sensitivity on the ^{130}Te decay half-life of 9 × 10^{25} years after 5 years of data taking. The main issue to be addressed to accomplish this aim is the rate of background events in the region of interest, which must not be higher than 10^{-2} counts/keV/kg/year. We developed a detailed Monte Carlo simulation, based on results from a campaign of material screening, radioassays, and bolometric measurements, to evaluate the expected background. This was used over the years to guide the construction strategies of the experiment and we use it here to project a background model for CUORE. In this paper we report the results of our study and our expectations for the background rate in the energy region where the peak signature of neutrinoless double beta decay of ^{130}Te is expected.

  18. Impact of double counting and transfer bias on estimated rates and outcomes of acute myocardial infarction.

    PubMed

    Westfall, J M; McGloin, J

    2001-05-01

    Ischemic heart disease is the leading cause of death in the United States. Recent studies report inconsistent findings on the changes in the incidence of hospitalizations for ischemic heart disease. These reports have relied primarily on hospital discharge data. Preliminary data suggest that a significant percentage of patients suffering acute myocardial infarction (MI) in rural communities are transferred to urban centers for care. Patients transferred to a second hospital may be counted twice for one episode of ischemic heart disease. To describe the impact of double counting and transfer bias on the estimation of incidence rates and outcomes of ischemic heart disease, specifically acute MI, in the United States. Analysis of state hospital discharge data from Kansas, Colorado (State Inpatient Database [SID]), Nebraska, Arizona, New Jersey, Michigan, Pennsylvania, and Illinois (SID) for the years 1995 to 1997. A matching algorithm was developed for hospital discharges to determine patients counted twice for one episode of ischemic heart disease. Validation of our matching algorithm. Patients reported to have suffered ischemic heart disease (ICD9 codes 410-414, 786.5). Number of patients counted twice for one episode of acute MI. It is estimated that double count rates range from 10% to 15% for all states and increased over the 3 years. Moderate-sized rural counties had the highest estimated double count rates at 15% to 20%, with a few counties having estimated double count rates as high as 35% to 50%. Older patients and females were less likely to be double counted (P <0.05). Double counting patients has resulted in a significant overestimation in the incidence rate for hospitalization for acute MI. Correction of this double counting reveals a significantly lower incidence rate and a higher in-hospital mortality rate for acute MI. Transferred patients differ significantly from nontransferred patients, introducing significant bias into MI outcome studies. Double
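
    A hedged pandas sketch of the kind of matching rule such a study could use (the column names, the one-day window, and the toy records are invented; this is not the authors' validated algorithm):

        import pandas as pd

        # Toy stand-in for a state discharge file; real SID files use different
        # variable names and coarser identifiers.
        df = pd.DataFrame({
            "hospital_id":   ["A", "B", "C"],
            "age":           [67, 67, 54],
            "sex":           ["F", "F", "M"],
            "county":        ["X", "X", "Y"],
            "diagnosis":     ["410", "410", "410"],
            "disposition":   ["transfer", "home", "home"],
            "discharge_date": pd.to_datetime(["1996-03-02", "1996-03-06", "1996-05-01"]),
            "admit_date":     pd.to_datetime(["1996-03-01", "1996-03-02", "1996-04-28"]),
        })

        def flag_probable_transfers(df, max_gap_days=1):
            """Pair a discharge coded as a transfer with an admission for the same
            patient profile at a different hospital within `max_gap_days`; each pair
            is one MI episode that a naive count would tally twice."""
            keys = ["age", "sex", "county", "diagnosis"]
            pairs = df[df["disposition"] == "transfer"].merge(df, on=keys, suffixes=("_src", "_dst"))
            gap = (pairs["admit_date_dst"] - pairs["discharge_date_src"]).dt.days
            different_hospital = pairs["hospital_id_src"] != pairs["hospital_id_dst"]
            return pairs[gap.between(0, max_gap_days) & different_hospital]

        matches = flag_probable_transfers(df)
        print(f"{len(matches)} probable transfer pair(s) -> subtract from the raw MI count")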

  19. Precise predictions for V+jets dark matter backgrounds

    NASA Astrophysics Data System (ADS)

    Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.

    2017-12-01

    High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few-percent level up to the TeV range.

  20. Observer error structure in bull trout redd counts in Montana streams: Implications for inference on true redd numbers

    USGS Publications Warehouse

    Muhlfeld, Clint C.; Taper, Mark L.; Staples, David F.; Shepard, Bradley B.

    2006-01-01

    Despite the widespread use of redd counts to monitor trends in salmonid populations, few studies have evaluated the uncertainties in observed counts. We assessed the variability in redd counts for migratory bull trout Salvelinus confluentus among experienced observers in Lion and Goat creeks, which are tributaries to the Swan River, Montana. We documented substantially lower observer variability in bull trout redd counts than did previous studies. Observer counts ranged from 78% to 107% of our best estimates of true redd numbers in Lion Creek and from 90% to 130% of our best estimates in Goat Creek. Observers made both errors of omission and errors of false identification, and we modeled this combination by use of a binomial probability of detection and a Poisson count distribution of false identifications. Redd detection probabilities were high (mean = 83%) and exhibited no significant variation among observers (SD = 8%). We applied this error structure to annual redd counts in the Swan River basin (1982–2004) to correct for observer error and thus derived more accurate estimates of redd numbers and associated confidence intervals. Our results indicate that bias in redd counts can be reduced if experienced observers are used to conduct annual redd counts. Future studies should assess both sources of observer error to increase the validity of using redd counts for inferring true redd numbers in different basins. This information will help fisheries biologists to more precisely monitor population trends, identify recovery and extinction thresholds for conservation and recovery programs, ascertain and predict how management actions influence distribution and abundance, and examine effects of recovery and restoration activities.

  1. A semi-automated technique for labeling and counting of apoptosing retinal cells

    PubMed Central

    2014-01-01

    Background Retinal ganglion cell (RGC) loss is one of the earliest and most important cellular changes in glaucoma. The DARC (Detection of Apoptosing Retinal Cells) technology enables in vivo real-time non-invasive imaging of single apoptosing retinal cells in animal models of glaucoma and Alzheimer’s disease. To date, apoptosing RGCs imaged using DARC have been counted manually. This is time-consuming, labour-intensive, vulnerable to bias, and has considerable inter- and intra-operator variability. Results A semi-automated algorithm was developed which enabled automated identification of apoptosing RGCs labeled with fluorescent Annexin-5 on DARC images. Automated analysis included a pre-processing stage involving local-luminance and local-contrast “gain control”, a “blob analysis” step to differentiate between cells, vessels and noise, and a method to exclude non-cell structures using specific combined ‘size’ and ‘aspect’ ratio criteria. Apoptosing retinal cells were counted by 3 masked operators, generating ‘Gold-standard’ mean manual cell counts, and were also counted using the newly developed automated algorithm. Comparison between automated cell counts and the mean manual cell counts on 66 DARC images showed significant correlation between the two methods (Pearson’s correlation coefficient 0.978, p < 0.001; R² = 0.956). The intraclass correlation coefficient was 0.986 (95% CI 0.977-0.991, p < 0.001), and Cronbach’s alpha measure of consistency = 0.986, confirming excellent correlation and consistency. No significant difference (p = 0.922, 95% CI: −5.53 to 6.10) was detected between the cell counts of the two methods. Conclusions The novel automated algorithm enabled accurate quantification of apoptosing RGCs that is highly comparable to manual counting, and appears to minimise operator bias, whilst being both fast and reproducible. This may prove to be a valuable method of quantifying apoptosing retinal
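
    The size- and aspect-ratio-based exclusion step lends itself to a short sketch. The thresholds and the use of scipy's connected-component labelling below are assumptions made for illustration; they are not the DARC algorithm's actual parameters or implementation.

```python
import numpy as np
from scipy import ndimage

def count_cell_like_blobs(binary_img, min_area=20, max_area=400, max_aspect=2.5):
    """Label connected components and keep only blobs whose pixel area and
    bounding-box aspect ratio look cell-like (illustrative thresholds)."""
    labels, n_blobs = ndimage.label(binary_img)
    kept = 0
    for lab, sl in enumerate(ndimage.find_objects(labels), start=1):
        region = labels[sl] == lab
        height = sl[0].stop - sl[0].start
        width = sl[1].stop - sl[1].start
        aspect = max(height, width) / max(min(height, width), 1)
        if min_area <= region.sum() <= max_area and aspect <= max_aspect:
            kept += 1
    return kept

# Toy binary image with a single 5x5 blob (area 25, aspect 1), which passes the default filters.
img = np.zeros((32, 32), dtype=bool)
img[5:10, 5:10] = True
print(count_cell_like_blobs(img))   # -> 1
```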

  2. Mealtime Insulin Dosing by Carbohydrate Counting in Hospitalized Cardiology Patients: A Retrospective Cohort Study.

    PubMed

    Thurber, Kristina M; Dierkhising, Ross A; Reiland, Sarah A; Pearson, Kristina K; Smith, Steven A; O'Meara, John G

    2016-01-01

    Carbohydrate counting may improve glycemic control in hospitalized cardiology patients by providing individualized insulin doses tailored to meal consumption. The purpose of this study was to compare glycemic outcomes with mealtime insulin dosed by carbohydrate counting versus fixed dosing in the inpatient setting. This single-center retrospective cohort study included 225 adult medical cardiology patients who received mealtime, basal, and correction-scale insulin concurrently for at least 72 h and up to 7 days in the interval March 1, 2010-November 7, 2013. Mealtime insulin was dosed by carbohydrate counting or with fixed doses determined prior to meal intake. An inpatient diabetes consult service was responsible for insulin management. Exclusion criteria included receipt of an insulin infusion. The primary end point compared mean daily postprandial glucose values, whereas secondary end points included comparison of preprandial glucose values and mean daily rates of hypoglycemia. Mean postprandial glucose level on Day 7 was 204 and 183 mg/dL in the carbohydrate counting and fixed mealtime dose groups, respectively (unadjusted P=0.04, adjusted P=0.12). There were no statistical differences between groups on Days 2-6. Greater rates of preprandial hypoglycemia were observed in the carbohydrate counting cohort on Day 5 (8.6% vs. 1.5%, P=0.02), Day 6 (1.7% vs. 0%, P=0.01), and Day 7 (7.1% vs. 0%, P=0.008). No differences in postprandial hypoglycemia were seen. Mealtime insulin dosing by carbohydrate counting was associated with similar glycemic outcomes as fixed mealtime insulin dosing, except for a greater incidence of preprandial hypoglycemia. Additional comparative studies that include hospital outcomes are needed.

  3. dropEst: pipeline for accurate estimation of molecular counts in droplet-based single-cell RNA-seq experiments.

    PubMed

    Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V

    2018-06-19

    Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
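
    One common form of cellular-barcode error correction, collapsing rare barcodes onto an abundant barcode one mismatch away, can be sketched as below. The count threshold and the greedy assignment rule are illustrative assumptions; dropEst's actual correction procedures are more elaborate.

```python
from collections import Counter

def hamming1(a, b):
    """True if equal-length sequences differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def collapse_barcodes(barcodes, min_real_count=100):
    """Reassign rare barcodes to an abundant barcode one mismatch away.
    Threshold and greedy rule are illustrative, not dropEst's exact logic."""
    counts = Counter(barcodes)
    real = {bc for bc, c in counts.items() if c >= min_real_count}
    mapping = {}
    for bc in counts:
        if bc in real:
            mapping[bc] = bc
            continue
        candidates = [r for r in real if hamming1(bc, r)]
        # Assign to the most abundant plausible parent, else keep as-is.
        mapping[bc] = max(candidates, key=counts.get) if candidates else bc
    return mapping

bcs = ["AAAA"] * 150 + ["AAAT"] * 3 + ["CCCC"] * 120 + ["GGGG"] * 2
print(collapse_barcodes(bcs)["AAAT"])   # -> "AAAA"
```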

  4. Complexities of Counting.

    ERIC Educational Resources Information Center

    Stake, Bernadine Evans

    This document focuses on one child's skip counting methods. The pupil, a second grade student at Steuben School, in Kankakee, Illinois, was interviewed as she made several attempts at counting twenty-five poker chips on a circular piece of paper. The interview was part of a larger study of "Children's Conceptions of Number and Numeral,"…

  5. Inter-rater reliability of malaria parasite counts and comparison of methods

    PubMed Central

    2009-01-01

    Background The introduction of artemisinin-based treatment for falciparum malaria has led to a shift away from symptom-based diagnosis. Diagnosis may be achieved by using rapid non-microscopic diagnostic tests (RDTs), of which there are many available. Light microscopy, however, has a central role in parasite identification and quantification; it remains the main method of parasite-based diagnosis in clinic and hospital settings and is necessary for monitoring the accuracy of RDTs. The World Health Organization has prepared a proficiency testing panel containing a range of malaria-positive blood samples of known parasitaemia, to be used for the assessment of commercially available malaria RDTs. Different blood film and counting methods may be used for this purpose, which raises questions regarding accuracy and reproducibility. A comparison was made of the established methods for parasitaemia estimation to determine which would give the least inter-rater and inter-method variation. Methods Experienced malaria microscopists counted asexual parasitaemia on different slides using three methods: the thin film method using the total erythrocyte count, the thick film method using the total white cell count, and the Earle and Perez method. All the slides were stained using Giemsa pH 7.2. Analysis of variance (ANOVA) models were used to find the inter-rater reliability for the different methods. The paired t-test was used to assess any systematic bias between the two methods, and a regression analysis was used to see if there was a changing bias with parasite count level. Results The thin blood film gave parasite counts around 30% higher than those obtained by the thick film and Earle and Perez methods, but exhibited a loss of sensitivity with low parasitaemia. The thick film and Earle and Perez methods showed little or no bias in counts between the two methods; however, estimated inter-rater reliability was slightly better for the thick film method. Conclusion The thin film
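
    The thick film and thin film counting methods compared here scale the parasites seen against a reference cell count. A minimal sketch of the usual arithmetic follows; the assumed densities (8000 white cells/µL and 5 × 10^6 erythrocytes/µL) are conventional defaults, not values from this study.

```python
def thick_film_parasitaemia(parasites_counted, wbc_counted, wbc_per_ul=8000):
    """Parasites per microlitre from a thick film, scaled by the white cell count.
    wbc_per_ul=8000 is the conventional assumed value when the true WBC is unknown."""
    return parasites_counted * wbc_per_ul / wbc_counted

def thin_film_parasitaemia(parasitized_rbc, rbc_counted, rbc_per_ul=5_000_000):
    """Parasites per microlitre from a thin film, scaled by the erythrocyte count."""
    return parasitized_rbc * rbc_per_ul / rbc_counted

# Example: 250 parasites seen against 500 white cells -> 4000 parasites/µL.
print(thick_film_parasitaemia(250, 500))
```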

  6. Direct Validation of the Wall Interference Correction System of the Ames 11-Foot Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Boone, Alan R.

    2003-01-01

    Data from the test of a large semispan model were used to perform a direct validation of a wall interference correction system for a transonic slotted-wall wind tunnel. First, different sets of uncorrected aerodynamic coefficients were generated by physically changing the boundary condition of the test section walls. Then, wall interference corrections were computed and applied to all data points. Finally, an interpolation of the corrected aerodynamic coefficients was performed. This interpolation ensured that the corrected Mach number of a given run would be constant. Overall, the agreement between corresponding interpolated lift, drag, and pitching moment coefficient sets was very good. Buoyancy corrections were also investigated. These studies showed that the accuracy goal of one drag count may only be achieved if reliable estimates of the wall interference induced buoyancy correction are available during a test.

  7. Repeatability of paired counts.

    PubMed

    Alexander, Neal; Bethony, Jeff; Corrêa-Oliveira, Rodrigo; Rodrigues, Laura C; Hotez, Peter; Brooker, Simon

    2007-08-30

    The Bland and Altman technique is widely used to assess the variation between replicates of a method of clinical measurement. It yields the repeatability, i.e. the value within which 95 per cent of repeat measurements lie. The valid use of the technique requires that the variance is constant over the data range. This is not usually the case for counts of items such as CD4 cells or parasites, nor is the log transformation applicable to zero counts. We investigate the properties of generalized differences based on Box-Cox transformations. For an example, in a data set of hookworm eggs counted by the Kato-Katz method, the square root transformation is found to stabilize the variance. We show how to back-transform the repeatability on the square root scale to the repeatability of the counts themselves, as an increasing function of the square mean root egg count, i.e. the square of the average of square roots. As well as being more easily interpretable, the back-transformed results highlight the dependence of the repeatability on the sample volume used.
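
    The back-transformation described above follows from the identity x₁ − x₂ = (√x₁ − √x₂)(√x₁ + √x₂), so a repeatability of r on the square-root scale corresponds to roughly 2r√s on the count scale, where s is the square mean root. Below is a hedged sketch with illustrative data, not the hookworm egg counts of the study.

```python
import numpy as np

def sqrt_scale_repeatability(x1, x2):
    """Bland-Altman style repeatability on the square-root scale, taken here as
    1.96 * SD of the paired differences (one common definition)."""
    d = np.sqrt(x1) - np.sqrt(x2)
    return 1.96 * d.std(ddof=1)

def back_transformed_repeatability(r_sqrt, square_mean_root):
    """Approximate repeatability on the count scale at a given square mean root
    (the square of the average of square roots), using
    x1 - x2 = (sqrt(x1) - sqrt(x2)) * (sqrt(x1) + sqrt(x2)) ~ r * 2*sqrt(s)."""
    return 2.0 * r_sqrt * np.sqrt(square_mean_root)

x1 = np.array([0, 4, 16, 100, 400])   # illustrative paired counts
x2 = np.array([1, 9, 25, 81, 441])
r = sqrt_scale_repeatability(x1, x2)
print(back_transformed_repeatability(r, square_mean_root=100.0))
```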

  8. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  9. Loop corrections to primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Boran, Sibel; Kahya, E. O.

    2018-02-01

    We discuss quantum gravitational loop effects to observable quantities such as curvature power spectrum and primordial non-Gaussianity of cosmic microwave background (CMB) radiation. We first review the previously shown case where one gets a time dependence for zeta-zeta correlator due to loop corrections. Then we investigate the effect of loop corrections to primordial non-Gaussianity of CMB. We conclude that, even with a single scalar inflaton, one might get a huge value for non-Gaussianity which would exceed the observed value by at least 30 orders of magnitude. Finally we discuss the consequences of this result for scalar driven inflationary models.

  10. Error analysis and corrections to pupil diameter measurements with Langley Research Center's oculometer

    NASA Technical Reports Server (NTRS)

    Fulton, C. L.; Harris, R. L., Jr.

    1980-01-01

    Factors that can affect oculometer measurements of pupil diameter are: horizontal (azimuth) and vertical (elevation) viewing angle of the pilot; refraction of the eye and cornea; changes in the distance of the eye to the camera; illumination intensity of light on the eye; counting sensitivity of the scan lines used to measure diameter; and output voltage. To estimate the accuracy of the measurements, an artificial eye was designed and a series of runs performed with the oculometer system. When refraction effects are included, results show that pupil diameter is a parabolic function of the azimuth angle, similar to the cosine function predicted by theory; this error can be accounted for by using a correction equation, reducing the error from 6% to 1.5% of the actual diameter. Elevation angle and illumination effects were found to be negligible. The effects of counting sensitivity and output voltage can be calculated directly from system documentation. The overall accuracy of the unmodified system is about 6%. After correcting for the azimuth angle errors, the overall accuracy is approximately 2%.

  11. The use of flow cytometry to accurately ascertain total and viable counts of Lactobacillus rhamnosus in chocolate.

    PubMed

    Raymond, Yves; Champagne, Claude P

    2015-04-01

    The goals of this study were to evaluate the precision and accuracy of flow cytometry (FC) methodologies in the evaluation of populations of probiotic bacteria (Lactobacillus rhamnosus R0011) in two commercial dried forms, and to ascertain the challenges in enumerating them in a chocolate matrix. FC analyses of total (FC(T)) and viable (FC(V)) counts in liquid or dried cultures were almost two times more precise (reproducible) than traditional direct microscopic counts (DMC) or colony forming units (CFU). With FC, it was possible to ascertain low levels of dead cells (FC(D)) in fresh cultures, which is not possible with traditional CFU and DMC methodologies. There was no interference of chocolate solids on FC counts of probiotics when inoculation was above 10^7 bacteria per g. Addition of probiotics to chocolate at 40 °C resulted in a 37% loss of viable cells. Blending of the probiotic powder into chocolate was not uniform, which raised a concern that the precision of viable counts could suffer. FC(T) data can serve to identify the correct inoculation level of a sample, and viable counts (FC(V) or CFU) can subsequently be better interpreted. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  12. Count-doubling time safety circuit

    DOEpatents

    Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.

    1981-01-01

    There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.

  13. Relativistic Corrections to the Sunyaev-Zeldovich Effect for Clusters of Galaxies. III. Polarization Effect

    NASA Astrophysics Data System (ADS)

    Itoh, Naoki; Nozawa, Satoshi; Kohyama, Yasuharu

    2000-04-01

    We extend the formalism of relativistic thermal and kinematic Sunyaev-Zeldovich effects and include the polarization of the cosmic microwave background photons. We consider the situation of a cluster of galaxies moving with a velocity β≡v/c with respect to the cosmic microwave background radiation. In the present formalism, polarization of the scattered cosmic microwave background radiation caused by the proper motion of a cluster of galaxies is naturally derived as a special case of the kinematic Sunyaev-Zeldovich effect. The relativistic corrections are also included in a natural way. Our results are in complete agreement with the recent results of relativistic corrections obtained by Challinor, Ford, & Lasenby with an entirely different method, as well as the nonrelativistic limit obtained by Sunyaev & Zeldovich. The relativistic correction becomes significant in the Wien region.

  14. Performance Evaluation of High Fluorescence Lymphocyte Count: Comparability to Atypical Lymphocyte Count and Clinical Significance.

    PubMed

    Tantanate, Chaicharoen; Klinbua, Cherdsak

    2018-06-15

    To investigate the association between high-fluorescence lymphocyte cell (HFLC) and atypical lymphocyte (AL) counts, and to determine the clinical significance of HFLC. We compared automated HFLC and microscopic AL counts and analyzed the findings. Patient clinical data for each specimen were reviewed. A total of 320 blood specimens were included. The correlation between HFLC and microscopic AL counts was 0.865 and 0.893 for absolute and percentage counts, respectively. Sensitivity, specificity, and accuracy of HFLC at the cutoff value of 0.1 × 10^9/L for detection of AL were 0.8, 0.77, and 0.8, respectively. Studied patients were classified into 4 groups: infection, immunological disorders, malignant neoplasms, and others. Patients with infections had the highest HFLC. Most of those patients (67.7%) had dengue infection. HFLC counts were well correlated with AL counts, with acceptable test characteristics. Applying HFLC flagging may alert laboratory staff to the presence of ALs.

  15. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  16. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model

  17. Fast neutron background characterization with the Radiological Multi-sensor Analysis Platform (RadMAP)

    DOE PAGES

    Davis, John R.; Brubaker, Erik; Vetter, Kai

    2017-03-29

    In an effort to characterize the fast neutron radiation background, 16 EJ-309 liquid scintillator cells were installed in the Radiological Multi-sensor Analysis Platform (RadMAP) to collect data in the San Francisco Bay Area. Each fast neutron event was associated with specific weather metrics (pressure, temperature, absolute humidity) and GPS coordinates. Furthermore, the expected exponential dependence of the fast neutron count rate on atmospheric pressure was demonstrated and event rates were subsequently adjusted given the measured pressure at the time of detection. Pressure-adjusted data were also used to investigate the influence of other environmental conditions on the neutron background rate. Using National Oceanic and Atmospheric Administration (NOAA) coastal area lidar data, an algorithm was implemented to approximate sky-view factors (the total fraction of visible sky) for points along RadMAP's route. In the three areas analyzed, San Francisco, Downtown Oakland, and Berkeley, the background rate was suppressed by over 50% across the range of sky-view factors measured. This effect, which is due to the shielding of cosmic-ray produced neutrons by surrounding buildings, was comparable to the pressure influence, which yielded a 32% suppression in the count rate over the range of pressures measured.
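
    The exponential pressure adjustment mentioned above is commonly applied to cosmic-ray-induced neutron rates. A minimal sketch follows; the barometric coefficient used here is a typical literature-style value, not the coefficient fitted to the RadMAP data.

```python
import math

def pressure_corrected_rate(rate, pressure_hpa, ref_pressure_hpa=1013.25,
                            beta_per_hpa=0.0072):
    """Adjust a neutron count rate to a reference pressure using the standard
    exponential barometric correction N_corr = N * exp(beta * (P - P_ref)).
    beta_per_hpa is an illustrative coefficient, not the value fitted in this work."""
    return rate * math.exp(beta_per_hpa * (pressure_hpa - ref_pressure_hpa))

# Rate measured at 990 hPa referred back to standard pressure.
print(pressure_corrected_rate(12.0, 990.0))
```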

  18. CT-based attenuation correction and resolution compensation for I-123 IMP brain SPECT normal database: a multicenter phantom study.

    PubMed

    Inui, Yoshitaka; Ichihara, Takashi; Uno, Masaki; Ishiguro, Masanobu; Ito, Kengo; Kato, Katsuhiko; Sakuma, Hajime; Okazawa, Hidehiko; Toyama, Hiroshi

    2018-06-01

    Statistical image analysis of brain SPECT images has improved diagnostic accuracy for brain disorders. However, the results of statistical analysis vary depending on the institution even when they use a common normal database (NDB), due to different intrinsic spatial resolutions or correction methods. The present study aimed to evaluate the correction of spatial resolution differences between equipment and examine the differences in skull bone attenuation to construct a common NDB for use in multicenter settings. The proposed acquisition and processing protocols were those routinely used at each participating center with additional triple energy window (TEW) scatter correction (SC) and computed tomography (CT) based attenuation correction (CTAC). A multicenter phantom study was conducted on six imaging systems in five centers, with either single photon emission computed tomography (SPECT) or SPECT/CT, and two brain phantoms. The gray/white matter I-123 activity ratio in the brain phantoms was 4, and they were enclosed in either an artificial adult male skull, 1300 Hounsfield units (HU), a female skull, 850 HU, or an acrylic cover. The cut-off frequency of the Butterworth filters was adjusted so that the spatial resolution was unified to a 17.9 mm full width at half maximum (FWHM), that of the lowest resolution system. The gray-to-white matter count ratios were measured from SPECT images and compared with the actual activity ratio. In addition, mean, standard deviation and coefficient of variation images were calculated after normalization and anatomical standardization to evaluate the variability of the NDB. The gray-to-white matter count ratio error without SC and attenuation correction (AC) was significantly larger for higher bone densities (p < 0.05). The count ratio error with TEW and CTAC was approximately 5% regardless of bone density. After adjustment of the spatial resolution in the SPECT images, the variability of the NDB decreased and was comparable

  19. [A multicenter study of correlation between peripheral lymphocyte counts and CD4(+) T cell counts in HIV/AIDS patients].

    PubMed

    Xie, Jing; Qiu, Zhifeng; Han, Yang; Li, Yanling; Song, Xiaojing; Li, Taisheng

    2015-02-01

    To evaluate the accuracy of lymphocyte count as a surrogate for CD4(+) T cell count in treatment-naïve HIV-infected adults. A total of 2013 HIV-infected patients were screened at 23 sites in China. CD4(+) T cell counts were measured by flow cytometry. The correlation between CD4(+) T cell count and peripheral lymphocyte count was analyzed by the Spearman coefficient. The area under the ROC curve (AUC-ROC) was used to evaluate the performance of lymphocyte count as a surrogate for CD4(+) T cell count. The lymphocyte count and CD4(+) T cell count of these 2013 patients were (1600 ± 670) × 10^6/L and (244 ± 148) × 10^6/L, respectively. CD4(+) T cell count was positively correlated with lymphocyte count (r = 0.482, P < 0.0001). The AUC-ROC of lymphocyte count as a surrogate for CD4(+) T cell counts of <100 × 10^6/L, <200 × 10^6/L and <350 × 10^6/L was 0.790 (95% CI 0.761-0.818, P < 0.0001), 0.733 (95% CI 0.710-0.755, P < 0.0001) and 0.732 (95% CI 0.706-0.758, P < 0.0001), respectively. Lymphocyte count could be considered a potential surrogate marker for CD4(+) T cell count in HIV/AIDS patients without access to T cell subset testing by flow cytometry.

  20. RSA and its Correctness through Modular Arithmetic

    NASA Astrophysics Data System (ADS)

    Meelu, Punita; Malik, Sitender

    2010-11-01

    To secure business applications, commercial sectors use Public Key Cryptographic Systems (PKCS). RSA belongs to this category and supports both encryption and authentication. This paper gives an introduction to RSA through its encryption and decryption schemes, the mathematical background (including the theorems used to combine modular equations), and the correctness of RSA. In short, the paper explains some of the mathematical concepts RSA is based on and then provides a complete proof that RSA works correctly. The correctness of RSA can be proved through the combined process of encryption and decryption, based on the Chinese Remainder Theorem (CRT) and Euler's theorem. However, there is no mathematical proof that RSA is secure; everyone takes that on trust.
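
    As a concrete illustration of the encryption/decryption round trip and of the CRT recombination that the correctness proof rests on, here is a toy sketch with textbook-sized primes (illustrative only; real RSA uses large primes and padding).

```python
from math import gcd

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse via extended Euclid (assumes gcd(a, m) == 1)."""
    _, x, _ = extended_gcd(a, m)
    return x % m

# Toy key generation with small primes (not secure, for illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
assert gcd(e, phi) == 1
d = modinv(e, phi)

def encrypt(m):
    return pow(m, e, n)

def decrypt_crt(c):
    """Decrypt using the Chinese Remainder Theorem: combine m mod p and m mod q."""
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = modinv(q, p)
    m_p, m_q = pow(c, dp, p), pow(c, dq, q)
    h = (q_inv * (m_p - m_q)) % p
    return m_q + h * q

m = 65
assert decrypt_crt(encrypt(m)) == m   # round trip recovers the message
```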

  1. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method

    PubMed Central

    Veta, Mitko; van Diest, Paul J.; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P. W.

    2016-01-01

    Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received considerable interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects of smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased

  2. Development and evaluation of the Kids Count Farm Safety Lesson.

    PubMed

    Liller, K D; Noland, V; Rijal, P; Pesce, K; Gonzalez, R

    2002-11-01

    The Kids Count Farm Safety Lesson was delivered to nearly 2,000 fifth-grade students in 15 rural schools in Hillsborough County, Florida. The lesson covered animal, machinery, water, and general safety topics applicable to farming in Florida. A staggered pretest-posttest study design was followed whereby five schools received a multiple-choice pretest and posttest and the remainder of the schools (N = 10) received the posttest only. Results of the study showed a significant increase in the mean number of correct answers on the posttests compared to the pretests. There was no significant difference in the mean number of correct answers of those students who received the pretest and those students who had not, eliminating a "pretest" effect. This study fills an important gap in the literature by evaluating a farm safety curriculum offered in the elementary school setting. It also included migrant schoolchildren in the study population. It is strongly recommended that agricultural safety information be included into the health education curriculum of these elementary schools.

  3. Components of the Extragalactic Gamma-Ray Background

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.; Venters, Tonia M.

    2011-01-01

    We present new theoretical estimates of the relative contributions of unresolved blazars and star-forming galaxies to the extragalactic gamma-ray background (EGB) and discuss constraints on the contributions from alternative mechanisms such as dark matter annihilation and truly diffuse gamma-ray production. We find that the Fermi source count data do not rule out a scenario in which the EGB is dominated by emission from unresolved blazars, though unresolved star-forming galaxies may also contribute significantly to the background, within order-of-magnitude uncertainties. In addition, we find that the spectrum of the unresolved star-forming galaxy contribution cannot explain the EGB spectrum found by EGRET at energies between 50 and 200 MeV, whereas the spectrum of unresolved flat spectrum radio quasars, when accounting for the energy-dependent effects of source confusion, could be consistent with the combined spectrum of the low-energy EGRET EGB measurements and the Fermi-Large Area Telescope EGB measurements.

  4. Statistical Aspects of Point Count Sampling

    Treesearch

    Richard J. Barker; John R. Sauer

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the...

  5. Progress on the Use of Combined Analog and Photon Counting Detection for Raman Lidar

    NASA Technical Reports Server (NTRS)

    Newsom, Rob; Turner, Dave; Clayton, Marian; Ferrare, Richard

    2008-01-01

    The Atmospheric Radiation Measurement (ARM) program Raman Lidar (CARL) was upgraded in 2004 with a new data system that provides simultaneous measurements of both the photomultiplier analog output voltage and photon counts. The so-called merge value added procedure (VAP) was developed to combine the analog and count-rate signals into a single signal with improved dynamic range. Earlier versions of this VAP tended to cause unacceptably large biases in the water vapor mixing ratio during the daytime as a result of improper matching between the analog and count-rate signals in the presence of elevated solar background levels. We recently identified several problems and tested a modified version of the merge VAP by comparing profiles of water vapor mixing ratio derived from CARL with simultaneous sonde data over a six month period. We show that the modified merge VAP significantly reduces the daytime bias, and results in mean differences that are within approximately 1% for both nighttime and daytime measurements.

  6. Analysis of counting errors in the phase/Doppler particle analyzer

    NASA Technical Reports Server (NTRS)

    Oldenburg, John R.

    1987-01-01

    NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occurring during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
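
    The coincidence-loss calculation described above, the Poisson probability of more than one event during the droplet signal time, can be written directly. The rate and window length in the example are illustrative, not values from the study.

```python
import math

def coincidence_loss_probability(sample_rate_hz, signal_time_s):
    """Poisson probability of more than one event during the droplet signal time:
    with mu = rate * signal_time, P(N > 1) = 1 - exp(-mu) * (1 + mu)."""
    mu = sample_rate_hz * signal_time_s
    return 1.0 - math.exp(-mu) * (1.0 + mu)

# Illustrative numbers: 10 kHz arrival rate, 5 microsecond signal window.
print(coincidence_loss_probability(10_000, 5e-6))
```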

  7. Rigid-body transformation of list-mode projection data for respiratory motion correction in cardiac PET.

    PubMed

    Livieratos, L; Stegger, L; Bloomfield, P M; Schafers, K; Bailey, D L; Camici, P G

    2005-07-21

    High-resolution cardiac PET imaging with emphasis on quantification would benefit from eliminating the problem of respiratory movement during data acquisition. Respiratory gating on the basis of list-mode data has been employed previously as one approach to reduce motion effects. However, it results in poor count statistics with degradation of image quality. This work reports on the implementation of a technique to correct for respiratory motion in the area of the heart at no extra cost for count statistics and with the potential to maintain ECG gating, based on rigid-body transformations on list-mode data event-by-event. A motion-corrected data set is obtained by assigning, after pre-correction for detector efficiency and photon attenuation, individual lines-of-response to new detector pairs with consideration of respiratory motion. Parameters of respiratory motion are obtained from a series of gated image sets by means of image registration. Respiration is recorded simultaneously with the list-mode data using an inductive respiration monitor with an elasticized belt at chest level. The accuracy of the technique was assessed with point-source data showing a good correlation between measured and true transformations. The technique was applied on phantom data with simulated respiratory motion, showing successful recovery of tracer distribution and contrast on the motion-corrected images, and on patient data with C15O and 18FDG. Quantitative assessment of preliminary C15O patient data showed improvement in the recovery coefficient at the centre of the left ventricle.

  8. You can count on the motor cortex: Finger counting habits modulate motor cortex activation evoked by numbers

    PubMed Central

    Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H.; Pulvermüller, Friedemann

    2012-01-01

    The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. PMID:22133748

  9. Total lymphocyte count and subpopulation lymphocyte counts in relation to dietary intake and nutritional status of peritoneal dialysis patients.

    PubMed

    Grzegorzewska, Alicja E; Leander, Magdalena

    2005-01-01

    Dietary deficiency causes abnormalities in circulating lymphocyte counts. For the present paper, we evaluated correlations between total and subpopulation lymphocyte counts (TLC, SLCs) and parameters of nutrition in peritoneal dialysis (PD) patients. Studies were carried out in 55 patients treated with PD for 22.2 +/- 11.4 months. Parameters of nutritional status included total body mass, lean body mass (LBM), body mass index (BMI), and laboratory indices [total protein, albumin, iron, ferritin, and total iron binding capacity (TIBC)]. The SLCs were evaluated using flow cytometry. Positive correlations were seen between TLC and dietary intake of niacin; TLC and CD8 and CD16+56 counts and energy delivered from protein; CD4 count and beta-carotene and monounsaturated fatty acids 17:1 intake; and CD19 count and potassium, copper, vitamin A, and beta-carotene intake. Anorexia negatively influenced CD19 count. Serum albumin showed correlations with CD4 and CD19 counts, and LBM with CD19 count. A higher CD19 count was connected with a higher red blood cell count, hemoglobin, and hematocrit. Correlations were observed between TIBC and TLC and CD3 and CD8 counts, and between serum Fe and TLC and CD3 and CD4 counts. Patients with a higher CD19 count showed a better clinical-laboratory score, especially less weakness. Patients with a higher CD4 count had less expressed insomnia. Quantities of ingested vitamins and minerals influence lymphocyte counts in the peripheral blood of PD patients. Evaluation of TLC and SLCs is helpful in monitoring the effectiveness of nutrition in these patients.

  10. Kids Count in Delaware: Fact Book 1999 [and] Families Count in Delaware: Fact Book, 1999.

    ERIC Educational Resources Information Center

    Delaware Univ., Newark. Kids Count in Delaware.

    This Kids Count Fact Book is combined with the Families Count Fact Book to provide information on statewide trends affecting children and families in Delaware. The Kids Count statistical profile is based on 10 main indicators of child well-being: (1) births to teens; (2) low birth weight babies; (3) infant mortality; (4) child deaths; (5) teen…

  11. Correlation Functions Quantify Super-Resolution Images and Estimate Apparent Clustering Due to Over-Counting

    PubMed Central

    Veatch, Sarah L.; Machta, Benjamin B.; Shelby, Sarah A.; Chiang, Ethan N.; Holowka, David A.; Baird, Barbara A.

    2012-01-01

    We present an analytical method using correlation functions to quantify clustering in super-resolution fluorescence localization images and electron microscopy images of static surfaces in two dimensions. We use this method to quantify how over-counting of labeled molecules contributes to apparent self-clustering and to calculate the effective lateral resolution of an image. This treatment applies to distributions of proteins and lipids in cell membranes, where there is significant interest in using electron microscopy and super-resolution fluorescence localization techniques to probe membrane heterogeneity. When images are quantified using pair auto-correlation functions, the magnitude of apparent clustering arising from over-counting varies inversely with the surface density of labeled molecules and does not depend on the number of times an average molecule is counted. In contrast, we demonstrate that over-counting does not give rise to apparent co-clustering in double label experiments when pair cross-correlation functions are measured. We apply our analytical method to quantify the distribution of the IgE receptor (FcεRI) on the plasma membranes of chemically fixed RBL-2H3 mast cells from images acquired using stochastic optical reconstruction microscopy (STORM/dSTORM) and scanning electron microscopy (SEM). We find that apparent clustering of FcεRI-bound IgE is dominated by over-counting labels on individual complexes when IgE is directly conjugated to organic fluorophores. We verify this observation by measuring pair cross-correlation functions between two distinguishably labeled pools of IgE-FcεRI on the cell surface using both imaging methods. After correcting for over-counting, we observe weak but significant self-clustering of IgE-FcεRI in fluorescence localization measurements, and no residual self-clustering as detected with SEM. We also apply this method to quantify IgE-FcεRI redistribution after deliberate clustering by crosslinking with two

  12. The piecewise-linear dynamic attenuator reduces the impact of count rate loss with photon-counting detectors

    NASA Astrophysics Data System (ADS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-06-01

    Photon counting x-ray detectors (PCXDs) offer several advantages compared to standard energy-integrating x-ray detectors, but also face significant challenges. One key challenge is the high count rates required in CT. At high count rates, PCXDs exhibit count rate loss and show reduced detective quantum efficiency in signal-rich (or high flux) measurements. In order to reduce count rate requirements, a dynamic beam-shaping filter can be used to redistribute flux incident on the patient. We study the piecewise-linear attenuator in conjunction with PCXDs without energy discrimination capabilities. We examined three detector models: the classic nonparalyzable and paralyzable detector models, and a ‘hybrid’ detector model which is a weighted average of the two which approximates an existing, real detector (Taguchi et al 2011 Med. Phys. 38 1089-102 ). We derive analytic expressions for the variance of the CT measurements for these detectors. These expressions are used with raw data estimated from DICOM image files of an abdomen and a thorax to estimate variance in reconstructed images for both the dynamic attenuator and a static beam-shaping (‘bowtie’) filter. By redistributing flux, the dynamic attenuator reduces dose by 40% without increasing peak variance for the ideal detector. For non-ideal PCXDs, the impact of count rate loss is also reduced. The nonparalyzable detector shows little impact from count rate loss, but with the paralyzable model, count rate loss leads to noise streaks that can be controlled with the dynamic attenuator. With the hybrid model, the characteristic count rates required before noise streaks dominate the reconstruction are reduced by a factor of 2 to 3. We conclude that the piecewise-linear attenuator can reduce the count rate requirements of the PCXD in addition to improving dose efficiency. The magnitude of this reduction depends on the detector, with paralyzable detectors showing much greater benefit than nonparalyzable detectors.
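
    The classic nonparalyzable and paralyzable count-rate-loss models, and a hybrid weighted average of the two, can be sketched as follows; the dead time and hybrid weight below are illustrative, not the values fitted to the detector of Taguchi et al.

```python
import math

def nonparalyzable(true_rate, dead_time):
    """Recorded rate for a nonparalyzable detector: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

def paralyzable(true_rate, dead_time):
    """Recorded rate for a paralyzable detector: m = n * exp(-n*tau)."""
    return true_rate * math.exp(-true_rate * dead_time)

def hybrid(true_rate, dead_time, weight=0.5):
    """Weighted average of the two classic models; the weight is illustrative."""
    return (weight * nonparalyzable(true_rate, dead_time)
            + (1.0 - weight) * paralyzable(true_rate, dead_time))

# Example: 1e8 counts/s true rate with a 20 ns dead time.
print(nonparalyzable(1e8, 20e-9), paralyzable(1e8, 20e-9), hybrid(1e8, 20e-9))
```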

  13. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600

  14. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is addressed for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
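
    For context, the noise-free baseline that such bias corrections refine is the standard maximum-likelihood slope estimator for a power-law (Pareto) flux distribution above a limiting sensitivity. Below is a hedged sketch that ignores Poisson measurement errors and background entirely.

```python
import numpy as np

def powerlaw_slope_mle(fluxes, s_min):
    """Maximum-likelihood slope of N(>S) proportional to S^-alpha for sources with
    S >= s_min (Pareto MLE). Ignores Poisson measurement noise and background,
    i.e. it is the idealized baseline, not the paper's bias-corrected method."""
    s = np.asarray(fluxes, dtype=float)
    s = s[s >= s_min]
    return len(s) / np.log(s / s_min).sum()

# Draw an illustrative sample with a known slope and recover it.
rng = np.random.default_rng(0)
true_alpha, s_min = 1.5, 1.0
sample = s_min * (rng.pareto(true_alpha, size=2000) + 1.0)
print(powerlaw_slope_mle(sample, s_min))   # close to 1.5
```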

  15. Assessment guidance of carbohydrate counting method in patients with type 2 diabetes mellitus.

    PubMed

    Martins, Michelle R; Ambrosio, Ana Cristina T; Nery, Marcia; Aquino, Rita de Cássia; Queiroz, Marcia S

    2014-04-01

    We evaluated the application of the carbohydrate counting method by 21 patients with type 2 diabetes, 1 year after they attended a guidance course. Participants answered a questionnaire to assess their adhesion to carbohydrate counting, to identify habit changes, and to gauge the method's applicability; values of glycated hemoglobin were also analyzed. Most participants (76%) were female, and 25% of them had obesity degree III. There was a statistically significant decrease in glycated hemoglobin from 8.42±0.02% to 7.66±0.01% comparing values before and after counseling. We observed that although patients stated that the method was difficult, they understood that carbohydrate counting could allow them to make choices and have more freedom in their meals; we also verified whether they accurately understood how to replace some foods used regularly in their diets, and most patients correctly chose replacements for the groups of bread (76%), beans (67%) and noodles (67%). We concluded that participation in the course led to improved blood glucose control with a significant reduction of glycated hemoglobin, better understanding of food groups, and the adoption of healthier eating habits. Copyright © 2013 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.

  16. Counting OCR errors in typeset text

    NASA Astrophysics Data System (ADS)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable, due to larger variances in the counts than would be expected from the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
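
    The dynamic-programming string matching that several of the compared accounting methods build on is the Levenshtein (edit) distance. A minimal sketch follows; the unit weights and the absence of suspect-marker handling are exactly the kind of implementation details the paper notes are often left unspecified.

```python
def levenshtein(a: str, b: str, w_sub=1, w_ins=1, w_del=1):
    """Edit distance by dynamic programming; unit weights are an assumption,
    since published accounting methods often leave the weights unspecified."""
    prev = list(range(0, (len(b) + 1) * w_ins, w_ins))
    for i, ca in enumerate(a, 1):
        cur = [i * w_del]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + w_del,                          # delete ca
                           cur[j - 1] + w_ins,                       # insert cb
                           prev[j - 1] + (0 if ca == cb else w_sub)))  # substitute
        prev = cur
    return prev[-1]

# Error count between ground truth and recognized text (illustrative strings).
print(levenshtein("background corrected count", "backgr0und correted count"))  # -> 2
```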

  17. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. The anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference and other problems. To solve these problems, this paper proposes a locating method based on the Susan operator combined with a sliding window and line scanning. In order to reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For easily confused characters, correction of the recognition results based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.

  18. Breast Tissue Characterization with Photon-counting Spectral CT Imaging: A Postmortem Breast Study

    PubMed Central

    Ding, Huanjun; Klopfer, Michael J.; Ducote, Justin L.; Masaki, Fumitaro

    2014-01-01

    Purpose To investigate the feasibility of breast tissue characterization in terms of water, lipid, and protein contents with a spectral computed tomographic (CT) system based on a cadmium zinc telluride (CZT) photon-counting detector by using postmortem breasts. Materials and Methods Nineteen pairs of postmortem breasts were imaged with a CZT-based photon-counting spectral CT system at a beam energy of 100 kVp. The mean glandular dose was estimated to be in the range of 1.8–2.2 mGy. The images were corrected for pulse pile-up and other artifacts by using spectral distortion corrections. Dual-energy decomposition was then applied to characterize each breast into water, lipid, and protein contents. The precision of the three-compartment characterization was evaluated by comparing the composition of right and left breasts, where the standard error of the estimations was determined. The results of dual-energy decomposition were compared, in terms of averaged root mean square error, with chemical analysis, which was used as the reference standard. Results The standard errors of the estimations of the right-left correlations obtained from spectral CT were 7.4%, 6.7%, and 3.2% for water, lipid, and protein contents, respectively. Compared with the reference standard, the average root mean square error in breast tissue composition was 2.8%. Conclusion Spectral CT can be used to accurately quantify the water, lipid, and protein contents in breast tissue in a laboratory study by using postmortem specimens. © RSNA, 2014 PMID:24814180

  19. Factor V Leiden is associated with increased sperm count.

    PubMed

    van Mens, T E; Joensen, U N; Bochdanovits, Z; Takizawa, A; Peter, J; Jørgensen, N; Szecsi, P B; Meijers, J C M; Weiler, H; Rajpert-De Meyts, E; Repping, S; Middeldorp, S

    2017-11-01

    Is the thrombophilia mutation factor V Leiden (FVL) associated with an increased total sperm count? Carriers of FVL have a higher total sperm count than non-FVL-carriers, which could not be explained by genetic linkage or by observations in a FVL-mouse model. FVL has a high prevalence in Caucasians despite detrimental health effects. Carriers have been shown to have higher fecundity, which might partly explain this evolutionary paradox. We determined FVL status in two cohorts (Dutch, n = 627; Danish, n = 854) of consecutively included men without known causes for spermatogenic failure, and performed an individual patient data meta-analysis of these two cohorts together with one previously published (Dutch, n = 908) cohort. We explored possible biological underpinnings for the relation between sperm count and FVL, by use of a FVL-mouse model and investigations of genetic linkage. Participants were male partners of subfertile couples (two Dutch cohorts) and young men from the general population (Danish cohort): FVL carrier rate was 4.0%, 4.6% and 7.3%, respectively. There were differences in smoking, abstinence time and age between the cohorts. We corrected for these in the primary analysis, which consisted of a mixed linear effects model, also incorporating unobjectified population differences. In public haplotype data from subjects of European descent, we explored linkage disequilibrium of FVL with all known single nucleotide polymorphisms in a 1.5 MB region around the F5 gene with an R2 cutoff of 0.8. We sequenced exons of four candidate genes hypothesized to be linked to FVL in a subgroup of FVL carriers with extreme sperm count values. The animal studies consisted of never mated 15-18-week-old C57BL/J6 mice heterozygous and homozygous for FVL and wild-type mice. We compared spermatogenesis parameters (normalized internal genitalia weights, epididymis sperm content and sperm motility) between FVL and wild-type mice. Human FVL carriers have a higher total sperm

  20. Does the covariance structure matter in longitudinal modelling for the prediction of future CD4 counts?

    PubMed

    Taylor, J M; Law, N

    1998-10-30

    We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process; one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives too narrow prediction intervals with poor coverage rates. Fitting using the model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.

  1. Command Decision-Making: Experience Counts

    DTIC Science & Technology

    2005-03-18

    USAWC Strategy Research Project: "Command Decision-Making: Experience Counts," by Lieutenant Colonel Kelly A. Wolgast, United States Army. Report date: 18 March 2005; format: Strategy Research Project; 30 pages.

  2. Relationship between automated total nucleated cell count and enumeration of cells on direct smears of canine synovial fluid.

    PubMed

    Dusick, Allison; Young, Karen M; Muir, Peter

    2014-12-01

    Canine osteoarthritis is a common disorder seen in veterinary clinical practice and causes considerable morbidity in dogs as they age. Synovial fluid analysis is an important tool for diagnosis and treatment of canine joint disease and obtaining a total nucleated cell count (TNCC) is particularly important. However, the low sample volumes obtained during arthrocentesis are often insufficient for performing an automated TNCC, thereby limiting diagnostic interpretation. The aim of the present study was to investigate whether estimation of TNCC in canine synovial fluid could be achieved by performing manual cell counts on direct smears of fluid. Fifty-eight synovial fluid samples, taken by arthrocentesis from 48 dogs, were included in the study. Direct smears of synovial fluid were prepared, and hyaluronidase added before cell counts were obtained using a commercial laser-based instrument. A protocol was established to count nucleated cells in a specific region of the smear, using a serpentine counting pattern; the mean number of nucleated cells per 400 × field was then calculated. There was a positive correlation between the automated TNCC and mean manual cell count, with more variability at higher TNCC. Regression analysis was performed to estimate TNCC from manual counts. By this method, 78% of the samples were correctly predicted to fall into one of three categories (within the reference interval, mildly to moderately increased, or markedly increased) relative to the automated TNCC. Intra-observer and inter-observer agreement was good to excellent. The results of the study suggest that interpretation of canine synovial fluid samples of low volume can be aided by methodical manual counting of cells on direct smears. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. The optimal on-source region size for detections with counting-type telescopes

    NASA Astrophysics Data System (ADS)

    Klepser, S.

    2017-03-01

    Source detection in counting type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ²∞ ≈ 2.51 times the squared PSF width σ²PSF. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
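
    A minimal sketch of how the asymptotic optimum quoted above translates into a cut radius is given below; it assumes only the stated high-count-limit value θ²opt ≈ 2.51 σ²PSF for a Gaussian PSF and does not reproduce the paper's dynamic low-count formula.

```python
import math

# Optimal on-source radius in the high-count, Gaussian-PSF limit, using the
# asymptotic value quoted above: theta_opt^2 ≈ 2.51 * sigma_PSF^2.
ZETA_INF_SQ = 2.51  # asymptotic optimum from the abstract

def optimal_theta(sigma_psf: float) -> float:
    """Return the on-source region radius (same units as sigma_psf)."""
    return math.sqrt(ZETA_INF_SQ) * sigma_psf

if __name__ == "__main__":
    # e.g. a 0.1 deg PSF width gives an on-source radius of ~0.16 deg
    print(f"{optimal_theta(0.1):.3f} deg")
```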

  4. You can count on the motor cortex: finger counting habits modulate motor cortex activation evoked by numbers.

    PubMed

    Tschentscher, Nadja; Hauk, Olaf; Fischer, Martin H; Pulvermüller, Friedemann

    2012-02-15

    The embodied cognition framework suggests that neural systems for perception and action are engaged during higher cognitive processes. In an event-related fMRI study, we tested this claim for the abstract domain of numerical symbol processing: is the human cortical motor system part of the representation of numbers, and is organization of numerical knowledge influenced by individual finger counting habits? Developmental studies suggest a link between numerals and finger counting habits due to the acquisition of numerical skills through finger counting in childhood. In the present study, digits 1 to 9 and the corresponding number words were presented visually to adults with different finger counting habits, i.e. left- and right-starters who reported that they usually start counting small numbers with their left and right hand, respectively. Despite the absence of overt hand movements, the hemisphere contralateral to the hand used for counting small numbers was activated when small numbers were presented. The correspondence between finger counting habits and hemispheric motor activation is consistent with an intrinsic functional link between finger counting and number processing. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Performance of 3DOSEM and MAP algorithms for reconstructing low count SPECT acquisitions.

    PubMed

    Grootjans, Willem; Meeuwis, Antoi P W; Slump, Cornelis H; de Geus-Oei, Lioe-Fee; Gotthardt, Martin; Visser, Eric P

    2016-12-01

    Low count single photon emission computed tomography (SPECT) is becoming more important in view of whole body SPECT and reduction of radiation dose. In this study, we investigated the performance of several 3D ordered subset expectation maximization (3DOSEM) and maximum a posteriori (MAP) algorithms for reconstructing low count SPECT images. Phantom experiments were conducted using the National Electrical Manufacturers Association (NEMA) NU2 image quality (IQ) phantom. The background compartment of the phantom was filled with varying concentrations of pertechnetate and indium chloride, simulating various clinical imaging conditions. Images were acquired using a hybrid SPECT/CT scanner and reconstructed with 3DOSEM and MAP reconstruction algorithms implemented in Siemens Syngo MI.SPECT (Flash3D) and Hermes Hybrid Recon Oncology (Hybrid Recon 3DOSEM and MAP). Image analysis was performed by calculating the contrast recovery coefficient (CRC), percentage background variability (N%), and contrast-to-noise ratio (CNR), defined as the ratio between CRC and N%. Furthermore, image distortion was characterized by calculating the aspect ratio (AR) of ellipses fitted to the hot spheres. Additionally, the performance of these algorithms in reconstructing clinical images was investigated. Images reconstructed with 3DOSEM algorithms demonstrated superior image quality in terms of contrast and resolution recovery when compared to images reconstructed with filtered back-projection (FBP), OSEM and 2DOSEM. However, the occurrence of correlated noise patterns and image distortions significantly deteriorated the quality of 3DOSEM reconstructed images. The mean AR for the 37, 28, 22, and 17 mm spheres was 1.3, 1.3, 1.6, and 1.7, respectively. The mean N% increased in high and low count Flash3D and Hybrid Recon 3DOSEM from 5.9% and 4.0% to 11.1% and 9.0%, respectively. Similarly, the mean CNR decreased in high and low count Flash3D and Hybrid Recon 3DOSEM from 8.7 and 8.8 to 3.6 and 4.2, respectively

  6. Causal instrument corrections for short-period and broadband seismometers

    USGS Publications Warehouse

    Haney, Matthew M.; Power, John; West, Michael; Michaels, Paul

    2012-01-01

    Of all the filters applied to recordings of seismic waves, which include source, path, and site effects, the one we know most precisely is the instrument filter. Therefore, it behooves seismologists to accurately remove the effect of the instrument from raw seismograms. Applying instrument corrections allows analysis of the seismogram in terms of physical units (e.g., displacement or particle velocity of the Earth’s surface) instead of the output of the instrument (e.g., digital counts). The instrument correction can be considered the most fundamental processing step in seismology since it relates the raw data to an observable quantity of interest to seismologists. Complicating matters is the fact that, in practice, the term “instrument correction” refers to more than simply the seismometer. The instrument correction compensates for the complete recording system including the seismometer, telemetry, digitizer, and any anti‐alias filters. Knowledge of all these components is necessary to perform an accurate instrument correction. The subject of instrument corrections has been covered extensively in the literature (Seidl, 1980; Scherbaum, 1996). However, the prospect of applying instrument corrections still evokes angst among many seismologists—the authors of this paper included. There may be several reasons for this. For instance, the seminal paper by Seidl (1980) exists in a journal that is not currently available in electronic format and cannot be accessed online. Also, a standard method for applying instrument corrections involves the programs TRANSFER and EVALRESP in the Seismic Analysis Code (SAC) package (Goldstein et al., 2003). The exact mathematical methods implemented in these codes are not thoroughly described in the documentation accompanying SAC.
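
    For readers who want a concrete starting point, one widely used open-source alternative to the SAC TRANSFER/EVALRESP route discussed above is ObsPy's remove_response, sketched below; the file names are hypothetical and the pre-filter corners are illustrative choices, not recommendations from the article.

```python
# Sketch of a full instrument correction (seismometer, digitizer, anti-alias
# stages) using ObsPy, as an alternative to the SAC codes discussed above.
# File names are hypothetical placeholders.
from obspy import read, read_inventory

st = read("example.mseed")                    # raw waveform in digital counts
inv = read_inventory("example_station.xml")   # StationXML with the full response

# Band-limit the deconvolution so out-of-band noise is not amplified, then
# convert digital counts to ground velocity in m/s.
pre_filt = (0.01, 0.02, 8.0, 10.0)  # Hz corners of the cosine taper (illustrative)
st.remove_response(inventory=inv, output="VEL", pre_filt=pre_filt)

st.plot()
```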

  7. Dark-count-less photon-counting x-ray computed tomography system using a YAP-MPPC detector

    NASA Astrophysics Data System (ADS)

    Sato, Eiichi; Sato, Yuich; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Watanabe, Manabu; Kusachi, Shinya; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2012-10-01

    A high-sensitivity X-ray computed tomography (CT) system is useful for decreasing the absorbed dose for patients, and a dark-count-less photon-counting CT system was developed. X-ray photons are detected using a YAP(Ce) [cerium-doped yttrium aluminum perovskite] single crystal scintillator and an MPPC (multipixel photon counter). Photocurrents are amplified by a high-speed current-voltage amplifier, and smooth event pulses from an integrator are sent to a high-speed comparator. Logical pulses produced by the comparator are then counted by a counter card. Tomography is accomplished by repeated linear scans and rotations of the object, and projection curves of the object are obtained by the linear scan. The image contrast of the gadolinium medium fell slightly with increasing lower-level voltage (Vl) of the comparator. The dark count rate was 0 cps, and the count rate for the CT was approximately 250 kcps.

  8. Genotype-specific risk factors for Staphylococcus aureus in Swiss dairy herds with an elevated yield-corrected herd somatic cell count.

    PubMed

    Berchtold, B; Bodmer, M; van den Borne, B H P; Reist, M; Graber, H U; Steiner, A; Boss, R; Wohlfender, F

    2014-01-01

    Bovine mastitis is a frequent problem in Swiss dairy herds. One of the main pathogens causing significant economic loss is Staphylococcus aureus. Various Staph. aureus genotypes with different biological properties have been described. Genotype B (GTB) of Staph. aureus was identified as the most contagious and one of the most prevalent strains in Switzerland. The aim of this study was to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB in Swiss dairy herds with an elevated yield-corrected herd somatic cell count (YCHSCC). One hundred dairy herds with a mean YCHSCC between 200,000 and 300,000cells/mL in 2010 were recruited and each farm was visited once during milking. A standardized protocol investigating demography, mastitis management, cow husbandry, milking system, and milking routine was completed during the visit. A bulk tank milk (BTM) sample was analyzed by real-time PCR for the presence of Staph. aureus GTB to classify the herds into 2 groups: Staph. aureus GTB-positive and Staph. aureus GTB-negative. Moreover, quarter milk samples were aseptically collected for bacteriological culture from cows with a somatic cell count ≥150,000cells/mL on the last test-day before the visit. The culture results allowed us to allocate the Staph. aureus GTB-negative farms to Staph. aureus non-GTB and Staph. aureus-free groups. Multivariable multinomial logistic regression models were built to identify risk factors associated with the herd-level presence of Staph. aureus GTB and Staph. aureus non-GTB. The prevalence of Staph. aureus GTB herds was 16% (n=16), whereas that of Staph. aureus non-GTB herds was 38% (n=38). Herds that sent lactating cows to seasonal communal pastures had significantly higher odds of being infected with Staph. aureus GTB (odds ratio: 10.2, 95% CI: 1.9-56.6), compared with herds without communal pasturing. Herds that purchased heifers had significantly higher odds of being infected with

  9. Inappropriate ICD discharges due to "triple counting" during normal sinus rhythm.

    PubMed

    Khan, Ejaz; Voudouris, Apostolos; Shorofsky, Stephen R; Peters, Robert W

    2006-11-01

    To describe the clinical course of a patient with multiple ICD shocks in the setting of advanced renal failure and hyperkalemia. The patient was brought to the Electrophysiology Laboratory, where the ICD was interrogated. The patient was found to be hyperkalemic (serum potassium 7.6 mg/dl). Analysis of stored intracardiac electrograms from the ICD revealed "triple counting" (twice during his QRS complex and once during the T wave) and multiple inappropriate shocks. Correction of his electrolyte abnormality normalized his electrogram and no further ICD activations were observed. Electrolyte abnormalities can distort the intracardiac electrogram in patients with ICDs, and these changes can lead to multiple inappropriate shocks.

  10. Modeling and simulation of count data.

    PubMed

    Plan, E L

    2014-08-13

    Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
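
    As a minimal illustration of the Poisson-family modeling the tutorial covers, the sketch below simulates counts at a fixed event rate, recovers the rate by maximum likelihood, and checks the variance-to-mean ratio as a crude overdispersion diagnostic; the rate and sample size are arbitrary illustration values.

```python
# Minimal sketch of count-data simulation and a basic overdispersion check
# (Poisson-family modeling); all numbers are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(seed=1)
rate = 3.2                        # events per observation interval
counts = rng.poisson(lam=rate, size=500)

lam_hat = counts.mean()                     # Poisson MLE of the event rate
dispersion = counts.var(ddof=1) / lam_hat   # ~1 for pure Poisson data

print(f"estimated rate = {lam_hat:.2f}")
print(f"variance/mean ratio = {dispersion:.2f}  (>1 suggests overdispersion)")
```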

  11. Experimental analysis of the auditory detection process on avian point counts

    USGS Publications Warehouse

    Simons, T.R.; Alldredge, M.W.; Pollock, K.H.; Wettroth, J.M.

    2007-01-01

    We have developed a system for simulating the conditions of avian surveys in which birds are identified by sound. The system uses a laptop computer to control a set of amplified MP3 players placed at known locations around a survey point. The system can realistically simulate a known population of songbirds under a range of factors that affect detection probabilities. The goals of our research are to describe the sources and range of variability affecting point-count estimates and to find applications of sampling theory and methodologies that produce practical improvements in the quality of bird-census data. Initial experiments in an open field showed that, on average, observers tend to undercount birds on unlimited-radius counts, though the proportion of birds counted by individual observers ranged from 81% to 132% of the actual total. In contrast to the unlimited-radius counts, when data were truncated at a 50-m radius around the point, observers overestimated the total population by 17% to 122%. Results also illustrate how detection distances decline and identification errors increase with increasing levels of ambient noise. Overall, the proportion of birds heard by observers decreased by 28 ± 4.7% under breezy conditions, 41 ± 5.2% with the presence of additional background birds, and 42 ± 3.4% with the addition of 10 dB of white noise. These findings illustrate some of the inherent difficulties in interpreting avian abundance estimates based on auditory detections, and why estimates that do not account for variations in detection probability will not withstand critical scrutiny.

  12. Robust Data Detection for the Photon-Counting Free-Space Optical System With Implicit CSI Acquisition and Background Radiation Compensation

    NASA Astrophysics Data System (ADS)

    Song, Tianyu; Kam, Pooi-Yuen

    2016-02-01

    Since atmospheric turbulence and pointing errors cause signal intensity fluctuations, and the background radiation surrounding the free-space optical (FSO) receiver contributes an undesired noisy component, the receiver requires accurate channel state information (CSI) and background information to adjust the detection threshold. In most previous studies, pilot symbols were employed for CSI acquisition, which leads to a reduction of spectral and energy efficiency, and the impractical assumption was made that the background radiation component is perfectly known. In this paper, we develop an efficient and robust sequence receiver, which acquires the CSI and the background information implicitly and requires no knowledge of the channel model. It is robust since it can automatically estimate the CSI and background component and detect the data sequence accordingly. Its decision metric has a simple form and involves no integrals, and thus can be easily evaluated. A Viterbi-type trellis-search algorithm is adopted to improve the search efficiency, and a selective-store strategy is adopted to overcome a potential error floor problem as well as to increase the memory efficiency. To further simplify the receiver, a decision-feedback symbol-by-symbol receiver is proposed as an approximation of the sequence receiver. By simulations and theoretical analysis, we show that the performance of both the sequence receiver and the symbol-by-symbol receiver approaches that of detection with perfect knowledge of the CSI and background radiation as the length of the window for forming the decision metric increases.

  13. SPERM COUNT DISTRIBUTIONS IN FERTILE MEN

    EPA Science Inventory

    Sperm concentration and count are often used as indicators of environmental impacts on male reproductive health. Existing clinical databases may be biased towards subfertile men with low sperm counts and less is known about expected sperm count distributions in cohorts of fertil...

  14. Nicotine dependence, "background" and cue-induced craving and smoking in the laboratory.

    PubMed

    Dunbar, Michael S; Shiffman, Saul; Kirchner, Thomas R; Tindle, Hilary A; Scholl, Sarah M

    2014-09-01

    Nicotine dependence has been associated with higher "background" craving and smoking, independent of situational cues. Due in part to conceptual and methodological differences across past studies, the relationship between dependence and cue-reactivity (CR; e.g., cue-induced craving and smoking) remains unclear. 207 daily smokers completed six pictorial CR sessions (smoking, negative affect, positive affect, alcohol, smoking prohibitions, and neutral). Individuals rated craving before (background craving) and after cues, and could smoke following cue exposure. Session videos were coded to assess smoking. Participants completed four nicotine dependence measures. Regression models assessed the relationship of dependence to cue-independent (i.e., pre-cue) and cue-specific (i.e., pre-post cue change for each cue, relative to neutral) craving and smoking (likelihood of smoking, latency to smoke, puff count). Dependence was associated with background craving and smoking, but did not predict change in craving across the entire sample for any cue. Among alcohol drinkers, dependence was associated with greater increases in craving following the alcohol cue. Only one dependence measure (Wisconsin Inventory of Smoking Dependence Motives) was consistently associated with smoking reactivity (higher likelihood of smoking, shorter latency to smoke, greater puff count) in response to cues. While related to cue-independent background craving and smoking, dependence is not strongly associated with laboratory cue-induced craving under conditions of minimal deprivation. Dependence measures that incorporate situational influences on smoking correlate with greater cue-provoked smoking. This may suggest independent roles for CR and traditional dependence as determinants of smoking, and highlights the importance of assessing behavioral CR outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. A post-reconstruction method to correct cupping artifacts in cone beam breast computed tomography

    PubMed Central

    Altunbas, M. C.; Shaw, C. C.; Chen, L.; Lai, C.; Liu, X.; Han, T.; Wang, T.

    2007-01-01

    In cone beam breast computed tomography (CT), scattered radiation leads to nonuniform biasing of CT numbers known as a cupping artifact. Besides being visual distractions, cupping artifacts appear as background nonuniformities, which impair efficient gray scale windowing and pose a problem in threshold-based volume visualization/segmentation. To overcome this problem, we have developed a background nonuniformity correction method specifically designed for cone beam breast CT. With this technique, the cupping artifact is modeled as an additive background signal profile in the reconstructed breast images. Due to the largely circularly symmetric shape of a typical breast, the additive background signal profile was also assumed to be circularly symmetric. The radial variation of the background signals was estimated by measuring the spatial variation of adipose tissue signals in front view breast images. To extract adipose tissue signals in an automated manner, a signal sampling scheme in polar coordinates and a background trend fitting algorithm were implemented. The background fits were compared with a targeted adipose tissue signal value (constant throughout the breast volume) to obtain an additive correction value for each tissue voxel. To test the accuracy, we applied the technique to cone beam CT images of mastectomy specimens. After correction, the images demonstrated significantly improved signal uniformity in both front and side view slices. The reduction of both intra-slice and inter-slice variations in adipose tissue CT numbers supported our observations. PMID:17822018

  16. Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.

    PubMed

    Klumpp, John; Brandl, Alexander

    2015-03-01

    A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
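
    The following sketch illustrates, in the simplest possible form, why time-interval data carry the detection information exploited above: for a Poisson process the times between counts are exponential, so a log-likelihood ratio between a background-only rate and a background-plus-source rate can be accumulated count by count. The rates, the sample size, and the use of a single energy region are illustrative assumptions, not the paper's multi-energy Bayesian formulation.

```python
# Toy illustration of time-interval (time-between-counts) based detection:
# accumulate a log-likelihood ratio between background-only and
# background-plus-source exponential interval densities. Rates are invented.
import numpy as np

rng = np.random.default_rng(seed=7)

bkg_rate = 1.0          # counts per second, assumed known background
src_rate = 0.5          # hypothesized additional source contribution
total_rate = bkg_rate + src_rate

# simulate intervals from a detector actually seeing background + source
intervals = rng.exponential(scale=1.0 / total_rate, size=200)

# per-interval log-likelihood ratio for the two exponential densities
llr = np.sum((np.log(total_rate) - total_rate * intervals)
             - (np.log(bkg_rate) - bkg_rate * intervals))

print(f"cumulative log-likelihood ratio = {llr:.1f} (positive favors a source)")
```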

  17. The complete blood count and reticulocyte count--are they necessary in the evaluation of acute vasoocclusive sickle-cell crisis?

    PubMed

    Lopez, B L; Griswold, S K; Navek, A; Urbanski, L

    1996-08-01

    To assess the usefulness of the complete blood count (CBC) and the reticulocyte count in the evaluation of adult patients with acute vasoocclusive sickle-cell crisis (SCC) presenting to the ED. A 2-part study was performed. Part 1 was a retrospective chart review of patients with a sole ED diagnosis of acute SCC. Part 2 was a prospective evaluation of consecutive patients presenting in SCC. In both parts of the study, patients with coexisting acute disease were excluded. The remaining patients were divided into 2 groups: admitted and released. The mean values for white blood cell (WBC) count, hemoglobin (Hb) level, and reticulocyte count were compared. In Part 2, the change (delta) from the patient's baseline in WBC count, Hb level, and reticulocyte count also was determined. Data were analyzed by 2-tailed Student's t-test. Part 1: There was no difference between the admitted (n = 33) and the released (n = 86) groups in mean WBC count (p = 0.10), Hb level (p = 0.25), or reticulocyte count (p = 0.08). Part 2: There was no difference between the admitted (n = 44) and the released (n = 160) groups in mean Hb level (p = 0.88), reticulocyte count (p = 0.47), delta Hb level (p = 0.88), or delta reticulocyte count (p = 0.76). There was a difference in mean WBC counts (15.8 +/- 4.9 x 10(9)/L admitted vs 12.8 +/- 4.9 x 10(9)/L released, p = 0.003) and delta WBC counts (5.1 +/- 4.6 x 10(9)/L admitted vs 1.8 +/- 4.6 x 10(9)/L released, p < 0.002). Determination of the Hb level and the reticulocyte count does not appear useful in the evaluation of acute SCC in the ED. Admission decisions appear associated with elevations in the WBC count. Further study is required to determine the true value of the WBC count in such decisions.

  18. Blackfolds in (anti)-de Sitter backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armas, Jay; Obers, Niels A.

    2011-04-15

    We construct different neutral blackfold solutions in Anti-de Sitter and de Sitter background spacetimes in the limit where the cosmological constant is taken to be much smaller than the horizon size. This includes a class of blackfolds with horizons that are products of odd-spheres times a transverse sphere, for which the thermodynamic stability is also studied. Moreover, we exhibit a specific case in which the same blackfold solution can describe different limiting black hole spacetimes therefore illustrating the geometric character of the blackfold approach. Furthermore, we show that the higher-dimensional Kerr-(Anti)-de Sitter black hole allows for ultraspinning regimes in the same limit under consideration and demonstrate that this is correctly described by a pancaked blackfold geometry. We also give evidence for the possibility of saturating the rigidity theorem in these backgrounds.

  19. Duffy-Null–Associated Low Neutrophil Counts Influence HIV-1 Susceptibility in High-Risk South African Black Women

    PubMed Central

    Ramsuran, Veron; Kulkarni, Hemant; He, Weijing; Mlisana, Koleka; Wright, Edwina J.; Werner, Lise; Castiblanco, John; Dhanda, Rahul; Le, Tuan; Dolan, Matthew J.; Guan, Weihua; Weiss, Robin A.; Clark, Robert A.; Abdool Karim, Salim S.; Ndung'u, Thumbi

    2011-01-01

    Background. The Duffy-null trait and ethnic neutropenia are both highly prevalent in Africa. The influence of pre-seroconversion levels of peripheral blood cell counts (PBCs) on the risk of acquiring human immunodeficiency virus (HIV)–1 infection among Africans is unknown. Methods. The triangular relationship among pre-seroconversion PBC counts, host genotypes, and risk of HIV acquisition was determined in a prospective cohort of black South African high-risk female sex workers. Twenty-seven women had seroconversion during follow-up, and 115 remained HIV negative for 2 years, despite engaging in high-risk activity. Results. Pre-seroconversion neutrophil counts in women who subsequently had seroconversion were significantly lower, whereas platelet counts were higher, compared with those who remained HIV negative. Comprising 27% of the cohort, subjects with pre-seroconversion neutrophil counts of <2500 cells/mm3 had a ∼3-fold greater risk of acquiring HIV infection. In a genome-wide association analysis, an African-specific polymorphism (rs2814778) in the promoter of Duffy Antigen Receptor for Chemokines (DARC −46T > C) was significantly associated with neutrophil counts (P = 7.9 × 10−11). DARC −46C/C results in loss of DARC expression on erythrocytes (Duffy-null) and resistance to Plasmodium vivax malaria, and in our cohort, only subjects with this genotype had pre-seroconversion neutrophil counts of <2500 cells/mm3. The risk of acquiring HIV infection was ∼3-fold greater in those with the trait of Duffy-null–associated low neutrophil counts, compared with all other study participants. Conclusions. Pre-seroconversion neutrophil and platelet counts influence risk of HIV infection. The trait of Duffy-null–associated low neutrophil counts influences HIV susceptibility. Because of the high prevalence of this trait among persons of African ancestry, it may contribute to the dynamics of the HIV epidemic in Africa. PMID:21507922

  20. Dead-time correction for high-throughput fluorescence lifetime imaging microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Ruhlandt, Daja; Chithik, Anna; Ebrecht, René; Wouters, Fred S.; Gregor, Ingo

    2016-02-01

    Fluorescence lifetime microscopy has become an important method of bioimaging, allowing one to record not only intensity and spectral information but also lifetime information across an image. One of the most widely used methods of FLIM is based on Time-Correlated Single Photon Counting (TCSPC). In TCSPC, one determines the fluorescence decay curve by exciting molecules with a periodic train of short laser pulses and then measuring the time delay between each exciting laser pulse and the first fluorescence photon recorded after it. An important technical detail of TCSPC measurements is the fact that the delay times between excitation laser pulses and resulting fluorescence photons are always measured between a laser pulse and the first fluorescence photon which is detected after that pulse. At high count rates, this leads to so-called pile-up: "early" photons eclipse long-delay photons, resulting in heavily skewed TCSPC histograms. To avoid pile-up, a rule of thumb is to perform TCSPC measurements at photon count rates which are at least a hundred times smaller than the laser-pulse excitation rate. The downside of this approach is that the fluorescence-photon count-rate is restricted to a value below one hundredth of the laser-pulse excitation-rate, reducing the overall speed with which a fluorescence signal can be measured. We present a new data evaluation method which provides pile-up corrected fluorescence decay estimates from TCSPC measurements at high count rates, and we demonstrate our method on FLIM of fluorescently labeled cells.
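
    For orientation, the sketch below applies a classical Coates-style pile-up correction to a raw TCSPC histogram; it is offered only to illustrate the distortion being discussed and is explicitly not the new high-count-rate estimator presented in this work. The histogram values and number of excitation cycles are invented.

```python
# A classical Coates-style pile-up correction for a TCSPC histogram, shown
# only to illustrate the effect discussed above; it is NOT the new estimator
# proposed in the presentation. n[i] are raw histogram counts per time bin,
# n_exc is the number of excitation cycles.
import numpy as np

def coates_correction(n: np.ndarray, n_exc: int) -> np.ndarray:
    """Return pile-up-corrected channel counts for a classic TCSPC histogram."""
    n = np.asarray(n, dtype=float)
    corrected = np.empty_like(n)
    earlier = 0.0  # counts already recorded in earlier channels
    for i, ni in enumerate(n):
        at_risk = n_exc - earlier            # cycles with no earlier photon
        corrected[i] = -n_exc * np.log1p(-ni / at_risk)
        earlier += ni
    return corrected

if __name__ == "__main__":
    raw = np.array([9000, 7000, 5000, 3500, 2500], dtype=float)
    print(coates_correction(raw, n_exc=100_000))
```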

  1. Deep Galex Observations of the Coma Cluster: Source Catalog and Galaxy Counts

    NASA Technical Reports Server (NTRS)

    Hammer, D.; Hornschemeier, A. E.; Mobasher, B.; Miller, N.; Smith, R.; Arnouts, S.; Milliard, B.; Jenkins, L.

    2010-01-01

    We present a source catalog from deep 26 ks GALEX observations of the Coma cluster in the far-UV (FUV; 1530 Angstroms) and near-UV (NUV; 2310 Angstroms) wavebands. The observed field is centered 0.9 deg. (1.6 Mpc) south-west of the Coma core, and has full optical photometric coverage by SDSS and spectroscopic coverage to r ~ 21. The catalog consists of 9700 galaxies with GALEX and SDSS photometry, including 242 spectroscopically-confirmed Coma member galaxies that range from giant spirals and elliptical galaxies to dwarf irregular and early-type galaxies. The full multi-wavelength catalog (cluster plus background galaxies) is 80% complete to NUV=23 and FUV=23.5, and has a limiting depth at NUV=24.5 and FUV=25.0 which corresponds to a star formation rate of 10^(-3) solar mass yr^(-1) at the distance of Coma. The GALEX images presented here are very deep and include detections of many resolved cluster members superposed on a dense field of unresolved background galaxies. This required a two-fold approach to generating a source catalog: we used a Bayesian deblending algorithm to measure faint and compact sources (using SDSS coordinates as a position prior), and used the GALEX pipeline catalog for bright and/or extended objects. We performed simulations to assess the importance of systematic effects (e.g. object blends, source confusion, Eddington Bias) that influence source detection and photometry when using both methods. The Bayesian deblending method roughly doubles the number of source detections and provides reliable photometry to a few magnitudes deeper than the GALEX pipeline catalog. This method is also free from source confusion over the UV magnitude range studied here: conversely, we estimate that the GALEX pipeline catalogs are confusion limited at NUV approximately 23 and FUV approximately 24. We have measured the total UV galaxy counts using our catalog and report a 50% excess of counts across FUV=22-23.5 and NUV=21.5-23 relative to previous GALEX

  2. Electroweak radiative corrections to the top quark decay

    NASA Astrophysics Data System (ADS)

    Kuruma, Toshiyuki

    1993-12-01

    The top quark, once produced, should be an important window to the electroweak symmetry breaking sector. We compute electroweak radiative corrections to the decay process t → b + W+ in order to extract information on the Higgs sector and to fix the background in searches for a possible new physics contribution. The large Yukawa coupling of the top quark induces a new form factor through vertex corrections and causes a discrepancy from the tree-level longitudinal W-boson production fraction, but the effect is of order 1% or less for m_H < 1 TeV.

  3. Reticulocyte count

    MedlinePlus

    Anemia - reticulocyte ... A higher than normal reticulocyte count may indicate: anemia due to red blood cells being destroyed earlier than normal (hemolytic anemia); bleeding; blood disorder in a fetus or newborn ...

  4. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  5. A deep learning framework to discern and count microscopic nematode eggs.

    PubMed

    Akintayo, Adedotun; Tylka, Gregory L; Singh, Asheesh K; Ganapathysubramanian, Baskar; Singh, Arti; Sarkar, Soumik

    2018-06-14

    In order to identify and control the menace of destructive pests via microscopic image-based identification, a state-of-the-art deep learning architecture is demonstrated on the parasitic worm, the soybean cyst nematode (SCN), Heterodera glycines. Soybean yield loss is negatively correlated with the density of SCN eggs that are present in the soil. While there has been progress in automating the extraction of egg-filled cysts and eggs from soil samples, counting SCN eggs obtained from soil samples using computer vision techniques has proven to be an extremely difficult challenge. Here we show that a deep learning architecture developed for rare object identification in clutter-filled images can identify and count the SCN eggs. The architecture is trained with expert-labeled data to effectively build a machine learning model for quantifying SCN eggs via microscopic image analysis. We show dramatic improvements in the quantification time of eggs while maintaining human-level accuracy and avoiding inter-rater and intra-rater variabilities. The nematode eggs are correctly identified even in complex, debris-filled images that are often difficult for experts to identify quickly. Our results illustrate the remarkable promise of applying deep learning approaches to phenotyping for pest assessment and management.

  6. Electrets used in measuring rocket exhaust effluents from the space shuttle's solid rocket booster during static test firing, DM-3

    NASA Technical Reports Server (NTRS)

    Susko, M.

    1979-01-01

    The purpose of this experimental research was to compare Marshall Space Flight Center's electrets with Thiokol's fixed flow air samplers during the Space Shuttle Solid Rocket Booster Demonstration Model-3 static test firing on October 19, 1978. The measurement of rocket exhaust effluents by Thiokol's samplers and MSFC's electrets indicated that the firing of the Solid Rocket Booster had no significant effect on the quality of the air sampled. The highest measurement by Thiokol's samplers was obtained at Plant 3 (site 11), approximately 8 km at a 113 degree heading from the static test stand. At sites 11, 12, and 5, Thiokol's fixed flow air samplers measured 0.0048, 0.00016, and 0.00012 mg/m3 of Cl. Alongside the fixed flow measurements, the electret counts from X-ray spectroscopy were 685, 894, and 719 counts. After background corrections, the counts were 334, 543, and 368, or an average of 415 counts. An additional electret, E20, which was the only measurement device at a site approximately 20 km northeast of the test site where no power was available, obtained 901 counts. After background correction, the count was 550. Again, these data indicate there was no measurement of significant rocket exhaust effluents at the test site.

  7. Sampling and counting genome rearrangement scenarios

    PubMed Central

    2015-01-01

    Background Even for moderate size inputs, there is a tremendous number of optimal rearrangement scenarios, regardless of what the model is and which specific question is to be answered. Therefore giving one optimal solution might be misleading and cannot be used for statistical inference. Statistically well-founded methods are necessary to sample uniformly from the solution space, and then a small number of samples is sufficient for statistical inference. Contribution In this paper, we give a mini-review about the state of the art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. Beyond that, we also give a Gibbs sampler for sampling most parsimonious labelings of evolutionary trees under the SCJ model. The method has been implemented and tested on real-life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124

  8. Morphological spot counting from stacked images for automated analysis of gene copy numbers by fluorescence in situ hybridization.

    PubMed

    Grigoryan, Artyom M; Dougherty, Edward R; Kononen, Juha; Bubendorf, Lukas; Hostetter, Galen; Kallioniemi, Olli

    2002-01-01

    Fluorescence in situ hybridization (FISH) is a molecular diagnostic technique in which a fluorescently labeled probe hybridizes to a target nucleotide sequence of deoxyribonucleic acid. Upon excitation, each chromosome containing the target sequence produces a fluorescent signal (spot). Because fluorescent spot counting is tedious and often subjective, automated digital algorithms to count spots are desirable. New technology provides a stack of images on multiple focal planes throughout a tissue sample. Multiple-focal-plane imaging helps overcome the biases and imprecision inherent in single-focal-plane methods. This paper proposes an algorithm for global spot counting in stacked three-dimensional slice FISH images without the necessity of nuclei segmentation. It is designed to work in complex backgrounds, when there are agglomerated nuclei, and in the presence of illumination gradients. It is based on the morphological top-hat transform, which locates intensity spikes on irregular backgrounds. After finding signals in the slice images, the algorithm groups these together to form three-dimensional spots. Filters are employed to separate legitimate spots from fluorescent noise. The algorithm is set in a comprehensive toolbox that provides visualization and analytic facilities. It includes simulation software that allows examination of algorithm performance for various image and algorithm parameter settings, including signal size, signal density, and the number of slices.
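
    A minimal single-slice sketch of top-hat based spot detection is given below; the structuring-element radius, threshold, and synthetic test image are assumptions, and the 3-D grouping and noise-filtering stages of the full algorithm are not reproduced.

```python
# Minimal sketch of top-hat based spot counting in a single slice image:
# the white top-hat keeps small bright peaks and suppresses the irregular
# background, after which peaks are thresholded and labeled.
import numpy as np
from skimage.morphology import white_tophat, disk
from skimage.measure import label

def count_spots_2d(image: np.ndarray, radius: int = 3, thresh: float = 50.0) -> int:
    """Count bright spot candidates in one grayscale FISH slice image."""
    peaks = white_tophat(image, disk(radius))   # intensity spikes only
    labeled = label(peaks > thresh)              # connected bright regions
    return int(labeled.max())                    # number of candidate spots

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(20, 5, size=(128, 128))     # noisy irregular background
    img[40, 40] += 200.0                         # two synthetic "spots"
    img[90, 100] += 200.0
    print(count_spots_2d(img))                   # expected: 2
```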

  9. A mind you can count on: validating breath counting as a behavioral measure of mindfulness.

    PubMed

    Levinson, Daniel B; Stoll, Eli L; Kindy, Sonam D; Merry, Hillary L; Davidson, Richard J

    2014-01-01

    Mindfulness practice of present moment awareness promises many benefits, but has eluded rigorous behavioral measurement. To date, research has relied on self-reported mindfulness or heterogeneous mindfulness trainings to infer skillful mindfulness practice and its effects. In four independent studies with over 400 total participants, we present the first construct validation of a behavioral measure of mindfulness, breath counting. We found it was reliable, correlated with self-reported mindfulness, differentiated long-term meditators from age-matched controls, and was distinct from sustained attention and working memory measures. In addition, we employed breath counting to test the nomological network of mindfulness. As theorized, we found skill in breath counting associated with more meta-awareness, less mind wandering, better mood, and greater non-attachment (i.e., less attentional capture by distractors formerly paired with reward). We also found in a randomized online training study that 4 weeks of breath counting training improved mindfulness and decreased mind wandering relative to working memory training and no training controls. Together, these findings provide the first evidence for breath counting as a behavioral measure of mindfulness.

  10. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. System-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to in-air. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. Scatter correction factor k varied slightly (0.80-0.95) over a compressed
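
    A minimal sketch of this conjugate-view geometric-mean estimate (as reconstructed above, T = √(C1·C2)·e^(μt/2)·F) is shown below; the count values, the μ ≈ 0.15 cm⁻¹ assumed for 140 keV photons in water, and treating the background factor F as already computed are all illustrative assumptions, not measured values from the study.

```python
# Sketch of the conjugate-view geometric-mean estimate described above:
# T = sqrt(C1*C2) * exp(mu*t/2) * F, with F assumed already computed by one
# of the four definitions in the abstract. All numbers are illustrative.
import math

def gm_true_counts(c1: float, c2: float, mu: float, t: float, f: float = 1.0) -> float:
    """c1, c2: ROI counts on detectors 1 and 2; mu: 1/cm; t: detector separation (cm)."""
    return math.sqrt(c1 * c2) * math.exp(mu * t / 2.0) * f

if __name__ == "__main__":
    # roughly mu ~ 0.15 /cm for 140 keV photons in water; 7 cm separation
    print(f"{gm_true_counts(c1=1800.0, c2=1500.0, mu=0.15, t=7.0):.0f} counts")
```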

  11. Determination of mammalian cell counts, cell size and cell health using the Moxi Z mini automated cell counter.

    PubMed

    Dittami, Gregory M; Sethi, Manju; Rabbitt, Richard D; Ayliffe, H Edward

    2012-06-21

    Particle and cell counting is used for a variety of applications including routine cell culture, hematological analysis, and industrial controls(1-5). A critical breakthrough in cell/particle counting technologies was the development of the Coulter technique by Wallace Coulter over 50 years ago. The technique involves the application of an electric field across a micron-sized aperture and hydrodynamically focusing single particles through the aperture. The resulting occlusion of the aperture by the particles yields a measurable change in electric impedance that can be directly and precisely correlated to cell size/volume. The recognition of the approach as the benchmark in cell/particle counting stems from the extraordinary precision and accuracy of its particle sizing and counts, particularly as compared to manual and imaging based technologies (accuracies on the order of 98% for Coulter counters versus 75-80% for manual and vision-based systems). This can be attributed to the fact that, unlike imaging-based approaches to cell counting, the Coulter Technique makes a true three-dimensional (3-D) measurement of cells/particles which dramatically reduces count interference from debris and clustering by calculating precise volumetric information about the cells/particles. Overall this provides a means for enumerating and sizing cells in a more accurate, less tedious, less time-consuming, and less subjective manner than other counting techniques(6). Despite the prominence of the Coulter technique in cell counting, its widespread use in routine biological studies has been limited by the cost and size of traditional instruments. Although a less expensive Coulter-based instrument has been produced, it has limitations as compared to its more expensive counterparts in the correction for "coincidence events" in which two or more cells pass through the aperture and are measured simultaneously. Another limitation with existing Coulter technologies is the lack of metrics

  12. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  13. Low-Noise Free-Running High-Rate Photon-Counting for Space Communication and Ranging

    NASA Technical Reports Server (NTRS)

    Lu, Wei; Krainak, Michael A.; Yang, Guangning; Sun, Xiaoli; Merritt, Scott

    2016-01-01

    We present performance data for a low-noise free-running high-rate photon-counting method for space optical communication and ranging. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We successfully measured real-time communication performance using both the 2-detected-photon threshold and logic AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects without using other methods such as time gating. The HgCdTe APD array routinely demonstrated very high photon detection efficiencies (>50%) at near-infrared wavelengths. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output. NASA GSFC has tested both detectors for their potential application to space communications and ranging. We developed and compared their performances using both the 2-detected-photon threshold and coincidence methods.

  14. Low-Noise Free-Running High-Rate Photon-Counting for Space Communication and Ranging

    NASA Technical Reports Server (NTRS)

    Lu, Wei; Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Merritt, Scott

    2016-01-01

    We present performance data for a low-noise free-running high-rate photon-counting method for space optical communication and ranging. NASA GSFC is testing the performance of two types of novel photon-counting detectors: 1) a 2x8 mercury cadmium telluride (HgCdTe) avalanche array made by DRS Inc., and 2) a commercial 2880-element silicon avalanche photodiode (APD) array. We successfully measured real-time communication performance using both the 2-detected-photon threshold and logic AND-gate coincidence methods. Use of these methods allows mitigation of dark count, after-pulsing and background noise effects without using other methods such as time gating. The HgCdTe APD array routinely demonstrated very high photon detection efficiencies (>50%) at near-infrared wavelengths. The commercial silicon APD array exhibited a fast output with rise times of 300 ps and pulse widths of 600 ps. On-chip individually filtered signals from the entire array were multiplexed onto a single fast output. NASA GSFC has tested both detectors for their potential application to space communications and ranging. We developed and compared their performances using both the 2-detected-photon threshold and coincidence methods.

  15. Development of a low-level 39Ar calibration standard – Analysis by absolute gas counting measurements augmented with simulation

    DOE PAGES

    Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...

    2017-02-17

    Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).

  16. Development of a low-level 39Ar calibration standard – Analysis by absolute gas counting measurements augmented with simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.

    Here, this paper describes the generation of 39Ar, via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting) to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory studying the effect of gas density on beta-transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50 × 39Ar in P10 standard is 3.6% (k=2).

  17. In-flight calibration of Hitomi Soft X-ray Spectrometer. (1) Background

    NASA Astrophysics Data System (ADS)

    Kilbourne, Caroline A.; Sawada, Makoto; Tsujimoto, Masahiro; Angellini, Lorella; Boyce, Kevin R.; Eckart, Megan E.; Fujimoto, Ryuichi; Ishisaki, Yoshitaka; Kelley, Richard L.; Koyama, Shu; Leutenegger, Maurice A.; Loewenstein, Michael; McCammon, Dan; Mitsuda, Kazuhisa; Nakashima, Shinya; Porter, Frederick S.; Seta, Hiromi; Takei, Yoh; Tashiro, Makoto S.; Terada, Yukikatsu; Yamada, Shinya; Yamasaki, Noriko Y.

    2018-03-01

    The X-Ray Spectrometer (XRS) instrument of Suzaku provided the first measurement of the non-X-ray background (NXB) of an X-ray calorimeter spectrometer, but the data set was limited. The Soft X-ray Spectrometer (SXS) instrument of Hitomi was able to provide a more detailed picture of X-ray calorimeter background, with more than 360 ks of data while pointed at the Earth, and a comparable amount of blank-sky data. These data are important not only for analyzing SXS science data, but also for categorizing the contributions to the NXB in X-ray calorimeters as a class. In this paper, we present the contributions to the SXS NXB, the types and effectiveness of the screening, the interaction of the screening with the broad-band redistribution, and the residual background spectrum as a function of magnetic cut-off rigidity. The orbit-averaged SXS NXB in the range 0.3-12 keV was 4 × 10⁻² counts s⁻¹ cm⁻². This very low background in combination with groundbreaking spectral resolution gave SXS unprecedented sensitivity to weak spectral lines.

  18. Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum.

    PubMed

    Kiefer, Claus; Krämer, Manuel

    2012-01-13

    We derive the primordial power spectrum of density fluctuations in the framework of quantum cosmology. For this purpose we perform a Born-Oppenheimer approximation to the Wheeler-DeWitt equation for an inflationary universe with a scalar field. In this way, we first recover the scale-invariant power spectrum that is found as an approximation in the simplest inflationary models. We then obtain quantum gravitational corrections to this spectrum and discuss whether they lead to measurable signatures in the cosmic microwave background anisotropy spectrum. The nonobservation so far of such corrections translates into an upper bound on the energy scale of inflation.

  19. Controlling for varying effort in count surveys --an analysis of Christmas Bird Count Data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1999-01-01

    The Christmas Bird Count (CBC) is a valuable source of information about midwinter populations of birds in the continental U.S. and Canada. Analysis of CBC data is complicated by substantial variation among sites and years in effort expended in counting; this feature of the CBC is common to many other wildlife surveys. Specification of a method for adjusting counts for effort is a matter of some controversy. Here, we present models for longitudinal count surveys with varying effort; these describe the effect of effort as proportional to exp(B·effort^p), where B and p are parameters. For any fixed p, our models are loglinear in the transformed explanatory variable (effort)^p and other covariables. Hence we fit a collection of loglinear models corresponding to a range of values of p, and select the best effort adjustment from among these on the basis of fit statistics. We apply this procedure to data for six bird species in five regions, for the period 1959-1988.
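
    The selection procedure described above can be sketched in a few lines: fit a Poisson loglinear model with (effort)^p as a covariate for a grid of p values, then keep the p giving the best fit statistic. The simulated data, covariate names and grid below are illustrative assumptions, not the authors' CBC data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 300
      effort = rng.uniform(1, 60, n)                  # e.g. party-hours (assumed)
      year = rng.integers(0, 10, n)
      counts = rng.poisson(np.exp(0.5 + 0.03 * year + 0.8 * effort ** 0.4))

      best = None
      for p in np.linspace(0.1, 1.0, 10):
          # Loglinear (Poisson) model with year and (effort)^p as covariates
          X = sm.add_constant(np.column_stack([year, effort ** p]))
          fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
          if best is None or fit.deviance < best[1]:
              best = (p, fit.deviance)

      print(f"selected p = {best[0]:.2f} (deviance = {best[1]:.1f})")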

  20. DC KIDS COUNT e-Databook Indicators

    ERIC Educational Resources Information Center

    DC Action for Children, 2012

    2012-01-01

    This report presents indicators that are included in DC Action for Children's 2012 KIDS COUNT e-databook, their definitions and sources and the rationale for their selection. The indicators for DC KIDS COUNT represent a mix of traditional KIDS COUNT indicators of child well-being, such as the number of children living in poverty, and indicators of…

  1. The particle background observed by the X-ray detectors onboard Copernicus

    NASA Technical Reports Server (NTRS)

    Davison, P. J. N.

    1974-01-01

    The design and characteristics of the low energy detectors on the Copernicus satellite are described, along with the functions of the sensors in obtaining data on the particle background. The procedure for processing the data obtained by the satellite is examined. The most significant positive deviations are caused by known weak X-ray sources in the field of view. In addition to small systematic effects, occasional random effects, in which the count rate increases suddenly and decreases within a few frames, are analyzed.

  2. Theory of particle detection and multiplicity counting with dead time effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, L.; Pazsit, I.

    The subject of this paper is the investigation of the effect of the dead time on the statistics of the particle detection process. A theoretical treatment is provided with the application of the methods of renewal theory. The detector efficiency and various types of the dead time are accounted for. Exact analytical results are derived for the probability distribution functions, the expectations and the variances of the number of detected particles. Explicit solutions are given for a few representative cases. The results should serve for the evaluation of the measurements in view of the dead time correction effects for the higher moments of the detector counts.
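
    For orientation, the two textbook dead-time models that usually appear in such evaluations can be sketched as follows; this is a hedged illustration of the standard first-moment corrections, not the renewal-theory results of the paper. The dead time and measured rate are assumed values.

      import numpy as np

      tau = 2e-6        # dead time per event, s (assumed)
      measured = 4.0e4  # measured count rate, counts/s (assumed)

      # Non-paralyzable model: m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)
      true_nonpara = measured / (1.0 - measured * tau)

      # Paralyzable model: m = n * exp(-n*tau); invert numerically on the low-rate branch.
      def paralyzable_true_rate(m, tau):
          grid = np.linspace(1e-6, 1.0 / tau, 200000)   # search below the turnover at n = 1/tau
          return grid[np.argmin(np.abs(grid * np.exp(-grid * tau) - m))]

      print("non-paralyzable true rate:", true_nonpara)
      print("paralyzable true rate    :", paralyzable_true_rate(measured, tau))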

  3. Establishing a gold standard for manual cough counting: video versus digital audio recordings

    PubMed Central

    Smith, Jaclyn A; Earis, John E; Woodcock, Ashley A

    2006-01-01

    Background Manual cough counting is time-consuming and laborious; however it is the standard to which automated cough monitoring devices must be compared. We have compared manual cough counting from video recordings with manual cough counting from digital audio recordings. Methods We studied 8 patients with chronic cough, overnight in laboratory conditions (diagnoses were 5 asthma, 1 rhinitis, 1 gastro-oesophageal reflux disease and 1 idiopathic cough). Coughs were recorded simultaneously using a video camera with infrared lighting and digital sound recording. The numbers of coughs in each 8 hour recording were counted manually, by a trained observer, in real time from the video recordings and using audio-editing software from the digital sound recordings. Results The median cough frequency was 17.8 (IQR 5.9–28.7) cough sounds per hour in the video recordings and 17.7 (6.0–29.4) coughs per hour in the digital sound recordings. There was excellent agreement between the video and digital audio cough rates; mean difference of -0.3 coughs per hour (SD ± 0.6), 95% limits of agreement -1.5 to +0.9 coughs per hour. Video recordings had poorer sound quality even in controlled conditions and can only be analysed in real time (8 hours per recording). Digital sound recordings required 2–4 hours of analysis per recording. Conclusion Manual counting of cough sounds from digital audio recordings has excellent agreement with simultaneous video recordings in laboratory conditions. We suggest that ambulatory digital audio recording is therefore ideal for validating future cough monitoring devices, as this can be performed in the patient's own environment. PMID:16887019
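
    The agreement statistics quoted above (mean difference and 95% limits of agreement) can be reproduced with a few lines of arithmetic. The sketch below uses made-up hourly cough rates, not the study data.

      import numpy as np

      video = np.array([17.8, 5.9, 28.7, 12.0, 21.5, 9.3, 30.1, 15.4])   # coughs/h (assumed)
      audio = np.array([17.7, 6.0, 29.4, 11.6, 21.9, 9.1, 30.6, 15.0])   # coughs/h (assumed)

      diff = video - audio
      bias = diff.mean()                                # mean difference (bias)
      loa = (bias - 1.96 * diff.std(ddof=1),            # 95% limits of agreement
             bias + 1.96 * diff.std(ddof=1))
      print(f"mean difference = {bias:.2f} coughs/h, "
            f"95% limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")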

  4. Systemic inflammation in 222,841 healthy employed smokers and nonsmokers: white blood cell count and relationship to spirometry

    PubMed Central

    2012-01-01

    Background Smoking has been linked to low-grade systemic inflammation, a known risk factor for disease. This state is reflected in an elevated white blood cell (WBC) count. Objective We analyzed the relationship between WBC count and smoking in healthy men and women across several age ranges who underwent preventive medical check-ups in the workplace. We also analysed the relationship between smoking and lung function. Methods Cross-sectional descriptive study in 163 459 men and 59 382 women aged between 16 and 70 years. Data analysed were smoking status, WBC count, and spirometry readings. Results Total WBC counts were higher in both male and female smokers, by around 1000 to 1300 cells/ml (t test, P < 0.001). Forced expiratory volume in 1 second (FEV1%) was higher in nonsmokers of both sexes aged 25 to 54 years (t test, P < 0.001). Analysis of covariance showed a multiple variable effect of age, sex, smoking status and body mass index on WBC count. The relationship between WBC count and smoking status was confirmed after the sample was stratified for these variables. Smokers with airway obstruction measured by FEV1% were found to have higher WBC counts than smokers with a normal FEV1% in similar age and BMI groups. Conclusions Smoking increases WBC count and affects lung function. The effects are evident across a wide age range, underlining the importance of initiating preventive measures as soon as an individual begins to smoke. PMID:22613769

  5. 9C spectral-index distributions and source-count estimates from 15 to 93 GHz - a re-assessment

    NASA Astrophysics Data System (ADS)

    Waldram, E. M.; Bolton, R. C.; Riley, J. M.; Pooley, G. G.

    2018-01-01

    In an earlier paper (2007), we used follow-up observations of a sample of sources from the 9C survey at 15.2 GHz to derive a set of spectral-index distributions up to a frequency of 90 GHz. These were based on simultaneous measurements made at 15.2 GHz with the Ryle telescope and at 22 and 43 GHz with the Karl G. Jansky Very Large Array (VLA). We used these distributions to make empirical estimates of source counts at 22, 30, 43, 70 and 90 GHz. In a later paper (2013), we took data at 15.7 GHz from the Arcminute Microkelvin Imager (AMI) and data at 93.2 GHz from the Combined Array for Research in Millimetre-wave Astronomy (CARMA) and estimated the source count at 93.2 GHz. In this paper, we re-examine the data used in both papers and now believe that the VLA flux densities we measured at 43 GHz were significantly in error, being on average only about 70 per cent of their correct values. Here, we present strong evidence for this conclusion and discuss the effect on the source-count estimates made in the 2007 paper. The source-count prediction in the 2013 paper is also revised. We make comparisons with spectral-index distributions and source counts from other telescopes, in particular with a recent deep 95 GHz source count measured by the South Pole Telescope. We investigate reasons for the problem of the low VLA 43-GHz values and find a number of possible contributory factors, but none is sufficient on its own to account for such a large deficit.
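
    The size of the bias implied by a 43-GHz flux-density scale that is only about 70 per cent of its correct value can be illustrated with a two-point spectral index, defined here as S ∝ ν^α. The flux densities below are assumed values; only the 0.70 scale factor comes from the text.

      import numpy as np

      nu1, nu2 = 15.2, 43.0                 # GHz
      s15 = 100.0                           # mJy at 15.2 GHz (assumed)
      s43_measured = 50.0                   # mJy as recorded (assumed)
      s43_corrected = s43_measured / 0.70   # undo a scale that was ~70% of true

      alpha = lambda s1, s2: np.log(s2 / s1) / np.log(nu2 / nu1)
      print("alpha from measured 43-GHz flux :", round(alpha(s15, s43_measured), 3))
      print("alpha from corrected 43-GHz flux:", round(alpha(s15, s43_corrected), 3))
      # The offset, log(1/0.7)/log(43/15.2) ~ 0.34, applies to every source equally,
      # so it shifts the whole spectral-index distribution rather than broadening it.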

  6. Equivalence of truncated count mixture distributions and mixtures of truncated count distributions.

    PubMed

    Böhning, Dankmar; Kuhnert, Ronny

    2006-12-01

    This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
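
    A minimal sketch of the capture-recapture application mentioned above, restricted to the simplest (unmixed) case: fit a zero-truncated Poisson to the observed nonzero counts by maximum likelihood and form the Horvitz-Thompson estimate of population size. The data are simulated, and the single-parameter Poisson is an assumption standing in for the paper's mixture setting.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(2)
      true_lambda, N_true = 1.2, 500
      all_counts = rng.poisson(true_lambda, N_true)
      observed = all_counts[all_counts > 0]            # zero counts are never seen

      def neg_loglik(lam):
          # Zero-truncated Poisson: P(k) = Pois(k; lam) / (1 - exp(-lam)), k >= 1
          log_fact = np.sum([np.sum(np.log(np.arange(1, k + 1))) for k in observed])
          return -(np.sum(observed * np.log(lam) - lam - np.log(1 - np.exp(-lam))) - log_fact)

      lam_hat = minimize_scalar(neg_loglik, bounds=(1e-3, 10), method="bounded").x
      p0 = np.exp(-lam_hat)                            # estimated probability of a zero count
      N_hat = len(observed) / (1 - p0)                 # Horvitz-Thompson population estimate
      print(f"lambda_hat = {lam_hat:.3f}, N_hat = {N_hat:.1f} (true N = {N_true})")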

  7. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  8. Increased Microerythrocyte Count in Homozygous α+-Thalassaemia Contributes to Protection against Severe Malarial Anaemia

    PubMed Central

    Fowkes, Freya J. I; Allen, Stephen J; Allen, Angela; Alpers, Michael P; Weatherall, David J; Day, Karen P

    2008-01-01

    Background The heritable haemoglobinopathy α+-thalassaemia is caused by the reduced synthesis of α-globin chains that form part of normal adult haemoglobin (Hb). Individuals homozygous for α+-thalassaemia have microcytosis and an increased erythrocyte count. α+-Thalassaemia homozygosity confers considerable protection against severe malaria, including severe malarial anaemia (SMA) (Hb concentration < 50 g/l), but does not influence parasite count. We tested the hypothesis that the erythrocyte indices associated with α+-thalassaemia homozygosity provide a haematological benefit during acute malaria. Methods and Findings Data from children living on the north coast of Papua New Guinea who had participated in a case-control study of the protection afforded by α+-thalassaemia against severe malaria were reanalysed to assess the genotype-specific reduction in erythrocyte count and Hb levels associated with acute malarial disease. We observed a reduction in median erythrocyte count of ∼1.5 × 10^12/l in all children with acute falciparum malaria relative to values in community children (p < 0.001). We developed a simple mathematical model of the linear relationship between Hb concentration and erythrocyte count. This model predicted that children homozygous for α+-thalassaemia lose less Hb than children of normal genotype for a reduction in erythrocyte count of >1.1 × 10^12/l as a result of the reduced mean cell Hb in homozygous α+-thalassaemia. In addition, children homozygous for α+-thalassaemia require a 10% greater reduction in erythrocyte count than children of normal genotype (p = 0.02) for Hb concentration to fall to 50 g/l, the cutoff for SMA. We estimated that the haematological profile in children homozygous for α+-thalassaemia reduces the risk of SMA during acute malaria compared to children of normal genotype (relative risk 0.52; 95% confidence interval [CI] 0.24–1.12, p = 0.09). Conclusions The increased erythrocyte count and microcytosis in
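
    The linear model referred to above is essentially Hb (g/l) ≈ mean cell haemoglobin (pg) × erythrocyte count (10^12/l). The short sketch below illustrates its qualitative prediction; the MCH values and baseline counts used are assumed round numbers, not the study's estimates.

      # Hb (g/l) ~ MCH (pg) * RBC (10^12/l); all numbers below are illustrative assumptions.
      mch = {"normal": 30.0, "alpha-thal homozygous": 22.0}          # pg per cell (assumed)
      baseline_rbc = {"normal": 4.5, "alpha-thal homozygous": 5.5}   # 10^12/l (assumed)

      drop = 1.5   # reduction in erythrocyte count during acute malaria, 10^12/l
      for geno, m in mch.items():
          hb0 = m * baseline_rbc[geno]
          hb_after = m * (baseline_rbc[geno] - drop)
          rbc_drop_to_sma = (hb0 - 50.0) / m   # fall needed for Hb to reach 50 g/l
          print(f"{geno:>22}: Hb {hb0:.0f} -> {hb_after:.0f} g/l after the drop; "
                f"needs a {rbc_drop_to_sma:.2f} x 10^12/l fall to reach 50 g/l")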

  9. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model

    PubMed Central

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    Purpose The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. Materials and Methods This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Results Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Conclusion Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice. PMID:28253564

  10. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model.

    PubMed

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R^2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R^2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R^2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice.
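
    For readers unfamiliar with the headline metric, the following is a minimal sketch of Harrell-type concordance: the proportion of usable patient pairs whose predicted risk ordering matches the observed outcome ordering. Times, event indicators and risk scores below are made up for illustration.

      import numpy as np

      time  = np.array([ 5, 12,  7, 20,  3, 15])   # months to event or censoring (assumed)
      event = np.array([ 1,  1,  0,  1,  1,  0])   # 1 = progression/death observed
      risk  = np.array([.9, .4, .6, .2, .8, .3])   # higher = worse predicted prognosis

      num = den = 0
      for i in range(len(time)):
          for j in range(len(time)):
              # a pair is usable if subject i had an observed event before subject j's time
              if event[i] == 1 and time[i] < time[j]:
                  den += 1
                  num += (risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j])
      print("concordance index =", round(num / den, 3))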

  11. A Bayesian Method for Identifying Contaminated Detectors in Low-Level Alpha Spectrometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maclellan, Jay A.; Strom, Daniel J.; Joyce, Kevin E.

    2011-11-02

    Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors is discussed.
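
    One simple way to realise the Bayesian idea described above (a hedged sketch, not necessarily the authors' exact formulation) is a gamma-Poisson model: historical counts from clean detectors give a gamma posterior for the mean background, the posterior predictive for a new count is negative binomial, and a detector is flagged when its observed count is improbably high under that predictive. The prior parameters and counts below are assumptions.

      import numpy as np
      from scipy.stats import nbinom

      historical = np.array([0, 1, 0, 2, 1, 0, 0, 3, 1, 0])   # counts from clean detectors (assumed)
      a0, b0 = 0.5, 0.5                                        # weak gamma prior (assumed)

      # Gamma posterior for the mean background count per counting period
      a_post = a0 + historical.sum()
      b_post = b0 + len(historical)

      # Posterior predictive for a single new count is negative binomial
      r, p = a_post, b_post / (b_post + 1.0)
      threshold = nbinom.ppf(0.99, r, p)        # flag above the 99th percentile

      new_counts = {"det_A": 1, "det_B": 6}     # hypothetical new measurements
      for det, k in new_counts.items():
          tail = nbinom.sf(k - 1, r, p)         # P(count >= k) under the predictive
          print(f"{det}: count={k}, P(>=k)={tail:.4f}, "
                f"{'flag' if k > threshold else 'ok'}")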

  12. TH-EF-207A-03: Photon Counting Implementation Challenges Using An Electron Multiplying Charged-Coupled Device Based Micro-CT System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Podgorsak, A; Bednarek, D; Rudin, S

    2016-06-15

    Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence. The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.
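
    The per-pixel thresholding step described above can be sketched as follows: the mean of a stack of dark frames supplies the offset correction and a per-pixel multiple of the dark-field standard deviation supplies the counting threshold. Frame sizes, noise levels and the threshold multiplier are assumed values.

      import numpy as np

      rng = np.random.default_rng(3)
      n_dark, shape = 60, (64, 64)
      dark_stack = rng.normal(100.0, 5.0, (n_dark, *shape))     # simulated dark frames

      offset = dark_stack.mean(axis=0)          # per-pixel offset correction
      sigma = dark_stack.std(axis=0, ddof=1)    # per-pixel noise estimate
      k = 5.0                                   # threshold multiplier (assumed)
      threshold = k * sigma                     # per-pixel counting threshold

      frame = rng.normal(100.0, 5.0, shape)
      frame[20:24, 20:24] += 60.0               # a few "photon" events for illustration

      counted = (frame - offset) > threshold    # True where a photon is counted
      print("photon counts in this frame:", int(counted.sum()))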

  13. Energy response calibration of photon-counting detectors using x-ray fluorescence: a feasibility study.

    PubMed

    Cho, H-M; Ding, H; Ziemer, B P; Molloi, S

    2014-12-07

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectra with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm(2) in detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
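
    Once fluorescence peak centroids have been located, the calibration itself reduces to a straight-line fit from detector channel to known K-alpha energy. In the sketch below the channel centroids are made up; the K-alpha energies are standard approximate values.

      import numpy as np

      known_kev = np.array([8.05, 17.48, 22.16, 25.27, 32.19])   # Cu, Mo, Ag, Sn, Ba K-alpha (keV)
      channels  = np.array([161.0, 350.2, 444.1, 506.5, 645.0])  # measured centroids (assumed)

      gain, offset = np.polyfit(channels, known_kev, 1)          # E = gain*channel + offset
      print(f"E(keV) ~ {gain:.4f} * channel + {offset:.3f}")
      print("residuals (keV):", np.round(known_kev - (gain * channels + offset), 3))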

  14. Energy response calibration of photon-counting detectors using X-ray fluorescence: a feasibility study

    PubMed Central

    Cho, H-M; Ding, H; Ziemer, BP; Molloi, S

    2014-01-01

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using X-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for X-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectra with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm2 in detection area. The angular dependence of X-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded X-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of X-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of X-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic X-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the X-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory. PMID:25369288

  15. Background Characterization for Thermal Ion Release Experiments with 224Ra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwong, H.; /Stanford U., Phys. Dept.; Rowson, P.

    The Enriched Xenon Observatory for neutrinoless double beta decay uses 136Ba identification as a means for verifying the decay's occurrence in 136Xe. A current challenge is the release of Ba ions from the Ba extraction probe, and one possible solution is to heat the probe to high temperatures to release the ions. The investigation of this method requires a characterization of the alpha decay background in our test apparatus, which uses a 228Th source that produces 224Ra daughters, the ionization energies of which are similar to those of Ba. For this purpose, we ran a background count with our apparatus maintained at a vacuum, and then three counts with the apparatus filled with Xe gas. We were able to match up our alpha spectrum in vacuum with the known decay scheme of 228Th, while the spectrum in xenon gas had too many unresolved ambiguities for an accurate characterization. We also found that the alpha decays occurred at a near-zero rate both in vacuum and in xenon gas, which indicates that the rate was determined by 228Th decays. With these background measurements, we can in the future make a more accurate measurement of the temperature dependency of the ratio of ions to neutral atoms released from the hot surface of the probe, which may lead to a successful method of Ba ion release.

  16. An Ultrasensitive Hot-Electron Bolometer for Low-Background SMM Applications

    NASA Technical Reports Server (NTRS)

    Olayaa, David; Wei, Jian; Pereverzev, Sergei; Karasik, Boris S.; Kawamura, Jonathan H.; McGrath, William R.; Sergeev, Andrei V.; Gershenson, Michael E.

    2006-01-01

    We are developing a hot-electron superconducting transition-edge sensor (TES) that is capable of counting THz photons and operates at T = 0.3 K. The main driver for this work is moderate resolution spectroscopy (R approx. 1000) on the future space telescopes with cryogenically cooled (approx. 4 K) mirrors. The detectors for these telescopes must be background-limited with a noise equivalent power (NEP) of approx. 10^-19-10^-20 W/Hz^1/2 over the range ν = 0.3-10 THz. Above about 1 THz, the background photon arrival rate is expected to be approx. 10-100/s, and photon counting detectors may be preferable to an integrating type. We fabricated superconducting Ti nanosensors with a volume of approx. 3x10^-3 cubic microns on a planar substrate and have measured the thermal conductance G to the thermal bath. A very low G = 4x10^-14 W/K, measured at 0.3 K, is due to the weak electron-phonon coupling in the material and the thermal isolation provided by superconducting Nb contacts. This low G corresponds to NEP(0.3 K) = 3x10^-19 W/Hz^1/2. This Hot-Electron Direct Detector (HEDD) is expected to have a sufficient energy resolution for detecting individual photons with ν > 0.3 THz at 0.3 K. With the sensor time constant of a few microseconds, the dynamic range is approx. 50 dB.
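
    The quoted thermal conductance can be checked against the phonon (thermal-fluctuation) noise limit NEP ≈ sqrt(4 k_B T^2 G); material-dependent prefactors of order unity are ignored in this rough sketch, so the result should only agree with the quoted NEP to within a factor of a few.

      import numpy as np

      k_B = 1.380649e-23      # Boltzmann constant, J/K
      T, G = 0.3, 4e-14       # K and W/K, as quoted above
      nep = np.sqrt(4 * k_B * T**2 * G)
      print(f"NEP ~ {nep:.2e} W/Hz^0.5")   # a few 1e-19, consistent with the text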

  17. Relativistic electron plasma oscillations in an inhomogeneous ion background

    NASA Astrophysics Data System (ADS)

    Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil

    2018-06-01

    The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large amplitude electron oscillations in cold plasmas have been analyzed by using Lagrangian coordinates. An inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time dependent solution is obtained in the weakly-relativistic limit by employing the Bogolyubov and Krylov method of averaging. It is shown that the phase mixing process of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.

  18. Kids Count in Delaware: Fact Book, 2000-2001 [and] Families Count in Delaware: Fact Book, 2000-2001.

    ERIC Educational Resources Information Center

    Delaware Univ., Newark. Kids Count in Delaware.

    This Kids Count Fact Book is combined with the Families Count Fact Book to provide information on statewide trends affecting children and families in Delaware. The Kids Count statistical profile is based on 11 main indicators of child well-being: (1) births to teens 15 to 17 years; (2) births to teens 15 to 19 years; (3) low birth weight babies;…

  19. White blood cell counting system

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The design, fabrication, and tests of a prototype white blood cell counting system for use in the Skylab IMSS are presented. The counting system consists of a sample collection subsystem, sample dilution and fluid containment subsystem, and a cell counter. Preliminary test results show the sample collection and the dilution subsystems are functional and fulfill design goals. Results for the fluid containment subsystem show the handling bags cause counting errors due to: (1) adsorption of cells to the walls of the container, and (2) inadequate cleaning of the plastic bag material before fabrication. It was recommended that another bag material be selected.

  20. Interstellar cyanogen and the temperature of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Roth, Katherine C.; Meyer, David M.; Hawkins, Isabel

    1993-01-01

    We present the results of a recently completed effort to determine the amount of CN rotational excitation in five diffuse interstellar clouds for the purpose of accurately measuring the temperature of the cosmic microwave background radiation (CMBR). In addition, we report a new detection of emission from the strongest hyperfine component of the 2.64 mm CN rotational transition (N = 1-0) in the direction toward HD 21483. We have used this result in combination with existing emission measurements toward our other stars to correct for local excitation effects within diffuse clouds which raise the measured CN rotational temperature above that of the CMBR. After making this correction, we find a weighted mean value of T(CMBR) = 2.729 (+0.023, -0.031) K. This temperature is in excellent agreement with the new COBE measurement of 2.726 +/- 0.010 K (Mather et al., 1993). Our result, which samples the CMBR far from the near-Earth environment, attests to the accuracy of the COBE measurement and reaffirms the cosmic nature of this background radiation. From the observed agreement between our CMBR temperature and the COBE result, we conclude that corrections for local CN excitation based on millimeter emission measurements provide an accurate adjustment to the measured rotational excitation.

  1. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  2. Far-Ultraviolet Number Counts of Field Galaxies

    NASA Technical Reports Server (NTRS)

    Voyer, Elysse N.; Gardner, Jonathan P.; Teplitz, Harry I.; Siana, Brian D.; deMello, Duilia F.

    2010-01-01

    The number counts of far-ultraviolet (FUV) galaxies as a function of magnitude provide a direct statistical measure of the density and evolution of star-forming galaxies. We report on the results of measurements of the rest-frame FUV number counts computed from data of several fields including the Hubble Ultra Deep Field, the Hubble Deep Field North, and the GOODS-North and -South fields. These data were obtained with the Solar Blind Channel of the Advanced Camera for Surveys on the Hubble Space Telescope. The number counts cover an AB magnitude range of 20-29 magnitudes, covering a total area of 15.9 arcmin^2. We show that the number counts are lower than those in previous studies using smaller areas. The differences in the counts are likely the result of cosmic variance; our new data cover more area and more lines of sight than the previous studies. The slope of our number counts connects well with local FUV counts and they show good agreement with recent semi-analytical models based on dark matter "merger trees".

  3. Increasing point-count duration increases standard error

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.

    1998-01-01

    We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
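
    The square-root transformation mentioned above can be demonstrated with simulated Poisson counts: the raw variance tracks the mean, while the variance of sqrt(counts) stays roughly constant (about 0.25) over a wide range of means. The means and sample sizes below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(4)
      for mean in (2, 5, 10, 20, 40):
          counts = rng.poisson(mean, 10000)
          print(f"mean {mean:>2}: var(counts) = {counts.var():6.2f}, "
                f"var(sqrt(counts)) = {np.sqrt(counts).var():.3f}")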

  4. Radon induced background processes in the KATRIN pre-spectrometer

    NASA Astrophysics Data System (ADS)

    Fränkle, F. M.; Bornschein, L.; Drexlin, G.; Glück, F.; Görhardt, S.; Käfer, W.; Mertens, S.; Wandkowsky, N.; Wolf, J.

    2011-10-01

    The KArlsruhe TRItium Neutrino (KATRIN) experiment is a next generation, model independent, large scale tritium β-decay experiment to determine the effective electron anti-neutrino mass by investigating the kinematics of tritium β-decay with a sensitivity of 200 meV/c^2 using the MAC-E filter technique. In order to reach this sensitivity, a low background level of 10^-2 counts per second (cps) is required. This paper describes how the decay of radon in a MAC-E filter generates background events, based on measurements performed at the KATRIN pre-spectrometer test setup. Radon (Rn) atoms, which emanate from materials inside the vacuum region of the KATRIN spectrometers, are able to penetrate deep into the magnetic flux tube so that the α-decay of Rn contributes to the background. Of particular importance are electrons emitted in processes accompanying the Rn α-decay, such as shake-off, internal conversion of excited levels in the Rn daughter atoms and Auger electrons. While low-energy electrons (<100 eV) directly contribute to the background in the signal region, higher energy electrons can be stored magnetically inside the volume of the spectrometer. Depending on their initial energy, they are able to create thousands of secondary electrons via subsequent ionization processes with residual gas molecules and, since the detector is not able to distinguish these secondary electrons from the signal electrons, an increased background rate over an extended period of time is generated.

  5. 25 CFR 81.21 - Counting of ballots.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... STATUTE § 81.21 Counting of ballots. All duly cast ballots are to be counted. Even though it will not be... counted for purposes of determining whether the required percentage of voters have cast their ballots in... of votes cast. ...

  6. The SCUBA-2 Cosmology Legacy Survey: the EGS deep field - I. Deep number counts and the redshift distribution of the recovered cosmic infrared background at 450 and 850 μm

    NASA Astrophysics Data System (ADS)

    Zavala, J. A.; Aretxaga, I.; Geach, J. E.; Hughes, D. H.; Birkinshaw, M.; Chapin, E.; Chapman, S.; Chen, Chian-Chou; Clements, D. L.; Dunlop, J. S.; Farrah, D.; Ivison, R. J.; Jenness, T.; Michałowski, M. J.; Robson, E. I.; Scott, Douglas; Simpson, J.; Spaans, M.; van der Werf, P.

    2017-01-01

    We present deep observations at 450 and 850 μm in the Extended Groth Strip field taken with the SCUBA-2 camera mounted on the James Clerk Maxwell Telescope as part of the deep SCUBA-2 Cosmology Legacy Survey (S2CLS), achieving a central instrumental depth of σ450 = 1.2 mJy beam^-1 and σ850 = 0.2 mJy beam^-1. We detect 57 sources at 450 μm and 90 at 850 μm with signal-to-noise ratio >3.5 over ~70 arcmin^2. From these detections, we derive the number counts at flux densities S450 > 4.0 mJy and S850 > 0.9 mJy, which represent the deepest number counts at these wavelengths derived using directly extracted sources from only blank-field observations with a single-dish telescope. Our measurements smoothly connect the gap between previous shallower blank-field single-dish observations and deep interferometric ALMA results. We estimate the contribution of our SCUBA-2 detected galaxies to the cosmic infrared background (CIB), as well as the contribution of 24 μm-selected galaxies through a stacking technique, which add a total of 0.26 ± 0.03 and 0.07 ± 0.01 MJy sr^-1, at 450 and 850 μm, respectively. These surface brightnesses correspond to 60 ± 20 and 50 ± 20 per cent of the total CIB measurements, where the errors are dominated by those of the total CIB. Using the photometric redshifts of the 24 μm-selected sample and the redshift distributions of the submillimetre galaxies, we find that the redshift distribution of the recovered CIB is different at each wavelength, with a peak at z ~ 1 for 450 μm and at z ~ 2 for 850 μm, consistent with previous observations and theoretical models.

  7. Nonrelativistic fluids on scale covariant Newton-Cartan backgrounds

    NASA Astrophysics Data System (ADS)

    Mitra, Arpita

    2017-12-01

    The nonrelativistic covariant framework for fields is extended to investigate fields and fluids on scale covariant curved backgrounds. The scale covariant Newton-Cartan background is constructed using the localization of space-time symmetries of nonrelativistic fields in flat space. Following this, we provide a Weyl covariant formalism which can be used to study scale invariant fluids. By considering ideal fluids as an example, we describe its thermodynamic and hydrodynamic properties and explicitly demonstrate that it satisfies the local second law of thermodynamics. As a further application, we consider the low energy description of Hall fluids. Specifically, we find that the gauge fields for scale transformations lead to corrections of the Wen-Zee and Berry phase terms contained in the effective action.

  8. Development of a low background liquid scintillation counter for a shallow underground laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erchinger, Jennifer L.; Aalseth, Craig E.; Bernacki, Bruce E.

    2015-08-20

    Pacific Northwest National Laboratory has recently opened a shallow underground laboratory intended for measurement of low-concentration levels of radioactive isotopes in samples collected from the environment. The development of a low-background liquid scintillation counter is currently underway to further augment the measurement capabilities within this underground laboratory. Liquid scintillation counting is especially useful for measuring charged particle (e.g., β, α) emitting isotopes with no (or very weak) gamma-ray yields. The combination of high-efficiency detection of charged particle emission in a liquid scintillation cocktail coupled with the low-background environment of an appropriately-designed shield located in a clean underground laboratory provides the opportunity for increased-sensitivity measurements of a range of isotopes. To take advantage of the 35-meter water-equivalent overburden of the underground laboratory, a series of simulations have evaluated the instrumental shield design requirements to assess the possible background rate achievable. This report presents the design and background evaluation for a shallow underground, low background liquid scintillation counter design for sample measurements.

  9. TU-FG-209-03: Exploring the Maximum Count Rate Capabilities of Photon Counting Arrays Based On Polycrystalline Silicon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, A K; Koniczek, M; Antonuk, L E

    Purpose: Photon counting arrays (PCAs) offer several advantages over conventional, fluence-integrating x-ray imagers, such as improved contrast by means of energy windowing. For that reason, we are exploring the feasibility and performance of PCA pixel circuitry based on polycrystalline silicon. This material, unlike the crystalline silicon commonly used in photon counting detectors, lends itself toward the economic manufacture of radiation tolerant, monolithic large area (e.g., ∼43×43 cm^2) devices. In this presentation, exploration of maximum count rate, a critical performance parameter for such devices, is reported. Methods: Count rate performance for a variety of pixel circuit designs was explored through detailed circuit simulations over a wide range of parameters (including pixel pitch and operating conditions) with the additional goal of preserving good energy resolution. The count rate simulations assume input events corresponding to a 72 kVp x-ray spectrum with 20 mm Al filtration interacting with a CZT detector at various input flux rates. Output count rates are determined at various photon energy threshold levels, and the percentage of counts lost (e.g., due to deadtime or pile-up) is calculated from the ratio of output to input counts. The energy resolution simulations involve thermal and flicker noise originating from each circuit element in a design. Results: Circuit designs compatible with pixel pitches ranging from 250 to 1000 µm that allow count rates over a megacount per second per pixel appear feasible. Such rates are expected to be suitable for radiographic and fluoroscopic imaging. Results for the analog front-end circuitry of the pixels show that acceptable energy resolution can also be achieved. Conclusion: PCAs created using polycrystalline silicon have the potential to offer monolithic large-area detectors with count rate performance comparable to those of crystalline silicon detectors. Further improvement through detailed
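
    The count-loss bookkeeping described above can be sketched with the paralyzable (extendable) dead-time model, in which the output rate is m = n·exp(-n·τ) and the lost fraction is 1 - m/n. The dead time and input rates below are assumed values, not the simulated circuit parameters.

      import numpy as np

      tau = 200e-9                                    # effective per-pixel dead time, s (assumed)
      input_rates = np.array([1e5, 5e5, 1e6, 2e6])    # true events/s per pixel (assumed)

      output = input_rates * np.exp(-input_rates * tau)   # paralyzable dead-time model
      loss_pct = 100 * (1 - output / input_rates)
      for n, m, l in zip(input_rates, output, loss_pct):
          print(f"input {n:9.0f}/s -> output {m:9.0f}/s  ({l:4.1f}% lost)")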

  10. Recursive algorithms for phylogenetic tree counting.

    PubMed

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
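
    For context, the "easy" unconstrained case mentioned above (all samples contemporaneous, no constraints) has a closed-form count: the number of fully ranked labelled binary trees on n tips is the product of C(k,2) over k = 2..n, equivalently n!(n-1)!/2^(n-1). The sketch below checks that identity; the paper's algorithms address the harder constrained and serially sampled cases.

      from math import comb, factorial

      def n_ranked_trees(n: int) -> int:
          count = 1
          for k in range(2, n + 1):
              count *= comb(k, 2)     # choose which pair of lineages coalesces next
          return count

      for n in range(2, 7):
          assert n_ranked_trees(n) == factorial(n) * factorial(n - 1) // 2 ** (n - 1)
          print(n, n_ranked_trees(n))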

  11. Analysis of counting data: Development of the SATLAS Python package

    NASA Astrophysics Data System (ADS)

    Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.

    2018-01-01

    For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low, as well as high, statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
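
    The motivation above (least squares can mislead at low counts) can be illustrated generically; the sketch below is not the SATLAS API, just a comparison of a chi-square (least-squares) estimate with a Poisson maximum-likelihood estimate of a peak amplitude on simulated counts.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(5)
      x = np.linspace(-5, 5, 61)

      def model(amp):                 # fixed-shape peak on a small flat background
          return 0.5 + amp * np.exp(-0.5 * x**2)

      true_amp = 3.0
      counts = rng.poisson(model(true_amp))

      # Chi-square (least-squares) estimate
      lsq = minimize_scalar(lambda a: np.sum((counts - model(a))**2),
                            bounds=(0, 20), method="bounded").x
      # Poisson maximum-likelihood estimate (negative log-likelihood up to constants)
      mle = minimize_scalar(lambda a: np.sum(model(a) - counts * np.log(model(a))),
                            bounds=(0, 20), method="bounded").x
      print(f"least squares: {lsq:.2f}, Poisson MLE: {mle:.2f} (true {true_amp})")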

  12. Clinical predictors of the optimal spectacle correction for comfort performing desktop tasks.

    PubMed

    Leffler, Christopher T; Davenport, Byrd; Rentz, Jodi; Miller, Amy; Benson, William

    2008-11-01

    The best strategy for spectacle correction of presbyopia for near tasks has not been determined. Thirty volunteers over the age of 40 years were tested for subjective accommodative amplitude, pupillary size, fusional vergence, interpupillary distance, arm length, preferred working distance, near and far visual acuity and preferred reading correction in the phoropter and trial frames. Subjects performed near tasks (reading, writing and counting change) using various spectacle correction strengths. Predictors of the correction maximising near task comfort were determined by multivariable linear regression. The mean age was 54.9 years (range 43 to 71) and 40 per cent had diabetes. Significant predictors of the most comfortable addition in univariate analyses were age (p<0.001), interpupillary distance (p=0.02), fusional vergence amplitude (p=0.02), distance visual acuity in the worse eye (p=0.01), vision at 40 cm in the worse eye with distance correction (p=0.01), duration of diabetes (p=0.01), and the preferred correction to read at 40 cm with the phoropter (p=0.002) or trial frames (p<0.001). Target distance selected wearing trial frames (in dioptres), arm length, and accommodative amplitude were not significant predictors (p>0.15). The preferred addition wearing trial frames holding a reading target at a distance selected by the patient was the only independent predictor. Excluding this variable, distance visual acuity was predictive independent of age or near vision wearing distance correction. The distance selected for task performance was predicted by vision wearing distance correction at near and at distance. Multivariable linear regression can be used to generate tables based on distance visual acuity and age or near vision wearing distance correction to determine tentative near spectacle addition. Final spectacle correction for desktop tasks can be estimated by subjective refraction with trial frames.

  13. LINEAR COUNT-RATE METER

    DOEpatents

    Henry, J.J.

    1961-09-01

    A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.

  14. Calibrating passive acoustic monitoring: correcting humpback whale call detections for site-specific and time-dependent environmental characteristics.

    PubMed

    Helble, Tyler A; D'Spain, Gerald L; Campbell, Greg S; Hildebrand, John A

    2013-11-01

    This paper demonstrates the importance of accounting for environmental effects on passive underwater acoustic monitoring results. The situation considered is the reduction in shipping off the California coast between 2008-2010 due to the recession and environmental legislation. The resulting variations in ocean noise change the probability of detecting marine mammal vocalizations. An acoustic model was used to calculate the time-varying probability of detecting humpback whale vocalizations under best-guess environmental conditions and varying noise. The uncorrected call counts suggest a diel pattern and an increase in calling over a two-year period; the corrected call counts show minimal evidence of these features.
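
    The correction described amounts to dividing raw call counts by a modelled, time-varying probability of detection, so that noise-driven changes in detectability are not read as changes in calling. The sketch below uses placeholder detection probabilities standing in for the output of the acoustic propagation model.

      import numpy as np

      raw_counts = np.array([120, 95, 60, 150])      # detected calls per period (assumed)
      p_detect   = np.array([0.8, 0.6, 0.4, 0.9])    # modelled detection probability (assumed)

      corrected = raw_counts / p_detect              # effort/noise-corrected call counts
      for r, p, c in zip(raw_counts, p_detect, corrected):
          print(f"raw {r:4d}  p_detect {p:.1f}  corrected {c:7.1f}")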

  15. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE PAGES

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...

    2017-09-19

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  16. Casimir meets Poisson: improved quark/gluon discrimination with counting observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse

    Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a “soft drop multiplicity” which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.

  17. Platelet counting using the Coulter electronic counter.

    PubMed

    Eggleton, M J; Sharp, A A

    1963-03-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described.(1) The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed.

  18. [Automated hematology analysers and spurious counts Part 3. Haemoglobin, red blood cells, cell count and indices, reticulocytes].

    PubMed

    Godon, Alban; Genevieve, Franck; Marteau-Tessier, Anne; Zandecki, Marc

    2012-01-01

    Several situations lead to abnormal haemoglobin measurement or to abnormal red blood cell (RBC) counts, including hyperlipemias, agglutinins and cryoglobulins, haemolysis, or elevated white blood cell (WBC) counts. Mean (red) cell volume may also be subject to spurious determination, because of agglutinins (mainly cold), high blood glucose level, natremia, anticoagulants in excess and, at times, technological considerations. An abnormality in one measured parameter eventually leads to abnormal calculated RBC indices: mean cell haemoglobin content is certainly the most important RBC parameter to consider, perhaps as important as the flags generated by the haematology analysers (HA) themselves. In many circumstances, several of the measured parameters from cell blood counts (CBC) may be altered, and the discovery of a spurious change in one parameter frequently means that the validity of other parameters should be considered. Sensitive flags now allow the identification of several spurious counts, but only the most sophisticated HA have optimal flagging, and simpler ones, especially those without any WBC differential scattergram, do not share the same capacity to detect abnormal results. Reticulocytes are integrated into the CBC in many HA, and several situations may lead to abnormal counts, including abnormal gating, interference with intraerythrocytic particles, erythroblastosis or high WBC counts.

  19. 2008 KidsCount in Colorado!

    ERIC Educational Resources Information Center

    Colorado Children's Campaign, 2008

    2008-01-01

    "KidsCount in Colorado!" is an annual publication of the Colorado Children's Campaign, which provides the best available state- and county-level data to measure and track the education, health and general well-being of the state's children. KidsCount in Colorado! informs policy debates and community discussions, serving as a valuable…

  20. Children's Counting Strategies for Time Quantification and Integration.

    ERIC Educational Resources Information Center

    Wilkening, Friedrich; And Others

    1987-01-01

    Investigated whether and how children age 5 to 7 employed counting to measure and integrate the duration of two events, which were accompanied by metronome beats for half the children. The rhythm enhanced use of counting in younger children. By age 7, most counted spontaneously, using sensible counting strategies. (SKC)

  1. Platelet counting using the Coulter electronic counter

    PubMed Central

    Eggleton, M. J.; Sharp, A. A.

    1963-01-01

    A method for counting platelets in dilutions of platelet-rich plasma using the Coulter electronic counter is described. The results obtained show that such platelet counts are at least as accurate as the best methods of visual counting. The various technical difficulties encountered are discussed. PMID:16811002

  2. Identification of CSF fistulas by radionuclide counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Y.; Kunishio, K.; Sunami, N.

    1990-07-01

    A radionuclide counting method, performed with the patient prone and the neck flexed, was used successfully to diagnose CSF rhinorrhea in two patients. A normal radionuclide ratio (radionuclide counts in pledget/radionuclide counts in 1-ml blood sample) was obtained in 11 normal control subjects. Significance was determined to be a ratio greater than 0.37. Use of the radionuclide counting method for detecting CSF rhinorrhea is recommended when other methods have failed to locate a site of leakage or when posttraumatic meningitis suggests subclinical CSF rhinorrhea.
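
    The decision rule described above reduces to a simple ratio against the 0.37 threshold. A minimal sketch, with made-up count values standing in for the pledget and blood measurements, is shown below.

        # Minimal sketch of the ratio test described above; the count values are toy numbers.
        def pledget_ratio(pledget_counts, blood_counts_per_ml):
            return pledget_counts / blood_counts_per_ml

        ratio = pledget_ratio(pledget_counts=1850.0, blood_counts_per_ml=4200.0)
        print(round(ratio, 2), "suspicious for CSF leak" if ratio > 0.37 else "within normal range")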

  3. Occupations at Case Closure for Vocational Rehabilitation Applicants with Criminal Backgrounds

    ERIC Educational Resources Information Center

    Whitfield, Harold Wayne

    2009-01-01

    The purpose of this study was to identify industries that hire persons with disabilities and criminal backgrounds. The researcher obtained data on 1,355 applicants for vocational rehabilitation services who were living in adult correctional facilities at the time of application. Service-based industries hired the most ex-inmates with disabilities…

  4. Counting and Surveying Homeless Youth: Recommendations from YouthCount 2.0!, a Community-Academic Partnership.

    PubMed

    Narendorf, Sarah C; Santa Maria, Diane M; Ha, Yoonsook; Cooper, Jenna; Schieszler, Christine

    2016-12-01

    Communities across the United States are increasing efforts to find and count homeless youth. This paper presents findings and lessons learned from a community/academic partnership to count homeless youth and conduct an in-depth research survey focused on the health needs of this population. Over a 4-week recruitment period, 632 youth were counted and 420 surveyed. Methodological successes included an extended counting period, broader inclusion criteria to capture those in unstable housing, use of student volunteers in health training programs, recruiting from magnet events for high-risk youth, and partnering with community agencies to disseminate findings. Strategies that did not facilitate recruitment included respondent-driven sampling, street canvassing beyond known hotspots, and having community agencies lead data collection. Surveying was successful in gathering data on reasons for homelessness, history in public systems of care, mental health history and needs, sexual risk behaviors, health status, and substance use. Youth were successfully surveyed across housing types including shelters or transitional housing (n = 205), those in unstable housing such as doubled up with friends or acquaintances (n = 75), and those who were literally on the streets or living in a place not meant for human habitation (n = 140). Most youth completed the self-report survey and provided detailed information about risk behaviors. Recommendations to combine research data collection with counting are presented.

  5. Spatial variability in the pollen count in Sydney, Australia: can one sampling site accurately reflect the pollen count for a region?

    PubMed

    Katelaris, Constance H; Burke, Therese V; Byth, Karen

    2004-08-01

    There is increasing interest in the daily pollen count, with pollen-sensitive individuals using it to determine medication use and researchers relying on it for commencing clinical drug trials and assessing drug efficacy according to allergen exposure. Counts are often expressed qualitatively as low, medium, and high, and often only 1 pollen trap is used for an entire region. The objective was to examine the spatial variability in the pollen count in Sydney, Australia, and to compare discrepancies among low-, medium-, and high-count days at 3 sites separated by a maximum of 30 km. Three sites in western Sydney were sampled using Burkard traps. Data from the 3 sites were used to compare vegetation differences, possible effects of some meteorological parameters, and discrepancies among sites in low-, medium-, and high-count days. Total pollen counts during the spring months were 14,382 grains/m³ at Homebush, 11,584 grains/m³ at Eastern Creek, and 9,269 grains/m³ at Nepean. The only significant correlation between differences in meteorological parameters and differences in pollen counts was the Homebush-Nepean differences in rainfall and pollen counts. Comparison between low- and high-count days among the 3 sites revealed a discordance rate of 8% to 17%. For informing the public about pollen counts, the count from 1 trap is a reasonable estimate for a 30-km region; however, the discrepancies among 3 trap sites would have a significant impact on the performance of a clinical trial where enrollment was determined by a low or high count. Therefore, for clinical studies, data collection must be local and applicable to the study population.

  6. Clean Hands Count

    MedlinePlus Videos and Cool Tools

  7. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel enrichments in the fast zone: either high (90%) and medium (36%), medium (36%), or low (21%) enriched uranium fuel.

  8. Evaluation of lactate, white blood cell count, neutrophil count, procalcitonin and immature granulocyte count as biomarkers for sepsis in emergency department patients.

    PubMed

    Karon, Brad S; Tolan, Nicole V; Wockenfus, Amy M; Block, Darci R; Baumann, Nikola A; Bryant, Sandra C; Clements, Casey M

    2017-11-01

    Lactate, white blood cell (WBC) and neutrophil count, procalcitonin and immature granulocyte (IG) count were compared for the prediction of sepsis, and severe sepsis or septic shock, in patients presenting to the emergency department (ED). We prospectively enrolled 501 ED patients with a sepsis panel ordered for suspicion of sepsis. WBC, neutrophil, and IG counts were measured on a Sysmex XT-2000i analyzer. Lactate was measured by i-STAT, and procalcitonin by Brahms Kryptor. We classified patients as having sepsis using a simplification of the 1992 consensus conference sepsis definitions. Patients with sepsis were further classified as having severe sepsis or septic shock using established criteria. Univariate receiver operating characteristic (ROC) analysis was performed to determine odds ratio (OR), area under the ROC curve (AUC), and sensitivity/specificity at optimal cut-off for prediction of sepsis (vs. no sepsis), and prediction of severe sepsis or septic shock (vs. no sepsis). There were 267 patients without sepsis; and 234 with sepsis, including 35 patients with severe sepsis or septic shock. Lactate had the highest OR (1.44, 95th% CI 1.20-1.73) for the prediction of sepsis; while WBC, neutrophil count and percent (neutrophil/WBC) had OR>1.00 (p<0.05). All biomarkers had AUC<0.70 and sensitivity and specificity <70% at the optimal cut-off. Initial lactate was the best biomarker for predicting severe sepsis or septic shock, with an odds ratio (95th% CI) of 2.70 (2.02-3.61) and AUC 0.89 (0.82-0.96). Traditional biomarkers (lactate, WBC, neutrophil count, procalcitonin, IG) have limited utility in the prediction of sepsis. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
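
    The analysis described above is a standard univariate ROC workflow. A minimal sketch is given below, assuming simulated lactate values rather than the study data and using scikit-learn; the Youden index is one common way to pick the "optimal cut-off" mentioned in the abstract.

        # Sketch of a univariate ROC analysis: AUC plus sensitivity/specificity at the
        # Youden-optimal cutoff. The data are simulated stand-ins, not the study's values.
        import numpy as np
        from sklearn.metrics import roc_curve, roc_auc_score

        rng = np.random.default_rng(0)
        lactate = np.concatenate([rng.normal(1.5, 0.5, 200), rng.normal(3.0, 1.0, 50)])
        sepsis = np.concatenate([np.zeros(200), np.ones(50)])

        auc = roc_auc_score(sepsis, lactate)
        fpr, tpr, thr = roc_curve(sepsis, lactate)
        best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
        print(f"AUC={auc:.2f} cutoff={thr[best]:.2f} sens={tpr[best]:.2f} spec={1 - fpr[best]:.2f}")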

  9. Method for detecting and correcting for isotope burn-in during long-term neutron dosimetry exposure

    DOEpatents

    Ruddy, Francis H.

    1988-01-01

    A method is described for detecting and correcting for isotope burn-in during long-term neutron dosimetry exposure. In one embodiment, duplicate pairs of solid state track recorder fissionable deposits are used, including a first fissionable deposit of lower mass to quantify the number of fissions occurring during the exposure, and a second deposit of higher mass to quantify the number of atoms of, for instance, ²³⁹Pu by alpha counting. In a second embodiment, only one solid state track recorder fissionable deposit is used and the resulting higher track densities are counted with a scanning electron microscope. This method is also applicable to other burn-in interferences, e.g., ²³³U in ²³²Th or ²³⁸Pu in ²³⁷Np.

  10. Reliable and Accurate CD4+ T Cell Count and Percent by the Portable Flow Cytometer CyFlow MiniPOC and “CD4 Easy Count Kit-Dry”, as Revealed by the Comparison with the Gold Standard Dual Platform Technology

    PubMed Central

    Nasi, Milena; De Biasi, Sara; Bianchini, Elena; Gibellini, Lara; Pinti, Marcello; Scacchetti, Tiziana; Trenti, Tommaso; Borghi, Vanni; Mussini, Cristina; Cossarizza, Andrea

    2015-01-01

    Background: An accurate and affordable CD4+ T cell count is an essential tool in the fight against HIV/AIDS. Flow cytometry (FCM) is the “gold standard” for counting such cells, but this technique is expensive and requires sophisticated equipment, temperature-sensitive monoclonal antibodies (mAbs) and trained personnel. The lack of access to technical support and quality assurance programs thus limits the use of FCM in resource-constrained countries. We have tested the accuracy, the precision and the carry-over contamination of Partec CyFlow MiniPOC, a portable and economically affordable flow cytometer designed for CD4+ count and percentage, used along with the “CD4% Count Kit-Dry”. Materials and Methods: Venous blood from 59 adult HIV+ patients (age: 25–58 years; 43 males and 16 females) was collected and stained with the “MiniPOC CD4% Count Kit-Dry”. CD4+ count and percentage were then determined in triplicate by the CyFlow MiniPOC. In parallel, CD4 count was performed using mAbs and a CyFlow Counter, or by a dual platform system (from Beckman Coulter) based upon Cytomic FC500 (“Cytostat tetrachrome kit” for mAbs) and Coulter HmX Hematology Analyzer (for absolute cell count). Results: The accuracy of CyFlow MiniPOC against Cytomic FC500 showed a correlation coefficient (CC) of 0.98 and 0.97 for CD4+ count and percentage, respectively. The accuracy of CyFlow MiniPOC against CyFlow Counter showed a CC of 0.99 and 0.99 for CD4 T cell count and percentage, respectively. CyFlow MiniPOC showed an excellent repeatability: CD4+ cell count and percentage were analyzed on two instruments, with an intra-assay precision below ±5% deviation. Finally, there was no carry-over contamination for samples at all CD4 values, regardless of their position in the sequence of analysis. Conclusion: The cost-effective CyFlow MiniPOC produces rapid, reliable and accurate results that are fully comparable with those from highly expensive dual platform systems. PMID:25622041

  11. Terrestrial Background Reduction in RPM Systems by Direct Internal Shielding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Sean M.; Ashbaker, Eric D.; Schweppe, John E.

    2008-11-19

    Gamma-ray detection systems that are close to the earth or other sources of background radiation often require shielding, especially when trying to detect a relatively weak source. One particular case of interest that we address in this paper is that encountered by the Radiation Portal Monitors (RPMs) systems placed at border-crossing Ports of Entry (POE). These RPM systems are used to screen for illicit radiological materials, and they are often placed in situations where terrestrial background is large. In such environments, it is desirable to consider simple physical modifications that could be implemented to reduce the effects from background radiation without affecting the flow of traffic and the normal operation of the portal. Simple modifications include adding additional shielding to the environment, either inside or outside the apparatus. Previous work [2] has shown the utility of some of these shielding configurations for increasing the Signal to Noise Ratio (SNR) of gross-counting RPMs. Because the total cost for purchasing and installing RPM systems can be quite expensive, in the range of hundreds of thousands of dollars for each cargo-screening installation, these shielding variations may offer increases in detection capability for relatively small cost. Several modifications are considered here in regard to their real-world applicability, and are meant to give a general idea of the effectiveness of the schemes used to reduce background for both gross-counting and spectroscopic detectors. These scenarios are modeled via the Monte-Carlo N-Particle (MCNP) code package [1] for ease of altering shielding configurations, as well as enacting unusual scenarios prior to prototyping in the field. The objective of this paper is to provide results representative of real modifications that could enhance the sensitivity of this, as well as the next generation of radiation detectors. The models used in this work were designed to provide the most general

  12. Correcting length-frequency distributions for imperfect detection

    USGS Publications Warehouse

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentages of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data
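
    The correction described above partitions a mark-recapture abundance estimate across length bins. As a much-simplified stand-in, the sketch below applies a Horvitz-Thompson-style adjustment, dividing each bin's observed count by an assumed length-dependent capture probability; the probabilities and counts are invented, whereas the study estimates capture probabilities with a Huggins model.

        # Simplified stand-in for the correction described above: divide the observed count in
        # each length bin by an (assumed) capture probability for that bin, Horvitz-Thompson style.
        # The capture probabilities here are made up; in the study they come from a Huggins model.
        observed = {"100-110 mm": 42, "150-160 mm": 88, "200-210 mm": 60}
        capture_prob = {"100-110 mm": 0.12, "150-160 mm": 0.35, "200-210 mm": 0.55}

        for k in observed:
            n_hat = observed[k] / capture_prob[k]
            bias = 100 * (1 - observed[k] / n_hat)   # percent of fish in this bin that were missed
            print(f"{k}: observed {observed[k]}, corrected {n_hat:.0f}, negative bias {bias:.0f}%")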

  13. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... changes. This correcting amendment is necessary to correct the statutory authority that is cited in one of... is necessary to correct the statutory authority that is cited in the authority citation for part 171...

  14. Counting Penguins.

    ERIC Educational Resources Information Center

    Perry, Mike; Kader, Gary

    1998-01-01

    Presents an activity on the simplification of penguin counting by employing the basic ideas and principles of sampling to teach students to understand and recognize its role in statistical claims. Emphasizes estimation, data analysis and interpretation, and central limit theorem. Includes a list of items for classroom discussion. (ASK)

  15. Regression Models For Multivariate Count Data.

    PubMed

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2017-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data.

  16. A Calibration of NICMOS Camera 2 for Low Count Rates

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Amanullah, R.; Barbary, K.; Dawson, K. S.; Deustua, S.; Faccioli, L.; Fadeyev, V.; Fakhouri, H. K.; Fruchter, A. S.; Gladders, M. D.; de Jong, R. S.; Koekemoer, A.; Krechmer, E.; Lidman, C.; Meyers, J.; Nordin, J.; Perlmutter, S.; Ripoche, P.; Schlegel, D. J.; Spadafora, A.; Suzuki, N.

    2015-05-01

    NICMOS 2 observations are crucial for constraining distances to most of the existing sample of z > 1 SNe Ia. Unlike conventional calibration programs, these observations involve long exposure times and low count rates. Reciprocity failure is known to exist in HgCdTe devices and a correction for this effect has already been implemented for high and medium count rates. However, observations at faint count rates rely on extrapolations. Here instead, we provide a new zero-point calibration directly applicable to faint sources. This is obtained via inter-calibration of NIC2 F110W/F160W with the Wide Field Camera 3 (WFC3) in the low count-rate regime using z ∼ 1 elliptical galaxies as tertiary calibrators. These objects have relatively simple near-IR spectral energy distributions, uniform colors, and their extended nature gives a superior signal-to-noise ratio at the same count rate than would stars. The use of extended objects also allows greater tolerances on point-spread function profiles. We find space telescope magnitude zero points (after the installation of the NICMOS cooling system, NCS) of 25.296 ± 0.022 for F110W and 25.803 ± 0.023 for F160W, both in agreement with the calibration extrapolated from count rates ≳1000 times larger (25.262 and 25.799). Before the installation of the NCS, we find 24.843 ± 0.025 for F110W and 25.498 ± 0.021 for F160W, also in agreement with the high-count-rate calibration (24.815 and 25.470). We also check the standard bandpasses of WFC3 and NICMOS 2 using a range of stars and galaxies at different colors and find mild tension for WFC3, limiting the accuracy of the zero points. To avoid human bias, our cross-calibration was “blinded” in that the fitted zero-point differences were hidden until the analysis was finalized. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555, under programs
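
    The quoted zero points convert measured count rates to ST magnitudes through the usual relation m = ZP − 2.5 log10(count rate). A minimal sketch, using the post-NCS F160W zero point quoted above and a placeholder count rate, is shown below.

        # Minimal sketch: convert a NICMOS 2 count rate into an ST magnitude using one of the
        # zero points quoted above (post-NCS F160W); the count rate value is a placeholder.
        import math

        def st_magnitude(count_rate, zero_point):
            return zero_point - 2.5 * math.log10(count_rate)

        print(st_magnitude(count_rate=0.85, zero_point=25.803))   # count rate in e-/s, illustrative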

  17. Mapping of Bird Distributions from Point Count Surveys

    Treesearch

    John R. Sauer; Grey W. Pendleton; Sandra Orsillo

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes...

  18. A general dead-time correction method based on live-time stamping. Application to the measurement of short-lived radionuclides.

    PubMed

    Chauvenet, B; Bobin, C; Bouchard, J

    2017-12-01

    Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
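
    In the conventional live-timed scheme that the paper generalizes, the corrected rate is simply the number of recorded counts divided by the accumulated live time. The sketch below illustrates that baseline calculation from stored live-interval time stamps; the interval and count values are placeholders, and the paper's full non-homogeneous-Poisson formulae are not reproduced here.

        # Simple live-time estimate of a counting rate from stored live-interval time stamps.
        # This is the conventional correction, not the paper's general non-homogeneous formulae.
        live_intervals = [(0.000, 0.940), (1.000, 1.935), (2.000, 2.950)]  # (start, stop) in s
        n_counts = 2731                                                    # counts recorded while live

        live_time = sum(stop - start for start, stop in live_intervals)
        rate = n_counts / live_time
        rate_sigma = n_counts ** 0.5 / live_time   # Poisson estimate of the uncertainty
        print(f"corrected rate = {rate:.1f} +/- {rate_sigma:.1f} counts/s")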

  19. Photon-Counting Multikilohertz Microlaser Altimeters for Airborne and Spaceborne Topographic Measurements

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; Smith, David E. (Technical Monitor)

    2000-01-01

    We consider the optimum design of photon-counting microlaser altimeters operating from airborne and spaceborne platforms under both day and night conditions. Extremely compact Q-switched microlaser transmitters produce trains of low energy pulses at multi-kHz rates and can easily generate subnanosecond pulse-widths for precise ranging. To guide the design, we have modeled the solar noise background and developed simple algorithms, based on Post-Detection Poisson Filtering (PDPF), to optimally extract the weak altimeter signal from a high noise background during daytime operations. Practical technology issues, such as detector and/or receiver dead times, have also been considered in the analysis. We describe an airborne prototype, being developed under NASA's Instrument Incubator Program, which is designed to operate at a 10 kHz rate from aircraft cruise altitudes up to 12 km with laser pulse energies on the order of a few microjoules. We also analyze a compact and power efficient system designed to operate from Mars orbit at an altitude of 300 km and sample the Martian surface at rates up to 4.3 kHz using a 1 watt laser transmitter and an 18 cm telescope. This yields a Power-Aperture Product of 0.24 W·m², corresponding to a value almost 4 times smaller than the Mars Orbiter Laser Altimeter (0.88 W·m²), yet the sampling rate is roughly 400 times greater (4 kHz vs. 10 Hz). Relative to conventional high power laser altimeters, advantages of photon-counting laser altimeters include: (1) a more efficient use of available laser photons providing up to two orders of magnitude greater surface sampling rates for a given laser power-telescope aperture product; (2) a simultaneous two order of magnitude reduction in the volume, cost and weight of the telescope system; (3) the unique ability to spatially resolve the source of the surface return in a photon counting mode through the use of pixellated or imaging detectors; and (4) improved vertical and
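
    The Post-Detection Poisson Filtering mentioned above can be caricatured as a per-bin threshold chosen so that pure solar-background noise rarely exceeds it. The sketch below uses SciPy's Poisson quantile for such a threshold; the background rate, false-alarm level, and bin counts are illustrative assumptions, not the instrument's actual algorithm.

        # Toy sketch of Poisson threshold filtering: keep range bins whose counts are unlikely
        # to come from the solar-background rate alone. Rates and bin counts are illustrative.
        from scipy.stats import poisson

        background_per_bin = 2.0          # expected noise counts per range bin per frame
        false_alarm = 1e-3                # allowed per-bin false-alarm probability
        threshold = poisson.ppf(1 - false_alarm, background_per_bin)   # smallest count to reject noise

        bin_counts = [1, 3, 2, 9, 2, 1, 0, 8, 2]
        signal_bins = [i for i, c in enumerate(bin_counts) if c > threshold]
        print(f"threshold = {threshold:.0f} counts; candidate signal bins: {signal_bins}")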

  20. Who Counts?

    ERIC Educational Resources Information Center

    Brass, Jory

    2017-01-01

    This article recovers a 1972 essay from James Moffett entitled "Who Counts?" that the National Council of Teachers of English commissioned at the onset of US standards and accountability reforms. The essay historicises NCTE's positions on teacher accountability by comparing its recent positions on teacher evaluation and the Common Core…

  1. A genome wide association study of alcohol dependence symptom counts in extended pedigrees identifies C15orf53

    PubMed Central

    Wang, Jen-Chyong; Foroud, Tatiana; Hinrichs, Anthony L; Le, Nhung XH; Bertelsen, Sarah; Budde, John P; Harari, Oscar; Koller, Daniel L; Wetherill, Leah; Agrawal, Arpana; Almasy, Laura; Brooks, Andrew I; Bucholz, Kathleen; Dick, Danielle; Hesselbrock, Victor; Johnson, Eric O; Kang, Sun; Kapoor, Manav; Kramer, John; Kuperman, Samuel; Madden, Pamela AF; Manz, Niklas; Martin, Nicholas G; McClintick, Jeanette N; Montgomery, Grant W; Nurnberger, John I; Rangaswamy, Madhavi; Rice, John; Schuckit, Marc; Tischfield, Jay A; Whitfield, John B; Xuei, Xiaoling; Porjesz, Bernice; Heath, Andrew C; Edenberg, Howard J; Bierut, Laura J; Goate, Alison M

    2013-01-01

    Several studies have identified genes associated with alcohol use disorders, but the variation in each of these genes explains only a small portion of the genetic vulnerability. The goal of the present study was to perform a genome-wide association study (GWAS) in extended families from the Collaborative Study on the Genetics of Alcoholism (COGA) to identify novel genes affecting risk for alcohol dependence. To maximize the power of the extended family design we used a quantitative endophenotype, measured in all individuals: number of alcohol dependence symptoms endorsed (symptom count). Secondary analyses were performed to determine if the single nucleotide polymorphisms (SNPs) associated with symptom count were also associated with the dichotomous phenotype, DSM-IV alcohol dependence. This family-based GWAS identified SNPs in C15orf53 that are strongly associated with DSM-IV alcohol dependence symptom counts (p = 4.5×10⁻⁸, inflation-corrected p = 9.4×10⁻⁷). Results with DSM-IV alcohol dependence in the regions of interest support our findings with symptom count, though the associations were less significant. Attempted replications of the most promising association results were conducted in two independent samples: non-overlapping subjects from the Study of Addiction: Genes and Environment (SAGE) and the Australian twin-family study of alcohol use disorders (OZALC). Nominal association of C15orf53 with symptom count was observed in SAGE. The variant that showed strongest association with symptom count, rs12912251 and its highly correlated variants (D′ = 1, r² ≥ 0.95), has previously been associated with risk for bipolar disorder. PMID:23089632
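
    The "inflation corrected" p value quoted above suggests a genomic-control-style adjustment. One common form of that correction, shown below on simulated statistics, divides each 1-df association chi-square by the inflation factor λ; this generic sketch is not necessarily the authors' exact procedure.

        # Generic genomic-control sketch: divide 1-df chi-square statistics by lambda,
        # the ratio of their observed median to the null median. Simulated data only.
        import numpy as np
        from scipy.stats import chi2

        rng = np.random.default_rng(1)
        chisq = rng.chisquare(1, 100_000) * 1.05        # mildly inflated test statistics

        lam = np.median(chisq) / chi2.ppf(0.5, df=1)    # null median of chi2(1) is about 0.455
        p_corrected = chi2.sf(chisq / lam, df=1)
        print(f"lambda = {lam:.3f}, smallest corrected p = {p_corrected.min():.1e}")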

  2. On the importance of controlling for effort in analysis of count survey data: Modeling population change from Christmas Bird Count data

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.; Helbig, Andreas J.; Flade, Martin

    1999-01-01

    Count survey data are commonly used for estimating temporal and spatial patterns of population change. Since count surveys are not censuses, counts can be influenced by 'nuisance factors' related to the probability of detecting animals but unrelated to the actual population size. The effects of systematic changes in these factors can be confounded with patterns of population change. Thus, valid analysis of count survey data requires the identification of nuisance factors and flexible models for their effects. We illustrate using data from the Christmas Bird Count (CBC), a midwinter survey of bird populations in North America. CBC survey effort has substantially increased in recent years, suggesting that unadjusted counts may overstate population growth (or understate declines). We describe a flexible family of models for the effect of effort that includes models in which increasing effort leads to diminishing returns in terms of the number of birds counted.
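
    One simple member of the family of effort effects described above is a power law with exponent below one, which gives diminishing returns as effort grows. The toy adjustment below rescales counts to a reference effort under that assumption; the exponent, counts, and effort values are invented for illustration and are not the paper's fitted model.

        # Toy illustration of an effort adjustment with diminishing returns: expected count is
        # taken proportional to effort**b with b < 1 (one simple choice, not the paper's full model).
        def effort_adjusted_index(count, effort, b=0.6, reference_effort=10.0):
            return count * (reference_effort / effort) ** b

        # Same underlying population, but survey effort (e.g. party-hours) doubled between years:
        print(effort_adjusted_index(count=120, effort=10.0))   # earlier year
        print(effort_adjusted_index(count=180, effort=20.0))   # later year, more effort, similar index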

  3. Improvement of Aerosol Optical Depth Retrieval over Hong Kong from a Geostationary Meteorological Satellite Using Critical Reflectance with Background Optical Depth Correction

    NASA Technical Reports Server (NTRS)

    Kim, Mijin; Kim, Jhoon; Wong, Man Sing; Yoon, Jongmin; Lee, Jaehwa; Wu, Dong L.; Chan, P.W.; Nichol, Janet E.; Chung, Chu-Yong; Ou, Mi-Lim

    2014-01-01

    Despite continuous efforts to retrieve aerosol optical depth (AOD) using a conventional 5-channel meteorological imager in geostationary orbit, the accuracy in urban areas has been poorer than other areas primarily due to complex urban surface properties and mixed aerosol types from different emission sources. The two largest error sources in aerosol retrieval have been aerosol type selection and surface reflectance. In selecting the aerosol type from a single visible channel, the season-dependent aerosol optical properties were adopted from long-term measurements of Aerosol Robotic Network (AERONET) sun-photometers. With the aerosol optical properties obtained from the AERONET inversion data, look-up tables were calculated by using a radiative transfer code: the Second Simulation of the Satellite Signal in the Solar Spectrum (6S). Surface reflectance was estimated using the clear sky composite method, a widely used technique for geostationary retrievals. Over East Asia, the AOD retrieved from the Meteorological Imager showed good agreement, although the values were affected by cloud contamination errors. However, the conventional retrieval of the AOD over Hong Kong was largely underestimated due to the lack of information on the aerosol type and surface properties. To detect spatial and temporal variation of aerosol type over the area, the critical reflectance method, a technique to retrieve single scattering albedo (SSA), was applied. Additionally, the background aerosol effect was corrected to improve the accuracy of the surface reflectance over Hong Kong. The AOD retrieved from a modified algorithm was compared to the collocated data measured by AERONET in Hong Kong. The comparison showed that the new aerosol type selection using the critical reflectance and the corrected surface reflectance significantly improved the accuracy of AODs in Hong Kong areas, with a correlation coefficient increase from 0.65 to 0.76 and a regression line change from tMI [basic algorithm] = 0

  4. Comparison of fluorescence rejection methods of baseline correction and shifted excitation Raman difference spectroscopy

    NASA Astrophysics Data System (ADS)

    Cai, Zhijian; Zou, Wenlong; Wu, Jianhong

    2017-10-01

    Raman spectroscopy has been used extensively in biochemical testing, explosive detection, and the analysis of food additives and environmental pollutants. However, fluorescence interference poses a serious problem for portable Raman spectrometers. Baseline correction and shifted-excitation Raman difference spectroscopy (SERDS) are currently the most widely used fluorescence-suppression methods. In this paper, we compared the performance of baseline correction and SERDS, both experimentally and in simulation. The comparison demonstrates that baseline correction can yield an acceptable fluorescence-free Raman spectrum if the original Raman signal has a good signal-to-noise ratio, but it cannot recover small Raman signals from a large noise background. With the SERDS method, Raman signals can be clearly extracted even when they are very weak compared with the fluorescence intensity and noise level, and the fluorescence background is completely rejected. The Raman spectrum recovered by SERDS has a good signal-to-noise ratio. Baseline correction is therefore better suited to bench-top Raman systems with good signal quality or signal-to-noise ratio, while SERDS is better suited to noisier devices, especially portable Raman spectrometers.
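
    The SERDS principle can be illustrated numerically: two spectra taken at slightly shifted excitation wavelengths share essentially the same fluorescence background, so their difference cancels the background while the shifted Raman peaks survive. The sketch below uses synthetic spectra and a crude cumulative-sum reconstruction; it is a toy model of the idea, not the processing used in the paper.

        # Toy SERDS sketch: the broad fluorescence background is common to both excitations
        # and cancels in the difference, while the (shifted) Raman peak survives.
        import numpy as np

        x = np.linspace(0, 2000, 2001)                      # Raman shift axis, cm^-1
        fluorescence = 5000 * np.exp(-((x - 1200) / 900) ** 2)
        raman = lambda shift: 80 * np.exp(-((x - 1000 - shift) / 8) ** 2)

        s1 = fluorescence + raman(0)
        s2 = fluorescence + raman(10)                       # excitation shifted by ~10 cm^-1
        difference = s1 - s2                                # fluorescence cancels here
        recon = np.cumsum(difference)                       # crude recovery of the band shape
        print(int(x[np.argmax(recon)]))                     # band recovered near 1000 cm^-1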

  5. Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Weller, J.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  6. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  7. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  8. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.

    PubMed

    Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran

    2016-05-01

    In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.
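
    A generic flavor of PCA-based adjustment for cell-type heterogeneity is to regress the leading components of the methylation matrix out of the data before association testing. The sketch below shows that generic step only; ReFACTor itself differs, first selecting a subset of informative sites and using a sparse variant of PCA.

        # Generic sketch of PCA-based adjustment in an EWAS-like setting: remove the top
        # principal components of the methylation matrix before testing associations.
        # This is NOT the ReFACTor algorithm itself, which first selects informative sites.
        import numpy as np

        def remove_top_components(methylation, k=5):
            """methylation: samples x probes matrix; returns the matrix with k PCs regressed out."""
            centered = methylation - methylation.mean(axis=0)
            u, s, vt = np.linalg.svd(centered, full_matrices=False)
            low_rank = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]   # variation captured by the top k PCs
            return centered - low_rank

        adjusted = remove_top_components(np.random.default_rng(0).random((50, 200)), k=5)
        print(adjusted.shape)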

  9. The Eosinophil Count Tends to Be Negatively Associated with Levels of Serum Glucose in Patients with Adrenal Cushing Syndrome

    PubMed Central

    Lee, Younghak; Kim, Hae Ri; Joung, Kyong Hye; Kang, Yea Eun; Lee, Ju Hee; Kim, Koon Soon; Kim, Hyun Jin; Ku, Bon Jeong; Shong, Minho

    2017-01-01

    Background: Cushing syndrome is characterized by glucose intolerance, cardiovascular disease, and an enhanced systemic inflammatory response caused by chronic exposure to excess cortisol. Eosinopenia is frequently observed in patients with adrenal Cushing syndrome, but the relationship between the eosinophil count in peripheral blood and indicators of glucose level in patients with adrenal Cushing syndrome has not been determined. Methods: A retrospective study was undertaken of the clinical and laboratory findings of 40 patients diagnosed with adrenal Cushing syndrome at Chungnam National University Hospital from January 2006 to December 2016. Clinical characteristics, complete blood cell counts with white blood cell differential, measures of their endocrine function, description of imaging studies, and pathologic findings were obtained from their medical records. Results: Eosinophil composition and count were restored by surgical treatment of all of the patients with adrenal Cushing disease. The eosinophil count was inversely correlated with serum and urine cortisol, glycated hemoglobin, and inflammatory markers in the patients with adrenal Cushing syndrome. Conclusion: Smaller eosinophil populations in patients with adrenal Cushing syndrome tend to be correlated with higher levels of blood sugar and glycated hemoglobin. This study suggests that peripheral blood eosinophil composition or count may be associated with serum glucose levels in patients with adrenal Cushing syndrome. PMID:28956365

  10. Improving the counting efficiency in time-correlated single photon counting experiments by dead-time optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peronio, P.; Acconcia, G.; Rech, I.

    Time-Correlated Single Photon Counting (TCSPC) has been long recognized as the most sensitive method for fluorescence lifetime measurements, but often requiring “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rate. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
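
    A back-of-the-envelope calculation shows why the quoted 12.4 ns dead time matters near the ~80 Mcps regime. The sketch below uses a simple non-paralyzable dead-time model, which is an assumption adopted only to show the scale of the losses; real TCSPC pile-up statistics are more involved.

        # Back-of-the-envelope, non-paralyzable dead-time model: recorded = true / (1 + true * tau).
        # A simplification of real TCSPC pile-up statistics, used only to show the scale of losses.
        tau = 12.4e-9                      # dead time quoted above, in seconds
        for true_rate in (10e6, 40e6, 80e6):
            recorded = true_rate / (1 + true_rate * tau)
            loss = 100 * (1 - recorded / true_rate)
            print(f"true {true_rate/1e6:5.0f} Mcps -> recorded {recorded/1e6:5.1f} Mcps ({loss:.0f}% lost)")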

  11. A Vacuum-Aspirator for Counting Termites

    Treesearch

    Susan C. Jones; Joe K. Mauldin

    1983-01-01

    An aspirator-system powered by a vacuum cleaner is described for manually counting termites. It is significantly faster and termite survival is at least as high as when using a mouth-aspirator for counting large numbers of termites.

  12. Regression Models For Multivariate Count Data

    PubMed Central

    Zhang, Yiwen; Zhou, Hua; Zhou, Jin; Sun, Wei

    2016-01-01

    Data with multivariate count responses frequently occur in modern applications. The commonly used multinomial-logit model is limiting due to its restrictive mean-variance structure. For instance, analyzing count data from the recent RNA-seq technology by the multinomial-logit model leads to serious errors in hypothesis testing. The ubiquity of over-dispersion and complicated correlation structures among multivariate counts calls for more flexible regression models. In this article, we study some generalized linear models that incorporate various correlation structures among the counts. Current literature lacks a treatment of these models, partly due to the fact that they do not belong to the natural exponential family. We study the estimation, testing, and variable selection for these models in a unifying framework. The regression models are compared on both synthetic and real RNA-seq data. PMID:28348500

  13. Temporal differences in point counts of bottomland forest landbirds

    USGS Publications Warehouse

    Smith, W.P.; Twedt, D.J.

    1999-01-01

    We compared the number of avian species and individuals in morning and evening point counts during the breeding season and during winter in a bottomland hardwood forest in west-central Mississippi, USA. In both seasons, more species and individuals were recorded during morning counts than during evening counts. We also compared morning and evening detections for 18 species during the breeding season and 9 species during winter. Blue Jay (Cyanocitta cristata), Mourning Dove (Zenaida macroura), and Red-bellied Woodpecker (Melanerpes carolinus) were detected significantly more often in morning counts than in evening counts during the breeding season. Tufted Titmouse (Baeolophus bicolor) was recorded more often in morning counts than evening counts during the breeding season and during winter. No species was detected more often in evening counts. Thus, evening point counts of birds during either the breeding season or winter will likely underestimate species richness, overall avian abundance, and the abundance of some individual species in bottomland hardwood forests.

  14. Correcting ligands, metabolites, and pathways

    PubMed Central

    Ott, Martin A; Vriend, Gert

    2006-01-01

    Background: A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as data fields in their own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description: The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion: We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and visualization. It is freely available

  15. Mapping of bird distributions from point count surveys

    USGS Publications Warehouse

    Sauer, J.R.; Pendleton, G.W.; Orsillo, Sandra; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    Maps generated from bird survey data are used for a variety of scientific purposes, but little is known about their bias and precision. We review methods for preparing maps from point count data and appropriate sampling methods for maps based on point counts. Maps based on point counts can be affected by bias associated with incomplete counts, primarily due to changes in proportion counted as a function of observer or habitat differences. Large-scale surveys also generally suffer from regional and temporal variation in sampling intensity. A simulated surface is used to demonstrate sampling principles for maps.

  16. Improving photoelectron counting and particle identification in scintillation detectors with Bayesian techniques

    NASA Astrophysics Data System (ADS)

    Akashi-Ronquest, M.; Amaudruz, P.-A.; Batygov, M.; Beltran, B.; Bodmer, M.; Boulay, M. G.; Broerman, B.; Buck, B.; Butcher, A.; Cai, B.; Caldwell, T.; Chen, M.; Chen, Y.; Cleveland, B.; Coakley, K.; Dering, K.; Duncan, F. A.; Formaggio, J. A.; Gagnon, R.; Gastler, D.; Giuliani, F.; Gold, M.; Golovko, V. V.; Gorel, P.; Graham, K.; Grace, E.; Guerrero, N.; Guiseppe, V.; Hallin, A. L.; Harvey, P.; Hearns, C.; Henning, R.; Hime, A.; Hofgartner, J.; Jaditz, S.; Jillings, C. J.; Kachulis, C.; Kearns, E.; Kelsey, J.; Klein, J. R.; Kuźniak, M.; LaTorre, A.; Lawson, I.; Li, O.; Lidgard, J. J.; Liimatainen, P.; Linden, S.; McFarlane, K.; McKinsey, D. N.; MacMullin, S.; Mastbaum, A.; Mathew, R.; McDonald, A. B.; Mei, D.-M.; Monroe, J.; Muir, A.; Nantais, C.; Nicolics, K.; Nikkel, J. A.; Noble, T.; O'Dwyer, E.; Olsen, K.; Orebi Gann, G. D.; Ouellet, C.; Palladino, K.; Pasuthip, P.; Perumpilly, G.; Pollmann, T.; Rau, P.; Retière, F.; Rielage, K.; Schnee, R.; Seibert, S.; Skensved, P.; Sonley, T.; Vázquez-Jáuregui, E.; Veloce, L.; Walding, J.; Wang, B.; Wang, J.; Ward, M.; Zhang, C.

    2015-05-01

    Many current and future dark matter and neutrino detectors are designed to measure scintillation light with a large array of photomultiplier tubes (PMTs). The energy resolution and particle identification capabilities of these detectors depend in part on the ability to accurately identify individual photoelectrons in PMT waveforms despite large variability in pulse amplitudes and pulse pileup. We describe a Bayesian technique that can identify the times of individual photoelectrons in a sampled PMT waveform without deconvolution, even when pileup is present. To demonstrate the technique, we apply it to the general problem of particle identification in single-phase liquid argon dark matter detectors. Using the output of the Bayesian photoelectron counting algorithm described in this paper, we construct several test statistics for rejection of backgrounds for dark matter searches in argon. Compared to simpler methods based on either observed charge or peak finding, the photoelectron counting technique improves both energy resolution and particle identification of low energy events in calibration data from the DEAP-1 detector and simulation of the larger MiniCLEAN dark matter detector.

  17. Energy response calibration of photon-counting detectors using x-ray fluorescence: a feasibility study

    NASA Astrophysics Data System (ADS)

    Cho, H.-M.; Ding, H.; Ziemer, BP; Molloi, S.

    2014-12-01

    Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectrum with a 2.7 mm Al filter for a single-pixel cadmium telluride (CdTe) detector with a 3 × 3 mm² detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
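
    The calibration step itself amounts to mapping measured fluorescence peak positions (in detector channels) to the known emission energies of the fluorescent materials, for example with a linear fit. In the sketch below the peak channels are placeholders and the energies are merely illustrative K-line values, not the materials used in the study.

        # Sketch of the final calibration step: fit measured fluorescence peak channels against
        # known line energies. Channel values are placeholders; energies are illustrative K lines.
        import numpy as np

        known_kev = np.array([8.0, 17.5, 25.3, 32.2, 59.3])          # illustrative K-line energies (keV)
        peak_channel = np.array([41.0, 90.0, 130.0, 165.0, 304.0])   # measured peak positions (made up)

        gain, offset = np.polyfit(peak_channel, known_kev, deg=1)
        print(f"E(keV) ~= {gain:.4f} * channel + {offset:.3f}")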

  18. Overestimating Fish Counts by Non-Instantaneous Visual Censuses: Consequences for Population and Community Descriptions

    PubMed Central

    Ward-Paige, Christine; Mills Flemming, Joanna; Lotze, Heike K.

    2010-01-01

    Background Increasingly, underwater visual censuses (UVC) are used to assess fish populations. Several studies have demonstrated the effectiveness of protected areas for increasing fish abundance or provided insight into the natural abundance and structure of reef fish communities in remote areas. Recently, high apex predator densities (>100,000 individuals·km−2) and biomasses (>4 tonnes·ha−1) have been reported for some remote islands suggesting the occurrence of inverted trophic biomass pyramids. However, few studies have critically evaluated the methods used for sampling conspicuous and highly mobile fish such as sharks. Ideally, UVC are done instantaneously; however, researchers often count animals that enter the survey area after the survey has started, thus performing non-instantaneous UVC. Methodology/Principal Findings We developed a simulation model to evaluate counts obtained by divers deploying non-instantaneous belt-transect and stationary-point-count techniques. We assessed how fish speed and survey procedure (visibility, diver speed, survey time and dimensions) affect observed fish counts. Results indicate that the bias caused by fish speed alone is large, while survey procedures had varying effects. Because the fastest fishes tend to be the largest, the bias would have significant implications on their biomass contribution. Therefore, caution is needed when describing abundance, biomass, and community structure based on non-instantaneous UVC, especially for highly mobile species such as sharks. Conclusions/Significance Based on our results, we urge that published literature state explicitly whether instantaneous counts were made and that survey procedures be accounted for when non-instantaneous counts are used. Using published density and biomass values of communities that include sharks we explore the effect of this bias and suggest that further investigation may be needed to determine pristine shark abundances and the existence of inverted trophic biomass pyramids.
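
    The core of the reported bias can be reproduced with a few lines of simulation: fish that move during a non-instantaneous count can enter the survey area after the start and be counted, inflating the estimate relative to an instantaneous snapshot. The sketch below uses a stationary point count with arbitrary parameter values; it is not the published simulation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_point_count(fish_speed, density=0.02, radius=7.5,
                         survey_time=60.0, arena_half=200.0, n_steps=60):
    """Instantaneous vs non-instantaneous stationary point counts.

    Fish are scattered uniformly over a square arena (density in fish/m^2) and
    swim in straight lines at fish_speed (m/s). The non-instantaneous count
    includes any fish that is ever inside the survey circle during the survey.
    All parameter values are illustrative, not those of the published model.
    """
    n_fish = rng.poisson(density * (2 * arena_half) ** 2)
    pos = rng.uniform(-arena_half, arena_half, size=(n_fish, 2))
    heading = rng.uniform(0.0, 2.0 * np.pi, n_fish)
    vel = fish_speed * np.column_stack([np.cos(heading), np.sin(heading)])

    inside_at_start = np.hypot(pos[:, 0], pos[:, 1]) <= radius
    ever_inside = inside_at_start.copy()
    dt = survey_time / n_steps
    for _ in range(n_steps):
        pos += vel * dt
        ever_inside |= np.hypot(pos[:, 0], pos[:, 1]) <= radius
    return int(inside_at_start.sum()), int(ever_inside.sum())

for speed in (0.0, 0.5, 2.0):  # m/s; faster fish inflate the count more
    print(speed, simulate_point_count(speed))
```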

  19. Implementation and Initial Testing of Advanced Processing and Analysis Algorithms for Correlated Neutron Counting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santi, Peter Angelo; Cutler, Theresa Elizabeth; Favalli, Andrea

    In order to improve the accuracy and capabilities of neutron multiplicity counting, additional quantifiable information is needed to address the assumptions that are present in the point model. Extracting and utilizing higher order moments (Quads and Pents) from the neutron pulse train represents the most direct way of extracting additional information from the measurement data to allow for an improved determination of the physical properties of the item of interest. The extraction of higher order moments from a neutron pulse train required the development of advanced dead time correction algorithms which could correct for dead time effects in all of the measurement moments in a self-consistent manner. In addition, advanced analysis algorithms have been developed to address specific assumptions that are made within the current analysis model, namely that all neutrons are created at a single point within the item of interest, and that all neutrons that are produced within an item are created with the same energy distribution. This report will discuss the current status of implementation and initial testing of the advanced dead time correction and analysis algorithms that have been developed in an attempt to utilize higher order moments to improve the capabilities of correlated neutron measurement techniques.
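
    For context, the "higher order moments" referred to above are the factorial moments of the neutron multiplicity distribution recorded in counting gates. A minimal sketch of extracting singles through pents from a hypothetical gate histogram (no dead-time correction is applied):

```python
from math import comb

import numpy as np

def factorial_moments(gate_histogram, max_order=5):
    """Reduced factorial moments of a neutron multiplicity histogram.

    gate_histogram[n] is the number of counting gates containing n detected
    neutrons; orders 1..5 correspond to singles, doubles, triples, quads and
    pents. This sketch only extracts the moments; it applies no dead-time
    correction.
    """
    counts = np.asarray(gate_histogram, dtype=float)
    p = counts / counts.sum()
    return [sum(comb(n, k) * p[n] for n in range(len(p)))
            for k in range(1, max_order + 1)]

# Hypothetical histogram: gates containing 0, 1, 2, ... detected neutrons.
print(factorial_moments([9500, 380, 90, 22, 6, 2]))
```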

  20. Evaluation of the traffic count program.

    DOT National Transportation Integrated Search

    1978-01-01

    The purpose of this study was to determine the Department's needs for traffic count data, to relate them to an evaluation of the traffic count programs and procedures, to identify problems and deficiencies with data requirements, and to seek means of...

  1. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    MedlinePlus

    ... Seasonal allergic rhinitis, known as hay fever, is ... hay fever symptoms, it is important to monitor pollen counts so you can limit your exposure on days ...

  2. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count number registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency substantially undermines the benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
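
    The dynamic-range limitation of single-event registration can be seen from Poisson statistics: a pixel that can record at most one photon per frame registers on average 1 − exp(−λ) counts for a true mean of λ photons per frame and therefore saturates, whereas an idealized multiple-event pixel keeps tracking λ. A short numerical illustration (not code from the sensor itself):

```python
import numpy as np

# Expected counts registered per pixel per frame for a true mean of lam
# photons/frame. Purely illustrative Poisson arithmetic.
lam = np.array([0.01, 0.1, 1.0, 10.0, 100.0])
single_event = 1.0 - np.exp(-lam)   # at most one registered count per frame
multi_event = lam                   # idealized multiple-event registration
for true_mean, se, me in zip(lam, single_event, multi_event):
    print(f"{true_mean:8.2f} photons/frame -> single-event {se:6.3f}, multi-event {me:8.2f}")
```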

  3. CD4+ Cell Count and HIV Load as Predictors of Size of Anal Warts Over Time in HIV-Infected Women

    PubMed Central

    Luu, Hung N.; Amirian, E. Susan; Chan, Wenyaw; Beasley, R. Palmer; Piller, Linda B.

    2012-01-01

    Background. Little is known about the associations between CD4+ cell counts, human immunodeficiency virus (HIV) load, and human papillomavirus “low-risk” types in noncancerous clinical outcomes. This study examined whether CD4+ count and HIV load predict the size of the largest anal warts in 976 HIV-infected women in an ongoing cohort. Methods. A linear mixed model was used to determine the association between size of anal wart and CD4+ count and HIV load. Results. The incidence of anal warts was 4.15 cases per 100 person-years (95% confidence interval [CI], 3.83–4.77) and 1.30 cases per 100 person-years (95% CI, 1.00–1.58) in HIV-infected and HIV-uninfected women, respectively. There appeared to be an inverse association between size of the largest anal warts and CD4+ count at baseline; however, this was not statistically significant. There was no association between size of the largest anal warts and CD4+ count or HIV load over time. Conclusions. There was no evidence for an association between size of the largest anal warts and CD4+ count or HIV load over time. Further exploration on the role of immune response on the development of anal warts is warranted in a larger study. PMID:22246682
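
    For readers unfamiliar with the analysis, a linear mixed model with a random intercept per participant is a standard way to relate a repeated outcome (wart size) to time-varying covariates (CD4+ count, HIV load). The sketch below fits such a model to synthetic data with statsmodels; the variable names, effect sizes, and data are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_women, n_visits = 50, 4

# Synthetic repeated-measures data standing in for the cohort: per-visit wart
# size, CD4+ count and log HIV load for each woman.
df = pd.DataFrame({
    "id": np.repeat(np.arange(n_women), n_visits),
    "cd4": rng.normal(500, 150, n_women * n_visits),
    "log_viral_load": rng.normal(3.0, 1.0, n_women * n_visits),
})
df["wart_size"] = 5.0 - 0.002 * df["cd4"] + rng.normal(0, 1, len(df))

# Linear mixed model with a random intercept per woman.
result = smf.mixedlm("wart_size ~ cd4 + log_viral_load", df, groups=df["id"]).fit()
print(result.summary())
```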

  4. Characterizing energy dependence and count rate performance of a dual scintillator fiber-optic detector for computed tomography.

    PubMed

    Hoerner, Matthew R; Stepusin, Elliott J; Hyer, Daniel E; Hintenlang, David E

    2015-03-01

    reference air kerma. Each detector exhibited counting losses of 5% when irradiated at a dose rate of 26.3 mGy/s (gadolinium oxysulfide) and 324.3 mGy/s (plastic). The dead time of the gadolinium oxysulfide detector was determined to be 48 ns, while the dead time of the plastic scintillating detector could not be calculated accurately due to poor counting statistics from low detected count rates. Noticeable depth/energy dependence was observed for the plastic scintillator for depths greater than 16 cm of acrylic that was not present for measurements using the gadolinium oxysulfide scintillator, leading us to believe that quenching may play a larger role in the depth dependence of the plastic scintillator than the incident x-ray energy spectrum. When properly corrected for dead time effects, the energy response of the gadolinium oxysulfide scintillator is consistent with the plastic scintillator. The integrated dual-detector method was superior to either detector alone, as the depth-dependent dose measurement was correctable to within 8% between 100 and 135 kV. The dual scintillator fiber-optic detector accommodates a methodology for energy-dependent corrections of the plastic scintillator, improving the overall accuracy of the dosimeter across the range of diagnostic energies.
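
    For reference, the 5% counting loss quoted alongside a 48 ns dead time is roughly what the standard non-paralyzable dead-time model predicts near 10⁶ counts/s; the abstract does not state which dead-time model was used, so the choice below is an assumption.

```python
def deadtime_correct(observed_rate_hz, dead_time_s):
    """Non-paralyzable dead-time correction: true = observed / (1 - observed * tau)."""
    loss = observed_rate_hz * dead_time_s
    if loss >= 1.0:
        raise ValueError("observed rate inconsistent with this dead-time model")
    return observed_rate_hz / (1.0 - loss)

# A 48 ns dead time gives roughly a 5% correction near 1e6 counts/s:
observed = 1.0e6
print(deadtime_correct(observed, 48e-9) / observed)  # ~1.05
```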

  5. Should the Standard Count Be Excluded from Neutron Probe Calibration?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z. Fred

    About six decades after its introduction, the neutron probe remains one of the most accurate methods for indirect measurement of soil moisture content. Traditionally, the calibration of a neutron probe involves the ratio of the neutron count in the soil to a standard count, which is the neutron count in a fixed environment such as the probe shield or a specially designed calibration tank. The drawback of this count-ratio-based calibration is that the error in the standard count is carried through to all the measurements. An alternative calibration is to use the neutron counts only, not the ratio, with proper correction for radioactive decay and counting time. To evaluate both approaches, the shield counts of a neutron probe used for three decades were analyzed. The results show that the surrounding conditions have a substantial effect on the standard count. The error in the standard count also impacts the calculation of water storage and could indicate false consistency among replicates. The analysis of the shield counts indicates negligible aging effect of the instrument over a period of 26 years. It is concluded that, by excluding the standard count, the use of the count-based calibration is appropriate and sometimes even better than ratio-based calibration. The count-based calibration is especially useful for historical data when the standard count was questionable or absent.
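
    The count-based calibration described above requires normalizing raw counts for counting time and source decay rather than dividing by a standard count. A minimal sketch, assuming an americium-beryllium source (a common choice for neutron probes) so that the 432.2-year half-life of 241Am applies; substitute the half-life of the actual source.

```python
import math

def normalized_count_rate(raw_count, count_time_s, elapsed_years,
                          half_life_years=432.2):
    """Count rate corrected for counting time and source decay.

    This is the normalization needed for count-based (ratio-free) calibration.
    The 432.2-year 241Am half-life is an assumption about the source type.
    """
    decay_factor = math.exp(-math.log(2.0) * elapsed_years / half_life_years)
    return raw_count / count_time_s / decay_factor

# 16 s counts taken 26 years apart become directly comparable:
print(normalized_count_rate(12000, 16, 0.0), normalized_count_rate(11500, 16, 26.0))
```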

  6. Blindness to background: an inbuilt bias for visual objects.

    PubMed

    O'Hanlon, Catherine G; Read, Jenny C A

    2017-09-01

    Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  7. Optimization of high count rate event counting detector with Microchannel Plates and quad Timepix readout

    NASA Astrophysics Data System (ADS)

    Tremsin, A. S.; Vallerga, J. V.; McPhate, J. B.; Siegmund, O. H. W.

    2015-07-01

    Many high resolution event counting devices process one event at a time and cannot register simultaneous events. In this article a frame-based readout event counting detector consisting of a pair of Microchannel Plates and a quad Timepix readout is described. More than 10⁴ simultaneous events can be detected with a spatial resolution of 55 μm, while >10³ simultaneous events can be detected with <10 μm spatial resolution when event centroiding is implemented. The fast readout electronics is capable of processing >1200 frames/sec, while the global count rate of the detector can exceed 5×10⁸ particles/s when no timing information on every particle is required. For the first generation Timepix readout, the timing resolution is limited by the Timepix clock to 10-20 ns. Optimization of the MCP gain, rear field voltage and Timepix threshold levels is crucial for the device performance and is the main subject of this article. These devices can be very attractive for applications where photon/electron/ion/neutron counting with high spatial and temporal resolution is required, such as energy resolved neutron imaging, Time of Flight experiments in lidar applications, experiments on photoelectron spectroscopy and many others.
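
    Event centroiding, mentioned above as the route to sub-pixel (<10 μm) resolution, amounts to taking the charge-weighted centroid of the pixel cluster produced by each MCP event and scaling by the 55 μm pixel pitch. A minimal sketch with a hypothetical cluster, not the detector's actual event-processing code:

```python
import numpy as np

PIXEL_PITCH_UM = 55.0  # Timepix pixel pitch

def centroid_um(cluster):
    """Charge-weighted centroid of a pixel cluster, in micrometres.

    cluster: 2-D array of per-pixel charge (or time-over-threshold) values for
    one event; the centroid places the event between pixels.
    """
    cluster = np.asarray(cluster, dtype=float)
    rows, cols = np.indices(cluster.shape)
    total = cluster.sum()
    return (np.array([(rows * cluster).sum(), (cols * cluster).sum()]) / total
            * PIXEL_PITCH_UM)

# A small hypothetical charge cluster from one MCP event:
print(centroid_um([[0, 2, 1],
                   [3, 9, 4],
                   [1, 2, 0]]))  # -> [55.0, 57.5]
```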

  8. Improving gross count gamma-ray logging in uranium mining with the NGRS probe

    NASA Astrophysics Data System (ADS)

    Carasco, C.; Pérot, B.; Ma, J.-L.; Toubon, H.; Dubille-Auchère, A.

    2018-01-01

    AREVA Mines and the Nuclear Measurement Laboratory of CEA Cadarache are collaborating to improve the sensitivity and precision of uranium concentration measurement by means of gamma ray logging. The determination of uranium concentration in boreholes is performed with the Natural Gamma Ray Sonde (NGRS) based on a NaI(Tl) scintillation detector. The total gamma count rate is converted into uranium concentration using a calibration coefficient measured in concrete blocks with known uranium concentration in the AREVA Mines calibration facility located in Bessines, France. Until now, to take into account gamma attenuation in a variety of borehole diameters, tubing materials, diameters and thicknesses, filling fluid densities and compositions, a semi-empirical formula was used to correct the calibration coefficient measured in the Bessines facility. In this work, we propose to use Monte Carlo simulations to improve gamma attenuation corrections. To this purpose, the NGRS probe and the calibration measurements in the standard concrete blocks have been modeled with the MCNP computer code. The calibration coefficient determined by simulation, 5.3 s⁻¹·ppmU⁻¹ ± 10%, is in good agreement with the one measured in Bessines, 5.2 s⁻¹·ppmU⁻¹. Based on the validated MCNP model, several parametric studies have been performed. For instance, the rock density and chemical composition proved to have a limited impact on the calibration coefficient. However, gamma self-absorption in uranium leads to a nonlinear relationship between count rate and uranium concentration beyond approximately 1% of uranium weight fraction, the underestimation of the uranium content reaching more than a factor of 2.5 for a 50% uranium weight fraction. Next steps will concern parametric studies with different tubing materials, diameters and thicknesses, as well as different borehole filling fluids representative of real measurement conditions.
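
    At low grades the conversion from count rate to uranium concentration is a simple division by the calibration coefficient; the nonlinearity above roughly 1 wt% U is why the authors flag self-absorption. A sketch using the simulated coefficient quoted above:

```python
def uranium_ppm(count_rate_cps, calibration_coeff=5.3):
    """Convert an NGRS total count rate to uranium concentration (ppm U).

    Uses the linear relation ppmU = count_rate / k with the simulated
    calibration coefficient k = 5.3 counts/s per ppm U quoted above. Valid
    only at low grades: beyond roughly 1 wt% U, self-absorption makes the
    relation nonlinear and this simple division underestimates the grade.
    """
    return count_rate_cps / calibration_coeff

print(uranium_ppm(2650.0))  # -> 500.0 ppm U
```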

  9. Blood Count Tests

    MedlinePlus

    ... white blood cells (WBC), and platelets. Blood count tests measure the number and types of cells in ... helps doctors check on your overall health. The tests can also help to diagnose diseases and conditions ...

  10. Adjusting MtDNA Quantification in Whole Blood for Peripheral Blood Platelet and Leukocyte Counts.

    PubMed

    Hurtado-Roca, Yamilee; Ledesma, Marta; Gonzalez-Lazaro, Monica; Moreno-Loshuertos, Raquel; Fernandez-Silva, Patricio; Enriquez, Jose Antonio; Laclaustra, Martin

    2016-01-01

    Alterations of mitochondrial DNA copy number (mtDNAcn) in the blood (mitochondrial to nuclear DNA ratio) appear associated with several systemic diseases, including primary mitochondrial disorders, carcinogenesis, and hematologic diseases. Measuring mtDNAcn in DNA extracted from whole blood (WB) instead of from peripheral blood mononuclear cells or buffy coat may yield different results due to mitochondrial DNA present in platelets. The aim of this work is to quantify the contribution of platelets to mtDNAcn in whole blood [mtDNAcn(WB)] and to propose a correction formula to estimate leukocytes' mtDNAcn [mtDNAcn(L)] from mtDNAcn(WB). Blood samples from 10 healthy adults were combined with platelet-enriched plasma and saline solution to produce artificial blood preparations. Aliquots of each sample were combined with five different platelet concentrations. In 46 of these blood preparations, mtDNAcn was measured by qPCR. MtDNAcn(WB) increased 1.07 (95% CI 0.86, 1.29; p<0.001) per 1000 platelets present in the preparation. We proved that leukocyte count should also be taken into account, as mtDNAcn(WB) was inversely associated with leukocyte count; it increased 1.10 (95% CI 0.95, 1.25; p<0.001) per unit increase of the ratio between platelet and leukocyte counts. If hematological measurements are available, subtracting 1.10 times the platelet/leukocyte ratio from mtDNAcn(WB) may serve as an estimate of mtDNAcn(L). Both platelet and leukocyte counts in the sample are important sources of variation when comparing mtDNAcn among groups of patients if mtDNAcn is measured in DNA extracted from whole blood. Not taking the platelet/leukocyte ratio into account in whole blood measurements may lead to overestimation and misclassification if interpreted as leukocytes' mtDNAcn.
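
    The proposed correction is a one-line formula. A sketch, assuming the platelet and leukocyte counts are expressed in the same units; the example numbers are invented.

```python
def mtdnacn_leukocytes(mtdnacn_wb, platelet_count, leukocyte_count, slope=1.10):
    """Estimate leukocyte mtDNA copy number from a whole-blood measurement.

    Implements the correction suggested above: subtract 1.10 times the
    platelet/leukocyte ratio from mtDNAcn(WB). Both counts must be in the
    same units (e.g. 10^9 cells/L).
    """
    return mtdnacn_wb - slope * (platelet_count / leukocyte_count)

# Whole-blood mtDNAcn 120, platelets 250 and leukocytes 6.0 (both 10^9/L):
print(mtdnacn_leukocytes(120.0, 250.0, 6.0))  # ~74.2
```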

  11. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors
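
    The baseline approach the paper builds on, counting k-mer frequencies and flagging rare k-mers as likely errors, fits in a few lines. The fixed cutoff below is the naive scheme that the repeat-aware model replaces with estimated genomic frequencies; the reads, k, and threshold are illustrative.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Observed k-mer frequencies across a set of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def suspect_kmers(counts, threshold):
    """k-mers seen fewer than `threshold` times are flagged as likely errors."""
    return {kmer for kmer, c in counts.items() if c < threshold}

# Two identical reads plus one carrying a single-base error (T -> A):
reads = ["ACGTACGTACGTACGTAC", "ACGTACGTACGTACGTAC", "ACGTACGTACGAACGTAC"]
counts = kmer_counts(reads, k=8)
print(suspect_kmers(counts, threshold=2))  # the k-mers spanning the error
```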

  12. Large-scale femtoliter droplet array for digital counting of single biomolecules.

    PubMed

    Kim, Soo Hyeon; Iwai, Shino; Araki, Suguru; Sakakihara, Shouichi; Iino, Ryota; Noji, Hiroyuki

    2012-12-07

    We present a novel device employing one million femtoliter droplets immobilized on a substrate for the quantitative detection of extremely low concentrations of biomolecules in a sample. Surface-modified polystyrene beads carrying either zero or a single biomolecule-reporter enzyme complex are efficiently isolated into femtoliter droplets formed on hydrophilic-in-hydrophobic surfaces. Using a conventional micropipette, this is achieved by sequential injection first with an aqueous solution containing beads, and then with fluorinated oil. The concentration of target biomolecules is estimated from the ratio of the number of signal-emitting droplets to the total number of trapped beads (digital counting). The performance of our digital counting device was demonstrated by detecting a streptavidin-β-galactosidase conjugate with a limit of detection (LOD) of 10 zM. The sensitivity of our device was >20-fold higher than that noted in previous studies where a smaller number of reactors (fifty thousand reactors) were used. Such a low LOD was achieved because of the large number of droplets in an array, allowing simultaneous examination of a large number of beads. When combined with bead-based enzyme-linked immunosorbent assay (digital ELISA), the LOD for the detection of prostate specific antigen reached 2 aM. This value, again, was improved over that noted in a previous study, because of the decreased coefficient of variance of the background measurement determined by the Poisson noise. Our digital counting device using one million droplets has great potential as a highly sensitive, portable immunoassay device that could be used to diagnose diseases.
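
    Digital counting relies on Poisson statistics: if enzyme labels land on beads at random, the fraction of signal-emitting droplets f gives the mean label occupancy as λ = −ln(1 − f), from which the concentration follows. A sketch with invented numbers in the attomolar range:

```python
import math

AVOGADRO = 6.022e23

def labels_per_bead(n_on, n_beads):
    """Poisson estimate of the mean number of enzyme labels per bead."""
    return -math.log(1.0 - n_on / n_beads)

def concentration_molar(n_on, n_beads, sample_volume_l):
    """Back-calculate the target concentration from counted droplets."""
    total_labels = labels_per_bead(n_on, n_beads) * n_beads
    return total_labels / (AVOGADRO * sample_volume_l)

# Invented numbers: 300 "on" droplets among 1,000,000 beads from 100 uL of sample
print(concentration_molar(300, 1_000_000, 100e-6))  # ~5e-18 M, i.e. ~5 aM
```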

  13. Preverbal and verbal counting and computation.

    PubMed

    Gallistel, C R; Gelman, R

    1992-08-01

    We describe the preverbal system of counting and arithmetic reasoning revealed by experiments on numerical representations in animals. In this system, numerosities are represented by magnitudes, which are rapidly but inaccurately generated by the Meck and Church (1983) preverbal counting mechanism. We suggest the following. (1) The preverbal counting mechanism is the source of the implicit principles that guide the acquisition of verbal counting. (2) The preverbal system of arithmetic computation provides the framework for the assimilation of the verbal system. (3) Learning to count involves, in part, learning a mapping from the preverbal numerical magnitudes to the verbal and written number symbols and the inverse mappings from these symbols to the preverbal magnitudes. (4) Subitizing is the use of the preverbal counting process and the mapping from the resulting magnitudes to number words in order to generate rapidly the number words for small numerosities. (5) The retrieval of the number facts, which plays a central role in verbal computation, is mediated via the inverse mappings from verbal and written numbers to the preverbal magnitudes and the use of these magnitudes to find the appropriate cells in tabular arrangements of the answers. (6) This model of the fact retrieval process accounts for the salient features of the reaction time differences and error patterns revealed by experiments on mental arithmetic. (7) The application of verbal and written computational algorithms goes on in parallel with, and is to some extent guided by, preverbal computations, both in the child and in the adult.

  14. Evaluation of dead-time corrections for post-radionuclide-therapy (177)Lu quantitative imaging with low-energy high-resolution collimators.

    PubMed

    Celler, Anna; Piwowarska-Bilska, Hanna; Shcherbinin, Sergey; Uribe, Carlos; Mikolajczak, Renata; Birkenfeld, Bozena

    2014-01-01

    Dead-time (DT) effects rarely cause problems in diagnostic single-photon emission computed tomography (SPECT) studies; however, in post-radionuclide-therapy imaging, DT can be substantial. Therefore, corrections may be necessary if quantitative images are used in image-based dosimetry or for evaluation of therapy outcomes. This task is particularly challenging if low-energy collimators are used. Our goal was to design a simple method to determine the dead-time correction factor (DTCF) without the need for phantom experiments and complex calculations. Planar and SPECT/CT scans of a water phantom containing a 70 ml bottle filled with lutetium-177 (Lu) were acquired over 60 days. Two small Lu markers were used in all scans. The DTCF based on the ratio of observed to true count rates measured over the entire spectrum and using photopeak primary photons only was estimated for phantom (DT present) and marker (no DT) scans. In addition, variations in counts in SPECT projections (potentially caused by varying bremsstrahlung and scatter) were investigated. For count rates that were about two-fold higher than typically seen in post-therapy Lu scans, the maximum DTCF reached a level of about 17%. The DTCF values determined directly from the phantom experiments using the total energy spectrum and photopeak counts only were equal to 13 and 16%, respectively. They were closely matched by those from the proposed marker-based method, which uses only two energy windows and measures photopeak primary photons (15-17%). A simple, marker-based method allowing for determination of the DTCF in high-activity Lu imaging studies has been proposed and validated using phantom experiments.

  15. Analysis of error-correction constraints in an optical disk.

    PubMed

    Roberts, J D; Ryley, A; Jones, D M; Burke, D

    1996-07-10

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
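
    The role of the final checksum, catching residual or mis-corrected data that survives the Reed Solomon stages, can be illustrated with an ordinary CRC-32. The real CD-ROM EDC uses a different 32-bit polynomial, so this is a stand-in for the idea, not the disc format's actual code:

```python
import binascii

def sector_with_crc(payload: bytes) -> bytes:
    """Append a CRC-32 to a data block, standing in for the CD-ROM EDC."""
    return payload + binascii.crc32(payload).to_bytes(4, "little")

def crc_ok(sector: bytes) -> bool:
    """Verify the trailing CRC-32 against the payload."""
    payload, stored = sector[:-4], int.from_bytes(sector[-4:], "little")
    return binascii.crc32(payload) == stored

good = sector_with_crc(b"2048 bytes of user data would go here")
bad = bytearray(good)
bad[5] ^= 0xFF  # simulate a burst error or a miscorrection in the payload
print(crc_ok(good), crc_ok(bytes(bad)))  # True False
```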

  16. Analysis of error-correction constraints in an optical disk

    NASA Astrophysics Data System (ADS)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  17. Addendum to final report, Optimizing traffic counting procedures.

    DOT National Transportation Integrated Search

    1987-01-01

    The methodology described in entry 55-14 was used with 1980 data for 16 continuous count stations to determine periods that were stable throughout the year for different short counts. It was found that stable periods for short counts occurred mainly ...

  18. Optimally achieving milk bulk tank somatic cell count thresholds.

    PubMed

    Troendle, Jason A; Tauer, Loren W; Gröhn, Yrjo T

    2017-01-01

    High somatic cell count in milk leads to reduced shelf life in fluid milk and lower processed yields in manufactured dairy products. As a result, farmers are often penalized for high bulk tank somatic cell count or paid a premium for low bulk tank somatic cell count. Many countries also require all milk from a farm to be lower than a specified regulated somatic cell count. Thus, farms often cull cows that have high somatic cell count to meet somatic cell count thresholds. Rather than naïvely cull the highest somatic cell count cows, a mathematical programming model was developed that determines the cows to be culled from the herd by maximizing the net present value of the herd, subject to meeting any specified bulk tank somatic cell count level. The model was applied to test-day cows on 2 New York State dairy farms. Results showed that the net present value of the herd was increased by using the model to meet the somatic cell count restriction compared with naïvely culling the highest somatic cell count cows. Implementation of the model would be straightforward in dairy management decision software. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
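
    The idea of meeting a bulk-tank threshold without naively culling the highest-SCC cows can be illustrated with a small greedy heuristic: repeatedly cull the cow that lowers the milk-weighted bulk SCC most per unit of herd value lost. This is an illustrative heuristic with invented herd data, not the net-present-value-maximizing program developed in the paper.

```python
def cull_to_threshold(cows, scc_limit):
    """Greedy illustration of culling to meet a bulk-tank SCC threshold.

    cows: dicts with 'milk' (kg/day), 'scc' (cells/mL) and 'npv' (value of
    keeping the cow). Repeatedly culls the cow that lowers the milk-weighted
    bulk SCC most per unit of value lost.
    """
    kept = list(cows)

    def bulk_scc(group):
        milk = sum(c["milk"] for c in group)
        return sum(c["milk"] * c["scc"] for c in group) / milk

    while kept and bulk_scc(kept) > scc_limit:
        current = bulk_scc(kept)

        def gain_per_value(c):
            rest = [x for x in kept if x is not c]
            return (current - bulk_scc(rest)) / c["npv"] if rest else 0.0

        kept.remove(max(kept, key=gain_per_value))
    return kept

herd = [{"milk": 30, "scc": 150_000, "npv": 2500},
        {"milk": 28, "scc": 900_000, "npv": 1200},
        {"milk": 25, "scc": 2_400_000, "npv": 800},
        {"milk": 32, "scc": 300_000, "npv": 2600}]
print([c["scc"] for c in cull_to_threshold(herd, 400_000)])  # the low-SCC cows remain
```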

  19. Exploring the effects of transfers and readmissions on trends in population counts of hospital admissions for coronary heart disease: a Western Australian data linkage study.

    PubMed

    Lopez, Derrick; Nedkoff, Lee; Knuiman, Matthew; Hobbs, Michael S T; Briffa, Thomas G; Preen, David B; Hung, Joseph; Beilby, John; Mathur, Sushma; Reynolds, Anna; Sanfilippo, Frank M

    2017-11-17

    To develop a method for categorising coronary heart disease (CHD) subtype in linked data accounting for different CHD diagnoses across records, and to compare hospital admission numbers and ratios of unlinked versus linked data for each CHD subtype over time, and across age groups and sex. Cohort study. Person-linked hospital administrative data covering all admissions for CHD in Western Australia from 1988 to 2013. Ratios of (1) unlinked admission counts to contiguous admission (CA) counts (accounting for transfers), and (2) 28-day episode counts (accounting for transfers and readmissions) to CA counts stratified by CHD subtype, sex and age group. In all CHD subtypes, the ratios changed in a linear or quadratic fashion over time and the coefficients of the trend term differed across CHD subtypes. Furthermore, for many CHD subtypes the ratios also differed by age group and sex. For example, in women aged 35-54 years, the ratio of unlinked to CA counts for non-ST elevation myocardial infarction admissions in 2000 was 1.10, and this increased in a linear fashion to 1.30 in 2013, representing an annual increase of 0.0148. The use of unlinked counts in epidemiological estimates of CHD hospitalisations overestimates CHD counts. The CA and 28-day episode counts are more aligned with epidemiological studies of CHD. The degree of overestimation of counts using only unlinked counts varies in a complex manner with CHD subtype, time, sex and age group, and it is not possible to apply a simple correction factor to counts obtained from unlinked data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
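
    The contiguous-admission (CA) construction, collapsing transfers into single episodes, is the key step behind the unlinked-to-CA ratios. A minimal pandas sketch with toy records and assumed column names (not those of the Western Australian data collection):

```python
import pandas as pd

# Toy linked admission records; a transfer is an admission that begins on or
# before the previous record's separation date for the same person.
adm = pd.DataFrame({
    "person_id": [1, 1, 1, 2],
    "admit": pd.to_datetime(["2013-01-01", "2013-01-03", "2013-02-20", "2013-01-10"]),
    "separate": pd.to_datetime(["2013-01-03", "2013-01-09", "2013-02-25", "2013-01-12"]),
}).sort_values(["person_id", "admit"]).reset_index(drop=True)

prev_separation = adm.groupby("person_id")["separate"].shift()
starts_new_episode = prev_separation.isna() | (adm["admit"] > prev_separation)
adm["ca_episode"] = starts_new_episode.cumsum()  # contiguous-admission episode ID

# Unlinked count vs contiguous-admission (CA) count; a 28-day episode count
# would additionally merge readmissions within 28 days of separation.
print(len(adm), adm["ca_episode"].nunique())  # 4 admissions -> 3 CA episodes
```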

  20. Automatic correction of dental artifacts in PET/MRI

    PubMed Central

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune. H.; Beyer, Thomas; Law, Ian; Højgaard, Liselotte; Darkner, Sune; Lauze, Francois

    2015-01-01

    A challenge when using current magnetic resonance (MR)-based attenuation correction in positron emission tomography/MR imaging (PET/MRI) is that the MRIs can have a signal void around the dental fillings that is segmented as artificial air-regions in the attenuation map. For artifacts connected to the background, we propose an extension to an existing active contour algorithm to delineate the outer contour using the nonattenuation corrected PET image and the original attenuation map. We propose a combination of two different methods for differentiating the artifacts within the body from the anatomical air-regions by first using a template of artifact regions, and second, representing the artifact regions with a combination of active shape models and k-nearest-neighbors. The accuracy of the combined method has been evaluated using 25 (18)F-fluorodeoxyglucose PET/MR patients. Results showed that the approach was able to correct an average of 97±3% of the artifact areas. PMID:26158104
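
    One ingredient of the within-body classification, separating artifact regions from genuine air with k-nearest-neighbors, can be sketched with scikit-learn on hand-crafted region features. The features, labels, and values below are invented for illustration; the published method combines this classifier with a template of artifact regions and active shape models.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features for candidate air regions in the MR-based attenuation
# map: region volume (mL), mean non-attenuation-corrected PET intensity, and
# distance to a dental-region template (mm). Labels: 1 = metal artifact,
# 0 = genuine anatomical air.
X_train = np.array([[4.0, 0.9, 5.0], [6.5, 1.1, 8.0], [5.5, 1.0, 6.0],
                    [2.0, 0.2, 60.0], [15.0, 0.1, 80.0], [3.0, 0.3, 45.0]])
y_train = np.array([1, 1, 1, 0, 0, 0])

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.predict([[5.0, 0.8, 10.0]]))  # -> [1], i.e. treated as an artifact
```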