Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demon-strate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful varia-tion. We recommend caution in the analysis of estimates obtained from point counts. Using; our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
NASA Astrophysics Data System (ADS)
Tremsin, A. S.; Vallerga, J. V.; McPhate, J. B.; Siegmund, O. H. W.
2015-07-01
Many high resolution event counting devices process one event at a time and cannot register simultaneous events. In this article a frame-based readout event counting detector consisting of a pair of Microchannel Plates and a quad Timepix readout is described. More than 104 simultaneous events can be detected with a spatial resolution of 55 μm, while >103 simultaneous events can be detected with <10 μm spatial resolution when event centroiding is implemented. The fast readout electronics is capable of processing >1200 frames/sec, while the global count rate of the detector can exceed 5×108 particles/s when no timing information on every particle is required. For the first generation Timepix readout, the timing resolution is limited by the Timepix clock to 10-20 ns. Optimization of the MCP gain, rear field voltage and Timepix threshold levels are crucial for the device performance and that is the main subject of this article. These devices can be very attractive for applications where the photon/electron/ion/neutron counting with high spatial and temporal resolution is required, such as energy resolved neutron imaging, Time of Flight experiments in lidar applications, experiments on photoelectron spectroscopy and many others.
Absolute nuclear material assay using count distribution (LAMBDA) space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay using count distribution (LAMBDA) space
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-06-05
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Optimal measurement counting time and statistics in gamma spectrometry analysis: The time balance
NASA Astrophysics Data System (ADS)
Joel, Guembou Shouop Cebastien; Penabei, Samafou; Maurice, Ndontchueng Moyo; Gregoire, Chene; Jilbert, Nguelem Mekontso Eric; Didier, Takoukam Serge; Werner, Volker; David, Strivay
2017-01-01
The optimal measurement counting time for gamma-ray spectrometry analysis using HPGe detectors was determined in our laboratory by comparing twelve hours measurement counting time at day and twelve hours measurement counting time at night. The day spectrum does not fully cover the night spectrum for the same sample. It is observed that the perturbation come to the sun-light. After several investigations became clearer: to remove all effects of radiation from outside (earth, the sun, and universe) our system, it is necessary to measure the background for 24, 48 or 72 hours. In the same way, the samples have to be measured for 24, 48 or 72 hours to be safe to be purified the measurement (equality of day and night measurement). It is also possible to not use the background of the winter in summer. Depend on to the energy of radionuclide we seek, it is clear that the most important steps of a gamma spectrometry measurement are the preparation of the sample and the calibration of the detector.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2012-05-15
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Absolute nuclear material assay
Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA
2010-07-13
A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.
Optimization of simultaneous tritium–radiocarbon internal gas proportional counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonicalzi, R. M.; Aalseth, C. E.; Day, A. R.
Specific environmental applications can benefit from dual tritium and radiocarbon measurements in a single compound. Assuming typical environmental levels, it is often the low tritium activity relative to the higher radiocarbon activity that limits the dual measurement. In this paper, we explore the parameter space for a combined tritium and radiocarbon measurement using a methane sample mixed with an argon fill gas in low-background proportional counters of a specific design. We present an optimized methane percentage, detector fill pressure, and analysis energy windows to maximize measurement sensitivity while minimizing count time. The final optimized method uses a 9-atm fill ofmore » P35 (35% methane, 65% argon), and a tritium analysis window from 1.5 to 10.3 keV, which stops short of the tritium beta decay endpoint energy of 18.6 keV. This method optimizes tritium counting efficiency while minimizing radiocarbon beta decay interference.« less
NASA Technical Reports Server (NTRS)
Prochzaka, Ivan; Kodat, Jan; Blazej, Josef; Sun, Xiaoli (Editor)
2015-01-01
We are reporting on a design, construction and performance of photon-counting detector packages based on silicon avalanche photodiodes. These photon-counting devices have been optimized for extremely high stability of their detection delay. The detectors have been designed for future applications in fundamental metrology and optical time transfer in space. The detectors have been qualified for operation in space missions. The exceptional radiation tolerance of the detection chip itself and of all critical components of a detector package has been verified in a series of experiments.
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. Data volume increase with rise in mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC) that quantifies OPC complexity, measured in terms of fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as function of OPC tolerances and layout parameters.
Rapid enumeration of viable bacteria by image analysis
NASA Technical Reports Server (NTRS)
Singh, A.; Pyle, B. H.; McFeters, G. A.
1989-01-01
A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 degrees C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or spread plate methods. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 micrograms ml-1) and length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
Flight Test of an Adaptive Configuration Optimization System for Transport Aircraft
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Georgie, Jennifer; Barnicki, Joseph S.
1999-01-01
A NASA Dryden Flight Research Center program explores the practical application of real-time adaptive configuration optimization for enhanced transport performance on an L-1011 aircraft. This approach is based on calculation of incremental drag from forced-response, symmetric, outboard aileron maneuvers. In real-time operation, the symmetric outboard aileron deflection is directly optimized, and the horizontal stabilator and angle of attack are indirectly optimized. A flight experiment has been conducted from an onboard research engineering test station, and flight research results are presented herein. The optimization system has demonstrated the capability of determining the minimum drag configuration of the aircraft in real time. The drag-minimization algorithm is capable of identifying drag to approximately a one-drag-count level. Optimizing the symmetric outboard aileron position realizes a drag reduction of 2-3 drag counts (approximately 1 percent). Algorithm analysis of maneuvers indicate that two-sided raised-cosine maneuvers improve definition of the symmetric outboard aileron drag effect, thereby improving analysis results and consistency. Ramp maneuvers provide a more even distribution of data collection as a function of excitation deflection than raised-cosine maneuvers provide. A commercial operational system would require airdata calculations and normal output of current inertial navigation systems; engine pressure ratio measurements would be optional.
Vehicle counting system using real-time video processing
NASA Astrophysics Data System (ADS)
Crisóstomo-Romero, Pedro M.
2006-02-01
Transit studies are important for planning a road network with optimal vehicular flow. A vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such system is the greater detail than is possible to obtain, like shape, size and speed of vehicles. The system uses a video camera placed above the street to image transit in real-time. The video camera must be placed at least 6 meters above the street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux in a 1.8 GHz Pentium 4 computer. A successful count was obtained with frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peronio, P.; Acconcia, G.; Rech, I.
Time-Correlated Single Photon Counting (TCSPC) has been long recognized as the most sensitive method for fluorescence lifetime measurements, but often requiring “long” data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light into several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach basedmore » on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rate. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (∼80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.« less
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modem computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
NASA Astrophysics Data System (ADS)
Prochazka, Ivan; Kodet, Jan; Eckl, Johann; Blazej, Josef
2017-10-01
We are reporting on the design, construction, and performance of a photon counting detector system, which is based on single photon avalanche diode detector technology. This photon counting device has been optimized for very high timing resolution and stability of its detection delay. The foreseen application of this detector is laser ranging of space objects, laser time transfer ground to space and fundamental metrology. The single photon avalanche diode structure, manufactured on silicon using K14 technology, is used as a sensor. The active area of the sensor is circular with 200 μm diameter. Its photon detection probability exceeds 40% in the wavelength range spanning from 500 to 800 nm. The sensor is operated in active quenching and gating mode. A new control circuit was optimized to maintain high timing resolution and detection delay stability. In connection to this circuit, timing resolution of the detector is reaching 20 ps FWHM. In addition, the temperature change of the detection delay is as low as 70 fs/K. As a result, the detection delay stability of the device is exceptional: expressed in the form of time deviation, detection delay stability of better than 60 fs has been achieved. Considering the large active area aperture of the detector, this is, to our knowledge, the best timing performance reported for a solid state photon counting detector so far.
Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D
2012-07-07
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners range from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with a large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.
Poon, Jonathan K; Dahlbom, Magnus L; Moses, William W; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R; Badawi, Ramsey D
2013-01-01
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners range from 15–22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging, or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 90 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25–31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration. PMID:22678106
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Moses, William W.; Balakrishnan, Karthik; Wang, Wenli; Cherry, Simon R.; Badawi, Ramsey D.
2012-07-01
The axial field of view (AFOV) of the current generation of clinical whole-body PET scanners range from 15-22 cm, which limits sensitivity and renders applications such as whole-body dynamic imaging or imaging of very low activities in whole-body cellular tracking studies, almost impossible. Generally, extending the AFOV significantly increases the sensitivity and count-rate performance. However, extending the AFOV while maintaining detector thickness has significant cost implications. In addition, random coincidences, detector dead time, and object attenuation may reduce scanner performance as the AFOV increases. In this paper, we use Monte Carlo simulations to find the optimal scanner geometry (i.e. AFOV, detector thickness and acceptance angle) based on count-rate performance for a range of scintillator volumes ranging from 10 to 93 l with detector thickness varying from 5 to 20 mm. We compare the results to the performance of a scanner based on the current Siemens Biograph mCT geometry and electronics. Our simulation models were developed based on individual components of the Siemens Biograph mCT and were validated against experimental data using the NEMA NU-2 2007 count-rate protocol. In the study, noise-equivalent count rate (NECR) was computed as a function of maximum ring difference (i.e. acceptance angle) and activity concentration using a 27 cm diameter, 200 cm uniformly filled cylindrical phantom for each scanner configuration. To reduce the effect of random coincidences, we implemented a variable coincidence time window based on the length of the lines of response, which increased NECR performance up to 10% compared to using a static coincidence time window for scanners with a large maximum ring difference values. For a given scintillator volume, the optimal configuration results in modest count-rate performance gains of up to 16% compared to the shortest AFOV scanner with the thickest detectors. However, the longest AFOV of approximately 2 m with 20 mm thick detectors resulted in performance gains of 25-31 times higher NECR relative to the current Siemens Biograph mCT scanner configuration.
NASA Astrophysics Data System (ADS)
Jiang, Xiao-Pan; Zhang, Zi-Liang; Qin, Xiu-Bo; Yu, Run-Sheng; Wang, Bao-Yi
2010-12-01
Positronium time of flight spectroscopy (Ps-TOF) is an effective technique for porous material research. It has advantages over other techniques for analyzing the porosity and pore tortuosity of materials. This paper describes a design for Ps-TOF apparatus based on the Beijing intense slow positron beam, supplying a new material characterization technique. In order to improve the time resolution and increase the count rate of the apparatus, the detector system is optimized. For 3 eV o-Ps, the time broadening is 7.66 ns and the count rate is 3 cps after correction.
System for sensing droplet formation time delay in a flow cytometer
Van den Engh, Ger; Esposito, Richard J.
1997-01-01
A droplet flow cytometer system which includes a system to optimize the droplet formation time delay based on conditions actually experienced includes an automatic droplet sampler which rapidly moves a plurality of containers stepwise through the droplet stream while simultaneously adjusting the droplet time delay. Through the system sampling of an actual substance to be processed can be used to minimize the effect of the substances variations or the determination of which time delay is optimal. Analysis such as cell counting and the like may be conducted manually or automatically and input to a time delay adjustment which may then act with analysis equipment to revise the time delay estimate actually applied during processing. The automatic sampler can be controlled through a microprocessor and appropriate programming to bracket an initial droplet formation time delay estimate. When maximization counts through volume, weight, or other types of analysis exists in the containers, the increment may then be reduced for a more accurate ultimate setting. This may be accomplished while actually processing the sample without interruption.
Imaging workflow and calibration for CT-guided time-domain fluorescence tomography
Tichauer, Kenneth M.; Holt, Robert W.; El-Ghussein, Fadi; Zhu, Qun; Dehghani, Hamid; Leblond, Frederic; Pogue, Brian W.
2011-01-01
In this study, several key optimization steps are outlined for a non-contact, time-correlated single photon counting small animal optical tomography system, using simultaneous collection of both fluorescence and transmittance data. The system is presented for time-domain image reconstruction in vivo, illustrating the sensitivity from single photon counting and the calibration steps needed to accurately process the data. In particular, laser time- and amplitude-referencing, detector and filter calibrations, and collection of a suitable instrument response function are all presented in the context of time-domain fluorescence tomography and a fully automated workflow is described. Preliminary phantom time-domain reconstructed images demonstrate the fidelity of the workflow for fluorescence tomography based on signal from multiple time gates. PMID:22076264
HgCdTe APD-based linear-mode photon counting components and ladar receivers
NASA Astrophysics Data System (ADS)
Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.
2011-05-01
Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger Mode (GM) Photon Counting including absence of after-pulsing, nanosecond pulse to pulse temporal resolution and robust operation in the present of high density obscurants or variable reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high gain circuits. By using HgCdTe APDs we enable Poisson number preserving photon counting. A metric of photon counting technology is dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvement in design, process and readout operation enabling >10x reduction in dark counts rate to ~10,000 cps and >104x reduction in surface dark current enabling long 10 ms integration times. Our analysis of key dark current contributors suggest that substantial further reduction in DCR to ~ 1/sec or less can be achieved by optimizing wavelength, operating voltage and temperature.
Optimization of low-level LS counter Quantulus 1220 for tritium determination in water samples
NASA Astrophysics Data System (ADS)
Jakonić, Ivana; Todorović, Natasa; Nikolov, Jovana; Bronić, Ines Krajcar; Tenjović, Branislava; Vesković, Miroslav
2014-05-01
Liquid scintillation counting (LSC) is the most commonly used technique for measuring tritium. To optimize tritium analysis in waters by ultra-low background liquid scintillation spectrometer Quantulus 1220 the optimization of sample/scintillant ratio, choice of appropriate scintillation cocktail and comparison of their efficiency, background and minimal detectable activity (MDA), the effect of chemi- and photoluminescence and combination of scintillant/vial were performed. ASTM D4107-08 (2006) method had been successfully applied in our laboratory for two years. During our last preparation of samples a serious quench effect in count rates of samples that could be consequence of possible contamination by DMSO was noticed. The goal of this paper is to demonstrate development of new direct method in our laboratory proposed by Pujol and Sanchez-Cabeza (1999), which turned out to be faster and simpler than ASTM method while we are dealing with problem of neutralization of DMSO in apparatus. The minimum detectable activity achieved was 2.0 Bq l-1 for a total counting time of 300 min. In order to test the optimization of system for this method tritium level was determined in Danube river samples and also for several samples within intercomparison with Ruđer Bošković Institute (IRB).
Peck, K; Stryer, L; Glazer, A N; Mathies, R A
1989-01-01
A theory for single-molecule fluorescence detection is developed and then used to analyze data from subpicomolar solutions of B-phycoerythrin (PE). The distribution of detected counts is the convolution of a Poissonian continuous background with bursts arising from the passage of individual fluorophores through the focused laser beam. The autocorrelation function reveals single-molecule events and provides a criterion for optimizing experimental parameters. The transit time of fluorescent molecules through the 120-fl imaged volume was 800 microseconds. The optimal laser power (32 mW at 514.5 nm) gave an incident intensity of 1.8 x 10(23) photons.cm-2.s-1, corresponding to a mean time of 1.1 ns between absorptions. The mean incremental count rate was 1.5 per 100 microseconds for PE monomers and 3.0 for PE dimers above a background count rate of 1.0. The distribution of counts and the autocorrelation function for 200 fM monomer and 100 fM dimer demonstrate that single-molecule detection was achieved. At this concentration, the mean occupancy was 0.014 monomer molecules in the probed volume. A hard-wired version of this detection system was used to measure the concentration of PE down to 1 fM. This single-molecule counter is 3 orders of magnitude more sensitive than conventional fluorescence detection systems. PMID:2726766
NASA Astrophysics Data System (ADS)
Wollman, E. E.; Verma, V. B.; Beyer, A. D.; Briggs, R. M.; Korzh, B.; Allmaras, J. P.; Marsili, F.; Lita, A. E.; Mirin, R. P.; Nam, S. W.; Shaw, M. D.
2017-10-01
For photon-counting applications at ultraviolet wavelengths, there are currently no detectors that combine high efficiency (> 50%), sub-nanosecond timing resolution, and sub-Hz dark count rates. Superconducting nanowire single-photon detectors (SNSPDs) have seen success over the past decade for photon-counting applications in the near-infrared, but little work has been done to optimize SNSPDs for wavelengths below 400 nm. Here, we describe the design, fabrication, and characterization of UV SNSPDs operating at wavelengths between 250 and 370 nm. The detectors have active areas up to 56 ${\\mu}$m in diameter, 70 - 80% efficiency, timing resolution down to 60 ps FWHM, blindness to visible and infrared photons, and dark count rates of ~ 0.25 counts/hr for a 56 ${\\mu}$m diameter pixel. By using the amorphous superconductor MoSi, these UV SNSPDs are also able to operate at temperatures up to 4.2 K. These performance metrics make UV SNSPDs ideal for applications in trapped-ion quantum information processing, lidar studies of the upper atmosphere, UV fluorescent-lifetime imaging microscopy, and photon-starved UV astronomy.
NASA Astrophysics Data System (ADS)
He, Xin; Links, Jonathan M.; Frey, Eric C.
2010-09-01
Quantum noise as well as anatomic and uptake variability in patient populations limits observer performance on a defect detection task in myocardial perfusion SPECT (MPS). The goal of this study was to investigate the relative importance of these two effects by varying acquisition time, which determines the count level, and assessing the change in performance on a myocardial perfusion (MP) defect detection task using both mathematical and human observers. We generated ten sets of projections of a simulated patient population with count levels ranging from 1/128 to around 15 times a typical clinical count level to simulate different levels of quantum noise. For the simulated population we modeled variations in patient, heart and defect size, heart orientation and shape, defect location, organ uptake ratio, etc. The projection data were reconstructed using the OS-EM algorithm with no compensation or with attenuation, detector response and scatter compensation (ADS). The images were then post-filtered and reoriented to generate short-axis slices. A channelized Hotelling observer (CHO) was applied to the short-axis images, and the area under the receiver operating characteristics (ROC) curve (AUC) was computed. For each noise level and reconstruction method, we optimized the number of iterations and cutoff frequencies of the Butterworth filter to maximize the AUC. Using the images obtained with the optimal iteration and cutoff frequency and ADS compensation, we performed human observer studies for four count levels to validate the CHO results. Both CHO and human observer studies demonstrated that observer performance was dependent on the relative magnitude of the quantum noise and the patient variation. When the count level was high, the patient variation dominated, and the AUC increased very slowly with changes in the count level for the same level of anatomic variability. When the count level was low, however, quantum noise dominated, and changes in the count level resulted in large changes in the AUC. This behavior agreed with a theoretical expression for the AUC as a function of quantum and anatomical noise levels. The results of this study demonstrate the importance of the tradeoff between anatomical and quantum noise in determining observer performance. For myocardial perfusion imaging, it indicates that, at current clinical count levels, there is some room to reduce acquisition time or injected activity without substantially degrading performance on myocardial perfusion defect detection.
High throughput RNAi assay optimization using adherent cell cytometry.
Nabzdyk, Christoph S; Chun, Maggie; Pradhan, Leena; Logerfo, Frank W
2011-04-25
siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). AoSMC were seeded at a density of 3000-8000 cells/well of a 96 well plate. 24 hours later AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM), or non-targeting labeled siRNA, siGLO Red (5 or 50 nM) using no transfection reagent, HiPerfect or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green were used. For data analysis an adherent cell cytometer, Celigo® was used. Data was normalized to the transfection reagent alone group and expressed as red pixel count/cell. After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent compared to HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 +/-1.04 vs. 0.70 +/-0.26 vs. 0.15 +/-0.13 red pixel/cell) and 50 nM siGLO Red (6.49 +/-1.81 vs. 2.52 +/-0.67 vs. 0.34 +/-0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS targeting siRNA in AoSMCs. This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high throughput-screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs.
High throughput RNAi assay optimization using adherent cell cytometry
2011-01-01
Background siRNA technology is a promising tool for gene therapy of vascular disease. Due to the multitude of reagents and cell types, RNAi experiment optimization can be time-consuming. In this study adherent cell cytometry was used to rapidly optimize siRNA transfection in human aortic vascular smooth muscle cells (AoSMC). Methods AoSMC were seeded at a density of 3000-8000 cells/well of a 96well plate. 24 hours later AoSMC were transfected with either non-targeting unlabeled siRNA (50 nM), or non-targeting labeled siRNA, siGLO Red (5 or 50 nM) using no transfection reagent, HiPerfect or Lipofectamine RNAiMax. For counting cells, Hoechst nuclei stain or Cell Tracker green were used. For data analysis an adherent cell cytometer, Celigo® was used. Data was normalized to the transfection reagent alone group and expressed as red pixel count/cell. Results After 24 hours, none of the transfection conditions led to cell loss. Red fluorescence counts were normalized to the AoSMC count. RNAiMax was more potent compared to HiPerfect or no transfection reagent at 5 nM siGLO Red (4.12 +/-1.04 vs. 0.70 +/-0.26 vs. 0.15 +/-0.13 red pixel/cell) and 50 nM siGLO Red (6.49 +/-1.81 vs. 2.52 +/-0.67 vs. 0.34 +/-0.19). Fluorescence expression results supported gene knockdown achieved by using MARCKS targeting siRNA in AoSMCs. Conclusion This study underscores that RNAi delivery depends heavily on the choice of delivery method. Adherent cell cytometry can be used as a high throughput-screening tool for the optimization of RNAi assays. This technology can accelerate in vitro cell assays and thus save costs. PMID:21518450
Expectation maximization for hard X-ray count modulation profiles
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.
2013-07-01
Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
2018-01-01
Starch is increasingly used as a functional group in many industrial applications and foods due to its ability to work as a thickener. The experimental values of extracting starch from yellow skin potato indicate the processing conditions at 3000 rpm and 15 min as optimum for the highest yield of extracted starch. The effect of adding different concentrations of extracted starch under the optimized conditions was studied to determine the acidity, pH, syneresis, microbial counts, and sensory evaluation in stored yogurt manufactured at 5 °C for 15 days. The results showed that adding sufficient concentrations of starch (0.75%, 1%) could provide better results in terms of the minimum change in the total acidity, decrease in pH, reduction in syneresis, and preferable results for all sensory parameters. The results revealed that the total bacteria count of all yogurt samples increased throughout the storage time. However, adding different concentrations of optimized extracted starch had a significant effect, decreasing the microbial content compared with the control sample (YC). In addition, the results indicated that coliform bacteria were not found during the storage time. PMID:29382115
Validation of accelerometer wear and nonwear time classification algorithm.
Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S
2011-02-01
the use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. the purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. we conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. the recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). the accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method.
Huh, Kyung-Hoe; Baik, Jee-Seon; Yi, Won-Jin; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo
2011-06-01
This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm.
Fractal analysis of mandibular trabecular bone: optimal tile sizes for the tile counting method
Huh, Kyung-Hoe; Baik, Jee-Seon; Heo, Min-Suk; Lee, Sam-Sun; Choi, Soon-Chul; Lee, Sun-Bok; Lee, Seung-Pyo
2011-01-01
Purpose This study was performed to determine the optimal tile size for the fractal dimension of the mandibular trabecular bone using a tile counting method. Materials and Methods Digital intraoral radiographic images were obtained at the mandibular angle, molar, premolar, and incisor regions of 29 human dry mandibles. After preprocessing, the parameters representing morphometric characteristics of the trabecular bone were calculated. The fractal dimensions of the processed images were analyzed in various tile sizes by the tile counting method. Results The optimal range of tile size was 0.132 mm to 0.396 mm for the fractal dimension using the tile counting method. The sizes were closely related to the morphometric parameters. Conclusion The fractal dimension of mandibular trabecular bone, as calculated with the tile counting method, can be best characterized with a range of tile sizes from 0.132 to 0.396 mm. PMID:21977478
Analysis of Different Cost Functions in the Geosect Airspace Partitioning Tool
NASA Technical Reports Server (NTRS)
Wong, Gregory L.
2010-01-01
A new cost function representing air traffic controller workload is implemented in the Geosect airspace partitioning tool. Geosect currently uses a combination of aircraft count and dwell time to select optimal airspace partitions that balance controller workload. This is referred to as the aircraft count/dwell time hybrid cost function. The new cost function is based on Simplified Dynamic Density, a measure of different aspects of air traffic controller workload. Three sectorizations are compared. These are the current sectorization, Geosect's sectorization based on the aircraft count/dwell time hybrid cost function, and Geosect s sectorization based on the Simplified Dynamic Density cost function. Each sectorization is evaluated for maximum and average workload along with workload balance using the Simplified Dynamic Density as the workload measure. In addition, the Airspace Concept Evaluation System, a nationwide air traffic simulator, is used to determine the capacity and delay incurred by each sectorization. The sectorization resulting from the Simplified Dynamic Density cost function had a lower maximum workload measure than the other sectorizations, and the sectorization based on the combination of aircraft count and dwell time did a better job of balancing workload and balancing capacity. However, the current sectorization had the lowest average workload, highest sector capacity, and the least system delay.
Photon counting phosphorescence lifetime imaging with TimepixCam
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...
2017-01-12
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200more » μm diameter polystyrene beads.« less
Photon counting phosphorescence lifetime imaging with TimepixCam.
Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei
2017-01-01
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
Photon counting phosphorescence lifetime imaging with TimepixCam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 x 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200more » μm diameter polystyrene beads.« less
Photon counting phosphorescence lifetime imaging with TimepixCam
NASA Astrophysics Data System (ADS)
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei
2017-01-01
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
Wang, Yuanjia; Chen, Tianle; Zeng, Donglin
2016-01-01
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for different at risk populations at observed event times, a time-varying offset is used in estimating risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity using kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating covariate-specific hazard function from population average hazard function, and establish the consistency and learning rate of the predicted risk using the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared to existing machine learning methods and standard conventional approaches. Finally, we analyze two real world biomedical study data where we use clinical markers and neuroimaging biomarkers to predict age-at-onset of a disease, and demonstrate superiority of SVHM in distinguishing high risk versus low risk subjects.
Estimation method for serial dilution experiments.
Ben-David, Avishai; Davidson, Charles E
2014-12-01
Titration of microorganisms in infectious or environmental samples is a corner stone of quantitative microbiology. A simple method is presented to estimate the microbial counts obtained with the serial dilution technique for microorganisms that can grow on bacteriological media and develop into a colony. The number (concentration) of viable microbial organisms is estimated from a single dilution plate (assay) without a need for replicate plates. Our method selects the best agar plate with which to estimate the microbial counts, and takes into account the colony size and plate area that both contribute to the likelihood of miscounting the number of colonies on a plate. The estimate of the optimal count given by our method can be used to narrow the search for the best (optimal) dilution plate and saves time. The required inputs are the plate size, the microbial colony size, and the serial dilution factors. The proposed approach shows relative accuracy well within ±0.1log10 from data produced by computer simulations. The method maintains this accuracy even in the presence of dilution errors of up to 10% (for both the aliquot and diluent volumes), microbial counts between 10(4) and 10(12) colony-forming units, dilution ratios from 2 to 100, and plate size to colony size ratios between 6.25 to 200. Published by Elsevier B.V.
Optimized tomography of continuous variable systems using excitation counting
NASA Astrophysics Data System (ADS)
Shen, Chao; Heeres, Reinier W.; Reinhold, Philip; Jiang, Luyao; Liu, Yi-Kai; Schoelkopf, Robert J.; Jiang, Liang
2016-11-01
We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
Addendum to final report, Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1987-01-01
The methodology described in entry 55-14 was used with 1980 data for 16 continuous count stations to determine periods that were stable throughout the year for different short counts. It was found that stable periods for short counts occurred mainly ...
Schuff-Werner, Peter; Steiner, Michael; Fenger, Sebastian; Gross, Hans-Jürgen; Bierlich, Alexa; Dreissiger, Katrin; Mannuß, Steffen; Siegert, Gabriele; Bachem, Maximilian; Kohlschein, Peter
2013-01-01
Pseudothrombocytopenia remains a challenge in the haematological laboratory. The pre-analytical problem that platelets tend to easily aggregate in vitro, giving rise to lower platelet counts, has been known since ethylenediamine-tetra acetic acid EDTA and automated platelet counting procedures were introduced in the haematological laboratory. Different approaches to avoid the time and temperature dependent in vitro aggregation of platelets in the presence of EDTA were tested, but none of them proved optimal for routine purposes. Patients with unexpectedly low platelet counts or flagged for suspected aggregates, were selected and smears were examined for platelet aggregates. In these cases patients were asked to consent to the drawing of an additional sample of blood anti-coagulated with a magnesium additive. Magnesium was used in the beginning of the last century as anticoagulant for microscopic platelet counts. Using this approach, we documented 44 patients with pseudothrombocytopenia. In all cases, platelet counts were markedly higher in samples anti-coagulated with the magnesium containing anticoagulant when compared to EDTA-anticoagulated blood samples. We conclude that in patients with known or suspected pseudothrombocytopenia the magnesium-anticoagulant blood samples may be recommended for platelet counting. PMID:23808903
Iwuji, Collins; McGrath, Nuala; Calmy, Alexandra; Dabis, Francois; Pillay, Deenan; Newell, Marie-Louise; Baisley, Kathy; Porter, Kholoud
2018-06-01
HIV treatment guidelines now recommend antiretroviral therapy (ART) initiation regardless of CD4 count to maximize benefit both for the individual and society. It is unknown whether the initiation of ART at higher CD4 counts would affect adherence levels. We investigated whether initiating ART at higher CD4 counts was associated with sub-optimal adherence (<95%) during the first 12 months of ART. A prospective cohort study nested within a two-arm cluster-randomized trial of universal test and treat was implemented from March 2012 to June 2016 to measure the impact of ART on HIV incidence in rural KwaZulu-Natal. ART was initiated regardless of CD4 count in the intervention arm and according to national guidelines in the control arm. ART adherence was measured monthly using a visual analogue scale (VAS) and pill counts (PC). HIV viral load was measured at ART initiation, three and six months, and six-monthly thereafter. We pooled data from participants in both arms and used random-effects logistic regression models to examine the association between CD4 count at ART initiation and sub-optimal adherence, and assessed if adherence levels were associated with virological suppression. Among 900 individuals who initiated ART ≥12 months before study end, median (IQR) CD4 at ART initiation was 350 cells/mm 3 (234, 503); median age was 34.6 years (IQR 27.4 to 46.4) and 71.7% were female. Adherence was sub-optimal in 14.7% of visits as measured by VAS and 20.7% by PC. In both the crude analyses and after adjusting for potential confounders, adherence was not significantly associated with CD4 count at ART initiation (adjusted OR for linear trend in sub-optimal adherence with every 100 cells/mm 3 increase in CD4 count: 1.00, 95% CI 0.95 to 1.05, for VAS, and 1.03, 95% CI 0.99 to 1.07, for PC). Virological suppression at 12 months was 97%. Optimal adherence by both measures was significantly associated with virological suppression (p < 0.001 for VAS; p = 0.006 for PC). We found no evidence that higher CD4 counts at ART initiation were associated with sub-optimal ART adherence in the first 12 months. Our findings should alleviate concerns about adherence in individuals initiating ART at higher CD4 counts, however long-term outcomes are needed. ClinicalTrials.gov NCT01509508. © 2018 The Authors. Journal of the International AIDS Society published by John Wiley & sons Ltd on behalf of the International AIDS Society.
Godon, Alban; Genevieve, Franck; Marteau-Tessier, Anne; Zandecki, Marc
2012-01-01
Several situations lead to abnormal haemoglobin measurement or to abnormal red blood cells (RBC) counts, including hyperlipemias, agglutinins and cryoglobulins, haemolysis, or elevated white blood cells (WBC) counts. Mean (red) cell volume may be also subject to spurious determination, because of agglutinins (mainly cold), high blood glucose level, natremia, anticoagulants in excess and at times technological considerations. Abnormality related to one measured parameter eventually leads to abnormal calculated RBC indices: mean cell haemoglobin content is certainly the most important RBC parameter to consider, maybe as important as flags generated by the haematology analysers (HA) themselves. In many circumstances, several of the measured parameters from cell blood counts (CBC) may be altered, and the discovery of a spurious change on one parameter frequently means that the validity of other parameters should be considered. Sensitive flags allow now the identification of several spurious counts, but only the most sophisticated HA have optimal flagging, and simpler ones, especially those without any WBC differential scattergram, do not share the same capacity to detect abnormal results. Reticulocytes are integrated into the CBC in many HA, and several situations may lead to abnormal counts, including abnormal gating, interference with intraerythrocytic particles, erythroblastosis or high WBC counts.
NASA Technical Reports Server (NTRS)
Unger, Eric R.; Hager, James O.; Agrawal, Shreekant
1999-01-01
This paper is a discussion of the supersonic nonlinear point design optimization efforts at McDonnell Douglas Aerospace under the High-Speed Research (HSR) program. The baseline for these optimization efforts has been the M2.4-7A configuration which represents an arrow-wing technology for the High-Speed Civil Transport (HSCT). Optimization work on this configuration began in early 1994 and continued into 1996. Initial work focused on optimization of the wing camber and twist on a wing/body configuration and reductions of 3.5 drag counts (Euler) were realized. The next phase of the optimization effort included fuselage camber along with the wing and a drag reduction of 5.0 counts was achieved. Including the effects of the nacelles and diverters into the optimization problem became the next focus where a reduction of 6.6 counts (Euler W/B/N/D) was eventually realized. The final two phases of the effort included a large set of constraints designed to make the final optimized configuration more realistic and they were successful albeit with a loss of performance.
Integrated Arrival and Departure Schedule Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2014-01-01
In terminal airspace, integrating arrivals and departures with shared waypoints provides the potential of improving operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fonts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
Performance evaluation and optimization of the MiniPET-II scanner
NASA Astrophysics Data System (ADS)
Lajtos, Imre; Emri, Miklos; Kis, Sandor A.; Opposits, Gabor; Potari, Norbert; Kiraly, Beata; Nagy, Ferenc; Tron, Lajos; Balkay, Laszlo
2013-04-01
This paper presents results of the performance of a small animal PET system (MiniPET-II) installed at our Institute. MiniPET-II is a full ring camera that includes 12 detector modules in a single ring comprised of 1.27×1.27×12 mm3 LYSO scintillator crystals. The axial field of view and the inner ring diameter are 48 mm and 211 mm, respectively. The goal of this study was to determine the NEMA-NU4 performance parameters of the scanner. In addition, we also investigated how the calculated parameters depend on the coincidence time window (τ=2, 3 and 4 ns) and the low threshold settings of the energy window (Elt=250, 350 and 450 keV). Independent measurements supported optimization of the effective system radius and the coincidence time window of the system. We found that the optimal coincidence time window and low threshold energy window are 3 ns and 350 keV, respectively. The spatial resolution was close to 1.2 mm in the center of the FOV with an increase of 17% at the radial edge. The maximum value of the absolute sensitivity was 1.37% for a point source. Count rate tests resulted in peak values for the noise equivalent count rate (NEC) curve and scatter fraction of 14.2 kcps (at 36 MBq) and 27.7%, respectively, using the rat phantom. Numerical values of the same parameters obtained for the mouse phantom were 55.1 kcps (at 38.8 MBq) and 12.3%, respectively. The recovery coefficients of the image quality phantom ranged from 0.1 to 0.87. Altering the τ and Elt resulted in substantial changes in the NEC peak and the sensitivity while the effect on the image quality was negligible. The spatial resolution proved to be, as expected, independent of the τ and Elt. The calculated optimal effective system radius (resulting in the best image quality) was 109 mm. Although the NEC peak parameters do not compare favorably with those of other small animal scanners, it can be concluded that under normal counting situations the MiniPET-II imaging capability assures remarkably good image quality, sensitivity and spatial resolution.
Optimizing the duration of point counts for monitoring trends in bird populations
Jared Verner
1988-01-01
Minute-by-minute analysis of point counts of birds in mixed-conifer forests in the Sierra National Forest, central California, showed that cumulative counts of species and individuals increased in a curvilinear fashion but did not reach asymptotes after 10 minutes of counting. Comparison of the expected number of individuals counted per hour with various combinations...
SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamberlain, S; Roswell Park Cancer Institute, Buffalo, NY; French, S
Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3more » × 10{sup 6} to 3 × 10{sup 7}); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10{sup 6} was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases time necessary for each kernel dose calculation. Particle counts lower than 1 × 10{sup 6} have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.« less
Optimal staining methods for delineation of cortical areas and neuron counts in human brains.
Uylings, H B; Zilles, K; Rajkowska, G
1999-04-01
For cytoarchitectonic delineation of cortical areas in human brain, the Gallyas staining for somata with its sharp contrast between cell bodies and neuropil is preferable to the classical Nissl staining, the more so when an image analysis system is used. This Gallyas staining, however, does not appear to be appropriate for counting neuron numbers in pertinent brain areas, due to the lack of distinct cytological features between small neurons and glial cells. For cell counting Nissl is preferable. In an optimal design for cell counting at least both the Gallyas and the Nissl staining must be applied, the former staining for cytoarchitectural delineaton of cortical areas and the latter for counting the number of neurons in the pertinent cortical areas. Copyright 1999 Academic Press.
Differences in attentional strategies by novice and experienced operating theatre scrub nurses.
Koh, Ranieri Y I; Park, Taezoon; Wickens, Christopher D; Ong, Lay Teng; Chia, Soon Noi
2011-09-01
This study investigated the effect of nursing experience on attention allocation and task performance during surgery. The prevention of cases of retained foreign bodies after surgery typically depends on scrub nurses, who are responsible for performing multiple tasks that impose heavy demands on the nurses' cognitive resources. However, the relationship between the level of experiences and attention allocation strategies has not been extensively studied. Eye movement data were collected from 10 novice and 10 experienced scrub nurses in the operating theater for caesarean section surgeries. Visual scanning data, analyzed by dividing the workstation into four main areas and the surgery into four stages, were compared to the optimum expected value estimated by SEEV (Salience, Effort, Expectancy, and Value) model. Both experienced and novice nurses showed significant correlations to the optimal percentage dwell time values, and significant differences were found in attention allocation optimality between experienced and novice nurses, with experienced nurses adhering significantly more to the optimal in the stages of high workload. Experienced nurses spent less time on the final count and encountered fewer interruptions during the count than novices indicating better performance in task management, whereas novice nurses switched attention between areas of interest more than experienced nurses. The results provide empirical evidence of a relationship between the application of optimal visual attention management strategies and performance, opening up possibilities to the development of visual attention and interruption training for better performance. (c) 2011 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
Signal to noise ratio of energy selective x-ray photon counting systems with pileup.
Alvarez, Robert E
2014-11-01
To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, "Near optimal energy selective x-ray imaging system performance with simple detectors," Med. Phys. 37, 822-841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramèr-Rao lower bound (CRLB) for larger counts is tested. The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated.
Signal to noise ratio of energy selective x-ray photon counting systems with pileup
Alvarez, Robert E.
2014-01-01
Purpose: To derive fundamental limits on the effect of pulse pileup and quantum noise in photon counting detectors on the signal to noise ratio (SNR) and noise variance of energy selective x-ray imaging systems. Methods: An idealized model of the response of counting detectors to pulse pileup is used. The model assumes a nonparalyzable response and delta function pulse shape. The model is used to derive analytical formulas for the noise and energy spectrum of the recorded photons with pulse pileup. These formulas are first verified with a Monte Carlo simulation. They are then used with a method introduced in a previous paper [R. E. Alvarez, “Near optimal energy selective x-ray imaging system performance with simple detectors,” Med. Phys. 37, 822–841 (2010)] to compare the signal to noise ratio with pileup to the ideal SNR with perfect energy resolution. Detectors studied include photon counting detectors with pulse height analysis (PHA), detectors that simultaneously measure the number of photons and the integrated energy (NQ detector), and conventional energy integrating and photon counting detectors. The increase in the A-vector variance with dead time is also computed and compared to the Monte Carlo results. A formula for the covariance of the NQ detector is developed. The validity of the constant covariance approximation to the Cramèr–Rao lower bound (CRLB) for larger counts is tested. Results: The SNR becomes smaller than the conventional energy integrating detector (Q) SNR for 0.52, 0.65, and 0.78 expected number photons per dead time for counting (N), two, and four bin PHA detectors, respectively. The NQ detector SNR is always larger than the N and Q SNR but only marginally so for larger dead times. Its noise variance increases by a factor of approximately 3 and 5 for the A1 and A2 components as the dead time parameter increases from 0 to 0.8 photons per dead time. With four bin PHA data, the increase in variance is approximately 2 and 4 times. The constant covariance approximation to the CRLB is valid for larger counts such as those used in medical imaging. Conclusions: The SNR decreases rapidly as dead time increases. This decrease places stringent limits on allowable dead times with the high count rates required for medical imaging systems. The probability distribution of the idealized data with pileup is shown to be accurately described as a multivariate normal for expected counts greater than those typically utilized in medical imaging systems. The constant covariance approximation to the CRLB is also shown to be valid in this case. A new formula for the covariance of the NQ detector with pileup is derived and validated. PMID:25370642
Population Census of a Large Common Tern Colony with a Small Unmanned Aircraft
Chabot, Dominique; Craik, Shawn R.; Bird, David M.
2015-01-01
Small unmanned aircraft systems (UAS) may be useful for conducting high-precision, low-disturbance waterbird surveys, but limited data exist on their effectiveness. We evaluated the capacity of a small UAS to census a large (>6,000 nests) coastal Common tern (Sterna hirundo) colony of which ground surveys are particularly disruptive and time-consuming. We compared aerial photographic tern counts to ground nest counts in 45 plots (5-m radius) throughout the colony at three intervals over a nine-day period in order to identify sources of variation and establish a coefficient to estimate nest numbers from UAS surveys. We also compared a full colony ground count to full counts from two UAS surveys conducted the following day. Finally, we compared colony disturbance levels over the course of UAS flights to matched control periods. Linear regressions between aerial and ground counts in plots had very strong correlations in all three comparison periods (R 2 = 0.972–0.989, P < 0.001) and regression coefficients ranged from 0.928–0.977 terns/nest. Full colony aerial counts were 93.6% and 94.0%, respectively, of the ground count. Varying visibility of terns with ground cover, weather conditions and image quality, and changing nest attendance rates throughout incubation were likely sources of variation in aerial detection rates. Optimally timed UAS surveys of Common tern colonies following our method should yield population estimates in the 93–96% range of ground counts. Although the terns were initially disturbed by the UAS flying overhead, they rapidly habituated to it. Overall, we found no evidence of sustained disturbance to the colony by the UAS. We encourage colonial waterbird researchers and managers to consider taking advantage of this burgeoning technology. PMID:25874997
Timing resolution and time walk in SLiK SPAD: measurement and optimization
NASA Astrophysics Data System (ADS)
Fong, Bernicy S.; Davies, Murray; Deschamps, Pierre
2017-08-01
Timing resolution (or timing jitter) and time walk are separate parameters associated with a detector's response time. Studies have been done mostly on the time resolution of various single photon detectors [1]. As the designer and manufacturer of the ultra-low noise (ƙ-factor) silicon avalanche photodiode the SLiK SPAD, which is used in many single photon counting applications, we often get inquiries from customers to better understand how this detector behaves under different operating conditions. Hence, here we will be focusing on the study of these time related parameters specifically for the SLiK SPAD, as a way to provide the most direct information for users of this detector to help with its use more efficiently and effectively. We will be providing the study data on how these parameters can be affected by temperature (both intrinsic to the detector chip and environmental input based on operating conditions), operating voltage, photon wavelength, as well as light spot size. How these parameters can be optimized and the trade-offs from optimization from the desired performance will be presented.
Impact of donor- and collection-related variables on product quality in ex utero cord blood banking.
Askari, Sabeen; Miller, John; Chrysler, Gayl; McCullough, Jeffrey
2005-02-01
Optimizing product quality is a current focus in cord blood banking. This study evaluates the role of selected donor- and collection-related variables. Retrospective review was performed of cord blood units (CBUs) collected ex utero between February 1, 2000, and February 28, 2002. Preprocessing volume and total nucleated cell (TNC) counts and postprocessing CD34 cell counts were used as product quality indicators. Of 2084 CBUs, volume determinations and TNC counts were performed on 1628 and CD34+ counts on 1124 CBUs. Mean volume and TNC and CD34+ counts were 85.2 mL, 118.9 x 10(7), and 5.2 x 10(6), respectively. In univariate analysis, placental weight of greater than 500 g and meconium in amniotic fluid correlated with better volume and TNC and CD34+ counts. Greater than 40 weeks' gestation predicted enhanced volume and TNC count. Cesarean section, two- versus one-person collection, and not greater than 5 minutes between placental delivery and collection produced superior volume. Increased TNC count was also seen in Caucasian women, primigravidae, female newborns, and collection duration of more than 5 minutes. A time between delivery of newborn and placenta of not greater than 10 minutes predicted better volume and CD34+ count. By regression analysis, collection within not greater than 5 minutes of placental delivery produced superior volume and TNC count. Donor selection and collection technique modifications may improve product quality. TNC count appears to be more affected by different variables than CD34+ count.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, James T.; Thompson, Scott J.; Watson, Scott M.
We present a multi-channel, fast neutron/gamma ray detector array system that utilizes ZnS(Ag) scintillator detectors. The system employs field programmable gate arrays (FPGAs) to do real-time all digital neutron/gamma ray discrimination with pulse height and time histograms to allow count rates in excess of 1,000,000 pulses per second per channel. The system detector number is scalable in blocks of 16 channels.
Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.
2012-01-01
Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings. Please see later in the article for the Editors' Summary PMID:22529752
Microfluidic differential immunocapture biochip for specific leukocyte counting
Hassan, Umer; Watkins, Nicholas N; Reddy, Bobby; Damhorst, Gregory; Bashir, Rashid
2016-01-01
Enumerating specific cell types from whole blood can be very useful for research and diagnostic purposes—e.g., for counting of cD4 and cD8 t cells in HIV/aIDs diagnostics. We have developed a biosensor based on a differential immunocapture technology to enumerate specific cells in 30 min using 10 µl of blood. this paper provides a comprehensive stepwise protocol to replicate our biosensor for cD4 and cD8 cell counts. the biochip can also be adapted to enumerate other specific cell types such as somatic cells or cells from tissue or liquid biopsies. capture of other specific cells requires immobilization of their corresponding antibodies within the capture chamber. therefore, this protocol is useful for research into areas surrounding immunocapture-based biosensor development. the biosensor production requires 24 h, a one-time cell capture optimization takes 6–9 h, and the final cell counting experiment in a laboratory environment requires 30 min to complete. PMID:26963632
NASA Astrophysics Data System (ADS)
Monna, F.; Loizeau, J.-L.; Thomas, B. A.; Guéguen, C.; Favarger, P.-Y.
1998-08-01
One of the factors limiting the precision of inductively coupled plasma mass spectrometry is the counting statistics, which depend upon acquisition time and ion fluxes. In the present study, the precision of the isotopic measurements of Pb and Sr is examined. The time of measurement is optimally shared for each isotope, using a mathematical simulation, to provide the lowest theoretical analytical error. Different algorithms of mass bias correction are also taken into account and evaluated in term of improvement of overall precision. Several experiments allow a comparison of real conditions with theory. The present method significantly improves the precision, regardless of the instrument used. However, this benefit is more important for equipment which originally yields a precision close to that predicted by counting statistics. Additionally, the procedure is flexible enough to be easily adapted to other problems, such as isotopic dilution.
Flexible ultrathin-body single-photon avalanche diode sensors and CMOS integration.
Sun, Pengfei; Ishihara, Ryoichi; Charbon, Edoardo
2016-02-22
We proposed the world's first flexible ultrathin-body single-photon avalanche diode (SPAD) as photon counting device providing a suitable solution to advanced implantable bio-compatible chronic medical monitoring, diagnostics and other applications. In this paper, we investigate the Geiger-mode performance of this flexible ultrathin-body SPAD comprehensively and we extend this work to the first flexible SPAD image sensor with in-pixel and off-pixel electronics integrated in CMOS. Experimental results show that dark count rate (DCR) by band-to-band tunneling can be reduced by optimizing multiplication doping. DCR by trap-assisted avalanche, which is believed to be originated from the trench etching process, could be further reduced, resulting in a DCR density of tens to hundreds of Hertz per micrometer square at cryogenic temperature. The influence of the trench etching process onto DCR is also proved by comparison with planar ultrathin-body SPAD structures without trench. Photon detection probability (PDP) can be achieved by wider depletion and drift regions and by carefully optimizing body thickness. PDP in frontside- (FSI) and backside-illumination (BSI) are comparable, thus making this technology suitable for both modes of illumination. Afterpulsing and crosstalk are negligible at 2µs dead time, while it has been proved, for the first time, that a CMOS SPAD pixel of this kind could work in a cryogenic environment. By appropriate choice of substrate, this technology is amenable to implantation for biocompatible photon-counting applications and wherever bended imaging sensors are essential.
Rodes, Laetitia; Paul, Arghya; Coussa-Charley, Michael; Al-Salami, Hani; Tomaro-Duchesneau, Catherine; Fakhoury, Marc; Prakash, Satya
2011-12-01
Retention time, which is analogous to transit time, is an index for bacterial stability in the intestine. Its consideration is of particular importance to optimize the delivery of probiotic bacteria in order to improve treatment efficacy. This study aims to investigate the effect of retention time on Lactobacilli and Bifidobacteria stability using an established in vitro human colon model. Three retention times were used: 72, 96, and 144 h. The effect of retention time on cell viability of different bacterial populations was analyzed with bacterial plate counts and PCR. The proportions of intestinal Bifidobacteria, Lactobacilli, Enterococci, Staphylococci and Clostridia populations, analyzed by plate counts, were found to be the same as that in human colonic microbiota. Retention time in the human colon affected the stability of Lactobacilli and Bifidobacteria communities, with maximum stability observed at 144 h. Therefore, retention time is an important parameter that influences bacterial stability in the colonic microbiota. Future clinical studies on probiotic bacteria formulations should take into consideration gastrointestinal transit parameters to improve treatment efficacy.
Jiang, Ailian; Zheng, Lihong
2018-03-29
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime.
2018-01-01
Low cost, high reliability and easy maintenance are key criteria in the design of routing protocols for wireless sensor networks (WSNs). This paper investigates the existing ant colony optimization (ACO)-based WSN routing algorithms and the minimum hop count WSN routing algorithms by reviewing their strengths and weaknesses. We also consider the critical factors of WSNs, such as energy constraint of sensor nodes, network load balancing and dynamic network topology. Then we propose a hybrid routing algorithm that integrates ACO and a minimum hop count scheme. The proposed algorithm is able to find the optimal routing path with minimal total energy consumption and balanced energy consumption on each node. The algorithm has unique superiority in terms of searching for the optimal path, balancing the network load and the network topology maintenance. The WSN model and the proposed algorithm have been implemented using C++. Extensive simulation experimental results have shown that our algorithm outperforms several other WSN routing algorithms on such aspects that include the rate of convergence, the success rate in searching for global optimal solution, and the network lifetime. PMID:29596336
Adaptive Detector Arrays for Optical Communications Receivers
NASA Technical Reports Server (NTRS)
Vilnrotter, V.; Srinivasan, M.
2000-01-01
The structure of an optimal adaptive array receiver for ground-based optical communications is described and its performance investigated. Kolmogorov phase screen simulations are used to model the sample functions of the focal-plane signal distribution due to turbulence and to generate realistic spatial distributions of the received optical field. This novel array detector concept reduces interference from background radiation by effectively assigning higher confidence levels at each instant of time to those detector elements that contain significant signal energy and suppressing those that do not. A simpler suboptimum structure that replaces the continuous weighting function of the optimal receiver by a hard decision on the selection of the signal detector elements also is described and evaluated. Approximations and bounds to the error probability are derived and compared with the exact calculations and receiver simulation results. It is shown that, for photon-counting receivers observing Poisson-distributed signals, performance improvements of approximately 5 dB can be obtained over conventional single-detector photon-counting receivers, when operating in high background environments.
High-Dose Neutron Detector Development Using 10B Coated Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menlove, Howard Olsen; Henzlova, Daniela
2016-11-08
During FY16 the boron-lined parallel-plate technology was optimized to fully benefit from its fast timing characteristics in order to enhance its high count rate capability. To facilitate high count rate capability, a novel fast amplifier with timing and operating properties matched to the detector characteristics was developed and implemented in the 8” boron plate detector that was purchased from PDT. Each of the 6 sealed-cells was connected to a fast amplifier with corresponding List mode readout from each amplifier. The FY16 work focused on improvements in the boron-10 coating materials and procedures at PDT to significantly improve the neutron detectionmore » efficiency. An improvement in the efficiency of a factor of 1.5 was achieved without increasing the metal backing area for the boron coating. This improvement has allowed us to operate the detector in gamma-ray backgrounds that are four orders of magnitude higher than was previously possible while maintaining a relatively high counting efficiency for neutrons. This improvement in the gamma-ray rejection is a key factor in the development of the high dose neutron detector.« less
A system for counting fetal and maternal red blood cells.
Ge, Ji; Gong, Zheng; Chen, Jun; Liu, Jun; Nguyen, John; Yang, Zongyi; Wang, Chen; Sun, Yu
2014-12-01
The Kleihauer-Betke (KB) test is the standard method for quantitating fetal-maternal hemorrhage in maternal care. In hospitals, the KB test is performed by a certified technologist to count a minimum of 2000 fetal and maternal red blood cells (RBCs) on a blood smear. Manual counting suffers from inherent inconsistency and unreliability. This paper describes a system for automated counting and distinguishing fetal and maternal RBCs on clinical KB slides. A custom-adapted hardware platform is used for KB slide scanning and image capturing. Spatial-color pixel classification with spectral clustering is proposed to separate overlapping cells. Optimal clustering number and total cell number are obtained through maximizing cluster validity index. To accurately identify fetal RBCs from maternal RBCs, multiple features including cell size, roundness, gradient, and saturation difference between cell and whole slide are used in supervised learning to generate feature vectors, to tackle cell color, shape, and contrast variations across clinical KB slides. The results show that the automated system is capable of completing the counting of over 60,000 cells (versus ∼2000 by technologists) within 5 min (versus ∼15 min by technologists). The throughput is improved by approximately 90 times compared to manual reading by technologists. The counting results are highly accurate and correlate strongly with those from benchmarking flow cytometry measurement.
Estimation of Confidence Intervals for Multiplication and Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J
2009-07-17
Helium-3 tubes are used to detect thermal neutrons by charge collection using the {sup 3}He(n,p) reaction. By analyzing the time sequence of neutrons detected by these tubes, one can determine important features about the constitution of a measured object: Some materials such as Cf-252 emit several neutrons simultaneously, while others such as uranium and plutonium isotopes multiply the number of neutrons to form bursts. This translates into unmistakable signatures. To determine the type of materials measured, one compares the measured count distribution with the one generated by a theoretical fission chain model. When the neutron background is negligible, the theoreticalmore » count distributions can be completely characterized by a pair of parameters, the multiplication M and the detection efficiency {var_epsilon}. While the optimal pair of M and {var_epsilon} can be determined by existing codes such as BigFit, the uncertainty on these parameters has not yet been fully studied. The purpose of this work is to precisely compute the uncertainties on the parameters M and {var_epsilon}, given the uncertainties in the count distribution. By considering different lengths of time tagged data, we will determine how the uncertainties on M and {var_epsilon} vary with the different count distributions.« less
NASA Astrophysics Data System (ADS)
Fong, Bernicy S.; Davies, Murray; Deschamps, Pierre
2018-01-01
Timing resolution (or timing jitter) and time walk are separate parameters associated with a detector's response time. Studies have been done mostly on the time resolution of various single-photon detectors. As the designer and manufacturer of the ultra-low noise (ƙ-factor) silicon avalanche photodiode the super low K factor (SLiK) single-photon avalanche diode (SPAD), which is used in many single-photon counting applications, we often get inquiries from customers to better understand how this detector behaves under different operating conditions. Hence, here, we will be focusing on the study of these time-related parameters specifically for the SLiK SPAD, as a way to provide the most direct information for users of this detector to help with its use more efficiently and effectively. We will be providing the study data on how these parameters can be affected by temperature (both intrinsic to the detector chip and environmental input based on operating conditions), operating voltage, photon wavelength, as well as light spot size. How these parameters can be optimized and the trade-offs from optimization from the desired performance will be presented?
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.
2015-01-01
Background Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451–716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions In a Ugandan HIV clinic, ART delivery costs—including VL testing—for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R; Havlir, Diane V; Kahn, James G
2015-01-01
Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Among 197 individuals enrolled in the EARLI Study, median pre-ART CD4+ cell count was 569/uL (IQR 451-716). Observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (+/-VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100-200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (+/- VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. In a Ugandan HIV clinic, ART delivery costs--including VL testing--for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions.
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Jürgen; Gregor, Ingo; Erdmann, Rainer; Enderlein, Jörg
2007-03-01
Time-correlated single photon counting is a powerful method for sensitive time-resolved fluorescence measurements down to the single molecule level. The method is based on the precisely timed registration of single photons of a fluorescence signal. Historically, its primary goal was the determination of fluorescence lifetimes upon optical excitation by a short light pulse. This goal is still important today and therefore has a strong influence on instrument design. However, modifications and extensions of the early designs allow for the recovery of much more information from the detected photons and enable entirely new applications. Here, we present a new instrument that captures single photon events on multiple synchronized channels with picosecond resolution and over virtually unlimited time spans. This is achieved by means of crystal-locked time digitizers with high resolution and very short dead time. Subsequent event processing in programmable logic permits classical histogramming as well as time tagging of individual photons and their streaming to the host computer. Through the latter, any algorithms and methods for the analysis of fluorescence dynamics can be implemented either in real time or offline. Instrument test results from single molecule applications will be presented.
A multi-purpose readout electronics for CdTe and CZT detectors for x-ray imaging applications
NASA Astrophysics Data System (ADS)
Yue, X. B.; Deng, Z.; Xing, Y. X.; Liu, Y. N.
2017-09-01
A multi-purpose readout electronics based on the DPLMS digital filter has been developed for CdTe and CZT detectors for X-ray imaging applications. Different filter coefficients can be synthesized optimized either for high energy resolution at relatively low counting rate or for high rate photon-counting with reduced energy resolution. The effects of signal width constraints, sampling rate and length were numerical studied by Mento Carlo simulation with simple CRRC shaper input signals. The signal width constraint had minor effect and the ENC was only increased by 6.5% when the signal width was shortened down to 2 τc. The sampling rate and length depended on the characteristic time constants of both input and output signals. For simple CR-RC input signals, the minimum number of the filter coefficients was 12 with 10% increase in ENC when the output time constant was close to the input shaping time. A prototype readout electronics was developed for demonstration, using a previously designed analog front ASIC and a commercial ADC card. Two different DPLMS filters were successfully synthesized and applied for high resolution and high counting rate applications respectively. The readout electronics was also tested with a linear array CdTe detector. The energy resolutions of Am-241 59.5 keV peak were measured to be 6.41% in FWHM for the high resolution filter and to be 13.58% in FWHM for the high counting rate filter with 160 ns signal width constraint.
Optimization of single photon detection model based on GM-APD
NASA Astrophysics Data System (ADS)
Chen, Yu; Yang, Yi; Hao, Peiyu
2017-11-01
One hundred kilometers high precision laser ranging hopes the detector has very strong detection ability for very weak light. At present, Geiger-Mode of Avalanche Photodiode has more use. It has high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system index is of great importance to the improvement of photon detection efficiency. Design optimization requires a good model. In this paper, we research the existing Poisson distribution model, and consider the important detector parameters of dark count rate, dead time, quantum efficiency and so on. We improve the optimization of detection model, select the appropriate parameters to achieve optimal photon detection efficiency. The simulation is carried out by using Matlab and compared with the actual test results. The rationality of the model is verified. It has certain reference value in engineering applications.
Lewis, Joanna; Walker, A Sarah; Castro, Hannah; De Rossi, Anita; Gibb, Diana M; Giaquinto, Carlo; Klein, Nigel; Callard, Robin
2012-02-15
Effective therapies and reduced AIDS-related morbidity and mortality have shifted the focus in pediatric human immunodeficiency virus (HIV) from minimizing short-term disease progression to maintaining optimal long-term health. We describe the effects of children's age and pre-antiretroviral therapy (ART) CD4 count on long-term CD4 T-cell reconstitution. CD4 counts in perinatally HIV-infected, therapy-naive children in the Paediatric European Network for the Treatment of AIDS 5 trial were monitored following initiation of ART for a median 5.7 years. In a substudy, naive and memory CD4 counts were recorded. Age-standardized measurements were analyzed using monophasic, asymptotic nonlinear mixed-effects models. One hundred twenty-seven children were studied. Older children had lower age-adjusted CD4 counts in the long term and at treatment initiation (P < .001). At all ages, lower counts before treatment were associated with impaired recovery (P < .001). Age-adjusted naive CD4 counts increased on a timescale comparable to overall CD4 T-cell reconstitution, whereas age-adjusted memory CD4 counts increased less, albeit on a faster timescale. It appears the immature immune system can recover well from HIV infection via the naive pool. However, this potential is progressively damaged with age and/or duration of infection. Current guidelines may therefore not optimize long-term immunological health.
Performance of coincidence-based PSD on LiF/ZnS Detectors for Multiplicity Counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Sean M.; Stave, Sean C.; Lintereur, Azaree
Abstract: Mass accountancy measurement is a nuclear nonproliferation application which utilizes coincidence and multiplicity counters to verify special nuclear material declarations. With a well-designed and efficient detector system, several relevant parameters of the material can be verified simultaneously. 6LiF/ZnS scintillating sheets may be used for this purpose due to a combination of high efficiency and short die-away times in systems designed with this material, but involve choices of detector geometry and exact material composition (e.g., the addition of Ni-quenching in the material) that must be optimized for the application. Multiplicity counting for verification of declared nuclear fuel mass involves neutronmore » detection in conditions where several neutrons arrive in a short time window, with confounding gamma rays. This paper considers coincidence-based Pulse-Shape Discrimination (PSD) techniques developed to work under conditions of high pileup, and the performance of these algorithms with different detection materials. Simulated and real data from modern LiF/ZnS scintillator systems are evaluated with these techniques and the relationship between the performance under pileup and material characteristics (e.g., neutron peak width and total light collection efficiency) are determined, to allow for an optimal choice of detector and material.« less
Martyniak, Brian; Bolton, Jason; Kuksin, Dmitry; Shahin, Suzanne M; Chan, Leo Li-Ying
2017-01-01
Brettanomyces spp. can present unique cell morphologies comprised of excessive pseudohyphae and budding, leading to difficulties in enumerating cells. The current cell counting methods include manual counting of methylene blue-stained yeasts or measuring optical densities using a spectrophotometer. However, manual counting can be time-consuming and has high operator-dependent variations due to subjectivity. Optical density measurement can also introduce uncertainties where instead of individual cells counted, an average of a cell population is measured. In contrast, by utilizing the fluorescence capability of an image cytometer to detect acridine orange and propidium iodide viability dyes, individual cell nuclei can be counted directly in the pseudohyphae chains, which can improve the accuracy and efficiency of cell counting, as well as eliminating the subjectivity from manual counting. In this work, two experiments were performed to demonstrate the capability of Cellometer image cytometer to monitor Brettanomyces concentrations, viabilities, and budding/pseudohyphae percentages. First, a yeast propagation experiment was conducted to optimize software counting parameters for monitoring the growth of Brettanomyces clausenii, Brettanomyces bruxellensis, and Brettanomyces lambicus, which showed increasing cell concentrations, and varying pseudohyphae percentages. The pseudohyphae formed during propagation were counted either as multiple nuclei or a single multi-nuclei organism, where the results of counting the yeast as a single multi-nuclei organism were directly compared to manual counting. Second, a yeast fermentation experiment was conducted to demonstrate that the proposed image cytometric analysis method can monitor the growth pattern of B. lambicus and B. clausenii during beer fermentation. The results from both experiments displayed different growth patterns, viability, and budding/pseudohyphae percentages for each Brettanomyces species. The proposed Cellometer image cytometry method can improve efficiency and eliminate operator-dependent variations of cell counting compared with the traditional methods, which can potentially improve the quality of beverage products employing Brettanomyces yeasts.
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Pfeiffer, M.; Korff, B.; Thurow, J.; Ricken, W.
2010-03-01
We present tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray scale curves from images at pixel resolution. The PEAK tool uses the gray scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a smoothed curve. The zero-crossing algorithm counts every positive and negative halfway passage of the curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the curve into its frequency components before counting positive and negative passages. The algorithms are available at doi:10.1594/PANGAEA.729700. We applied the new methods successfully to tree rings, to well-dated and already manually counted marine varves from Saanich Inlet, and to marine laminae from the Antarctic continental margin. In combination with AMS14C dating, we found convincing evidence that laminations in Weddell Sea sites represent varves, deposited continuously over several millennia during the last glacial maximum. The new tools offer several advantages over previous methods. The counting procedures are based on a moving average generated from gray scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Also, the PEAK tool measures the thickness of each year or season. Since all information required is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prochazka, Ivan, E-mail: prochiva@gmail.com; Blazej, Josef; Kodet, Jan
2016-05-15
The laser time transfer link is under construction for the European Space Agency in the frame of Atomic Clock Ensemble in Space. We have developed and tested the flying unit of the photon counting detector optimized for this space mission. The results are summarized in this Note. An extreme challenge was to build a detector package, which is rugged, small and which provides long term detection delay stability on picosecond level. The device passed successfully all the tests required for space missions on the low Earth orbits. The detector is extremely rugged and compact. Its long term detection delay stabilitymore » is excellent, it is better than ±1 ps/day, in a sense of time deviation it is better than 0.5 ps for averaging times of 2000 s to several hours. The device is capable to operate in a temperature range of −55 °C up to +60 °C, the change of the detection delay with temperature is +0.5 ps/K. The device is ready for integration into the space structure now.« less
Prochazka, Ivan; Kodet, Jan; Blazej, Josef
2016-05-01
The laser time transfer link is under construction for the European Space Agency in the frame of Atomic Clock Ensemble in Space. We have developed and tested the flying unit of the photon counting detector optimized for this space mission. The results are summarized in this Note. An extreme challenge was to build a detector package, which is rugged, small and which provides long term detection delay stability on picosecond level. The device passed successfully all the tests required for space missions on the low Earth orbits. The detector is extremely rugged and compact. Its long term detection delay stability is excellent, it is better than ±1 ps/day, in a sense of time deviation it is better than 0.5 ps for averaging times of 2000 s to several hours. The device is capable to operate in a temperature range of -55 °C up to +60 °C, the change of the detection delay with temperature is +0.5 ps/K. The device is ready for integration into the space structure now.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidian distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
Development of a homogeneous pulse shape discriminating flow-cell radiation detection system
NASA Astrophysics Data System (ADS)
Hastie, K. H.; DeVol, T. A.; Fjeld, R. A.
1999-02-01
A homogeneous flow-cell radiation detection system which utilizes coincidence counting and pulse shape discrimination circuitry was assembled and tested with five commercially available liquid scintillation cocktails. Two of the cocktails, Ultima Flo (Packard) and Mono Flow 5 (National Diagnostics) have low viscosities and are intended for flow applications; and three of the cocktails, Optiphase HiSafe 3 (Wallac), Ultima Gold AB (Packard), and Ready Safe (Beckman), have higher viscosities and are intended for static applications. The low viscosity cocktails were modified with 1-methylnaphthalene to increase their capability for alpha/beta pulse shape discrimination. The sample loading and pulse shape discriminator setting were optimized to give the lowest minimum detectable concentration for alpha radiation in a 30 s count time. Of the higher viscosity cocktails, Optiphase HiSafe 3 had the lowest minimum detectable activities for alpha and beta radiation, 0.2 and 0.4 Bq/ml for 233U and 90Sr/ 90Y, respectively, for a 30 s count time. The sample loading was 70% and the corresponding alpha/beta spillover was 5.5%. Of the low viscosity cocktails, Mono Flow 5 modified with 2.5% (by volume) 1-methylnaphthalene resulted in the lowest minimum detectable activities for alpha and beta radiation; 0.3 and 0.5 Bq/ml for 233U and 90Sr/ 90Y, respectively, for a 30 s count time. The sample loading was 50%, and the corresponding alpha/beta spillover was 16.6%. HiSafe 3 at a 10% sample loading was used to evaluate the system under simulated flow conditions.
NASA Technical Reports Server (NTRS)
Krumins, Valdis; Hummerick, Mary; Levine, Lanfang; Strayer, Richard; Adams, Jennifer L.; Bauer, Jan
2002-01-01
A fixed-film (biofilm) reactor was designed and its performance was determined at various retention times. The goal was to find the optimal retention time for recycling plant nutrients in an advanced life support system, to minimize the size, mass, and volume (hold-up) of a production model. The prototype reactor was tested with aqueous leachate from wheat crop residue at 24, 12, 6, and 3 h hydraulic retention times (HRTs). Biochemical oxygen demand (BOD), nitrates and other plant nutrients, carbohydrates, total phenolics, and microbial counts were monitored to characterize reactor performance. BOD removal decreased significantly from 92% at the 24 h HRT to 73% at 3 h. Removal of phenolics was 62% at the 24 h retention time, but 37% at 3 h. Dissolved oxygen concentrations, nitric acid consumption, and calcium and magnesium removals were also affected by HRT. Carbohydrate removals, carbon dioxide (CO2) productions, denitrification, potassium concentrations, and microbial counts were not affected by different retention times. A 6 h HRT will be used in future studies to determine the suitability of the bioreactor effluent for hydroponic plant production.
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.
2009-12-01
We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average. Hence, the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components, before positive and negative passages are count. We applied the new methods successfully to tree rings and to well-dated and already manually counted marine varves from Saanich Inlet before we adopted the tools to rather complex marine laminae from the Antarctic continental margin. In combination with AMS14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition, a coarser-grained bright layer, and a finer-grained dark layer. The new tools offer several advantages over previous tools. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting. Hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei
2016-09-20
InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs, due to the fact that their performances cannot be improved using different quenching electronics given the same operation conditions. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. The parameters of photon detection efficiency and dark count rate highly depend on the variables of absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating the decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of specific applications can provide an effective approach to design high-performance InGaAs/InP SPADs.
A novel method of personnel cooling in an operating theatre environment.
Casha, Aaron R; Manché, Alexander; Camilleri, Liberato; Gauci, Marilyn; Grima, Joseph N; Borg, Michael A
2014-10-01
An optimized theatre environment, including personal temperature regulation, can help maintain concentration, extend work times and may improve surgical outcomes. However, devices, such as cooling vests, are bulky and may impair the surgeon's mobility. We describe the use of a low-cost, low-energy 'bladeless fan' as a personal cooling device. The safety profile of this device was investigated by testing air quality using 0.5- and 5-µm particle counts as well as airborne bacterial counts on an operating table simulating a wound in a thoracic operation in a busy theatre environment. Particle and bacterial counts were obtained with both an empty and full theatre, with and without the 'bladeless fan'. The use of the 'bladeless fan' within the operating theatre during the simulated operation led to a minor, not statistically significant, lowering of both the particle and bacterial counts. In conclusion, the 'bladeless fan' is a safe, effective, low-cost and low-energy consumption solution for personnel cooling in a theatre environment that maintains the clean room conditions of the operating theatre. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
The optimal on-source region size for detections with counting-type telescopes
NASA Astrophysics Data System (ADS)
Klepser, S.
2017-03-01
Source detection in counting type experiments such as Cherenkov telescopes often involves the application of the classical Eq. (17) from the paper of Li & Ma (1983) to discrete on- and off-source regions. The on-source region is typically a circular area with radius θ in which the signal is expected to appear with the shape of the instrument point spread function (PSF). This paper addresses the question of what is the θ that maximises the probability of detection for a given PSF width and background event density. In the high count number limit and assuming a Gaussian PSF profile, the optimum is found to be at ζ∞2 ≈ 2.51 times the squared PSF width σPSF392. While this number is shown to be a good choice in many cases, a dynamic formula for cases of lower count numbers, which favour larger on-source regions, is given. The recipe to get to this parametrisation can also be applied to cases with a non-Gaussian PSF. This result can standardise and simplify analysis procedures, reduce trials and eliminate the need for experience-based ad hoc cut definitions or expensive case-by-case Monte Carlo simulations.
Power counting to better jet observables
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2014-12-01
Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia 8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.
Zeidler-Erdely, Patti C.; Antonini, James M.; Meighan, Terence G.; Young, Shih-Houng; Eye, Tracy J.; Hammer, Mary Ann; Erdely, Aaron
2016-01-01
Pulmonary toxicity studies often use bronchoalveolar lavage (BAL) to investigate potential adverse lung responses to a particulate exposure. The BAL cellular fraction is counted, using automated (i.e. Coulter Counter®), flow cytometry or manual (i.e. hemocytometer) methods, to determine inflammatory cell influx. The goal of the study was to compare the different counting methods to determine which is optimal for examining BAL cell influx after exposure by inhalation or intratracheal instillation (ITI) to different particles with varying inherent pulmonary toxicities in both rat and mouse models. General findings indicate that total BAL cell counts using the automated and manual methods tended to agree after inhalation or ITI exposure to particle samples that are relatively nontoxic or at later time points after exposure to a pneumotoxic particle when the response resolves. However, when the initial lung inflammation and cytotoxicity was high after exposure to a pneumotoxic particle, significant differences were observed when comparing cell counts from the automated, flow cytometry and manual methods. When using total BAL cell count for differential calculations from the automated method, depending on the cell diameter size range cutoff, the data suggest that the number of lung polymorphonuclear leukocytes (PMN) varies. Importantly, the automated counts, regardless of the size cutoff, still indicated a greater number of total lung PMN when compared with the manual method, which agreed more closely with flow cytometry. The results suggest that either the manual method or flow cytometry would be better suited for BAL studies where cytotoxicity is an unknown variable. PMID:27251196
Takahashi, Kazuhiro; Kurokawa, Tomohiro; Oshiro, Yukio; Fukunaga, Kiyoshi; Sakashita, Shingo; Ohkohchi, Nobuhiro
2016-05-01
Peripheral platelet counts decrease after partial hepatectomy; however, the implications of this phenomenon are unclear. We assessed if the observed decrease in platelet counts was associated with postoperative liver function and morbidity (complications grade ≤ II according to the Clavien-Dindo classification). We enrolled 216 consecutive patients who underwent partial hepatectomy for primary liver cancers, metastatic liver cancers, benign tumors, and donor hepatectomy. We classified patients as either low or high platelet percentage (postoperative platelet count/preoperative platelet count) using the optimal cutoff value calculated by a receiver operating characteristic (ROC) curve analysis, and analyzed risk factors for delayed liver functional recovery and morbidity after hepatectomy. Delayed liver function recovery and morbidity were significantly correlated with the lowest value of platelet percentage based on ROC analysis. Using a cutoff value of 60% acquired by ROC analysis, univariate and multivariate analysis determined that postoperative lowest platelet percentage ≤ 60% was identified as an independent risk factor of delayed liver function recovery (odds ratio (OR) 6.85; P < 0.01) and morbidity (OR, 4.90; P < 0.01). Furthermore, patients with the lowest platelet percentage ≤ 60% had decreased postoperative prothrombin time ratio and serum albumin level and increased serum bilirubin level when compared with patients with platelet percentage ≥ 61%. A greater than 40% decrease in platelet count after partial hepatectomy was an independent risk factor for delayed liver function recovery and postoperative morbidity. In conclusion, the decrease in platelet counts is an early marker to predict the liver function recovery and complications after hepatectomy.
Zeidler-Erdely, Patti C; Antonini, James M; Meighan, Terence G; Young, Shih-Houng; Eye, Tracy J; Hammer, Mary Ann; Erdely, Aaron
2016-08-01
Pulmonary toxicity studies often use bronchoalveolar lavage (BAL) to investigate potential adverse lung responses to a particulate exposure. The BAL cellular fraction is counted, using automated (i.e. Coulter Counter®), flow cytometry or manual (i.e. hemocytometer) methods, to determine inflammatory cell influx. The goal of the study was to compare the different counting methods to determine which is optimal for examining BAL cell influx after exposure by inhalation or intratracheal instillation (ITI) to different particles with varying inherent pulmonary toxicities in both rat and mouse models. General findings indicate that total BAL cell counts using the automated and manual methods tended to agree after inhalation or ITI exposure to particle samples that are relatively nontoxic or at later time points after exposure to a pneumotoxic particle when the response resolves. However, when the initial lung inflammation and cytotoxicity was high after exposure to a pneumotoxic particle, significant differences were observed when comparing cell counts from the automated, flow cytometry and manual methods. When using total BAL cell count for differential calculations from the automated method, depending on the cell diameter size range cutoff, the data suggest that the number of lung polymorphonuclear leukocytes (PMN) varies. Importantly, the automated counts, regardless of the size cutoff, still indicated a greater number of total lung PMN when compared with the manual method, which agreed more closely with flow cytometry. The results suggest that either the manual method or flow cytometry would be better suited for BAL studies where cytotoxicity is an unknown variable.
Aizawa, Emiko; Tsuji, Hirokazu; Asahara, Takashi; Takahashi, Takuya; Teraishi, Toshiya; Yoshida, Sumiko; Ota, Miho; Koga, Norie; Hattori, Kotaro; Kunugi, Hiroshi
2016-09-15
Bifidobacterium and Lactobacillus in the gut have been suggested to have a beneficial effect on stress response and depressive disorder. We examined whether these bacterial counts are reduced in patients with major depressive disorder (MDD) than in healthy controls. Bifidobacterium and Lactobacillus counts in fecal samples were estimated in 43 patients and 57 controls using bacterial rRNA-targeted reverse transcription-quantitative polymerase chain reaction The patients had significantly lower Bifidobacterium counts (P=0.012) and tended to have lower Lactobacillus counts (P=0.067) than the controls. Individuals whose bacterial counts below the optimal cut-off point (9.53 and 6.49log10 cells/g for Bifidobacterium and Lactobacillus, respectively) were significantly more common in the patients than in the controls for both bacteria (Bifidobacterium: odds ratio 3.23, 95% confidence interval [CI] 1.38-7.54, P=0.010; Lactobacillus: 2.57, 95% CI 1.14-5.78, P=0.027). Using the same cut-off points, we observed an association between the bacterial counts and Irritable bowel syndrome. Frequency of fermented milk consumption was associated with higher Bifidobacterium counts in the patients. The findings should be interpreted with caution since effects of gender and diet were not fully taken into account in the analysis. Our results provide direct evidence, for the first time, that individuals with lower Bifidobacterium and/or Lactobacillus counts are more common in patients with MDD compared to controls. Our findings provide new insight into the pathophysiology of MDD and will enhance future research on the use of pro- and prebiotics in the treatment of MDD. Copyright © 2016 Elsevier B.V. All rights reserved.
Long-distance practical quantum key distribution by entanglement swapping.
Scherer, Artur; Sanders, Barry C; Tittel, Wolfgang
2011-02-14
We develop a model for practical, entanglement-based long-distance quantum key distribution employing entanglement swapping as a key building block. Relying only on existing off-the-shelf technology, we show how to optimize resources so as to maximize secret key distribution rates. The tools comprise lossy transmission links, such as telecom optical fibers or free space, parametric down-conversion sources of entangled photon pairs, and threshold detectors that are inefficient and have dark counts. Our analysis provides the optimal trade-off between detector efficiency and dark counts, which are usually competing, as well as the optimal source brightness that maximizes the secret key rate for specified distances (i.e. loss) between sender and receiver.
Validation of the SimSET simulation package for modeling the Siemens Biograph mCT PET scanner
NASA Astrophysics Data System (ADS)
Poon, Jonathan K.; Dahlbom, Magnus L.; Casey, Michael E.; Qi, Jinyi; Cherry, Simon R.; Badawi, Ramsey D.
2015-02-01
Monte Carlo simulation provides a valuable tool in performance assessment and optimization of system design parameters for PET scanners. SimSET is a popular Monte Carlo simulation toolkit that features fast simulation time, as well as variance reduction tools to further enhance computational efficiency. However, SimSET has lacked the ability to simulate block detectors until its most recent release. Our goal is to validate new features of SimSET by developing a simulation model of the Siemens Biograph mCT PET scanner and comparing the results to a simulation model developed in the GATE simulation suite and to experimental results. We used the NEMA NU-2 2007 scatter fraction, count rates, and spatial resolution protocols to validate the SimSET simulation model and its new features. The SimSET model overestimated the experimental results of the count rate tests by 11-23% and the spatial resolution test by 13-28%, which is comparable to previous validation studies of other PET scanners in the literature. The difference between the SimSET and GATE simulation was approximately 4-8% for the count rate test and approximately 3-11% for the spatial resolution test. In terms of computational time, SimSET performed simulations approximately 11 times faster than GATE simulations. The new block detector model in SimSET offers a fast and reasonably accurate simulation toolkit for PET imaging applications.
Compact multiwire proportional counters for the detection of fission fragments
NASA Astrophysics Data System (ADS)
Jhingan, Akhil; Sugathan, P.; Golda, K. S.; Singh, R. P.; Varughese, T.; Singh, Hardev; Behera, B. R.; Mandal, S. K.
2009-12-01
Two large area multistep position sensitive (two dimensional) multiwire proportional counters have been developed for experiments involving study of fission dynamics using general purpose scattering chamber facility at IUAC. Both detectors have an active area of 20×10 cm2 and provide position signals in horizontal (X) and vertical (Y) planes, timing signal for time of flight measurements and energy signal giving the differential energy loss in the active volume. The design features are optimized for the detection of low energy heavy ions at very low gas pressures. Special care was taken in setting up the readout electronics, constant fraction discriminators for position signals in particular, to get optimum position and timing resolutions along with high count rate handling capability of low energy heavy ions. A custom made charge sensitive preamplifier, having lower gain and shorter decay time, has been developed for extracting the differential energy loss signal. The position and time resolutions of the detectors were determined to be 1.1 mm full width at half maximum (FWHM) and 1.7 ns FWHM, respectively. The detector could handle heavy ion count rates exceeding 20 kHz without any breakdown. Time of flight signal in combination with differential energy loss signal gives a clean separation of fission fragments from projectile and target like particles. The timing and position signals of the detectors are used for fission coincidence measurements and subsequent extraction of their mass, angular, and total kinetic energy distributions. This article describes systematic study of these fission counters in terms of efficiency, time resolution, count rate handling capability, position resolution, and the readout electronics. The detector has been operated with both five electrode geometry and four electrode geometry, and a comparison has been made in their performances.
Optimization of Stochastic Response Surfaces Subject to Constraints with Linear Programming
1992-03-01
SEXTPT(EPDIM,NVAR), box(STEP,NVAR), SDEV(3) REAL BA-SET(NL,BCDIM,M,NVAR),BA(M,NVAR),CBA(NVAR) REAL CB(M), BMAT (MM),B _TEST(M) COMMON OPTBASIS, OPTEP...0.0) THEN COUNT - COUNT+1 XBASIC(N,SET,COUNT) = I DO 136 J - 1, M BMAT (J,COUNT) = A(J,I) 136 CONTINUE ENDIF 137 CONTINUE IF(COUNT.GT.M) WRITE...SET,I)= 0.0 DO 140 J = 1, M BMAT (J,I) = 0.0 140 CONTINUE 142 CONTINUE DO 148 I= 1, M BTEST(I) = 0.0 64 DO 146 J -1, NVAR BTEST(I)= BTEST(I)+XSOL(J)*A
2017-01-01
Summary The present study was done to optimize the power ultrasound processing for maximizing diastase activity of and minimizing hydroxymethylfurfural (HMF) content in honey using response surface methodology. Experimental design with treatment time (1-15 min), amplitude (20-100%) and volume (40-80 mL) as independent variables under controlled temperature conditions was studied and it was concluded that treatment time of 8 min, amplitude of 60% and volume of 60 mL give optimal diastase activity and HMF content, i.e. 32.07 Schade units and 30.14 mg/kg, respectively. Further thermal profile analyses were done with initial heating temperatures of 65, 75, 85 and 95 ºC until temperature of honey reached up to 65 ºC followed by holding time of 25 min at 65 ºC, and the results were compared with thermal profile of honey treated with optimized power ultrasound. The quality characteristics like moisture, pH, diastase activity, HMF content, colour parameters and total colour difference were least affected by optimized power ultrasound treatment. Microbiological analysis also showed lower counts of aerobic mesophilic bacteria and in ultrasonically treated honey than in thermally processed honey samples complete destruction of coliforms, yeasts and moulds. Thus, it was concluded that power ultrasound under suggested operating conditions is an alternative nonthermal processing technique for honey. PMID:29540991
Yi, Paul H; Cross, Michael B; Moric, Mario; Sporer, Scott M; Berger, Richard A; Della Valle, Craig J
2014-02-01
Diagnosis of periprosthetic joint infection (PJI) can be difficult in the early postoperative period after total hip arthroplasty (THA) because normal cues from the physical examination often are unreliable, and serological markers commonly used for diagnosis are elevated from the recent surgery. The purposes of this study were to determine the optimal cutoff values for erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), synovial fluid white blood cell (WBC) count, and differential for diagnosing PJI in the early postoperative period after primary THA. We reviewed 6033 consecutive primary THAs and identified 73 patients (1.2%) who underwent reoperation for any reason within the first 6 weeks postoperatively. Thirty-six of these patients were infected according to modified Musculoskeletal Infection Society criteria. Mean values for the diagnostic tests were compared between groups and receiver operating characteristic curves generated along with an area under the curve (AUC) to determine test performance and optimal cutoff values to diagnose infection. The best test for the diagnosis of PJI was the synovial fluid WBC count (AUC = 98%; optimal cutoff value 12,800 cells/μL) followed by the CRP (AUC = 93%; optimal cutoff value 93 mg/L), and synovial fluid differential (AUC = 91%; optimal cutoff value 89% PMN). The mean ESR (infected = 69 mm/hr, not infected = 46 mm/hr), CRP (infected = 192 mg/L, not infected = 30 mg/L), synovial fluid WBC count (infected = 84,954 cells/μL, not infected = 2391 cells/μL), and differential (infected = 91% polymorphonuclear cells [PMN], not infected = 63% PMN) all were significantly higher in the infected group. Optimal cutoff values for the diagnosis of PJI in the acute postoperative period were higher than those traditionally used for the diagnosis of chronic PJI. The serum CRP is an excellent screening test, whereas the synovial fluid WBC count is more specific.
Behera, G; Sutar, P P; Aditya, S
2017-11-01
The commercially available dry turmeric powder at 10.34% d.b. moisture content was decontaminated using microwaves at high power density for short time. To avoid the loss of moisture from turmeric due to high microwave power, the drying kinetics were modelled and considered during optimization of microwave decontamination process. The effect of microwave power density (10, 33.5 and 57 W g -1 ), exposure time (10, 20 and 30 s) and thickness of turmeric layer (1, 2 and 3 mm) on total plate, total yeast and mold (YMC) counts, color change (∆E), average final temperature of the product (T af ), water activity (a w ), Page model rate constant (k) and total moisture loss (ML) was studied. The perturbation analysis was carried out for all variables. It was found that to achieve more than one log reduction in yeast and mold count, a substantial reduction in moisture content takes place leading to the reduced output. The microwave power density significantly affected the YMC, T af and a w of turmeric powder. But the thickness of sample and microwave exposure time showed effect only on T af , a w and ML. The colour of turmeric and Page model rate constant were not significantly changed during the process as anticipated. The numerical optimization was done at 57.00 W g -1 power density, 1.64 mm thickness of sample layer and 30 s exposure time. It resulted into 1.6 × 10 7 CFU g -1 YMC, 82.71 °C T af , 0.383 a w and 8.41% (d.b.) final moisture content.
Optimization of PET instrumentation for brain activation studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlbom, M.; Cherry, S.R.; Hoffman, E.J.
By performing cerebral blood flow studies with positron emission tomography (PET), and comparing blood flow images of different states of activation, functional mapping of the brain is possible. The ability of current commercial instruments to perform such studies is investigated in this work, based on a comparison of noise equivalent count (NEC) rates. Differences in the NEC performance of the different scanners in conjunction with scanner design parameters, provide insights into the importance of block design (size, dead time, crystal thickness) and overall scanner design (sensitivity and scatter fraction) for optimizing data from activation studies. The newer scanners with removablemore » septa, operating with 3-D acquisition, have much higher sensitivity, but require new methodology for optimized operation. Only by administering multiple low doses (fractionation) of the flow tracer can the high sensitivity be utilized.« less
PageRank as a method to rank biomedical literature by importance.
Yates, Elliot J; Dixon, Louise C
2015-01-01
Optimal ranking of literature importance is vital in overcoming article overload. Existing ranking methods are typically based on raw citation counts, giving a sum of 'inbound' links with no consideration of citation importance. PageRank, an algorithm originally developed for ranking webpages at the search engine, Google, could potentially be adapted to bibliometrics to quantify the relative importance weightings of a citation network. This article seeks to validate such an approach on the freely available, PubMed Central open access subset (PMC-OAS) of biomedical literature. On-demand cloud computing infrastructure was used to extract a citation network from over 600,000 full-text PMC-OAS articles. PageRanks and citation counts were calculated for each node in this network. PageRank is highly correlated with citation count (R = 0.905, P < 0.01) and we thus validate the former as a surrogate of literature importance. Furthermore, the algorithm can be run in trivial time on cheap, commodity cluster hardware, lowering the barrier of entry for resource-limited open access organisations. PageRank can be trivially computed on commodity cluster hardware and is linearly correlated with citation count. Given its putative benefits in quantifying relative importance, we suggest it may enrich the citation network, thereby overcoming the existing inadequacy of citation counts alone. We thus suggest PageRank as a feasible supplement to, or replacement of, existing bibliometric ranking methods.
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
2015-08-01
lifetime ( t2 ) corresponds to protein- bound NADH (23). Conversely, protein-bound FAD corre- sponds to the short lifetime, whereas free FAD corresponds...single photon counting (TCSPC) electronics (SPC-150, Becker and Hickl). TCSPC uses a fast detector PMT to measure the time between a laser pulse and... Becker and Hickl). A binning of nine surrounding pixels was used. Then, the fluorescence lifetime components were computed for each pixel by deconvolving
Bassiouny, M R; El-Chennawi, F; Mansour, A K; Yahia, S; Darwish, A
2015-06-01
Umbilical cord blood (UCB) contains stem cells and can be used as an alternative to bone marrow transplantation. Engraftment is dependent on the total nucleated cell (TNC) and CD34+ cell counts of the cord blood units. This study was designed to evaluate the effect of the method of collection of the UCB on the yield of the cord blood units. Informed consent was obtained from 100 eligible mothers for donation of cord blood. Both in utero and ex utero methods were used for collection. The cord blood volume was measured. The TNC and the CD34+ cell counts were enumerated. We have found that in utero collection gave significantly larger volumes of cord blood and higher TNC counts than ex utero collection. There was no significant difference between both methods regarding the CD34+ cell counts. This study revealed a significant correlation between the volume of the collected cord blood and both TNC and CD34+ cell counts. It is better to collect cord blood in utero before placental delivery to optimize the quality of the cord blood unit. © 2015 AABB.
Effective count rates for PET scanners with reduced and extended axial field of view
NASA Astrophysics Data System (ADS)
MacDonald, L. R.; Harrison, R. L.; Alessio, A. M.; Hunter, W. C. J.; Lewellen, T. K.; Kinahan, P. E.
2011-06-01
We investigated the relationship between noise equivalent count (NEC) and axial field of view (AFOV) for PET scanners with AFOVs ranging from one-half to twice those of current clinical scanners. PET scanners with longer or shorter AFOVs could fulfill different clinical needs depending on exam volumes and site economics. Using previously validated Monte Carlo simulations, we modeled true, scattered and random coincidence counting rates for a PET ring diameter of 88 cm with 2, 4, 6, and 8 rings of detector blocks (AFOV 7.8, 15.5, 23.3, and 31.0 cm). Fully 3D acquisition mode was compared to full collimation (2D) and partial collimation (2.5D) modes. Counting rates were estimated for a 200 cm long version of the 20 cm diameter NEMA count-rate phantom and for an anthropomorphic object based on a patient scan. We estimated the live-time characteristics of the scanner from measured count-rate data and applied that estimate to the simulated results to obtain NEC as a function of object activity. We found NEC increased as a quadratic function of AFOV for 3D mode, and linearly in 2D mode. Partial collimation provided the highest overall NEC on the 2-block system and fully 3D mode provided the highest NEC on the 8-block system for clinically relevant activities. On the 4-, and 6-block systems 3D mode NEC was highest up to ~300 MBq in the anthropomorphic phantom, above which 3D NEC dropped rapidly, and 2.5D NEC was highest. Projected total scan time to achieve NEC-density that matches current clinical practice in a typical oncology exam averaged 9, 15, 24, and 61 min for the 8-, 6-, 4-, and 2-block ring systems, when using optimal collimation. Increasing the AFOV should provide a greater than proportional increase in NEC, potentially benefiting patient throughput-to-cost ratio. Conversely, by using appropriate collimation, a two-ring (7.8 cm AFOV) system could acquire whole-body scans achieving NEC-density levels comparable to current standards within long, but feasible, scan times.
Characterization and optimization of an optical and electronic architecture for photon counting
NASA Astrophysics Data System (ADS)
Correa, M. del M.; Pérez, F. R.
2018-04-01
This work shows a time-domain method for the discrimination and digitization of pulses coming from optical detectors, considering the presence of electronic noise and afterpulsing. The developed signal processing scheme is based on a time-to-digital converter (TDC) and a voltage discriminator. After setting appropriate parameters for taking spectra, acquisition data was corrected by wavelength, intensity response function, and noise suppression. The performance of this scheme is discussed by its characterization as well as the comparison of its spectra to those obtained by an Ocean Optics HR4000 commercial reference.
NASA Astrophysics Data System (ADS)
Chapon, Arnaud; Pigrée, Gilbert; Putmans, Valérie; Rogel, Gwendal
Search for low-energy β contaminations in industrial environments requires using Liquid Scintillation Counting. This indirect measurement method supposes a fine control from sampling to measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for both smears and aqueous samples' characterization. That includes choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters.
Neilan, Anne M.; Karalius, Brad; Patel, Kunjal; Van Dyke, Russell B.; Abzug, Mark J.; Agwu, Allison L.; Williams, Paige L.; Purswani, Murli; Kacanek, Deborah; Oleske, James M.; Burchett, Sandra K.; Wiznia, Andrew; Chernoff, Miriam; Seage, George R.; Ciaranello, Andrea L.
2017-01-01
Importance As perinatally HIV-infected youth (PHIVY) in the US grow older and more treatment-experienced, clinicians need updated information about the impact of age, CD4 count, viral load (VL), and antiretroviral drug (ARV) use on risks of opportunistic infections (OIs), key clinical events, and mortality in order to understand patient risks and improve care. Objective To determine the incidence or first occurrence during follow-up of key clinical events (including CDC-B and CDC-C events) and mortality among PHIVY stratified by age, CD4, and VL/ARV status. Design In the PHACS Adolescent Master Protocol (AMP) and IMPAACT P1074 multicenter cohort studies (2007–2015), we estimated event rates during person-time spent in key strata of age (7–12, 13–17, and 18–30 years), CD4 count (<200, 200–499, and ≥500 cells/μL), and VL/ARV status (< or ≥ 400 copies/mL; ARVs or no ARVs). Setting 41 ambulatory sites in the US, including Puerto Rico. Participants 1,562 participants in AMP and P1074 were eligible, 1446 PHIVY were included. Exposure(s) for observational studies Age, CD4 count, VL, ARV use. Main outcomes Clinical event rates stratified by person-time in age, CD4 count, and VL/ARV categories. Results During a mean follow-up of 4.9 years, higher incidences of CDC-B events, CDC-C events and mortality were observed as participants aged. Older PHIVY (13–17 and 18–30 year-olds) spent more time with VL ≥400 copies/mL and with CD4 <200/μL compared to 7–12 year-olds (30% and 44% vs. 22% of person-time with VL ≥400 copies/mL; 5% and 18% vs. 2% of person-time with CD4 <200/μL; p<0.01 for each comparison). We observed higher rates of CDC-B events, CDC-C events, bacterial infections, and mortality at lower CD4 counts, as expected. The mortality rate in older PHIVY was 6–12 times that of the general US population. Higher rates of sexually transmitted infections were also observed at lower CD4 counts, after adjusting for age. Conclusions and relevance Older PHIVY were at increased risk of viremia, immunosuppression, CDC-B events, CDC-C events, and mortality. Interventions to improve ART adherence and optimize models of care for PHIVY as they age are urgently needed to improve long-term outcomes among PHIVY. PMID:28346597
Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-01-01
Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo
2016-05-01
High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.
NASA Astrophysics Data System (ADS)
Idrees, Mohammed Oludare; Pradhan, Biswajeet; Buchroithner, Manfred F.; Shafri, Helmi Zulhaidi Mohd; Khairunniza Bejo, Siti
2016-07-01
As far back as early 15th century during the reign of the Ming Dynasty (1368 to 1634 AD), Gomantong cave in Sabah (Malaysia) has been known as one of the largest roosting sites for wrinkle-lipped bats (Chaerephon plicata) and swiftlet birds (Aerodramus maximus and Aerodramus fuciphagus) in very large colonies. Until recently, no study has been done to quantify or estimate the colony sizes of these inhabitants in spite of the grave danger posed to this avifauna by human activities and potential habitat loss to postspeleogenetic processes. This paper evaluates the transferability of a hybrid optimization image analysis-based method developed to detect and count cave roosting birds. The method utilizes high-resolution terrestrial laser scanning intensity image. First, segmentation parameters were optimized by integrating objective function and the statistical Taguchi methods. Thereafter, the optimized parameters were used as input into the segmentation and classification processes using two images selected from Simud Hitam (lower cave) and Simud Putih (upper cave) of the Gomantong cave. The result shows that the method is capable of detecting birds (and bats) from the image for accurate population censusing. A total number of 9998 swiftlet birds were counted from the first image while 1132 comprising of both bats and birds were obtained from the second image. Furthermore, the transferability evaluation yielded overall accuracies of 0.93 and 0.94 (area under receiver operating characteristic curve) for the first and second image, respectively, with p value of <0.0001 at 95% confidence level. The findings indicate that the method is not only efficient for the detection and counting cave birds for which it was developed for but also useful for counting bats; thus, it can be adopted in any cave.
Optimization and performance evaluation of the microPET II scanner for in vivo small-animal imaging
NASA Astrophysics Data System (ADS)
Yang, Yongfeng; Tai, Yuan-Chuan; Siegel, Stefan; Newport, Danny F.; Bai, Bing; Li, Quanzheng; Leahy, Richard M.; Cherry, Simon R.
2004-06-01
MicroPET II is a newly developed PET (positron emission tomography) scanner designed for high-resolution imaging of small animals. It consists of 17 640 LSO crystals each measuring 0.975 × 0.975 × 12.5 mm3, which are arranged in 42 contiguous rings, with 420 crystals per ring. The scanner has an axial field of view (FOV) of 4.9 cm and a transaxial FOV of 8.5 cm. The purpose of this study was to carefully evaluate the performance of the system and to optimize settings for in vivo mouse and rat imaging studies. The volumetric image resolution was found to depend strongly on the reconstruction algorithm employed and averaged 1.1 mm (1.4 µl) across the central 3 cm of the transaxial FOV when using a statistical reconstruction algorithm with accurate system modelling. The sensitivity, scatter fraction and noise-equivalent count (NEC) rate for mouse- and rat-sized phantoms were measured for different energy and timing windows. Mouse imaging was optimized with a wide open energy window (150-750 keV) and a 10 ns timing window, leading to a sensitivity of 3.3% at the centre of the FOV and a peak NEC rate of 235 000 cps for a total activity of 80 MBq (2.2 mCi) in the phantom. Rat imaging, due to the higher scatter fraction, and the activity that lies outside of the field of view, achieved a maximum NEC rate of 24 600 cps for a total activity of 80 MBq (2.2 mCi) in the phantom, with an energy window of 250-750 keV and a 6 ns timing window. The sensitivity at the centre of the FOV for these settings is 2.1%. This work demonstrates that different scanner settings are necessary to optimize the NEC count rate for different-sized animals and different injected doses. Finally, phantom and in vivo animal studies are presented to demonstrate the capabilities of microPET II for small-animal imaging studies.
A cylindrical SPECT camera with de-centralized readout scheme
NASA Astrophysics Data System (ADS)
Habte, F.; Stenström, P.; Rillbert, A.; Bousselham, A.; Bohm, C.; Larsson, S. A.
2001-09-01
An optimized brain single photon emission computed tomograph (SPECT) camera is being designed at Stockholm University and Karolinska Hospital. The design goal is to achieve high sensitivity, high-count rate and high spatial resolution. The sensitivity is achieved by using a cylindrical crystal, which gives a closed geometry with large solid angles. A de-centralized readout scheme where only a local environment around the light excitation is readout supports high-count rates. The high resolution is achieved by using an optimized crystal configuration. A 12 mm crystal plus 12 mm light guide combination gave an intrinsic spatial resolution better than 3.5 mm (140 keV) in a prototype system. Simulations show that a modified configuration can improve this value. A cylindrical configuration with a rotating collimator significantly simplifies the mechanical design of the gantry. The data acquisition and control system uses early digitization and subsequent digital signal processing to extract timing and amplitude information, and monitors the position of the collimator. The readout system consists of 12 or more modules each based on programmable logic and a digital signal processor. The modules send data to a PC file server-reconstruction engine via a Firewire (IEEE-1394) network.
Production of Engineered Fabrics Using Artificial Neural Network-Genetic Algorithm Hybrid Model
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Banerjee, Debamalya
2015-10-01
The process of fabric engineering which is generally practised in most of the textile mills is very complicated, repetitive, tedious and time consuming. To eliminate this trial and error approach, a new approach of fabric engineering has been attempted in this work. Data sets of construction parameters [comprising of ends per inch, picks per inch, warp count and weft count] and three fabric properties (namely drape coefficient, air permeability and thermal resistance) of 25 handloom cotton fabrics have been used. The weights and biases of three artificial neural network (ANN) models developed for the prediction of drape coefficient, air permeability and thermal resistance were used to formulate the fitness or objective function and constraints of the optimization problem. The optimization problem was solved using genetic algorithm (GA). In both the fabrics which were attempted for engineering, the target and simulated fabric properties were very close. The GA was able to search the optimum set of fabric construction parameters with reasonably good accuracy except in case of EPI. However, the overall result is encouraging and can be improved further by using larger data sets of handloom fabrics by hybrid ANN-GA model.
Self-esteem and optimism in men and women infected with HIV.
Anderson, E H
2000-01-01
Self-esteem and optimism have been associated with appraisal and outcomes in a variety of situations. The degree to which the contribution of self-esteem and optimism to outcomes over time is accounted for by the differences in threat (primary) or resource (secondary) appraisal has not been established in persons with human immunodeficiency virus (HIV). To examine the longitudinal relationship of personality (self-esteem and optimism) on primary and secondary appraisal and outcomes of well-being, mood, CD4+ T-lymphocyte count, and selected activities. Men (n = 56) and women (n = 42) infected with HIV completed eight self-report measures twice over 18 months. Hierarchical Multiple Regressions were used to examine the relationship of personality variables on appraisals and outcomes. The mediating effects of primary and secondary appraisals were explored. Self-esteem uniquely accounted for 6% of the variance in primary appraisal and 5% in secondary appraisal. Optimism accounted for 8% of the unique variance in secondary appraisal. Primary and secondary appraisal mediated differently between personality and outcome variables. A strong predictor of well-being, mood disturbance, and activity disruption at Time 2 was participants' initial level of these variables. Socioeconomic status was a strong predictor of mood. Self-esteem and optimism are important but different resources for adapting to HIV disease. Strategies for reducing threats and increasing resources associated with HIV may improve an individual's mood and sense of well-being.
NASA Astrophysics Data System (ADS)
Chen, Buxin; Zhang, Zheng; Sidky, Emil Y.; Xia, Dan; Pan, Xiaochuan
2017-11-01
Optimization-based algorithms for image reconstruction in multispectral (or photon-counting) computed tomography (MCT) remains a topic of active research. The challenge of optimization-based image reconstruction in MCT stems from the inherently non-linear data model that can lead to a non-convex optimization program for which no mathematically exact solver seems to exist for achieving globally optimal solutions. In this work, based upon a non-linear data model, we design a non-convex optimization program, derive its first-order-optimality conditions, and propose an algorithm to solve the program for image reconstruction in MCT. In addition to consideration of image reconstruction for the standard scan configuration, the emphasis is on investigating the algorithm’s potential for enabling non-standard scan configurations with no or minimum hardware modification to existing CT systems, which has potential practical implications for lowered hardware cost, enhanced scanning flexibility, and reduced imaging dose/time in MCT. Numerical studies are carried out for verification of the algorithm and its implementation, and for a preliminary demonstration and characterization of the algorithm in reconstructing images and in enabling non-standard configurations with varying scanning angular range and/or x-ray illumination coverage in MCT.
Artificial neural network-aided image analysis system for cell counting.
Sjöström, P J; Frydel, B R; Wahlberg, L U
1999-05-01
In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and lO0x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray inmore » the cascade was consistent with the known intermediate state half-life of 85ns. The gamma-rays are transported through a model of a Siemens dual head Symbia “S” gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171keV gamma-ray followed by a single 245keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10uCi to 5mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences albeit at the expense of true coincidences. A timing window range of 200–500ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.« less
A Protocol for Real-time 3D Single Particle Tracking.
Hou, Shangguo; Welsher, Kevin
2018-01-03
Real-time three-dimensional single particle tracking (RT-3D-SPT) has the potential to shed light on fast, 3D processes in cellular systems. Although various RT-3D-SPT methods have been put forward in recent years, tracking high speed 3D diffusing particles at low photon count rates remains a challenge. Moreover, RT-3D-SPT setups are generally complex and difficult to implement, limiting their widespread application to biological problems. This protocol presents a RT-3D-SPT system named 3D Dynamic Photon Localization Tracking (3D-DyPLoT), which can track particles with high diffusive speed (up to 20 µm 2 /s) at low photon count rates (down to 10 kHz). 3D-DyPLoT employs a 2D electro-optic deflector (2D-EOD) and a tunable acoustic gradient (TAG) lens to drive a single focused laser spot dynamically in 3D. Combined with an optimized position estimation algorithm, 3D-DyPLoT can lock onto single particles with high tracking speed and high localization precision. Owing to the single excitation and single detection path layout, 3D-DyPLoT is robust and easy to set up. This protocol discusses how to build 3D-DyPLoT step by step. First, the optical layout is described. Next, the system is calibrated and optimized by raster scanning a 190 nm fluorescent bead with the piezoelectric nanopositioner. Finally, to demonstrate real-time 3D tracking ability, 110 nm fluorescent beads are tracked in water.
Chen, Ying; Lin, Li
2017-07-01
Preeclampsia is a relatively common complication of pregnancy and considered to be associated with different degrees of coagulation dysfunction. This study was developed to evaluate the potential value of coagulation parameters for suggesting preeclampsia during the third trimester of pregnancy. Data from 188 healthy pregnant women, 125 patients with preeclampsia in the third trimester and 120 age-matched nonpregnant women were analyzed. Prothrombin time, prothrombin activity, activated partial thromboplastin time, fibrinogen (Fg), antithrombin, platelet count, mean platelet volume, platelet distribution width and plateletcrit were tested. All parameters, excluding prothrombin time, platelet distribution width and plateletcrit, differed significantly between healthy pregnant women and those with preeclampsia. Platelet count, antithrombin and Fg were significantly lower and mean platelet volume and prothrombin activity were significantly higher in patients with preeclampsia (P < 0.001). Among these parameters, the largest area under the receiver operating characteristic curve for preeclampsia was 0.872 for Fg with an optimal cutoff value of ≤2.87g/L (sensitivity = 0.68 and specificity = 0.98). For severe preeclampsia, the area under the curve for Fg reached up to 0.922 with the same optimal cutoff value (sensitivity = 0.84, specificity = 0.98, positive predictive value = 0.96 and negative predictive value = 0.93). Fg is a biomarker suggestive of preeclampsia in the third trimester of pregnancy, and our data provide a potential cutoff value of Fg ≤ 2.87g/L for screening preeclampsia, especially severe preeclampsia. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Hu, Weifei; Park, Dohyun; Choi, DongHoon
2013-12-01
A composite blade structure for a 2 MW horizontal axis wind turbine is optimally designed. Design requirements are simultaneously minimizing material cost and blade weight while satisfying the constraints on stress ratio, tip deflection, fatigue life and laminate layup requirements. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade element wind load model is proposed to explain the wind pressure difference due to blade height change during rotor rotation. For fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to the histograms of mean and amplitude of maximum stress ratio using the rainflow counting algorithm Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure an evolutionary algorithm is used to solve the discrete optimization problem.
Karon, Brad S; Tolan, Nicole V; Wockenfus, Amy M; Block, Darci R; Baumann, Nikola A; Bryant, Sandra C; Clements, Casey M
2017-11-01
Lactate, white blood cell (WBC) and neutrophil count, procalcitonin and immature granulocyte (IG) count were compared for the prediction of sepsis, and severe sepsis or septic shock, in patients presenting to the emergency department (ED). We prospectively enrolled 501 ED patients with a sepsis panel ordered for suspicion of sepsis. WBC, neutrophil, and IG counts were measured on a Sysmex XT-2000i analyzer. Lactate was measured by i-STAT, and procalcitonin by Brahms Kryptor. We classified patients as having sepsis using a simplification of the 1992 consensus conference sepsis definitions. Patients with sepsis were further classified as having severe sepsis or septic shock using established criteria. Univariate receiver operating characteristic (ROC) analysis was performed to determine odds ratio (OR), area under the ROC curve (AUC), and sensitivity/specificity at optimal cut-off for prediction of sepsis (vs. no sepsis), and prediction of severe sepsis or septic shock (vs. no sepsis). There were 267 patients without sepsis; and 234 with sepsis, including 35 patients with severe sepsis or septic shock. Lactate had the highest OR (1.44, 95th% CI 1.20-1.73) for the prediction of sepsis; while WBC, neutrophil count and percent (neutrophil/WBC) had OR>1.00 (p<0.05). All biomarkers had AUC<0.70 and sensitivity and specificity <70% at the optimal cut-off. Initial lactate was the best biomarker for predicting severe sepsis or septic shock, with an odds ratio (95th% CI) of 2.70 (2.02-3.61) and AUC 0.89 (0.82-0.96). Traditional biomarkers (lactate, WBC, neutrophil count, procalcitonin, IG) have limited utility in the prediction of sepsis. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Krasteva, Denitza T.
1998-01-01
Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.) This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.
Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adaptingmore » Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportionate to the number of triangle changes per frame, which is typically a few percent of the output mesh size, hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.« less
NASA Astrophysics Data System (ADS)
Hall, Donald
Under a current award, NASA NNX 13AC13G "EXTENDING THE ASTRONOMICAL APPLICATION OF PHOTON COUNTING HgCdTe LINEAR AVALANCHE PHOTODIODE ARRAYS TO LOW BACKGROUND SPACE OBSERVATIONS" UH has used Selex SAPHIRA 320 x 256 MOVPE L-APD HgCdTe arrays developed for Adaptive Optics (AO) wavefront (WF) sensing to investigate the potential of this technology for low background space astronomy applications. After suppressing readout integrated circuit (ROIC) glow, we have placed upper limits on gain normalized dark current of 0.01 e-/sec at up to 8 volts avalanche bias, corresponding to avalanche gain of 5, and have operated with avalanche gains of up to several hundred at higher bias. We have also demonstrated detection of individual photon events. The proposed investigation would scale the format to 1536 x 1536 at 12um (the largest achievable in a standard reticule without requiring stitching) while incorporating reference pixels required at these low dark current levels. The primary objective is to develop, produce and characterize a 1.5k x 1.5k at 12um pitch MOVPE HgCdTe L-APD array, with nearly 30 times the pixel count of the 320 x 256 SAPHIRA, optimized for low background space astronomy. This will involve: 1) Selex design of a 1.5k x 1.5k at 12um pitch ROIC optimized for low background operation, silicon wafer fabrication at the German XFab foundry in 0.35 um 3V3 process and dicing/test at Selex, 2) provision by GL Scientific of a 3-side close-buttable carrier building from the heritage of the HAWAII xRG family, 3) Selex development and fabrication of 1.5k x 1.5k at 12 um pitch MOVPE HgCdTe L-APD detector arrays optimized for low background applications, 4) hybridization, packaging into a sensor chip assembly (SCA) with initial characterization by Selex and, 5) comprehensive characterization of low background performance, both in the laboratory and at ground based telescopes, by UH. The ultimate goal is to produce and eventually market a large format array, the L-APD equivalent of the Teledyne H1RG and H2RG, able to achieve sub-electron read noise and count 1 - 5 um photons with high quantum efficiency and low dark count rate while preserving their Poisson statistics and noise.
Ruberu, Shryamalie R; Liu, Yun-Gang; Wong, Carolyn T; Perera, S Kusum; Langlois, Gregg W; Doucette, Gregory J; Powell, Christine L
2003-01-01
A receptor binding assay (RBA) for detection of paralytic shellfish poisoning (PSP) toxins was formatted for use in a high throughput detection system using microplate scintillation counting. The RBA technology was transferred from the National Ocean Service, which uses a Wallac TriLux 1450 MicroBeta microplate scintillation counter, to the California Department of Health Services, which uses a Packard TopCount scintillation counter. Due to differences in the detector arrangement between these 2 counters, markedly different counting efficiencies were exhibited, requiring optimization of the RBA protocol for the TopCount instrument. Precision, accuracy, and sensitivity [limit of detection = 0.2 microg saxitoxin (STX) equiv/100 g shellfish tissue] of the modified protocol were equivalent to those of the original protocol. The RBA robustness and adaptability were demonstrated by an interlaboratory study, in which STX concentrations in shellfish generated by the TopCount were consistent with MicroBeta-derived values. Comparison of STX reference standards obtained from the U.S. Food and Drug Administration and the National Research Council, Canada, showed no observable differences. This study confirms the RBA's value as a rapid, high throughput screen prior to testing by the conventional mouse bioassay (MBA) and its suitability for providing an early warning of increasing PSP toxicity when toxin levels are below the MBA limit of detection.
Surpassing Humans and Computers with JellyBean: Crowd-Vision-Hybrid Counting Algorithms.
Sarma, Akash Das; Jain, Ayush; Nandi, Arnab; Parameswaran, Aditya; Widom, Jennifer
2015-11-01
Counting objects is a fundamental image processisng primitive, and has many scientific, health, surveillance, security, and military applications. Existing supervised computer vision techniques typically require large quantities of labeled training data, and even with that, fail to return accurate results in all but the most stylized settings. Using vanilla crowd-sourcing, on the other hand, can lead to significant errors, especially on images with many objects. In this paper, we present our JellyBean suite of algorithms, that combines the best of crowds and computer vision to count objects in images, and uses judicious decomposition of images to greatly improve accuracy at low cost. Our algorithms have several desirable properties: (i) they are theoretically optimal or near-optimal , in that they ask as few questions as possible to humans (under certain intuitively reasonable assumptions that we justify in our paper experimentally); (ii) they operate under stand-alone or hybrid modes, in that they can either work independent of computer vision algorithms, or work in concert with them, depending on whether the computer vision techniques are available or useful for the given setting; (iii) they perform very well in practice, returning accurate counts on images that no individual worker or computer vision algorithm can count correctly, while not incurring a high cost.
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
Quantification of Covariance in Tropical Cyclone Activity across Teleconnected Basins
NASA Astrophysics Data System (ADS)
Tolwinski-Ward, S. E.; Wang, D.
2015-12-01
Rigorous statistical quantification of natural hazard covariance across regions has important implications for risk management, and is also of fundamental scientific interest. We present a multivariate Bayesian Poisson regression model for inferring the covariance in tropical cyclone (TC) counts across multiple ocean basins and across Saffir-Simpson intensity categories. Such covariability results from the influence of large-scale modes of climate variability on local environments that can alternately suppress or enhance TC genesis and intensification, and our model also simultaneously quantifies the covariance of TC counts with various climatic modes in order to deduce the source of inter-basin TC covariability. The model explicitly treats the time-dependent uncertainty in observed maximum sustained wind data, and hence the nominal intensity category of each TC. Differences in annual TC counts as measured by different agencies are also formally addressed. The probabilistic output of the model can be probed for probabilistic answers to such questions as: - Does the relationship between different categories of TCs differ statistically by basin? - Which climatic predictors have significant relationships with TC activity in each basin? - Are the relationships between counts in different basins conditionally independent given the climatic predictors, or are there other factors at play affecting inter-basin covariability? - How can a portfolio of insured property be optimized across space to minimize risk? Although we present results of our model applied to TCs, the framework is generalizable to covariance estimation between multivariate counts of natural hazards across regions and/or across peril types.
Absolute counting of neutrophils in whole blood using flow cytometry.
Brunck, Marion E G; Andersen, Stacey B; Timmins, Nicholas E; Osborne, Geoffrey W; Nielsen, Lars K
2014-12-01
Absolute neutrophil count (ANC) is used clinically to monitor physiological dysfunctions such as myelosuppression or infection. In the research laboratory, ANC is a valuable measure to monitor the evolution of a wide range of disease states in disease models. Flow cytometry (FCM) is a fast, widely used approach to confidently identify thousands of cells within minutes. FCM can be optimised for absolute counting using spiked-in beads or by measuring the sample volume analysed. Here we combine the 1A8 antibody, specific for the mouse granulocyte protein Ly6G, with flow cytometric counting in straightforward FCM assays for mouse ANC, easily implementable in the research laboratory. Volumetric and Trucount™ bead assays were optimized for mouse neutrophils, and ANC values obtained with these protocols were compared to ANC measured by a dual-platform assay using the Orphee Mythic 18 veterinary haematology analyser. The single platform assays were more precise with decreased intra-assay variability compared with ANC obtained using the dual protocol. Defining ANC based on Ly6G expression produces a 15% higher estimate than the dual protocol. Allowing for this difference in ANC definition, the flow cytometry counting assays using Ly6G can be used reliably in the research laboratory to quantify mouse ANC from a small volume of blood. We demonstrate the utility of the volumetric protocol in a time-course study of chemotherapy induced neutropenia using four drug regimens. © 2014 International Society for Advancement of Cytometry.
NASA Astrophysics Data System (ADS)
Aspiyanto, Susilowati, Agustine; Iskandar, Jeti M.; Melanie, Hakiki; Maryati, Yati; Lotulung, Puspa D.
2017-01-01
Fermentation on spinach (Amaranthus sp.) vegetable by kombucha culture as an effort to get poliphenol as antioxidant compound had been done. Purification of fermented spinach extract suspension was carried out through microfiltration (MF) membrane (pore size 0.15 µm) fitted in dead-end Stirred Ultrafiltration Cell (SUFC) mode at fixed condition (stirrer rotation 400 rpm, room temperature, pressure 40 psia). Result of the experimental activity showed that long fermentation time increased total acids, total polyphenol and Total Plate Count (TPC), and decreased total solids and reducing sugar in biomass. The optimal fermentation time was reached for 2 weeks with total polyphenol recovery increasing of 92.76 % from before and after fermentation. On this optimal fermentation time, biomass had identified galic acid with relative intensity of 8 %, while as polyphenol monomer was resulted 5 kinds of polyphenol compounds with total intensity 27.97 % and molecular weight (MW) 191.1736, 193.1871 and 194.2170 at T2.5, T2.86 and T3.86. Long fermentation time increased functional properties of polyphenol as antioxidant.
2012-01-01
Background The risk of HIV-1 related mortality is strongly related to CD4 count. Guidance on optimal timing for initiation of antiretroviral therapy (ART) is still evolving, but the contribution of HIV-1 infection to excess mortality at CD4 cell counts above thresholds for HIV-1 treatment has not been fully described, especially in resource-poor settings. To compare mortality among HIV-1 infected and uninfected members of HIV-1 serodiscordant couples followed for up to 24 months, we conducted a secondary data analysis examining mortality among HIV-1 serodiscordant couples participating in a multicenter, randomized controlled trial at 14 sites in seven sub-Saharan African countries. Methods Predictors of death were examined using Cox regression and excess mortality by CD4 count and plasma HIV-1 RNA was computed using Poisson regression for correlated data. Results Among 3295 HIV serodiscordant couples, we observed 109 deaths from any cause (74 deaths among HIV-1 infected and 25 among HIV-1 uninfected persons). Among HIV-1 infected persons, the risk of death increased with lower CD4 count and higher plasma viral levels. HIV-1 infected persons had excess mortality due to medical causes of 15.2 deaths/1000 person years at CD4 counts of 250 – 349 cells/μl and 8.9 deaths at CD4 counts of 350 – 499 cells/μl. Above a CD4 count of 500 cells/μl, mortality was comparable among HIV-1 infected and uninfected persons. Conclusions Among African serodiscordant couples, there is a high rate of mortality attributable to HIV-1 infection at CD4 counts above the current threshold (200 – 350 cells/μl) for ART initiation in many African countries. These data indicate that earlier initiation of treatment is likely to provide clinical benefit if further expansion of ART access can be achieved. Trial Registration Clinicaltrials.gov (NCT00194519) PMID:23130818
Zhou, Baoqing; Chen, Bolu; Wu, Xin; Li, Fan; Yu, Pei; Aguilar, Zoraida P; Wei, Hua; Xu, Hengyi
2016-12-01
A rapid, reliable, and sensitive method for the detection of Cronobacter sakazakii, a common foodborne pathogen that may cause serious neonatal disease, has been developed. In this study, a rapid real-time quantitative PCR (qPCR) assay combined with sodium deoxycholate (SD) and propidium monoazide (PMA) was developed to detect C. sakazakii contamination in powdered infant formula (PIF). This method could eliminate the interference from dead or injured bacteria. Optimization studies indicated that SD and PMA at 0.08% (wt/vol) and 5µg/mL, respectively, were the most appropriate. In addition, qPCR, PMA-qPCR, SD-PMA-qPCR, and plate count assays were used to account for the number of viable bacteria in cell suspensions that were exposed to a 55°C water bath at different length of time. As a result, the viable number by PMA-qPCR showed significantly higher than of the number from SD-PMA-qPCR or plate counts. The number of viable bacteria was consistent between SD-PMA-qPCR and traditional plate counts, which indicated that SD treatment could eliminate the interference from dead or injured cells. Using the optimized parameters, the limit of detection with the SD-PMA-qPCR assay was 3.3×10 2 cfu/mL and 4.4×10 2 cfu/g in pure culture and in spiked PIF, respectively. A similar detection limit of 5.6×10 2 cfu/g was obtained in the presence of the Staphylococcus aureus (10 7 cfu/mL). The combined SD-PMA-qPCR assay holds promise for the rapid detection of viable C. sakazakii in PIF. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Angiogram, fundus, and oxygen saturation optic nerve head image fusion
NASA Astrophysics Data System (ADS)
Cao, Hua; Khoobehi, Bahram
2009-02-01
A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied on three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It has achieved an excellent result by giving the visualization of fundus or oxygen saturation images with a complete angiogram overlay. During this study, two contributions have been made in terms of novelty, efficiency, and accuracy. The first contribution is the automated control point detection algorithm for multi-sensor images. The new method employs retina vasculature and bifurcation features by identifying the initial good-guess of control points using the Adaptive Exploratory Algorithm. The second contribution is the heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iteration algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. It is the first time that Mutual-Pixel-Count concept has been introduced into biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time and get a sense of disease progress and pinpoint surgical tools. The new algorithm can be easily expanded to human or animals' 3D eye, brain, or body image registration and fusion.
Long, Leroy L; Srinivasan, Manoj
2013-04-06
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk-run mixture at intermediate speeds and a walk-rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients-a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk-run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill.
Long, Leroy L.; Srinivasan, Manoj
2013-01-01
On a treadmill, humans switch from walking to running beyond a characteristic transition speed. Here, we study human choice between walking and running in a more ecological (non-treadmill) setting. We asked subjects to travel a given distance overground in a given allowed time duration. During this task, the subjects carried, and could look at, a stopwatch that counted down to zero. As expected, if the total time available were large, humans walk the whole distance. If the time available were small, humans mostly run. For an intermediate total time, humans often use a mixture of walking at a slow speed and running at a higher speed. With analytical and computational optimization, we show that using a walk–run mixture at intermediate speeds and a walk–rest mixture at the lowest average speeds is predicted by metabolic energy minimization, even with costs for transients—a consequence of non-convex energy curves. Thus, sometimes, steady locomotion may not be energy optimal, and not preferred, even in the absence of fatigue. Assuming similar non-convex energy curves, we conjecture that similar walk–run mixtures may be energetically beneficial to children following a parent and animals on long leashes. Humans and other animals might also benefit energetically from alternating between moving forward and standing still on a slow and sufficiently long treadmill. PMID:23365192
NASA Astrophysics Data System (ADS)
Suresh Babu, Arun Vishnu; Ramesh, Kiran; Gopalarathnam, Ashok
2017-11-01
In previous research, Ramesh et al. (JFM,2014) developed a low-order discrete vortex method for modeling unsteady airfoil flows with intermittent leading edge vortex (LEV) shedding using a leading edge suction parameter (LESP). LEV shedding is initiated using discrete vortices (DVs) whenever the Leading Edge Suction Parameter (LESP) exceeds a critical value. In subsequent research, the method was successfully employed by Ramesh et al. (JFS, 2015) to predict aeroelastic limit-cycle oscillations in airfoil flows dominated by intermittent LEV shedding. When applied to flows that require large number of time steps, the computational cost increases due to the increasing vortex count. In this research, we apply an amalgamation strategy to actively control the DV count, and thereby reduce simulation time. A pair each of LEVs and TEVs are amalgamated at every time step. The ideal pairs for amalgamation are identified based on the requirement that the flowfield in the vicinity of the airfoil is least affected (Spalart, 1988). Instead of placing the amalgamated vortex at the centroid, we place it at an optimal location to ensure that the leading-edge suction and the airfoil bound circulation are conserved. Results of the initial study are promising.
Array-scale performance of TES X-ray Calorimeters Suitable for Constellation-X
NASA Technical Reports Server (NTRS)
Kilbourne, C. A.; Bandler, S. R.; Brown, A. D.; Chervenak, J. A.; Eckart, M. E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.;
2008-01-01
Having developed a transition-edge-sensor (TES) calorimeter design that enables high spectral resolution in high fill-factor arrays, we now present array-scale results from 32-pixel arrays of identical closely packed TES pixels. Each pixel in such an array contains a Mo/Au bilayer with a transition temperature of 0.1 K and an electroplated Au or Au/Bi xray absorber. The pixels in an array have highly uniform physical characteristics and performance. The arrays are easy to operate due to the range of bias voltages and heatsink temperatures over which solution better than 3 eV at 6 keV can be obtained. Resolution better than 3 eV has also been obtained with 2x8 time-division SQUID multiplexing. We will present the detector characteristics and show spectra acquired through the read-out chain from the multiplexer electronics through the demultiplexer software to real-time signal processing. We are working towards demonstrating this performance over the range of count rates expected in the observing program of the Constellation-X observatory. We mill discuss the impact of increased counting rate on spectral resolution, including the effects of crosstalk and optimal-filtering dead time.
Count-doubling time safety circuit
Rusch, Gordon K.; Keefe, Donald J.; McDowell, William P.
1981-01-01
There is provided a nuclear reactor count-factor-increase time monitoring circuit which includes a pulse-type neutron detector, and means for counting the number of detected pulses during specific time periods. Counts are compared and the comparison is utilized to develop a reactor scram signal, if necessary.
Experimental Study for Automatic Colony Counting System Based Onimage Processing
NASA Astrophysics Data System (ADS)
Fang, Junlong; Li, Wenzhe; Wang, Guoxin
Colony counting in many colony experiments is detected by manual method at present, therefore it is difficult for man to execute the method quickly and accurately .A new automatic colony counting system was developed. Making use of image-processing technology, a study was made on the feasibility of distinguishing objectively white bacterial colonies from clear plates according to the RGB color theory. An optimal chromatic value was obtained based upon a lot of experiments on the distribution of the chromatic value. It has been proved that the method greatly improves the accuracy and efficiency of the colony counting and the counting result is not affected by using inoculation, shape or size of the colony. It is revealed that automatic detection of colony quantity using image-processing technology could be an effective way.
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
Validation of a Monte Carlo simulation of the Philips Allegro/GEMINI PET systems using GATE
NASA Astrophysics Data System (ADS)
Lamare, F.; Turzo, A.; Bizais, Y.; Cheze LeRest, C.; Visvikis, D.
2006-02-01
A newly developed simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop a Monte Carlo simulation of a fully three-dimensional (3D) clinical PET scanner. The Philips Allegro/GEMINI PET systems were simulated in order to (a) allow a detailed study of the parameters affecting the system's performance under various imaging conditions, (b) study the optimization and quantitative accuracy of emission acquisition protocols for dynamic and static imaging, and (c) further validate the potential of GATE for the simulation of clinical PET systems. A model of the detection system and its geometry was developed. The accuracy of the developed detection model was tested through the comparison of simulated and measured results obtained with the Allegro/GEMINI systems for a number of NEMA NU2-2001 performance protocols including spatial resolution, sensitivity and scatter fraction. In addition, an approximate model of the system's dead time at the level of detected single events and coincidences was developed in an attempt to simulate the count rate related performance characteristics of the scanner. The developed dead-time model was assessed under different imaging conditions using the count rate loss and noise equivalent count rates performance protocols of standard and modified NEMA NU2-2001 (whole body imaging conditions) and NEMA NU2-1994 (brain imaging conditions) comparing simulated with experimental measurements obtained with the Allegro/GEMINI PET systems. Finally, a reconstructed image quality protocol was used to assess the overall performance of the developed model. An agreement of <3% was obtained in scatter fraction, with a difference between 4% and 10% in the true and random coincidence count rates respectively, throughout a range of activity concentrations and under various imaging conditions, resulting in <8% differences between simulated and measured noise equivalent count rates performance. Finally, the image quality validation study revealed a good agreement in signal-to-noise ratio and contrast recovery coefficients for a number of different volume spheres and two different (clinical level based) tumour-to-background ratios. In conclusion, these results support the accurate modelling of the Philips Allegro/GEMINI PET systems using GATE in combination with a dead-time model for the signal flow description, which leads to an agreement of <10% in coincidence count rates under different imaging conditions and clinically relevant activity concentration levels.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 x 48 pixels, each 130 mu m x 130 mu m x 520 mu m thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advance Photon Source, the detector achieved a 60 ns gatingmore » time and 700 eV full width at half-maximum energy resolution in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.« less
Rostami, Kamran; Marsh, Michael N; Johnson, Matt W; Mohaghegh, Hamid; Heal, Calvin; Holmes, Geoffrey; Ensari, Arzu; Aldulaimi, David; Bancel, Brigitte; Bassotti, Gabrio; Bateman, Adrian; Becheanu, Gabriel; Bozzola, Anna; Carroccio, Antonio; Catassi, Carlo; Ciacci, Carolina; Ciobanu, Alexandra; Danciu, Mihai; Derakhshan, Mohammad H; Elli, Luca; Ferrero, Stefano; Fiorentino, Michelangelo; Fiorino, Marilena; Ganji, Azita; Ghaffarzadehgan, Kamran; Going, James J; Ishaq, Sauid; Mandolesi, Alessandra; Mathews, Sherly; Maxim, Roxana; Mulder, Chris J; Neefjes-Borst, Andra; Robert, Marie; Russo, Ilaria; Rostami-Nejad, Mohammad; Sidoni, Angelo; Sotoudeh, Masoud; Villanacci, Vincenzo; Volta, Umberto; Zali, Mohammad R; Srivastava, Amitabh
2017-01-01
Objectives Counting intraepithelial lymphocytes (IEL) is central to the histological diagnosis of coeliac disease (CD), but no definitive ‘normal’ IEL range has ever been published. In this multicentre study, receiver operating characteristic (ROC) curve analysis was used to determine the optimal cut-off between normal and CD (Marsh III lesion) duodenal mucosa, based on IEL counts on >400 mucosal biopsy specimens. Design The study was designed at the International Meeting on Digestive Pathology, Bucharest 2015. Investigators from 19 centres, eight countries of three continents, recruited 198 patients with Marsh III histology and 203 controls and used one agreed protocol to count IEL/100 enterocytes in well-oriented duodenal biopsies. Demographic and serological data were also collected. Results The mean ages of CD and control groups were 45.5 (neonate to 82) and 38.3 (2–88) years. Mean IEL count was 54±18/100 enterocytes in CD and 13±8 in normal controls (p=0.0001). ROC analysis indicated an optimal cut-off point of 25 IEL/100 enterocytes, with 99% sensitivity, 92% specificity and 99.5% area under the curve. Other cut-offs between 20 and 40 IEL were less discriminatory. Additionally, there was a sufficiently high number of biopsies to explore IEL counts across the subclassification of the Marsh III lesion. Conclusion Our ROC curve analyses demonstrate that for Marsh III lesions, a cut-off of 25 IEL/100 enterocytes optimises discrimination between normal control and CD biopsies. No differences in IEL counts were found between Marsh III a, b and c lesions. There was an indication of a continuously graded dose–response by IEL to environmental (gluten) antigenic influence. PMID:28893865
NASA Astrophysics Data System (ADS)
Dong, Kyung-Rae; Shim, Dong-Oh; Kim, Ho-Sung; Park, Yong-Soon; Chung, Woon-Kwan; Cho, Jae-Hwan
2013-02-01
In a nuclear medicine examination, methods to acquire a static image include the preset count method and the preset time method. The preset count method is used mainly in a static renal scan that utilizes 99 m Tc-DMSA (dimoercaptosuccinic acid) whereas the preset time method is used occasionally. When the preset count method is used, the same number of acquisition counts is acquired for each time, but the scan time varies. When the preset time method is used, the scan time is constant, but the number of counts acquired is not the same. Therefore, this study examined the dependence of the difference in information on the function and the shape of both sides of the kidneys on the counts acquired during a renal scan that utilizes 99 m Tc-DMSA. The study involved patients who had 40-60% relative function of one kidney among patients who underwent a 99 m Tc-DMSA renal scan in the Nuclear Medicine Department during the period from January 11 to March 31, 2012. A gamma camera was used to obtain the acquisition count continuously using 100,000 counts and 300,000 counts, and an acquisition time of 7 minutes (exceeding 300,000 counts). The function and the shape of the kidney were evaluated by measuring the relative function of both sides of the kidneys, the geometric mean, and the size of kidney before comparative analysis. According to the study results, neither the relative function nor the geometric mean of both sides of the kidneys varied significantly with the acquisition count. On the other hand, the size of the kidney tended to be larger with increasing acquisition count.
Hansen, Sarah J Z; Morovic, Wesley; DeMeules, Martha; Stahl, Buffy; Sindelar, Connie W
2018-01-01
The current standard for enumeration of probiotics to obtain colony forming units by plate counts has several drawbacks: long time to results, high variability and the inability to discern between bacterial strains. Accurate probiotic cell counts are important to confirm the delivery of a clinically documented dose for its associated health benefits. A method is described using chip-based digital PCR (cdPCR) to enumerate Bifidobacterium animalis subsp. lactis Bl-04 and Lactobacillus acidophilus NCFM both as single strains and in combination. Primers and probes were designed to differentiate the target strains against other strains of the same species using known single copy, genetic differences. The assay was optimized to include propidium monoazide pre-treatment to prevent amplification of DNA associated with dead probiotic cells as well as liberation of DNA from cells with intact membranes using bead beating. The resulting assay was able to successfully enumerate each strain whether alone or in multiplex. The cdPCR method had a 4 and 5% relative standard deviation (RSD) for Bl-04 and NCFM, respectively, making it more precise than plate counts with an industry accepted RSD of 15%. cdPCR has the potential to replace traditional plate counts because of its precision, strain specificity and the ability to obtain results in a matter of hours.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew, E-mail: andrew.karellas@umassmed.edu
Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixelmore » pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.« less
Radiotherapy Monte Carlo simulation using cloud computing technology.
Poole, C M; Cornelius, I; Trapp, J V; Langton, C M
2012-12-01
Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows for Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware as a proof of principal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Chad; Gomez, Daniel R.; Wang, Hongmei
Purpose: Radiation pneumonitis (RP) is an inflammatory response to radiation therapy (RT). We assessed the association between RP and white blood cell (WBC) count, an established metric of systemic inflammation, after RT for non-small cell lung cancer. Methods and Materials: We retrospectively analyzed 366 patients with non-small cell lung cancer who received ≥60 Gy as definitive therapy. The primary endpoint was whether WBC count after RT (defined as 2 weeks through 3 months after RT completion) was associated with grade ≥3 or grade ≥2 RP. Median lung volume receiving ≥20 Gy (V{sub 20}) was 31%, and post-RT WBC counts rangedmore » from 1.7 to 21.2 × 10{sup 3} WBCs/μL. Odds ratios (ORs) associating clinical variables and post-RT WBC counts with RP were calculated via logistic regression. A recursive-partitioning algorithm was used to define optimal post-RT WBC count cut points. Results: Post-RT WBC counts were significantly higher in patients with grade ≥3 RP than without (P<.05). Optimal cut points for post-RT WBC count were found to be 7.4 and 8.0 × 10{sup 3}/μL for grade ≥3 and ≥2 RP, respectively. Univariate analysis revealed significant associations between post-RT WBC count and grade ≥3 (n=46, OR=2.6, 95% confidence interval [CI] 1.4‒4.9, P=.003) and grade ≥2 RP (n=164, OR=2.0, 95% CI 1.2‒3.4, P=.01). This association held in a stepwise multivariate regression. Of note, V{sub 20} was found to be significantly associated with grade ≥2 RP (OR=2.2, 95% CI 1.2‒3.4, P=.01) and trended toward significance for grade ≥3 RP (OR=1.9, 95% CI 1.0-3.5, P=.06). Conclusions: Post-RT WBC counts were significantly and independently associated with RP and have potential utility as a diagnostic or predictive marker for this toxicity.« less
Microbial air quality and bacterial surface contamination in ambulances during patient services.
Luksamijarulkul, Pipat; Pipitsangjan, Sirikun
2015-03-01
We sought to assess microbial air quality and bacterial surface contamination on medical instruments and the surrounding areas among 30 ambulance runs during service. We performed a cross-sectional study of 106 air samples collected from 30 ambulances before patient services and 212 air samples collected during patient services to assess the bacterial and fungal counts at the two time points. Additionally, 226 surface swab samples were collected from medical instrument surfaces and the surrounding areas before and after ambulance runs. Groups or genus of isolated bacteria and fungi were preliminarily identified by Gram's stain and lactophenol cotton blue. Data were analyzed using descriptive statistics, t-test, and Pearson's correlation coefficient with a p-value of less than 0.050 considered significant. The mean and standard deviation of bacterial and fungal counts at the start of ambulance runs were 318±485cfu/m(3) and 522±581cfu/m(3), respectively. Bacterial counts during patient services were 468±607cfu/m(3) and fungal counts were 656±612cfu/m(3). Mean bacterial and fungal counts during patient services were significantly higher than those at the start of ambulance runs, p=0.005 and p=0.030, respectively. For surface contamination, the overall bacterial counts before and after patient services were 0.8±0.7cfu/cm(2) and 1.3±1.1cfu/cm(2), respectively (p<0.001). The predominant isolated bacteria and fungi were Staphylococcus spp. and Aspergillus spp., respectively. Additionally, there was a significantly positive correlation between bacterial (r=0.3, p<0.010) and fungal counts (r=0.2, p=0.020) in air samples and bacterial counts on medical instruments and allocated areas. This study revealed high microbial contamination (bacterial and fungal) in ambulance air during services and higher bacterial contamination on medical instrument surfaces and allocated areas after ambulance services compared to the start of ambulance runs. Additionally, bacterial and fungal counts in ambulance air showed a significantly positive correlation with the bacterial surface contamination on medical instruments and allocated areas. Further studies should be conducted to determine the optimal intervention to reduce microbial contamination in the ambulance environment.
Loss to Follow-Up in a Community Clinic in South Africa – Roles of Gender, Pregnancy and CD4 count
Wang, Bingxia; Losina, Elena; Stark, Ruth; Munro, Alison; Walensky, Rochelle P.; Wilke, Marisa; Martin, Des; Lu, Zhigang; Freedberg, Kenneth A.; Wood, Robin
2013-01-01
Background Faith-based organizations have expanded access to antiretroviral therapy (ART) in community clinics across South Africa. Loss to follow-up (LTFU), however, limits both the potential individual and population treatment benefits and is an obstacle to optimal care. Objective To identify patient characteristics associated with LTFU six months after starting ART in patients in a large South African community clinic. Methods Patients initiating ART between April 2004 and October 2006 in one Catholic Relief Services HIV treatment clinic who had at least one follow-up visit were included in the analysis. Standardized instruments were used for data collection. Routine monitoring was performed every 6 months following ART initiation. Rates of LTFU over time were estimated by the Kaplan-Meier method. The log-rank test was used to examine the impact of age, baseline CD4 count, HIV RNA, gender and pregnancy status for women on LTFU. Cox proportional hazard regression was performed to analyze hazard ratios for LTFU. Results Data from 925 patients (age > 14 years), median age 36 years, 70% female (16% pregnant) were included in the analysis. Fifty one patients (6%) were lost to follow-up six months after ART initiation. When stratified by baseline CD4 count, gender and pregnancy status, pregnant women with lower baseline CD4 count (≤200 /μl) had 6.06 times (95% CI: 2.20 – 16.71) the hazard of LTFU compared to men. Conclusions HIV-infected pregnant women initiating ART are significantly more likely to be lost to follow-up in a community clinic in South Africa. Interventions to successfully retain pregnant women in care are urgently needed. PMID:21786730
Tutorial on X-ray photon counting detector characterization.
Ren, Liqiang; Zheng, Bin; Liu, Hong
2018-01-01
Recent advances in photon counting detection technology have led to significant research interest in X-ray imaging. As a tutorial level review, this paper covers a wide range of aspects related to X-ray photon counting detector characterization. The tutorial begins with a detailed description of the working principle and operating modes of a pixelated X-ray photon counting detector with basic architecture and detection mechanism. Currently available methods and techniques for charactering major aspects including energy response, noise floor, energy resolution, count rate performance (detector efficiency), and charge sharing effect of photon counting detectors are comprehensively reviewed. Other characterization aspects such as point spread function (PSF), line spread function (LSF), contrast transfer function (CTF), modulation transfer function (MTF), noise power spectrum (NPS), detective quantum efficiency (DQE), bias voltage, radiation damage, and polarization effect are also remarked. A cadmium telluride (CdTe) pixelated photon counting detector is employed for part of the characterization demonstration and the results are presented. This review can serve as a tutorial for X-ray imaging researchers and investigators to understand, operate, characterize, and optimize photon counting detectors for a variety of applications.
Si-strip photon counting detectors for contrast-enhanced spectral mammography
NASA Astrophysics Data System (ADS)
Chen, Buxin; Reiser, Ingrid; Wessel, Jan C.; Malakhov, Nail; Wawrzyniak, Gregor; Hartsough, Neal E.; Gandhi, Thulasi; Chen, Chin-Tu; Iwanczyk, Jan S.; Barber, William C.
2015-08-01
We report on the development of silicon strip detectors for energy-resolved clinical mammography. Typically, X-ray integrating detectors based on scintillating cesium iodide CsI(Tl) or amorphous selenium (a-Se) are used in most commercial systems. Recently, mammography instrumentation has been introduced based on photon counting Si strip detectors. The required performance for mammography in terms of the output count rate, spatial resolution, and dynamic range must be obtained with sufficient field of view for the application, thus requiring the tiling of pixel arrays and particular scanning techniques. Room temperature Si strip detector, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel, provided that the sensors are designed for rapid signal formation across the X-ray energy ranges of the application. We present our methods and results from the optimization of Si-strip detectors for contrast enhanced spectral mammography. We describe the method being developed for quantifying iodine contrast using the energy-resolved detector with fixed thresholds. We demonstrate the feasibility of the method by scanning an iodine phantom with clinically relevant contrast levels.
Sanchez-Cabeza, J A; Pujol, L
1995-05-01
The radiological examination of water requires a rapid screening technique that permits the determination of the gross alpha and beta activities of each sample in order to decide if further radiological analyses are necessary. In this work, the use of a low background liquid scintillation system (Quantulus 1220) is proposed to simultaneously determine the gross activities in water samples. Liquid scintillation is compared to more conventional techniques used in most monitoring laboratories. In order to determine the best counting configuration of the system, pulse shape discrimination was optimized for 6 scintillant/vial combinations. It was concluded that the best counting configuration was obtained with the scintillation cocktail Optiphase Hisafe 3 in Zinsser low diffusion vials. The detection limits achieved were 0.012 Bq L-1 and 0.14 Bq L-1 for gross alpha and beta activity respectively, after a 1:10 concentration process by simple evaporation and for a counting time of only 360 min. The proposed technique is rapid, gives spectral information, and is adequate to determine gross activities according to the World Health Organization (WHO) guideline values.
Optimally achieving milk bulk tank somatic cell count thresholds.
Troendle, Jason A; Tauer, Loren W; Gröhn, Yrjo T
2017-01-01
High somatic cell count in milk leads to reduced shelf life in fluid milk and lower processed yields in manufactured dairy products. As a result, farmers are often penalized for high bulk tank somatic cell count or paid a premium for low bulk tank somatic cell count. Many countries also require all milk from a farm to be lower than a specified regulated somatic cell count. Thus, farms often cull cows that have high somatic cell count to meet somatic cell count thresholds. Rather than naïvely cull the highest somatic cell count cows, a mathematical programming model was developed that determines the cows to be culled from the herd by maximizing the net present value of the herd, subject to meeting any specified bulk tank somatic cell count level. The model was applied to test-day cows on 2 New York State dairy farms. Results showed that the net present value of the herd was increased by using the model to meet the somatic cell count restriction compared with naïvely culling the highest somatic cell count cows. Implementation of the model would be straightforward in dairy management decision software. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Characterization of a hybrid energy-resolving photon-counting detector
NASA Astrophysics Data System (ADS)
Zang, A.; Pelzer, G.; Anton, G.; Ballabriga Sune, R.; Bisello, F.; Campbell, M.; Fauler, A.; Fiederle, M.; Llopart Cudie, X.; Ritter, I.; Tennert, F.; Wölfel, S.; Wong, W. S.; Michel, T.
2014-03-01
Photon-counting detectors in medical x-ray imaging provide a higher dose efficiency than integrating detectors. Even further possibilities for imaging applications arise, if the energy of each photon counted is measured, as for example K-edge-imaging or optimizing image quality by applying energy weighting factors. In this contribution, we show results of the characterization of the Dosepix detector. This hybrid photon- counting pixel detector allows energy resolved measurements with a novel concept of energy binning included in the pixel electronics. Based on ideas of the Medipix detector family, it provides three different modes of operation: An integration mode, a photon-counting mode, and an energy-binning mode. In energy-binning mode, it is possible to set 16 energy thresholds in each pixel individually to derive a binned energy spectrum in every pixel in one acquisition. The hybrid setup allows using different sensor materials. For the measurements 300 μm Si and 1 mm CdTe were used. The detector matrix consists of 16 x 16 square pixels for CdTe (16 x 12 for Si) with a pixel pitch of 220 μm. The Dosepix was originally intended for applications in the field of radiation measurement. Therefore it is not optimized towards medical imaging. The detector concept itself still promises potential as an imaging detector. We present spectra measured in one single pixel as well as in the whole pixel matrix in energy-binning mode with a conventional x-ray tube. In addition, results concerning the count rate linearity for the different sensor materials are shown as well as measurements regarding energy resolution.
Lens-free microscopy of cerebrospinal fluid for the laboratory diagnosis of meningitis
NASA Astrophysics Data System (ADS)
Delacroix, Robin; Morel, Sophie Nhu An; Hervé, Lionel; Bordy, Thomas; Blandin, Pierre; Dinten, Jean-Marc; Drancourt, Michel; Allier, Cédric
2018-02-01
The cytology of the cerebrospinal fluid is traditionally performed by an operator (physician, biologist) by means of a conventional light microscope. The operator visually counts the leukocytes (white blood cells) present in a sample of cerebrospinal fluid (10 μl). It is a tedious job and the result is operator-dependent. Here in order to circumvent the limitations of manual counting, we approach the question of numeration of erythrocytes and leukocytes for the cytological diagnosis of meningitis by means of lens-free microscopy. In a first step, a prospective counts of leukocytes was performed by five different operators using conventional optical microscopy. The visual counting yielded an overall 16.7% misclassification of 72 cerebrospinal fluid specimens in meningitis/non-meningitis categories using a 10 leukocyte/μL cut-off. In a second step, the lens-free microscopy algorithm was adapted step-by-step for counting cerebrospinal fluid cells and discriminating leukocytes from erythrocytes. The optimization of the automatic lens-free counting was based on the prospective analysis of 215 cerebrospinal fluid specimens. The optimized algorithm yielded a 100% sensitivity and a 86% specificity compared to confirmed diagnostics. In a third step, a blind lens-free microscopic analysis of 116 cerebrospinal fluid specimens, including six cases of microbiology confirmed infectious meningitis, yielded a 100% sensitivity and a 79% specificity. Adapted lens-free microscopy is thus emerging as an operator-independent technique for the rapid numeration of leukocytes and erythrocytes in cerebrospinal fluid. In particular, this technique is well suited to the rapid diagnosis of meningitis at point-of-care laboratories.
A self-optimizing scheme for energy balanced routing in Wireless Sensor Networks using SensorAnt.
Shamsan Saleh, Ahmed M; Ali, Borhanuddin Mohd; Rasid, Mohd Fadlee A; Ismail, Alyani
2012-01-01
Planning of energy-efficient protocols is critical for Wireless Sensor Networks (WSNs) because of the constraints on the sensor nodes' energy. The routing protocol should be able to provide uniform power dissipation during transmission to the sink node. In this paper, we present a self-optimization scheme for WSNs which is able to utilize and optimize the sensor nodes' resources, especially the batteries, to achieve balanced energy consumption across all sensor nodes. This method is based on the Ant Colony Optimization (ACO) metaheuristic which is adopted to enhance the paths with the best quality function. The assessment of this function depends on multi-criteria metrics such as the minimum residual battery power, hop count and average energy of both route and network. This method also distributes the traffic load of sensor nodes throughout the WSN leading to reduced energy usage, extended network life time and reduced packet loss. Simulation results show that our scheme performs much better than the Energy Efficient Ant-Based Routing (EEABR) in terms of energy consumption, balancing and efficiency.
Van Rie, Annelies; Patel, Monita R; Nana, Mbonze; Vanden Driessche, Koen; Tabala, Martine; Yotebieng, Marcel; Behets, Frieda
2014-03-01
A crucial question in managing HIV-infected patients with tuberculosis (TB) concerns when and how to initiate antiretroviral therapy (ART). The effectiveness of CD4-stratified ART initiation in a nurse-centered, integrated TB/HIV program at primary care in Kinshasa, Democratic Republic of Congo, was assessed. Prospective cohort study was conducted to assess the effect of CD4-stratified ART initiation by primary care nurses (513 TB patients, August 2007 to November 2009). ART was to be initiated at 1 month of TB treatment if CD4 count is <100 cells per cubic millimeter, at 2 months if CD4 count is 100-350 cells per cubic millimeter, and at the end of TB treatment after CD4 count reassessment if CD4 count is >350 cells per cubic millimeter. ART uptake and mortality were compared with a historical prospective cohort of 373 HIV-infected TB patients referred for ART to a centralized facility and 3577 HIV-negative TB patients (January 2006 to May 2007). ART uptake increased (17%-69%, P < 0.0001) and mortality during TB treatment decreased (20.1% vs 9.8%, P < 0.0003) after decentralized, nurse-initiated, CD4-stratified ART. Mortality among TB patients with CD4 count >100 cells per cubic millimeter was similar to that of HIV-negative TB patients (5.6% vs 6.3%, P = 0.65), but mortality among those with CD4 count <100 cells per cubic millimeter remained high (18.8%). Nurse-centered, CD4-stratified ART initiation at primary care level was effective in increasing timely ART uptake and reducing mortality among TB patients but may not be adequate to prevent mortality among those presenting with severe immunosuppression. Further research is needed to determine the optimal management at primary care level of TB patients with CD4 counts <100 cells per cubic millimeter.
Measurement of luminescence decays: High performance at low cost
NASA Astrophysics Data System (ADS)
Sulkes, Mark; Sulkes, Zoe
2011-11-01
The availability of inexpensive ultra bright LEDs spanning the visible and near-ultraviolet combined with the availability of inexpensive electronics equipment makes it possible to construct a high performance luminescence lifetime apparatus (˜5 ns instrumental response or better) at low cost. A central need for time domain measurement systems is the ability to obtain short (˜1 ns or less) excitation light pulses from the LEDs. It is possible to build the necessary LED driver using a simple avalanche transistor circuit. We describe first a circuit to test for small signal NPN transistors that can avalanche. We then describe a final optimized avalanche mode circuit that we developed on a prototyping board by measuring driven light pulse duration as a function of the circuit on the board and passive component values. We demonstrate that the combination of the LED pulser and a 1P28 photomultiplier tube used in decay waveform acquisition has a time response that allows for detection and lifetime determination of luminescence decays down to ˜5 ns. The time response and data quality afforded with the same components in time-correlated single photon counting are even better. For time-correlated single photon counting an even simpler NAND-gate based LED driver circuit is also applicable. We also demonstrate the possible utility of a simple frequency domain method for luminescence lifetime determinations.
High count-rate study of two TES x-ray microcalorimeters with different transition temperatures
NASA Astrophysics Data System (ADS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.; Sadleir, John E.; Smith, Stephen J.; Wassell, Edward J.
2017-10-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures (T c). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different T cs had very different thermal decay times, approximately one order of magnitude different. In our earlier report, we achieved energy resolutions of 1.6 and 2.3 eV at 6 keV from lower and higher T c devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the so-called event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96% throughput for 6 keV x-rays of 1025 Hz per pixel with the higher T c (faster) device, and 5.8 eV FWHM with 97% throughput with the lower T c (slower) device at 722 Hz.
Optimization of hot water treatment for removing microbial colonies on fresh blueberry surface.
Kim, Tae Jo; Corbitt, Melody P; Silva, Juan L; Wang, Dja Shin; Jung, Yean-Sung; Spencer, Barbara
2011-08-01
Blueberries for the frozen market are washed but this process sometimes is not effective or further contaminates the berries. This study was designed to optimize conditions for hot water treatment (temperature, time, and antimicrobial concentration) to remove biofilm and decrease microbial load on blueberries. Scanning electron microscopy (SEM) image showed a well-developed microbial biofilm on blueberries dipped in room temperature water. The biofilm consisted of yeast and bacterial cells attached to the berry surface in the form of microcolonies, which produced exopolymer substances between or upon the cells. Berry exposure to 75 and 90 °C showed little to no microorganisms on the blueberry surface; however, the sensory quality (wax/bloom) of berries at those temperatures was unacceptable. Response surface plots showed that increasing temperature was a significant factor on reduction of aerobic plate counts (APCs) and yeast/mold counts (YMCs) while adding Boxyl® did not have significant effect on APC. Overlaid contour plots showed that treatments of 65 to 70 °C for 10 to 15 s showed maximum reductions of 1.5 and 2.0 log CFU/g on APCs and YMCs, respectively; with acceptable level of bloom/wax score on fresh blueberries. This study showed that SEM, response surface, and overlaid contour plots proved successful in arriving at optima to reduce microbial counts while maintaining bloom/wax on the surface of the blueberries. Since chemical sanitizing treatments such as chlorine showed ineffectiveness to reduce microorganisms loaded on berry surface (Beuchat and others 2001, Sapers 2001), hot water treatment on fresh blueberries could maximize microbial reduction with acceptable quality of fresh blueberries. © 2011 Institute of Food Technologists®
Ogungbenro, Kayode; Aarons, Leon
2011-08-01
In the recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and the precision with which parameters are estimated during data analysis and sometimes to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and availability of tools for optimal design of population PK and PD experiments much of the effort has been focused on repeated continuous variable measurements with less work being done on repeated discrete type measurements. Discrete data arise mainly in PDs e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 is based on repeated count measurements and Example 3 is based on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments that involve discrete type data through design evaluation and optimisation.
Ontogeny of con A and PHA responses of chicken blood cells in MHC-compatible lines 6(3) and 7(2).
Fredericksen, T L; Gilmour, D G
1983-06-01
The development of T cell responsiveness to Con A and PHA was examined in two MHC-compatible inbred chicken lines, RPRL 6(3) and 7(2), at ages 2 to 118 days posthatching. These lines are respectively resistant or susceptible to Marek's disease, a naturally occurring, virally induced T cell lymphoma. Between-line comparisons were made of optimal in vitro responses of diluted serum-free blood cells to each mitogen in two groups of chicks tested over ages 2 to 63 and 41 to 118 days. Over 2 to 63 days, Con A responses increased with age at the same rate in each line, but 7(2) responses averaged 2.3 times higher than 6(3). The increase with age was dependent on blood lymphocyte counts, which also increased with age in parallel in both lines. In contrast, the between-line difference in responsiveness was dependent on intrinsic reactivity of cells as well as lymphocyte counts. Covariance analysis was used to estimate that line 7(2) was 1.4 times higher than 6(3) in intrinsic cell reactivity, after accounting for the effect of the twofold higher blood lymphocyte counts in 7(2), and that this intrinsic difference contributed almost one-half the total difference. Over 41 to 118 days Con A responses no longer increased with age, although lymphocyte counts were still increasing, and the line difference (2.6 times) was now almost entirely contributed by a 2.3-fold superiority of 7(2) blood cells in intrinsic reactivity. The line difference in PHA responses was the reverse of the above in young chicks, with 6(3) responses greater than 7(2) in spite of lower lymphocyte counts. In additional chicks tested over 5 to 26 days, intrinsic reactivity of 6(3) cells to PHA averaged 4.5 times higher than 7(2). There was an abrupt decline in intrinsic reactivity of line 6(3) blood cells between 26 and 41 days to a level equal with 7(2). After this age, line 7(2) responses were 1.8 times greater than those of 6(3), and this difference was dependent solely on lymphocyte count differences. The results suggest that different gene systems mediate blood cell responses to PHA as compared with Con A. The pattern of developmental differences between inbred lines indicates the existence of distinct or partly overlapping T cell subsets with different reactivities to PHA or Con A, and of higher suppressor activity of adherent cells in line 6(3) blood. Both these differences may be related to line 6(3) inherited resistance to Marek's disease.
Chen, Han; Xu, Cheng; Persson, Mats; Danielsson, Mats
2015-01-01
Abstract. Head computed tomography (CT) plays an important role in the comprehensive evaluation of acute stroke. Photon-counting spectral detectors, as promising candidates for use in the next generation of x-ray CT systems, allow for assigning more weight to low-energy x-rays that generally contain more contrast information. Most importantly, the spectral information can be utilized to decompose the original set of energy-selective images into several basis function images that are inherently free of beam-hardening artifacts, a potential advantage for further improving the diagnosis accuracy. We are developing a photon-counting spectral detector for CT applications. The purpose of this work is to determine the optimal beam quality for material decomposition in two head imaging cases: nonenhanced imaging and K-edge imaging. A cylindrical brain tissue of 16-cm diameter, coated by a 6-mm-thick bone layer and 2-mm-thick skin layer, was used as a head phantom. The imaging target was a 5-mm-thick blood vessel centered in the head phantom. In K-edge imaging, two contrast agents, iodine and gadolinium, with the same concentration (5 mg/mL) were studied. Three parameters that affect beam quality were evaluated: kVp settings (50 to 130 kVp), filter materials (Z=13 to 83), and filter thicknesses [0 to 2 half-value layer (HVL)]. The image qualities resulting from the varying x-ray beams were compared in terms of two figures of merit (FOMs): squared signal-difference-to-noise ratio normalized by brain dose (SDNR2/BD) and that normalized by skin dose (SDNR2/SD). For nonenhanced imaging, the results show that the use of the 120-kVp spectrum filtered by 2 HVL copper (Z=29) provides the best performance in both FOMs. When iodine is used in K-edge imaging, the optimal filter is 2 HVL iodine (Z=53) and the optimal kVps are 60 kVp in terms of SDNR2/BD and 75 kVp in terms of SDNR2/SD. A tradeoff of 65 kVp was proposed to lower the potential risk of skin injuries if a relatively long exposure time is necessarily performed in the iodinated imaging. In the case of gadolinium imaging, both SD and BD can be minimized at 120 kVp filtered with 2 HVL thulium (Z=69). The results also indicate that with the same concentration and their respective optimal spectrum, the values of SDNR2/BD and SDNR2/SD in gadolinium imaging are, respectively, around 3 and 10 times larger than those in iodine imaging. However, since gadolinium is used in much lower concentrations than iodine in the clinic, iodine may be a preferable candidate for K-edge imaging. PMID:26835495
Abdominal fat volume estimation by stereology on CT: a comparison with manual planimetry.
Manios, G E; Mazonakis, M; Voulgaris, C; Karantanas, A; Damilakis, J
2016-03-01
To deploy and evaluate a stereological point-counting technique on abdominal CT for the estimation of visceral (VAF) and subcutaneous abdominal fat (SAF) volumes. Stereological volume estimations based on point counting and systematic sampling were performed on images from 14 consecutive patients who had undergone abdominal CT. For the optimization of the method, five sampling intensities in combination with 100 and 200 points were tested. The optimum stereological measurements were compared with VAF and SAF volumes derived by the standard technique of manual planimetry on the same scans. Optimization analysis showed that the selection of 200 points along with the sampling intensity 1/8 provided efficient volume estimations in less than 4 min for VAF and SAF together. The optimized stereology showed strong correlation with planimetry (VAF: r = 0.98; SAF: r = 0.98). No statistical differences were found between the two methods (VAF: P = 0.81; SAF: P = 0.83). The 95% limits of agreement were also acceptable (VAF: -16.5%, 16.1%; SAF: -10.8%, 10.7%) and the repeatability of stereology was good (VAF: CV = 4.5%, SAF: CV = 3.2%). Stereology may be successfully applied to CT images for the efficient estimation of abdominal fat volume and may constitute a good alternative to the conventional planimetric technique. Abdominal obesity is associated with increased risk of disease and mortality. Stereology may quantify visceral and subcutaneous abdominal fat accurately and consistently. The application of stereology to estimating abdominal volume fat reduces processing time. Stereology is an efficient alternative method for estimating abdominal fat volume.
Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.
Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène
2016-06-01
Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high-precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L(-1) and 10% for 10 mBq L(-1). While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L(-1), a conservative experimental estimate is rather 5 mBq L(-1), corresponding to 0.14 fg g(-1). The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in absence of electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. Copyright © 2016 Elsevier Ltd. All rights reserved.
Single molecule fluorescence burst detection of DNA fragments separated by capillary electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haab, B.B.; Mathies, R.A.
A method has been developed for detecting DNA separated by capillary gel electrophoresis (CGE) using single molecule photon burst counting. A confocal fluorescence microscope was used to observe the fluorescence bursts from single molecules of DNA multiply labeled with the thiazole orange derivative TO6 as they passed through the nearly 2-{mu}m diameter focused laser beam. Amplified photo-electron pulses from the photomultiplier are grouped into bins of 360-450 {mu}s in duration, and the resulting histogram is stored in a computer for analysis. Solutions of M13 DNA were first flowed through the capillary at various concentrations, and the resulting data were usedmore » to optimize the parameters for digital filtering using a low-pass Fourier filter, selecting a discriminator level for peak detection, and applying a peak-calling algorithm. The optimized single molecule counting method was then applied to an electrophoretic separation of M13 DNA and to a separation of pBR 322 DNA from pRL 277 DNA. Clusters of discreet fluorescence bursts were observed at the expected appearance time of each DNA band. The auto-correlation function of these data indicated transit times that were consistent with the observed electrophoretic velocity. These separations were easily detected when only 50-100 molecules of DNA per band traveled through the detection region. This new detection technology should lead to the routine analysis of DNA in capillary columns with an on-column sensitivity of nearly 100 DNA molecules/band or better. 45 refs., 10 figs.« less
Peron, Guillaume; Hines, James E.
2014-01-01
Many industrial and agricultural activities involve wildlife fatalities by collision, poisoning or other involuntary harvest: wind turbines, highway network, utility network, tall structures, pesticides, etc. Impacted wildlife may benefit from official protection, including the requirement to monitor the impact. Carcass counts can often be conducted to quantify the number of fatalities, but they need to be corrected for carcass persistence time (removal by scavengers and decay) and detection probability (searcher efficiency). In this article we introduce a new piece of software that fits a superpopulation capture-recapture model to raw count data. It uses trial data to estimate detection and daily persistence probabilities. A recurrent issue is that fatalities of rare, protected species are infrequent, in which case the software offers the option to switch to an ‘evidence of absence’ mode, i.e., estimate the number of carcasses that may have been missed by field crews. The software allows distinguishing between different turbine types (e.g. different vegetation cover under turbines, or different technical properties), as well between two carcass age-classes or states, with transition between those classes (e.g, fresh and dry). There is a data simulation capacity that may be used at the planning stage to optimize sampling design. Resulting mortality estimates can be used 1) to quantify the required amount of compensation, 2) inform mortality projections for proposed development sites, and 3) inform decisions about management of existing sites.
Foddai, Antonio; Elliott, Christopher T.; Grant, Irene R.
2010-01-01
Thermal inactivation experiments were carried out to assess the utility of a recently optimized phage amplification assay to accurately enumerate viable Mycobacterium avium subsp. paratuberculosis cells in milk. Ultra-heat-treated (UHT) whole milk was spiked with large numbers of M. avium subsp. paratuberculosis organisms (106 to 107 CFU/ml) and dispensed in 100-μl aliquots in thin-walled 200-μl PCR tubes. A Primus 96 advanced thermal cycler (Peqlab, Erlangen, Germany) was used to achieve the following time and temperature treatments: (i) 63°C for 3, 6, and 9 min; (ii) 68°C for 20, 40, and 60 s; and (iii) 72°C for 5, 10, 15, and 25 s. After thermal stress, the number of surviving M. avium subsp. paratuberculosis cells was assessed by both phage amplification assay and culture on Herrold's egg yolk medium (HEYM). A high correlation between PFU/ml and CFU/ml counts was observed for both unheated (r2 = 0.943) and heated (r2 = 0.971) M. avium subsp. paratuberculosis cells. D and z values obtained using the two types of counts were not significantly different (P > 0.05). The D68°C, mean D63°C, and D72°C for four M. avium subsp. paratuberculosis strains were 81.8, 9.8, and 4.2 s, respectively, yielding a mean z value of 6.9°C. Complete inactivation of 106 to 107 CFU of M. avium subsp. paratuberculosis/ml milk was not observed for any of the time-temperature combinations studied; 5.2- to 6.6-log10 reductions in numbers were achieved depending on the temperature and time. Nonlinear thermal inactivation kinetics were consistently observed for this bacterium. This study confirms that the optimized phage assay can be employed in place of conventional culture on HEYM to speed up the acquisition of results (48 h instead of a minimum of 6 weeks) for inactivation experiments involving M. avium subsp. paratuberculosis-spiked samples. PMID:20097817
A COMPARISON OF GALAXY COUNTING TECHNIQUES IN SPECTROSCOPICALLY UNDERSAMPLED REGIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specian, Mike A.; Szalay, Alex S., E-mail: mspecia1@jhu.edu, E-mail: szalay@jhu.edu
2016-11-01
Accurate measures of galactic overdensities are invaluable for precision cosmology. Obtaining these measurements is complicated when members of one’s galaxy sample lack radial depths, most commonly derived via spectroscopic redshifts. In this paper, we utilize the Sloan Digital Sky Survey’s Main Galaxy Sample to compare seven methods of counting galaxies in cells when many of those galaxies lack redshifts. These methods fall into three categories: assigning galaxies discrete redshifts, scaling the numbers counted using regions’ spectroscopic completeness properties, and employing probabilistic techniques. We split spectroscopically undersampled regions into three types—those inside the spectroscopic footprint, those outside but adjacent to it,more » and those distant from it. Through Monte Carlo simulations, we demonstrate that the preferred counting techniques are a function of region type, cell size, and redshift. We conclude by reporting optimal counting strategies under a variety of conditions.« less
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutronsmore » in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. There are explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.« less
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications
NASA Astrophysics Data System (ADS)
Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.
2015-06-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non-destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including: the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half-maximum (FWHM) across the entire dynamic range, and a noise floor about 20 keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications
Barber, W. C.; Wessel, J. C.; Nygard, E.; Iwanczyk, J. S.
2014-01-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including; the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor about 20keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications. PMID:25937684
Energy dispersive CdTe and CdZnTe detectors for spectral clinical CT and NDT applications.
Barber, W C; Wessel, J C; Nygard, E; Iwanczyk, J S
2015-06-01
We are developing room temperature compound semiconductor detectors for applications in energy-resolved high-flux single x-ray photon-counting spectral computed tomography (CT), including functional imaging with nanoparticle contrast agents for medical applications and non destructive testing (NDT) for security applications. Energy-resolved photon-counting can provide reduced patient dose through optimal energy weighting for a particular imaging task in CT, functional contrast enhancement through spectroscopic imaging of metal nanoparticles in CT, and compositional analysis through multiple basis function material decomposition in CT and NDT. These applications produce high input count rates from an x-ray generator delivered to the detector. Therefore, in order to achieve energy-resolved single photon counting in these applications, a high output count rate (OCR) for an energy-dispersive detector must be achieved at the required spatial resolution and across the required dynamic range for the application. The required performance in terms of the OCR, spatial resolution, and dynamic range must be obtained with sufficient field of view (FOV) for the application thus requiring the tiling of pixel arrays and scanning techniques. Room temperature cadmium telluride (CdTe) and cadmium zinc telluride (CdZnTe) compound semiconductors, operating as direct conversion x-ray sensors, can provide the required speed when connected to application specific integrated circuits (ASICs) operating at fast peaking times with multiple fixed thresholds per pixel provided the sensors are designed for rapid signal formation across the x-ray energy ranges of the application at the required energy and spatial resolutions, and at a sufficiently high detective quantum efficiency (DQE). We have developed high-flux energy-resolved photon-counting x-ray imaging array sensors using pixellated CdTe and CdZnTe semiconductors optimized for clinical CT and security NDT. We have also fabricated high-flux ASICs with a two dimensional (2D) array of inputs for readout from the sensors. The sensors are guard ring free and have a 2D array of pixels and can be tiled in 2D while preserving pixel pitch. The 2D ASICs have four energy bins with a linear energy response across sufficient dynamic range for clinical CT and some NDT applications. The ASICs can also be tiled in 2D and are designed to fit within the active area of the sensors. We have measured several important performance parameters including; the output count rate (OCR) in excess of 20 million counts per second per square mm with a minimum loss of counts due to pulse pile-up, an energy resolution of 7 keV full width at half maximum (FWHM) across the entire dynamic range, and a noise floor about 20keV. This is achieved by directly interconnecting the ASIC inputs to the pixels of the CdZnTe sensors incurring very little input capacitance to the ASICs. We present measurements of the performance of the CdTe and CdZnTe sensors including the OCR, FWHM energy resolution, noise floor, as well as the temporal stability and uniformity under the rapidly varying high flux expected in CT and NDT applications.
Optimization of miRNA-seq data preprocessing.
Tam, Shirley; Tsao, Ming-Sound; McPherson, John D
2015-11-01
The past two decades of microRNA (miRNA) research has solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development in high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign), and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments. © The Author 2015. Published by Oxford University Press.
Optimization of the procedure for counting the eggs of Fasciola gigantica in bovine faeces.
Suhardono; Roberts, J A; Copeman, D B
2006-07-01
This paper describes a method for counting eggs of F. gigantica in bovine faeces that optimizes the proportion of eggs recovered and the repeatability of estimates. The method uses 3 g of faeces suspended in 0.05% Tween 20. The suspension is passed through three 6 cm diameter sieves in tandem to remove fibrous debris, with respective apertures of 1 mm, 450 microm, and either 266 or 200 microm. The filtrate is allowed to sediment for 3 min in a conical flask; the sediment is recovered, then resuspended in 200 ml of 0.05% Tween 20 and allowed to sediment. After 3 min the sediment is washed in a sieve with an aperture of 53 microm, which retains the eggs. Eggs suspended in 15 ml of 1% methylene blue are counted using a dissecting microscope. Use of Tween 20 instead of water as the suspending agent for faeces gave a significant threefold increased the proportion of eggs recovered and reduced variability between repeated counts. This method is able to detect about one-third of the eggs present. It was concluded that the high proportion of F. gigantica eggs lost may be due to the presence of hydrophobic and covalent bonds on the eggs that bind them to debris, with which they are discarded.
Disposable bioluminescence-based biosensor for detection of bacterial count in food.
Luo, Jinping; Liu, Xiaohong; Tian, Qing; Yue, Weiwei; Zeng, Jing; Chen, Guangquan; Cai, Xinxia
2009-11-01
A biosensor for rapid detection of bacterial count based on adenosine 5'-triphosphate (ATP) bioluminescence has been developed. The biosensor is composed of a key sensitive element and a photomultiplier tube used as a detector element. The disposable sensitive element consists of a sampler, a cartridge where intracellular ATP is chemically extracted from bacteria, and a microtube where the extracted ATP reacts with the luciferin-luciferase reagent to produce bioluminescence. The bioluminescence signal is transformed into relevant electrical signal by the detector and further measured with a homemade luminometer. Parameters affecting the amount of the extracted ATP, including the types of ATP extractants, the concentrations of ATP extractant, and the relevant neutralizing reagent, were optimized. Under the optimal experimental conditions, the biosensor showed a linear response to standard bacteria in a concentration range from 10(3) to 10(8) colony-forming units (CFU) per milliliter with a correlation coefficient of 0.925 (n=22) within 5min. Moreover, the bacterial count of real food samples obtained by the biosensor correlated well with those by the conventional plate count method. The proposed biosensor, with characteristics of low cost, easy operation, and fast response, provides potential application to rapid evaluation of bacterial contamination in the food industry, environment monitoring, and other fields.
Dried Blood Spot RNA Transcriptomes Correlate with Transcriptomes Derived from Whole Blood RNA.
Reust, Mary J; Lee, Myung Hee; Xiang, Jenny; Zhang, Wei; Xu, Dong; Batson, Tatiana; Zhang, Tuo; Downs, Jennifer A; Dupnik, Kathryn M
2018-05-01
Obtaining RNA from clinical samples collected in resource-limited settings can be costly and challenging. The goals of this study were to 1) optimize messenger RNA extraction from dried blood spots (DBS) and 2) determine how transcriptomes generated from DBS RNA compared with RNA isolated from blood collected in Tempus tubes. We studied paired samples collected from eight adults in rural Tanzania. Venous blood was collected on Whatman 903 Protein Saver cards and in tubes with RNA preservation solution. Our optimal DBS RNA extraction used 8 × 3-mm DBS punches as the starting material, bead beater disruption at maximum speed for 60 seconds, extraction with Illustra RNAspin Mini RNA Isolation kit, and purification with Zymo RNA Concentrator kit. Spearman correlations of normalized gene counts in DBS versus whole blood ranged from 0.887 to 0.941. Bland-Altman plots did not show a trend toward over- or under-counting at any gene size. We report a method to obtain sufficient RNA from DBS to generate a transcriptome. The DBS transcriptome gene counts correlated well with whole blood transcriptome gene counts. Dried blood spots for transcriptome studies could be an option when field conditions preclude appropriate collection, storage, or transport of whole blood for RNA studies.
Deng, Shijie; Morrison, Alan P
2012-09-15
This Letter presents an active quench-and-reset circuit for Geiger-mode avalanche photodiodes (GM-APDs). The integrated circuit was fabricated using a conventional 0.35 μm complementary metal oxide semiconductor process. Experimental results show that the circuit is capable of linearly setting the hold-off time from several nanoseconds to microseconds with a resolution of 6.5 ns. This allows the selection of the optimal afterpulse-free hold-off time for the GM-APD via external digital inputs or additional signal processing circuitry. Moreover, this circuit resets the APD automatically following the end of the hold-off period, thus simplifying the control for the end user. Results also show that a minimum dead time of 28.4 ns is achieved, demonstrating a saturated photon-counting rate of 35.2 Mcounts/s.
Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc
2013-01-01
We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. To finish, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe featuring the interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
X-ray characterization of a multichannel smart-pixel array detector.
Ross, Steve; Haji-Sheikh, Michael; Huntington, Andrew; Kline, David; Lee, Adam; Li, Yuelin; Rhee, Jehyuk; Tarpley, Mary; Walko, Donald A; Westberg, Gregg; Williams, George; Zou, Haifeng; Landahl, Eric
2016-01-01
The Voxtel VX-798 is a prototype X-ray pixel array detector (PAD) featuring a silicon sensor photodiode array of 48 × 48 pixels, each 130 µm × 130 µm × 520 µm thick, coupled to a CMOS readout application specific integrated circuit (ASIC). The first synchrotron X-ray characterization of this detector is presented, and its ability to selectively count individual X-rays within two independent arrival time windows, a programmable energy range, and localized to a single pixel is demonstrated. During our first trial run at Argonne National Laboratory's Advance Photon Source, the detector achieved a 60 ns gating time and 700 eV full width at half-maximum energy resolution in agreement with design parameters. Each pixel of the PAD holds two independent digital counters, and the discriminator for X-ray energy features both an upper and lower threshold to window the energy of interest discarding unwanted background. This smart-pixel technology allows energy and time resolution to be set and optimized in software. It is found that the detector linearity follows an isolated dead-time model, implying that megahertz count rates should be possible in each pixel. Measurement of the line and point spread functions showed negligible spatial blurring. When combined with the timing structure of the synchrotron storage ring, it is demonstrated that the area detector can perform both picosecond time-resolved X-ray diffraction and fluorescence spectroscopy measurements.
Denimal, Emmanuel; Marin, Ambroise; Guyot, Stéphane; Journaux, Ludovic; Molin, Paul
2015-08-01
In biology, hemocytometers such as Malassez slides are widely used and are effective tools for counting cells manually. In a previous work, a robust algorithm was developed for grid extraction in Malassez slide images. This algorithm was evaluated on a set of 135 images and grids were accurately detected in most cases, but there remained failures for the most difficult images. In this work, we present an optimization of this algorithm that allows for 100% grid detection and a 25% improvement in grid positioning accuracy. These improvements make the algorithm fully reliable for grid detection. This optimization also allows complete erasing of the grid without altering the cells, which eases their segmentation.
Novel Photon-Counting Detectors for Free-Space Communication
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Yang, Guan; Sun, Xiaoli; Lu, Wei; Merritt, Scott; Beck, Jeff
2016-01-01
We present performance data for novel photon counting detectors for free space optical communication. NASA GSFC is testing the performance of three novel photon counting detectors 1) a 2x8 mercury cadmium telluride avalanche array made by DRS Inc. 2) a commercial 2880 silicon avalanche photodiode array and 3) a prototype resonant cavity silicon avalanche photodiode array. We will present and compare dark count, photon detection efficiency, wavelength response and communication performance data for these detectors. We discuss system wavelength trades and architectures for optimizing overall communication link sensitivity, data rate and cost performance. The HgCdTe APD array has photon detection efficiencies of greater than 50 were routinely demonstrated across 5 arrays, with one array reaching a maximum PDE of 70. High resolution pixel-surface spot scans were performed and the junction diameters of the diodes were measured. The junction diameter was decreased from 31 m to 25 m resulting in a 2x increase in e-APD gain from 470 on the 2010 array to 1100 on the array delivered to NASA GSFC. Mean single photon SNRs of over 12 were demonstrated at excess noise factors of 1.2-1.3.The commercial silicon APD array has a fast output with rise times of 300ps and pulse widths of 600ps. Received and filtered signals from the entire array are multiplexed onto this single fast output. The prototype resonant cavity silicon APD array is being developed for use at 1 micron wavelength.
Role of data aggregation in biosurveillance detection strategies with applications from ESSENCE.
Burkom, Howard S; Elbert, Y; Feldman, A; Lin, J
2004-09-24
Syndromic surveillance systems are used to monitor daily electronic data streams for anomalous counts of features of varying specificity. The monitored quantities might be counts of clinical diagnoses, sales of over-the-counter influenza remedies, school absenteeism among a given age group, and so forth. Basic data-aggregation decisions for these systems include determining which records to count and how to group them in space and time. This paper discusses the application of spatial and temporal data-aggregation strategies for multiple data streams to alerting algorithms appropriate to the surveillance region and public health threat of interest. Such a strategy was applied and evaluated for a complex, authentic, multisource, multiregion environment, including >2 years of data records from a system-evaluation exercise for the Defense Advanced Research Project Agency (DARPA). Multivariate and multiple univariate statistical process control methods were adapted and applied to the DARPA data collection. Comparative parametric analyses based on temporal aggregation were used to optimize the performance of these algorithms for timely detection of a set of outbreaks identified in the data by a team of epidemiologists. The sensitivity and timeliness of the most promising detection methods were tested at empirically calculated thresholds corresponding to multiple practical false-alert rates. Even at the strictest false-alert rate, all but one of the outbreaks were detected by the best method, and the best methods achieved a 1-day median time before alert over the set of test outbreaks. These results indicate that a biosurveillance system can provide a substantial alerting-timeliness advantage over traditional public health monitoring for certain outbreaks. Comparative analyses of individual algorithm results indicate further achievable improvement in sensitivity and specificity.
NASA Technical Reports Server (NTRS)
Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.
2017-01-01
Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to the eccentric anomaly and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are shown in excess of 1000 revolutions while subject to Earths J2 perturbation and lunar gravity.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
We report particle detectors operated in pulse mode are subjected to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take into account the dead-time effect. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts overmore » time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows to obtain from the measured detector counts both the dead-time value and the real detector counts.« less
Application of the backward extrapolation method to pulsed neutron sources
Talamo, Alberto; Gohar, Yousry
2017-09-23
We report particle detectors operated in pulse mode are subjected to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time it is more difficult to take into account the dead-time effect. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts overmore » time. This work addresses this issue by using the backward extrapolation method. The latter can be applied not only to a continuous (e.g. californium) external neutron source but also to a pulsed external neutron source (e.g. by a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method allows to obtain from the measured detector counts both the dead-time value and the real detector counts.« less
Cho, H-M; Ding, H; Ziemer, B P; Molloi, S
2014-12-07
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectra with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm(2) in detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
NASA Astrophysics Data System (ADS)
Cho, H.-M.; Ding, H.; Ziemer, BP; Molloi, S.
2014-12-01
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using x-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for x-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectra with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm2 in detection area. The angular dependence of x-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded x-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of x-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of x-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic x-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the x-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory.
Cho, H-M; Ding, H; Ziemer, BP; Molloi, S
2014-01-01
Accurate energy calibration is critical for the application of energy-resolved photon-counting detectors in spectral imaging. The aim of this study is to investigate the feasibility of energy response calibration and characterization of a photon-counting detector using X-ray fluorescence. A comprehensive Monte Carlo simulation study was performed using Geant4 Application for Tomographic Emission (GATE) to investigate the optimal technique for X-ray fluorescence calibration. Simulations were conducted using a 100 kVp tungsten-anode spectra with 2.7 mm Al filter for a single pixel cadmium telluride (CdTe) detector with 3 × 3 mm2 in detection area. The angular dependence of X-ray fluorescence and scatter background was investigated by varying the detection angle from 20° to 170° with respect to the beam direction. The effects of the detector material, shape, and size on the recorded X-ray fluorescence were investigated. The fluorescent material size effect was considered with and without the container for the fluorescent material. In order to provide validation for the simulation result, the angular dependence of X-ray fluorescence from five fluorescent materials was experimentally measured using a spectrometer. Finally, eleven of the fluorescent materials were used for energy calibration of a CZT-based photon-counting detector. The optimal detection angle was determined to be approximately at 120° with respect to the beam direction, which showed the highest fluorescence to scatter ratio (FSR) with a weak dependence on the fluorescent material size. The feasibility of X-ray fluorescence for energy calibration of photon-counting detectors in the diagnostic X-ray energy range was verified by successfully calibrating the energy response of a CZT-based photon-counting detector. The results of this study can be used as a guideline to implement the X-ray fluorescence calibration method for photon-counting detectors in a typical imaging laboratory. PMID:25369288
Rostami, Kamran; Marsh, Michael N; Johnson, Matt W; Mohaghegh, Hamid; Heal, Calvin; Holmes, Geoffrey; Ensari, Arzu; Aldulaimi, David; Bancel, Brigitte; Bassotti, Gabrio; Bateman, Adrian; Becheanu, Gabriel; Bozzola, Anna; Carroccio, Antonio; Catassi, Carlo; Ciacci, Carolina; Ciobanu, Alexandra; Danciu, Mihai; Derakhshan, Mohammad H; Elli, Luca; Ferrero, Stefano; Fiorentino, Michelangelo; Fiorino, Marilena; Ganji, Azita; Ghaffarzadehgan, Kamran; Going, James J; Ishaq, Sauid; Mandolesi, Alessandra; Mathews, Sherly; Maxim, Roxana; Mulder, Chris J; Neefjes-Borst, Andra; Robert, Marie; Russo, Ilaria; Rostami-Nejad, Mohammad; Sidoni, Angelo; Sotoudeh, Masoud; Villanacci, Vincenzo; Volta, Umberto; Zali, Mohammad R; Srivastava, Amitabh
2017-12-01
Counting intraepithelial lymphocytes (IEL) is central to the histological diagnosis of coeliac disease (CD), but no definitive 'normal' IEL range has ever been published. In this multicentre study, receiver operating characteristic (ROC) curve analysis was used to determine the optimal cut-off between normal and CD (Marsh III lesion) duodenal mucosa, based on IEL counts on >400 mucosal biopsy specimens. The study was designed at the International Meeting on Digestive Pathology, Bucharest 2015. Investigators from 19 centres, eight countries of three continents, recruited 198 patients with Marsh III histology and 203 controls and used one agreed protocol to count IEL/100 enterocytes in well-oriented duodenal biopsies. Demographic and serological data were also collected. The mean ages of CD and control groups were 45.5 (neonate to 82) and 38.3 (2-88) years. Mean IEL count was 54±18/100 enterocytes in CD and 13±8 in normal controls (p=0.0001). ROC analysis indicated an optimal cut-off point of 25 IEL/100 enterocytes, with 99% sensitivity, 92% specificity and 99.5% area under the curve. Other cut-offs between 20 and 40 IEL were less discriminatory. Additionally, there was a sufficiently high number of biopsies to explore IEL counts across the subclassification of the Marsh III lesion. Our ROC curve analyses demonstrate that for Marsh III lesions, a cut-off of 25 IEL/100 enterocytes optimises discrimination between normal control and CD biopsies. No differences in IEL counts were found between Marsh III a, b and c lesions. There was an indication of a continuously graded dose-response by IEL to environmental (gluten) antigenic influence. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Gerestein, C G; Eijkemans, M J C; de Jong, D; van der Burg, M E L; Dykgraaf, R H M; Kooi, G S; Baalbergen, A; Burger, C W; Ansink, A C
2009-02-01
Prognosis in women with ovarian cancer mainly depends on International Federation of Gynecology and Obstetrics stage and the ability to perform optimal cytoreductive surgery. Since ovarian cancer has a heterogeneous presentation and clinical course, predicting progression-free survival (PFS) and overall survival (OS) in the individual patient is difficult. The objective of this study was to determine predictors of PFS and OS in women with advanced stage epithelial ovarian cancer (EOC) after primary cytoreductive surgery and first-line platinum-based chemotherapy. Retrospective observational study. Two teaching hospitals and one university hospital in the south-western part of the Netherlands. Women with advanced stage EOC. All women who underwent primary cytoreductive surgery for advanced stage EOC followed by first-line platinum-based chemotherapy between January 1998 and October 2004 were identified. To investigate independent predictors of PFS and OS, a Cox' proportional hazard model was used. Nomograms were generated with the identified predictive parameters. The primary outcome measure was OS and the secondary outcome measures were response and PFS. A total of 118 women entered the study protocol. Median PFS and OS were 15 and 44 months, respectively. Preoperative platelet count (P = 0.007), and residual disease <1 cm (P = 0.004) predicted PFS with a optimism corrected c-statistic of 0.63. Predictive parameters for OS were preoperative haemoglobin serum concentration (P = 0.012), preoperative platelet counts (P = 0.031) and residual disease <1 cm (P = 0.028) with a optimism corrected c-statistic of 0.67. PFS could be predicted by postoperative residual disease and preoperative platelet counts, whereas residual disease, preoperative platelet counts and preoperative haemoglobin serum concentration were predictive for OS. The proposed nomograms need to be externally validated.
Joensen, Ulla Nordström; Jensen, Tina Kold; Jensen, Martin Blomberg; Almstrup, Kristian; Olesen, Inge Ahlmann; Juul, Anders; Andersson, Anna-Maria; Carlsen, Elisabeth; Petersen, Jørgen Holm; Toppari, Jorma; Skakkebæk, Niels E
2012-01-01
Objectives Considerable interest and controversy over a possible decline in semen quality during the 20th century raised concern that semen quality could have reached a critically low level where it might affect human reproduction. The authors therefore initiated a study to assess reproductive health in men from the general population and to monitor changes in semen quality over time. Design Cross-sectional study of men from the general Danish population. Inclusion criteria were place of residence in the Copenhagen area, and both the man and his mother being born and raised in Denmark. Men with severe or chronic diseases were not included. Setting Danish one-centre study. Participants 4867 men, median age 19 years, included from 1996 to 2010. Outcome measures Semen volume, sperm concentration, total sperm count, sperm motility and sperm morphology. Results Only 23% of participants had optimal sperm concentration and sperm morphology. Comparing with historic data of men attending a Copenhagen infertility clinic in the 1940s and men who recently became fathers, these two groups had significantly better semen quality than our study group from the general population. Over the 15 years, median sperm concentration increased from 43 to 48 million/ml (p=0.02) and total sperm count from 132 to 151 million (p=0.001). The median percentage of motile spermatozoa and abnormal spermatozoa were 68% and 93%, and did not change during the study period. Conclusions This large prospective study of semen quality among young men of the general population showed an increasing trend in sperm concentration and total sperm count. However, only one in four men had optimal semen quality. In addition, one in four will most likely face a prolonged waiting time to pregnancy if they in the future want to father a child and another 15% are at risk of the need of fertility treatment. Thus, reduced semen quality seems so frequent that it may impair the fertility rates and further increase the demand for assisted reproduction. PMID:22761286
Tanada, H; Ikemoto, T; Masutani, R; Tanaka, H; Takubo, T
2014-02-01
In this study, we evaluated the performance of the ADVIA 120 hematology system for cerebrospinal fluid (CSF) assay. Cell counts and leukocyte differentials in CSF were examined with the ADVIA 120 hematology system, while simultaneously confirming an effective hemolysis agent for automated CSF cell counts. The detection limits of both white blood cell (WBC) counts and red blood cell (RBC) counts on the measurement of CSF cell counts by the ADVIA 120 hematology system were superior at 2 cells/μL (10(-6) L). The WBC count was linear up to 9.850 cells/μL, and the RBC count was linear up to approximately 20 000 cells/μL. The intrarun reproducibility indicated good precision. The leukocyte differential of CSF cells, performed by the ADVIA120 hematology system, showed good correlation with the microscopic procedure. The VersaLyse hemolysis solution efficiently lysed the samples without interfering with cell counts and leukocyte differential, even in a sample that included approximately 50 000/μL RBC. These data show the ADVIA 120 hematology system correctly measured the WBC count and leukocyte differential in CSF. The VersaLyse hemolysis solution is considered to be optimal for hemolysis treatment of CSF when measuring cell counts and differentials by the ADVIA 120 hematology system. © 2013 John Wiley & Sons Ltd.
Goldman, A. J.
2006-01-01
Dr. Christoph Witzgall, the honoree of this Symposium, can count among his many contributions to applied mathematics and mathematical operations research a body of widely-recognized work on the optimal location of facilities. The present paper offers to non-specialists a sketch of that field and its evolution, with emphasis on areas most closely related to Witzgall’s research at NBS/NIST. PMID:27274920
The effect of microchannel plate gain depression on PAPA photon counting cameras
NASA Astrophysics Data System (ADS)
Sams, Bruce J., III
1991-03-01
PAPA (precision analog photon address) cameras are photon counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by adjusting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has highly a concentrated bright spot. This paper describes how to optimize PAPA camera electronics, and describes six techniques which can avoid or minimize addressing errors.
Optimal Pulse Processing, Pile-Up Decomposition, and Applications of Silicon Drift Detectors at LCLS
Blaj, G.; Kenney, C. J.; Dragone, A.; ...
2017-10-11
Silicon drift detectors (SDDs) revolutionized spectroscopy in fields as diverse as geology and dentistry. For a subset of experiments at ultrafast, X-ray free-electron lasers (FELs), SDDs can make substantial contributions. Often the unknown spectrum is interesting, carrying science data, or the background measurement is useful to identify unexpected signals. Many measurements involve only several discrete photon energies known a priori, allowing single-event decomposition of pile-up and spectroscopic photon counting. We designed a pulse function and demonstrated that the signal amplitude (i.e., proportional to the detected energy and obtained from fitting with the pulse function), rise time, and pulse height aremore » interrelated, and at short peaking times, the pulse height and pulse area are not optimal estimators for detected energy; instead, the signal amplitude and rise time are obtained for each pulse by fitting, thus removing the need for pulse shaping. By avoiding pulse shaping, rise times of tens of nanoseconds resulted in reduced pulse pile-up and allowed decomposition of remaining pulse pile-up at photon separation times down to hundreds of nanoseconds while yielding time-of-arrival information with the precision of 10 ns. Waveform fitting yields simultaneously high energy resolution and high counting rates (two orders of magnitude higher than current digital pulse processors). At pulsed sources or high photon rates, photon pile-up still occurs. We showed that pile-up spectrum fitting is relatively simple and preferable to pile-up spectrum deconvolution. We then developed a photon pile-up statistical model for constant intensity sources, extended it to variable intensity sources (typical for FELs), and used it to fit a complex pileup spectrum. We subsequently developed a Bayesian pile-up decomposition method that allows decomposing pile-up of single events with up to six photons from six monochromatic lines with 99% accuracy. The usefulness of SDDs will continue into the X-ray FEL era of science. Their successors, the ePixS hybrid pixel detectors, already offer hundreds of pixels, each with a similar performance to an SDD, in a compact, robust and affordable package.« less
Optimal Pulse Processing, Pile-Up Decomposition, and Applications of Silicon Drift Detectors at LCLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaj, G.; Kenney, C. J.; Dragone, A.
Silicon drift detectors (SDDs) revolutionized spectroscopy in fields as diverse as geology and dentistry. For a subset of experiments at ultrafast, X-ray free-electron lasers (FELs), SDDs can make substantial contributions. Often the unknown spectrum is interesting, carrying science data, or the background measurement is useful to identify unexpected signals. Many measurements involve only several discrete photon energies known a priori, allowing single-event decomposition of pile-up and spectroscopic photon counting. We designed a pulse function and demonstrated that the signal amplitude (i.e., proportional to the detected energy and obtained from fitting with the pulse function), rise time, and pulse height aremore » interrelated, and at short peaking times, the pulse height and pulse area are not optimal estimators for detected energy; instead, the signal amplitude and rise time are obtained for each pulse by fitting, thus removing the need for pulse shaping. By avoiding pulse shaping, rise times of tens of nanoseconds resulted in reduced pulse pile-up and allowed decomposition of remaining pulse pile-up at photon separation times down to hundreds of nanoseconds while yielding time-of-arrival information with the precision of 10 ns. Waveform fitting yields simultaneously high energy resolution and high counting rates (two orders of magnitude higher than current digital pulse processors). At pulsed sources or high photon rates, photon pile-up still occurs. We showed that pile-up spectrum fitting is relatively simple and preferable to pile-up spectrum deconvolution. We then developed a photon pile-up statistical model for constant intensity sources, extended it to variable intensity sources (typical for FELs), and used it to fit a complex pileup spectrum. We subsequently developed a Bayesian pile-up decomposition method that allows decomposing pile-up of single events with up to six photons from six monochromatic lines with 99% accuracy. The usefulness of SDDs will continue into the X-ray FEL era of science. Their successors, the ePixS hybrid pixel detectors, already offer hundreds of pixels, each with a similar performance to an SDD, in a compact, robust and affordable package.« less
Clinical Features, Treatment, and Outcome of HIV-Associated Immune Thrombocytopenia in the HAART Era
Ambler, Kimberley L. S.; Vickars, Linda M.; Leger, Chantal S.; Foltz, Lynda M.; Montaner, Julio S. G.; Harris, Marianne; Dias Lima, Viviane; Leitch, Heather A.
2012-01-01
The characteristics of HIV-associated ITP were documented prior to the HAART era, and the optimal treatment beyond HAART is unknown. We performed a review of patients with HIV-associated ITP and at least one platelet count <20 × 109/L since January 1996. Of 5290 patients in the BC Centre for Excellence in HIV/AIDS database, 31 (0.6%) had an ITP diagnosis and platelet count <20 × 109/L. Initial ITP treatment included IVIG, n = 12; steroids, n = 10; anti-RhD, n = 8; HAART, n = 3. Sixteen patients achieved response and nine patients achieved complete response according to the International Working Group criteria. Median time to response was 14 days. Platelet response was not significantly associated with treatment received, but complete response was lower in patients with a history of injection drug use. Complications of ITP treatment occurred in two patients and there were four unrelated deaths. At a median followup of 48 months, 22 patients (71%) required secondary ITP treatment. This is to our knowledge the largest series of severe HIV-associated ITP reported in the HAART era. Although most patients achieved a safe platelet count with primary ITP treatment, nearly all required retreatment for ITP recurrence. New approaches to the treatment of severe ITP in this population are needed. PMID:22693513
Ambler, Kimberley L S; Vickars, Linda M; Leger, Chantal S; Foltz, Lynda M; Montaner, Julio S G; Harris, Marianne; Dias Lima, Viviane; Leitch, Heather A
2012-01-01
The characteristics of HIV-associated ITP were documented prior to the HAART era, and the optimal treatment beyond HAART is unknown. We performed a review of patients with HIV-associated ITP and at least one platelet count <20 × 10(9)/L since January 1996. Of 5290 patients in the BC Centre for Excellence in HIV/AIDS database, 31 (0.6%) had an ITP diagnosis and platelet count <20 × 10(9)/L. Initial ITP treatment included IVIG, n = 12; steroids, n = 10; anti-RhD, n = 8; HAART, n = 3. Sixteen patients achieved response and nine patients achieved complete response according to the International Working Group criteria. Median time to response was 14 days. Platelet response was not significantly associated with treatment received, but complete response was lower in patients with a history of injection drug use. Complications of ITP treatment occurred in two patients and there were four unrelated deaths. At a median followup of 48 months, 22 patients (71%) required secondary ITP treatment. This is to our knowledge the largest series of severe HIV-associated ITP reported in the HAART era. Although most patients achieved a safe platelet count with primary ITP treatment, nearly all required retreatment for ITP recurrence. New approaches to the treatment of severe ITP in this population are needed.
Measurement of nitrogen in the body using a commercial PGNAA system--phantom experiments.
Chichester, D L; Empey, E
2004-01-01
An industrial prompt-gamma neutron activation analysis (PGNAA) system, originally designed for the real-time elemental analyses of bulk coal on a conveyor belt, has been studied to examine the feasibility of using such a system for body composition analysis. Experiments were conducted to measure nitrogen in a simple, tissue equivalent phantom comprised of 2.7 wt% of nitrogen. The neutron source for these experiments was 365 MBq (18.38 microg) of 252Cf located within an engineered low Z moderator and it yielded a dose rate in the measurement position of 3.91 mSv/h; data were collected using a 2780 cm(3) NaI(Tl) cylindrical detector with a digital signal processor and a 512 channel MCA. Source, moderator and detector geometries were unaltered from the system's standard configuration, where they have been optimized for considerations such as neutron thermalization, measurement sensitivity and uniformity, background radiation and external dose minimization. Based on net counts in the 10.8 MeV PGNAA nitrogen photopeak and its escape peaks the dose dependent nitrogen count rate was 11,600 counts/mSv with an uncertainty of 3.0% after 0.32 mSv (4.9 min), 2.0% after 0.74 mSv (11.4 min) and 1.0% after 3.02 mSv (46.4 min).
Microbial Air Quality and Bacterial Surface Contamination in Ambulances During Patient Services
Luksamijarulkul, Pipat; Pipitsangjan, Sirikun
2015-01-01
Objectives We sought to assess microbial air quality and bacterial surface contamination on medical instruments and the surrounding areas among 30 ambulance runs during service. Methods We performed a cross-sectional study of 106 air samples collected from 30 ambulances before patient services and 212 air samples collected during patient services to assess the bacterial and fungal counts at the two time points. Additionally, 226 surface swab samples were collected from medical instrument surfaces and the surrounding areas before and after ambulance runs. Groups or genus of isolated bacteria and fungi were preliminarily identified by Gram’s stain and lactophenol cotton blue. Data were analyzed using descriptive statistics, t-test, and Pearson’s correlation coefficient with a p-value of less than 0.050 considered significant. Results The mean and standard deviation of bacterial and fungal counts at the start of ambulance runs were 318±485cfu/m3 and 522±581cfu/m3, respectively. Bacterial counts during patient services were 468±607cfu/m3 and fungal counts were 656±612cfu/m3. Mean bacterial and fungal counts during patient services were significantly higher than those at the start of ambulance runs, p=0.005 and p=0.030, respectively. For surface contamination, the overall bacterial counts before and after patient services were 0.8±0.7cfu/cm2 and 1.3±1.1cfu/cm2, respectively (p<0.001). The predominant isolated bacteria and fungi were Staphylococcus spp. and Aspergillus spp., respectively. Additionally, there was a significantly positive correlation between bacterial (r=0.3, p<0.010) and fungal counts (r=0.2, p=0.020) in air samples and bacterial counts on medical instruments and allocated areas. Conclusions This study revealed high microbial contamination (bacterial and fungal) in ambulance air during services and higher bacterial contamination on medical instrument surfaces and allocated areas after ambulance services compared to the start of ambulance runs. Additionally, bacterial and fungal counts in ambulance air showed a significantly positive correlation with the bacterial surface contamination on medical instruments and allocated areas. Further studies should be conducted to determine the optimal intervention to reduce microbial contamination in the ambulance environment. PMID:25960835
Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity
NASA Astrophysics Data System (ADS)
Chen, Lei; Kimura, Shinji
In this paper, a new heuristic algorithm is proposed to optimize the power domain clustering in controlling-value-based (CV-based) power gating technology. In this algorithm, both the switching activity of sleep signals (p) and the overall numbers of sleep gates (gate count, N) are considered, and the sum of the product of p and N is optimized. The algorithm effectively exerts the total power reduction obtained from the CV-based power gating. Even when the maximum depth is kept to be the same, the proposed algorithm can still achieve power reduction approximately 10% more than that of the prior algorithms. Furthermore, detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms are also presented. HSPICE simulation results show that over 26% of total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.
Arima, Nobuyuki; Nishimura, Reiki; Osako, Tomofumi; Nishiyama, Yasuyuki; Fujisue, Mamiko; Okumura, Yasuhiro; Nakano, Masahiro; Tashima, Rumiko; Toyozumi, Yasuo
2016-01-01
In this case-control study, we investigated the most suitable cell counting area and the optimal cutoff point of the Ki-67 index. Thirty recurrent cases were selected among hormone receptor (HR)-positive/HER2-negative breast cancer patients. As controls, 90 nonrecurrent cases were randomly selected by allotting 3 controls to each recurrent case based on the following criteria: age, nodal status, tumor size, and adjuvant endocrine therapy alone. Both the hot spot and the average area of the tumor were evaluated on a Ki-67 immunostaining slide. The median Ki-67 index value at the hot spot and average area were 25.0 and 14.5%, respectively. Irrespective of the area counted, the Ki-67 index value was significantly higher in all of the recurrent cases (p < 0.0001). The multivariate analysis revealed that the Ki-67 index value of 20% at the hot spot was the most suitable cutoff point for predicting recurrence. Moreover, higher x0394;Ki-67 index value (the difference between the hot spot and the average area, ≥10%) and lower progesterone receptor expression (<20%) were significantly correlated with recurrence. A higher Ki-67 index value at the hot spot strongly correlated with recurrence, and the optimal cutoff point was found to be 20%. © 2015 S. Karger AG, Basel.
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
2012-01-01
Background Most clinical guidelines recommend that AIDS-free, HIV-infected persons with CD4 cell counts below 0.350 × 109 cells/L initiate combined antiretroviral therapy (cART), but the optimal CD4 cell count at which cART should be initiated remains a matter of debate. Objective To identify the optimal CD4 cell count at which cART should be initiated. Design Prospective observational data from the HIV-CAUSAL Collaboration and dynamic marginal structural models were used to compare cART initiation strategies for CD4 thresholds between 0.200 and 0.500 × 109 cells/L. Setting HIV clinics in Europe and the Veterans Health Administration system in the United States. Patients 20 971 HIV-infected, therapy-naive persons with baseline CD4 cell counts at or above 0.500 × 109 cells/L and no previous AIDS-defining illnesses, of whom 8392 had a CD4 cell count that decreased into the range of 0.200 to 0.499 × 109 cells/L and were included in the analysis. Measurements Hazard ratios and survival proportions for all-cause mortality and a combined end point of AIDS-defining illness or death. Results Compared with initiating cART at the CD4 cell count threshold of 0.500 × 109 cells/L, the mortality hazard ratio was 1.01 (95% CI, 0.84 to 1.22) for the 0.350 threshold and 1.20 (CI, 0.97 to 1.48) for the 0.200 threshold. The corresponding hazard ratios were 1.38 (CI, 1.23 to 1.56) and 1.90 (CI, 1.67 to 2.15), respectively, for the combined end point of AIDS-defining illness or death. Limitations CD4 cell count at cART initiation was not randomized. Residual confounding may exist. Conclusion Initiation of cART at a threshold CD4 count of 0.500 × 109 cells/L increases AIDS-free survival. However, mortality did not vary substantially with the use of CD4 thresholds between 0.300 and 0.500 ×109 cells/L. Primary Funding Source National Institutes of Health. PMID:21502648
Cain, Lauren E; Logan, Roger; Robins, James M; Sterne, Jonathan A C; Sabin, Caroline; Bansi, Loveleen; Justice, Amy; Goulet, Joseph; van Sighem, Ard; de Wolf, Frank; Bucher, Heiner C; von Wyl, Viktor; Esteve, Anna; Casabona, Jordi; del Amo, Julia; Moreno, Santiago; Seng, Remonie; Meyer, Laurence; Perez-Hoyos, Santiago; Muga, Roberto; Lodi, Sara; Lanoy, Emilie; Costagliola, Dominique; Hernan, Miguel A
2011-04-19
Most clinical guidelines recommend that AIDS-free, HIV-infected persons with CD4 cell counts below 0.350 × 10(9) cells/L initiate combined antiretroviral therapy (cART), but the optimal CD4 cell count at which cART should be initiated remains a matter of debate. To identify the optimal CD4 cell count at which cART should be initiated. Prospective observational data from the HIV-CAUSAL Collaboration and dynamic marginal structural models were used to compare cART initiation strategies for CD4 thresholds between 0.200 and 0.500 × 10(9) cells/L. HIV clinics in Europe and the Veterans Health Administration system in the United States. 20, 971 HIV-infected, therapy-naive persons with baseline CD4 cell counts at or above 0.500 × 10(9) cells/L and no previous AIDS-defining illnesses, of whom 8392 had a CD4 cell count that decreased into the range of 0.200 to 0.499 × 10(9) cells/L and were included in the analysis. Hazard ratios and survival proportions for all-cause mortality and a combined end point of AIDS-defining illness or death. Compared with initiating cART at the CD4 cell count threshold of 0.500 × 10(9) cells/L, the mortality hazard ratio was 1.01 (95% CI, 0.84 to 1.22) for the 0.350 threshold and 1.20 (CI, 0.97 to 1.48) for the 0.200 threshold. The corresponding hazard ratios were 1.38 (CI, 1.23 to 1.56) and 1.90 (CI, 1.67 to 2.15), respectively, for the combined end point of AIDS-defining illness or death. CD4 cell count at cART initiation was not randomized. Residual confounding may exist. Initiation of cART at a threshold CD4 count of 0.500 × 10(9) cells/L increases AIDS-free survival. However, mortality did not vary substantially with the use of CD4 thresholds between 0.300 and 0.500 × 10(9) cells/L.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutronsmore » in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. There are explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.« less
Characterization of scintillator crystals for usage as prompt gamma monitors in particle therapy
NASA Astrophysics Data System (ADS)
Roemer, K.; Pausch, G.; Bemmerer, D.; Berthel, M.; Dreyer, A.; Golnik, C.; Hueso-González, F.; Kormoll, T.; Petzoldt, J.; Rohling, H.; Thirolf, P.; Wagner, A.; Wagner, L.; Weinberger, D.; Fiedler, F.
2015-10-01
Particle therapy in oncology is advantageous compared to classical radiotherapy due to its well-defined penetration depth. In the so-called Bragg peak, the highest dose is deposited; the tissue behind the cancerous area is not exposed. Different factors influence the range of the particle and thus the target area, e.g. organ motion, mispositioning of the patient or anatomical changes. In order to avoid over-exposure of healthy tissue and under-dosage of cancerous regions, the penetration depth of the particle has to be monitored, preferably already during the ongoing therapy session. The verification of the ion range can be performed using prompt gamma emissions, which are produced by interactions between projectile and tissue, and originate from the same location and time of the nuclear reaction. The prompt gamma emission profile and the clinically relevant penetration depth are correlated. Various imaging concepts based on the detection of prompt gamma rays are currently discussed: collimated systems with counting detectors, Compton cameras with (at least) two detector planes, or the prompt gamma timing method, utilizing the particle time-of-flight within the body. For each concept, the detection system must meet special requirements regarding energy, time, and spatial resolution. Nonetheless, the prerequisites remain the same: the gamma energy region (2 to 10 MeV), high counting rates and the stability in strong background radiation fields. The aim of this work is the comparison of different scintillation crystals regarding energy and time resolution for optimized prompt gamma detection.
Classifier-Guided Sampling for Complex Energy System Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backlund, Peter B.; Eddy, John P.
2015-09-01
This report documents the results of a Laboratory Directed Research and Development (LDRD) effort enti tled "Classifier - Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this proj ect was to develop, implement, and test major improvements to the classifier - guided sampling (CGS) algorithm. CGS is type of evolutionary algorithm for perform ing search and optimization over a set of discrete design variables in the face of one or more objective functions. E xisting evolutionary algorithms, such as genetic algorithms , may require a large number of omore » bjecti ve function evaluations to identify optimal or near - optimal solutions . Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduce s the evaluation count by us ing a Bayesian network classifier to filter out non - promising candidate designs , prior to evaluation, based on their posterior probabilit ies . In this project, b oth the single - objective and multi - objective version s of the CGS are developed and tested on a set of benchm ark problems. As a domain - specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.« less
Optimization of the R-SQUID noise thermometer
NASA Astrophysics Data System (ADS)
Seppä, Heikki
1986-02-01
The Josephson junction can be used to convert voltage into frequency and thus it can be used to convert voltage fluctuations generated by Johnson noise in a resistor into frequency fluctuations. As a consequence, the temperature of the resistor can be defined by measuring the variance of the frequency fluctuations. Unfortunately, the absolute determination of temperature by this approach is disturbed by several undesirable effects: a rolloff introduced by the bandwidth of the postdetection filter, additional noise caused by rf amplifiers, and a mixed noise effect caused by the nonlinearity of the Josephson junction together with rf noise in the tank circuit. Furthermore, the variance is a statistical quantity and therefore the limited number of frequency counts produces inaccuracy in a temperature measurement. In this work the total inaccuracy of the noise thermometer is analyzed and the optimal choice of the parameters is derived. A practical way to find the optimal conditions for the Josephson junction noise thermometer is discussed. The inspection shows that under the optimal conditions the total error is dependent only on the temperature under determination, the equivalent noise temperature of the preamplifier, the bias frequency of the SQUID, and the total time used for the measurement.
James F. Lynch
1995-01-01
Effects of count duration, time-of-day, and aural stimuli were studied in a series of unlimited-radius point counts conducted during winter in Quintana Roo, Mexico. The rate at which new species were detected was approximately three times higher during the first 5 minutes of each 15- minute count than in the final 5 minutes. The number of individuals and species...
Wolk, D M; Johnson, C H; Rice, E W; Marshall, M M; Grahn, K F; Plummer, C B; Sterling, C R
2000-04-01
The microsporidia have recently been recognized as a group of pathogens that have potential for waterborne transmission; however, little is known about the effects of routine disinfection on microsporidian spore viability. In this study, in vitro growth of Encephalitozoon syn. Septata intestinalis, a microsporidium found in the human gut, was used as a model to assess the effect of chlorine on the infectivity and viability of microsporidian spores. Spore inoculum concentrations were determined by using spectrophotometric measurements (percent transmittance at 625 nm) and by traditional hemacytometer counting. To determine quantitative dose-response data for spore infectivity, we optimized a rabbit kidney cell culture system in 24-well plates, which facilitated calculation of a 50% tissue culture infective dose (TCID(50)) and a minimal infective dose (MID) for E. intestinalis. The TCID(50) is a quantitative measure of infectivity and growth and is the number of organisms that must be present to infect 50% of the cell culture wells tested. The MID is as a measure of a system's permissiveness to infection and a measure of spore infectivity. A standardized MID and a standardized TCID(50) have not been reported previously for any microsporidian species. Both types of doses are reported in this paper, and the values were used to evaluate the effects of chlorine disinfection on the in vitro growth of microsporidia. Spores were treated with chlorine at concentrations of 0, 1, 2, 5, and 10 mg/liter. The exposure times ranged from 0 to 80 min at 25 degrees C and pH 7. MID data for E. intestinalis were compared before and after chlorine disinfection. A 3-log reduction (99.9% inhibition) in the E. intestinalis MID was observed at a chlorine concentration of 2 mg/liter after a minimum exposure time of 16 min. The log(10) reduction results based on percent transmittance-derived spore counts were equivalent to the results based on hemacytometer-derived spore counts. Our data suggest that chlorine treatment may be an effective water treatment for E. intestinalis and that spectrophotometric methods may be substituted for labor-intensive hemacytometer methods when spores are counted in laboratory-based chlorine disinfection studies.
Optimized collectives using a DMA on a parallel computer
Chen, Dong [Croton On Hudson, NY; Gabor, Dozsa [Ardsley, NY; Giampapa, Mark E [Irvington, NY; Heidelberger,; Phillip, [Cortlandt Manor, NY
2011-02-08
Optimizing collective operations using direct memory access controller on a parallel computer, in one aspect, may comprise establishing a byte counter associated with a direct memory access controller for each submessage in a message. The byte counter includes at least a base address of memory and a byte count associated with a submessage. A byte counter associated with a submessage is monitored to determine whether at least a block of data of the submessage has been received. The block of data has a predetermined size, for example, a number of bytes. The block is processed when the block has been fully received, for example, when the byte count indicates all bytes of the block have been received. The monitoring and processing may continue for all blocks in all submessages in the message.
Rad-hard Dual-threshold High-count-rate Silicon Pixel-array Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Adam
In this program, a Voxtel-led team demonstrates a full-format (192 x 192, 100-µm pitch, VX-810) high-dynamic-range x-ray photon-counting sensor—the Dual Photon Resolved Energy Acquisition (DUPREA) sensor. Within the Phase II program the following tasks were completed: 1) system analysis and definition of the DUPREA sensor requirements; 2) design, simulation, and fabrication of the full-format VX-810 ROIC design; 3) design, optimization, and fabrication of thick, fully depleted silicon photodiodes optimized for x-ray photon collection; 4) hybridization of the VX-810 ROIC to the photodiode array in the creation of the optically sensitive focal-plane array; 5) development of an evaluation camera; and 6)more » electrical and optical characterization of the sensor.« less
Design of a novel instrument for active neutron interrogation of artillery shells.
Bélanger-Champagne, Camille; Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from [Formula: see text]% to [Formula: see text]% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 109 n/s.
Design of a novel instrument for active neutron interrogation of artillery shells
Vainionpää, Hannes; Peura, Pauli; Toivonen, Harri; Eerola, Paula; Dendooven, Peter
2017-01-01
The most common explosives can be uniquely identified by measuring the elemental H/N ratio with a precision better than 10%. Monte Carlo simulations were used to design two variants of a new prompt gamma neutron activation instrument that can achieve this precision. The instrument features an intense pulsed neutron generator with precise timing. Measuring the hydrogen peak from the target explosive is especially challenging because the instrument itself contains hydrogen, which is needed for neutron moderation and shielding. By iterative design optimization, the fraction of the hydrogen peak counts coming from the explosive under interrogation increased from 53-7+7% to 74-10+8% (statistical only) for the benchmark design. In the optimized design variants, the hydrogen signal from a high-explosive shell can be measured to a statistics-only precision better than 1% in less than 30 minutes for an average neutron production yield of 109 n/s. PMID:29211773
A New Pulse Pileup Rejection Method Based on Position Shift Identification
NASA Astrophysics Data System (ADS)
Gu, Z.; Prout, D. L.; Taschereau, R.; Bai, B.; Chatziioannou, A. F.
2016-02-01
Pulse pileup events degrade the signal-to-noise ratio (SNR) of nuclear medicine data. When such events occur in multiplexed detectors, they cause spatial misposition, energy spectrum distortion and degraded timing resolution, which leads to image artifacts. Pulse pileup is pronounced in PETbox4, a bench top PET scanner dedicated to high sensitivity and high resolution imaging of mice. In that system, the combination of high absolute sensitivity, long scintillator decay time (BGO) and highly multiplexed electronics lead to a significant fraction of pulse pileup, reached at lower total activity than for comparable instruments. In this manuscript, a new pulse pileup rejection method named position shift rejection (PSR) is introduced. The performance of PSR is compared with a conventional leading edge rejection (LER) method and with no pileup rejection implemented (NoPR). A comprehensive digital pulse library was developed for objective evaluation and optimization of the PSR and LER, in which pulse waveforms were directly recorded from real measurements exactly representing the signals to be processed. Physical measurements including singles event acquisition, peak system sensitivity and NEMA NU-4 image quality phantom were also performed in the PETbox4 system to validate and compare the different pulse pile-up rejection methods. The evaluation of both physical measurements and model pulse trains demonstrated that the new PSR performs more accurate pileup event identification and avoids erroneous rejection of valid events. For the PETbox4 system, this improvement leads to a significant recovery of sensitivity at low count rates, amounting to about 1/4th of the expected true coincidence events, compared to the LER method. Furthermore, with the implementation of PSR, optimal image quality can be achieved near the peak noise equivalent count rate (NECR).
Origins of timed cancer treatment: early marker rhythm-guided individualized chronochemotherapy*
Halberg, Franz; Prem, Konald; Halberg, Francine; Norman, Catherine; Cornélissen, Germaine
2008-01-01
A 21-year old patient who presented in 1973 with a rare and highly malignant ovarian endodermal sinus tumor with spillage into the peritoneal cavity is alive and well today after receiving chronochemotherapy. During the first four courses of treatment, medications were given at different circadian stages. Complete blood counts and marker variables such as mood, vigor, nausea, and temperature were monitored around the clock and analyzed by cosinor to seek times of highest tolerance. Remaining treatment courses were administered at a time corresponding to the patient's best drug tolerance, rather than extrapolating the timing of optimal cyclophosphamide administration from also-implemented parallel laboratory studies on mice. Notwithstanding remaining hurdles in bringing chronochemotherapy to the clinic for routine care, merits of marker rhythm-guided chronotherapy documented in this and other case reports have led to the doubling of the two-year disease-free survival of patients with large perioral tumors in a clinical trial. PMID:17228525
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shcheslavskiy, V., E-mail: vis@becker-hickl.de; Becker, W.; Morozov, P.
Time resolution is one of the main characteristics of the single photon detectors besides quantum efficiency and dark count rate. We demonstrate here an ultrafast time-correlated single photon counting (TCSPC) setup consisting of a newly developed single photon counting board SPC-150NX and a superconducting NbN single photon detector with a sensitive area of 7 × 7 μm. The combination delivers a record instrument response function with a full width at half maximum of 17.8 ps and system quantum efficiency ∼15% at wavelength of 1560 nm. A calculation of the root mean square value of the timing jitter for channels withmore » counts more than 1% of the peak value yielded about 7.6 ps. The setup has also good timing stability of the detector–TCSPC board.« less
Turcott, R G; Lowen, S B; Li, E; Johnson, D H; Tsuchitani, C; Teich, M C
1994-01-01
The behavior of lateral-superior-olive (LSO) auditory neurons over large time scales was investigated. Of particular interest was the determination as to whether LSO neurons exhibit the same type of fractal behavior as that observed in primary VIII-nerve auditory neurons. It has been suggested that this fractal behavior, apparent on long time scales, may play a role in optimally coding natural sounds. We found that a nonfractal model, the nonstationary dead-time-modified Poisson point process (DTMP), describes the LSO firing patterns well for time scales greater than a few tens of milliseconds, a region where the specific details of refractoriness are unimportant. The rate is given by the sum of two decaying exponential functions. The process is completely specified by the initial values and time constants of the two exponentials and by the dead-time relation. Specific measures of the firing patterns investigated were the interspike-interval (ISI) histogram, the Fano-factor time curve (FFC), and the serial count correlation coefficient (SCC) with the number of action potentials in successive counting times serving as the random variable. For all the data sets we examined, the latter portion of the recording was well approximated by a single exponential rate function since the initial exponential portion rapidly decreases to a negligible value. Analytical expressions available for the statistics of a DTMP with a single exponential rate function can therefore be used for this portion of the data. Good agreement was obtained among the analytical results, the computer simulation, and the experimental data on time scales where the details of refractoriness are insignificant.(ABSTRACT TRUNCATED AT 250 WORDS)
Shu, Guowei; Shi, Xiaoyu; Chen, He; Ji, Zhe; Meng, Jiangpeng
2018-03-23
Hypertension is a serious threat to human health and food-derived angiotensin converting enzyme (ACE; EC 3.4.15.1) inhibitory peptides can be used to regulate high blood pressure without side effects. The composition of the nutrient medium for the production of these peptides by fermenting goat milk with Lactobacillus bulgaricus LB6 was optimized to increase the ACE inhibitory activity by Box-Behnken design (BBD) of response surface methodology (RSM) in the present study. Soybean peptone, glucose, and casein had significant effects on both ACE inhibition rate and viable counts of L. bulgaricus LB6 during incubation. The results showed that the maximum values of ACE inhibition rate and viable counts for L. bulgaricus LB6 were reaching to 86.37 ± 0.53% and 8.06 × 10 7 under the optimal conditions, which were 0.35% (w/w) soybean peptone, 1.2% (w/w) glucose, and 0.15% (w/w) casein. The results were in close agreement with the model prediction. The optimal values of the medium component concentrations can be a good reference for obtaining ACE inhibitory peptides from goat milk.
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heat rate during the interval between the next two beats by counting the number of times that the interval count must be counted to zero in order to equal a total count of sixty times (to convert to beats per minute) the frequency of the fixed frequency source.
Rajhans, Rajib; Kumar, G Sai; Dubey, Pawan K; Sharma, G Taru
2010-03-29
The present study was designed to compare the expression profile of two developmentally important genes (HSP-70.1 and GLUT-1) and TCN (total cell number) count in fast (group A) and slow (group B) cleaved buffalo embryos to access their in vitro developmental competence. Buffalo COCs (cumulus oocyte complexes) were collected from local abattoir ovaries and subjected to in vitro maturation in: TCM-199 supplemented with 10% FBS (fetal bovine serum), BSA (3 mg/ml), sodium pyruvate (0.25 mM) and 20 ng/ml EGF (epidermal growth factor) at 38.5 degrees C under 5% CO2. In vitro derived embryos were collected at 4-8, 8-16 cell, morula and blastocyst stages at specific time points for gene expression analysis and total cell count. A semiquantitative RT-PCR (reverse transcriptase-PCR) assay was used to determine the HSP-70.1 and GLUT-1 transcripts. Results showed that developmental competence and TCN count in fast (group A)-cleaving embryos was significantly (P<0.05) higher than in the slow group (group B). The gene transcript of HSP-70.1 and GLUT-1 was expressed in oocytes (immature and mature) and throughout the embryonic developmental stages in the fast group (group A), while in the slow (group B) cleaving embryos, the expression of HSP-70.1 was absent in all the embryonic developmental stages, and expression of GLUT-1 was absent after 8-16 cell stage. In conclusion, TCN count and expression profile of HSP-70.1 and GLUT-1 genes in buffalo embryos are different taking into account the cleavage rate. Quality of such embryos for research purposes, TCN and expression profiling of developmentally important genes should be employed to optimize the in vitro culture system to produce superior quality of embryos.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
Oz, Yasemin; Kiremitci, Abdurrahman; Dag, Ilknur; Metintas, Selma; Kiraz, Nuri
2013-01-01
We evaluated the postantifungal effects (PAFEs) of caspofungin (CAS), voriconazole (VOR), amphotericin B (AmB), and the combinations of CAS + VOR and CAS + AmB against 30 clinical Candida krusei isolates at 0.25, 1 and 4 times the MIC of each individually and in the indicated combinations. Antifungals were removed after 1 hour and colony counts were performed at 0, 2, 6, 24, and 48 h. VOR did not display any measurable PAFE regardless of antifungal concentrations, while AmB and CAS exhibited dose-dependent PAFE. The most effective agent producing a prolonged PAFE in this study was CAS. Although the combination of CAS with VOR generated longer PAFEs at 0.25 and 1 times their respective MICs in comparison with CAS alone, this combination was indifferent rather than synergistic. However, the combination of CAS with AmB at 4 times their MICs exhibited the best performance, reducing the colony counts during the 48 h after removal of drugs and resulted in synergic interaction in respect to 20 (67%) isolates. Consequently, CAS has a prolonged PAFE in vitro against C. krusei isolates, and the combination of AmB + CAS may increase significantly the efficacy of CAS. Our data may be useful in optimizing dosing regimens for these agents and their combinations, although further studies are needed to explore the clinical usefulness of our results.
Optimization of optical proximity correction to reduce mask write time using genetic algorithm
NASA Astrophysics Data System (ADS)
Dick, Gregory J.; Cao, Liang; Asthana, Abhishek; Cheng, Jing; Power, David N.
2018-03-01
The ever increasing pattern densities and design complexities make the tuning of optical proximity correction (OPC) recipes very challenging. One known method for tuning is genetic algorithm (GA). Previously GA has been demonstrated to fine tune OPC recipes in order to achieve better results for possible 1D and 2D geometric concerns like bridging and pinching. This method, however, did not take into account the impact of excess segmentation on downstream operations like fracturing and mask writing. This paper introduces a general methodology to significantly reduce the number of excess edges in the OPC output, thus reducing the number of flashes generated at fracture and subsequently the write time at mask build. GA is used to reduce the degree of unwarranted segmentation while ensuring good OPC quality. An Objective Function (OF) is utilized to ensure quality convergence and process-variation (PV) plus an additional weighed factor to reduce clustered edge count. The technique is applied to 14nm metal layer OPC recipes in order to identify excess segmentation and to produce a modified recipe that significantly reduces these segments. OPC output file sizes is shown to be reduced by 15% or more and overall edge count is shown to be reduced by 10% or more. At the same time overall quality of the OPC recipe is shown to be maintained via OPC Verification (OPCV) results.
Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging
Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz
2013-01-01
Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an uprising technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper the clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly on the entire diffusion tensor. The spatially varying regularization parameter is determined automatically dependent on spatial variations in signal-to-noise ratio thus, avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion weighted images by means of complex independent component analysis. Moreover, the combination of those features enables processing of the diffusion data absolutely user independent. Tractography from in vivo data and from a software phantom demonstrate the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber-tracking such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data findings suggest that tractography results from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high resolution (1×1×2.5 mm3) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951
Henzlova, Daniela; Menlove, Howard Olsen; Croft, Stephen; ...
2015-06-15
In the field of nuclear safeguards, passive neutron multiplicity counting (PNMC) is a method typically employed in non-destructive assay (NDA) of special nuclear material (SNM) for nonproliferation, verification and accountability purposes. PNMC is generally performed using a well-type thermal neutron counter and relies on the detection of correlated pairs or higher order multiplets of neutrons emitted by an assayed item. To assay SNM, a set of parameters for a given well-counter is required to link the measured multiplicity rates to the assayed item properties. Detection efficiency, die-away time, gate utilization factors (tightly connected to die-away time) as well as optimummore » gate width setting are among the key parameters. These parameters along with the underlying model assumptions directly affect the accuracy of the SNM assay. In this paper we examine the role of gate utilization factors and the single exponential die-away time assumption and their impact on the measurements for a range of plutonium materials. In addition, we examine the importance of item-optimized coincidence gate width setting as opposed to using a universal gate width value. Finally, the traditional PNMC based on multiplicity shift register electronics is extended to Feynman-type analysis and application of this approach to Pu mass assay is demonstrated.« less
Richard, Arianne C; Lyons, Paul A; Peters, James E; Biasci, Daniele; Flint, Shaun M; Lee, James C; McKinney, Eoin F; Siegel, Richard M; Smith, Kenneth G C
2014-08-04
Although numerous investigations have compared gene expression microarray platforms, preprocessing methods and batch correction algorithms using constructed spike-in or dilution datasets, there remains a paucity of studies examining the properties of microarray data using diverse biological samples. Most microarray experiments seek to identify subtle differences between samples with variable background noise, a scenario poorly represented by constructed datasets. Thus, microarray users lack important information regarding the complexities introduced in real-world experimental settings. The recent development of a multiplexed, digital technology for nucleic acid measurement enables counting of individual RNA molecules without amplification and, for the first time, permits such a study. Using a set of human leukocyte subset RNA samples, we compared previously acquired microarray expression values with RNA molecule counts determined by the nCounter Analysis System (NanoString Technologies) in selected genes. We found that gene measurements across samples correlated well between the two platforms, particularly for high-variance genes, while genes deemed unexpressed by the nCounter generally had both low expression and low variance on the microarray. Confirming previous findings from spike-in and dilution datasets, this "gold-standard" comparison demonstrated signal compression that varied dramatically by expression level and, to a lesser extent, by dataset. Most importantly, examination of three different cell types revealed that noise levels differed across tissues. Microarray measurements generally correlate with relative RNA molecule counts within optimal ranges but suffer from expression-dependent accuracy bias and precision that varies across datasets. We urge microarray users to consider expression-level effects in signal interpretation and to evaluate noise properties in each dataset independently.
How to deal with climate change uncertainty in the planning of engineering systems
NASA Astrophysics Data System (ADS)
Spackova, Olga; Dittes, Beatrice; Straub, Daniel
2016-04-01
The effect of extreme events such as floods on the infrastructure and built environment is associated with significant uncertainties: These include the uncertain effect of climate change, uncertainty on extreme event frequency estimation due to limited historic data and imperfect models, and, not least, uncertainty on future socio-economic developments, which determine the damage potential. One option for dealing with these uncertainties is the use of adaptable (flexible) infrastructure that can easily be adjusted in the future without excessive costs. The challenge is in quantifying the value of adaptability and in finding the optimal sequence of decision. Is it worth to build a (potentially more expensive) adaptable system that can be adjusted in the future depending on the future conditions? Or is it more cost-effective to make a conservative design without counting with the possible future changes to the system? What is the optimal timing of the decision to build/adjust the system? We develop a quantitative decision-support framework for evaluation of alternative infrastructure designs under uncertainties, which: • probabilistically models the uncertain future (trough a Bayesian approach) • includes the adaptability of the systems (the costs of future changes) • takes into account the fact that future decisions will be made under uncertainty as well (using pre-posterior decision analysis) • allows to identify the optimal capacity and optimal timing to build/adjust the infrastructure. Application of the decision framework will be demonstrated on an example of flood mitigation planning in Bavaria.
Concept report: Microprocessor control of electrical power system
NASA Technical Reports Server (NTRS)
Perry, E.
1977-01-01
An electrical power system which uses a microprocessor for systems control and monitoring is described. The microprocessor controlled system permits real time modification of system parameters for optimizing a system configuration, especially in the event of an anomaly. By reducing the components count, the assembling and testing of the unit is simplified, and reliability is increased. A resuable modular power conversion system capable of satisfying a large percentage of space applications requirements is examined along with the programmable power processor. The PC global controller which handles systems control and external communication is analyzed, and a software description is given. A systems application summary is also included.
Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex
Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo
2015-01-01
The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. All together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537
2014-08-01
tumor size, measured by mammography, MRI, or ultrasound. These methods evaluate the regimen that the patient received. Molecular changes induced by...photon counting electronics (SPC-150, Becker and Hickl) and a GaAsP PMT (H7422P-40, Hamamatsu). Photon count rates were maintained above 5x10 5...conditions (33), thus making them an attractive system to evaluate tumor response to drugs. We used OMI to assess the response of primary breast tumor
Tunneling Statistics for Analysis of Spin-Readout Fidelity
NASA Astrophysics Data System (ADS)
Gorman, S. K.; He, Y.; House, M. G.; Keizer, J. G.; Keith, D.; Fricke, L.; Hile, S. J.; Broome, M. A.; Simmons, M. Y.
2017-09-01
We investigate spin and charge dynamics of a quantum dot of phosphorus atoms coupled to a radio-frequency single-electron transistor (SET) using full counting statistics. We show how the magnetic field plays a role in determining the bunching or antibunching tunneling statistics of the donor dot and SET system. Using the counting statistics, we show how to determine the lowest magnetic field where spin readout is possible. We then show how such a measurement can be used to investigate and optimize single-electron spin-readout fidelity.
Howell, W.D.
1957-08-20
An apparatus for automatically recording the results of counting operations on trains of electrical pulses is described. The disadvantages of prior devices utilizing the two common methods of obtaining the count rate are overcome by this apparatus; in the case of time controlled operation, the disclosed system automatically records amy information stored by the scaler but not transferred to the printer at the end of the predetermined time controlled operations and, in the case of count controlled operation, provision is made to prevent a weak sample from occupying the apparatus for an excessively long period of time.
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources.
Klumpp, John; Brandl, Alexander
2015-03-01
A particle counting and detection system is proposed that searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data (e.g., time between counts), as this was shown to be a more sensitive technique for detecting low count rate sources compared to analyzing counts per unit interval (Luo et al. 2013). Two distinct versions of the detection system are developed. The first is intended for situations in which the sample is fixed and can be measured for an unlimited amount of time. The second version is intended to detect sources that are physically moving relative to the detector, such as a truck moving past a fixed roadside detector or a waste storage facility under an airplane. In both cases, the detection system is expected to be active indefinitely; i.e., it is an online detection system. Both versions of the multi-energy detection systems are compared to their respective gross count rate detection systems in terms of Type I and Type II error rates and sensitivity.
Alternative Optimizations of X-ray TES Arrays: Soft X-rays, High Count Rates, and Mixed-Pixel Arrays
NASA Technical Reports Server (NTRS)
Kilbourne, C. A.; Bandler, S. R.; Brown, A.-D.; Chervenak, J. A.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.
2007-01-01
We are developing arrays of superconducting transition-edge sensors (TES) for imaging spectroscopy telescopes such as the XMS on Constellation-X. While our primary focus has been on arrays that meet the XMS requirements (of which, foremost, is an energy resolution of 2.5 eV at 6 keV and a bandpass from approx. 0.3 keV to 12 keV), we have also investigated other optimizations that might be used to extend the XMS capabilities. In one of these optimizations, improved resolution below 1 keV is achieved by reducing the heat capacity. Such pixels can be based on our XMS-style TES's with the separate absorbers omitted. These pixels can added to an array with broadband response either as a separate array or interspersed, depending on other factors that include telescope design and science requirements. In one version of this approach, we have designed and fabricated a composite array of low-energy and broad-band pixels to provide high spectral resolving power over a broader energy bandpass than could be obtained with a single TES design. The array consists of alternating pixels with and without overhanging absorbers. To explore optimizations for higher count rates, we are also optimizing the design and operating temperature of pixels that are coupled to a solid substrate. We will present the performance of these variations and discuss other optimizations that could be used to enhance the XMS or enable other astrophysics experiments.
NASA Astrophysics Data System (ADS)
Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.
2013-06-01
In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
Spectral dispersion and fringe detection in IOTA
NASA Technical Reports Server (NTRS)
Traub, W. A.; Lacasse, M. G.; Carleton, N. P.
1990-01-01
Pupil plane beam combination, spectral dispersion, detection, and fringe tracking are discussed for the IOTA interferometer. A new spectrometer design is presented in which the angular dispersion with respect to wavenumber is nearly constant. The dispersing element is a type of grism, a series combination of grating and prism, in which the constant parts of the dispersion add, but the slopes cancel. This grism is optimized for the display of channelled spectra. The dispersed fringes can be tracked by a matched-filter photon-counting correlator algorithm. This algorithm requires very few arithmetic operations per detected photon, making it well-suited for real-time fringe tracking. The algorithm is able to adapt to different stellar spectral types, intensity levels, and atmospheric time constants. The results of numerical experiments are reported.
High Count-Rate Study of Two TES X-Ray Microcalorimeters With Different Transition Temperatures
NASA Technical Reports Server (NTRS)
Lee, Sang-Jun; Adams, Joseph S.; Bandler, Simon R.; Betancourt-Martinez, Gabriele L.; Chervenak, James A.; Eckart, Megan E.; Finkbeiner, Fred M.; Kelley, Richard L.; Kilbourne, Caroline A.; Porter, Frederick S.;
2017-01-01
We have developed transition-edge sensor (TES) microcalorimeter arrays with high count-rate capability and high energy resolution to carry out x-ray imaging spectroscopy observations of various astronomical sources and the Sun. We have studied the dependence of the energy resolution and throughput (fraction of processed pulses) on the count rate for such microcalorimeters with two different transition temperatures T(sub c). Devices with both transition temperatures were fabricated within a single microcalorimeter array directly on top of a solid substrate where the thermal conductance of the microcalorimeter is dependent upon the thermal boundary resistance between the TES sensor and the dielectric substrate beneath. Because the thermal boundary resistance is highly temperature dependent, the two types of device with different T(sub c)(sup s) had very different thermal decay times, approximately one order of magnitude different. In our earlier report, we achieved energy resolutions of 1.6 and 2.eV at 6 keV from lower and higher T(sub c) devices, respectively, using a standard analysis method based on optimal filtering in the low flux limit. We have now measured the same devices at elevated x-ray fluxes ranging from 50 Hz to 1000 Hz per pixel. In the high flux limit, however, the standard optimal filtering scheme nearly breaks down because of x-ray pile-up. To achieve the highest possible energy resolution for a fixed throughput, we have developed an analysis scheme based on the socalled event grade method. Using the new analysis scheme, we achieved 5.0 eV FWHM with 96 Percent throughput for 6 keV x-rays of 1025 Hz per pixel with the higher T(sub c) (faster) device, and 5.8 eV FWHM with 97 Percent throughput with the lower T(sub c) (slower) device at 722 Hz.
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Miklós, István; Darling, Aaron E
2009-06-22
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
NASA Astrophysics Data System (ADS)
Wild, Walter James
1988-12-01
External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior and optimization of such an imaging probe is the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) is discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
Egger, Sam; Petoumenos, Kathy; Kamarulzaman, Adeeba; Hoy, Jennifer; Sungkanuparph, Somnuek; Chuah, John; Falster, Kathleen; Zhou, Jialun; Law, Matthew G
2009-04-15
Random effects models were used to explore how the shape of CD4 cell count responses after commencing combination antiretroviral therapy (cART) develop over time and, in particular, the role of baseline and follow-up covariates. Patients in Asia Pacific HIV Observational Database who first commenced cART after January 1, 1997, and who had a baseline CD4 cell count and viral load measure and at least 1 follow-up measure between 6 and 24 months, were included. CD4 cell counts were determined at every 6-month period after the commencement of cART for up to 6 years. A total of 1638 patients fulfilled the inclusion criteria with a median follow-up time of 58 months. Lower post-cART mean CD4 cell counts were found to be associated with increasing age (P < 0.001), pre-cART hepatitis C coinfection (P = 0.038), prior AIDS (P = 0.019), baseline viral load < or equal to 100,000 copies per milliliter (P < 0.001), and the Asia Pacific region compared with Australia (P = 0.005). A highly significant 3-way interaction between the effects of time, baseline CD4 cell count, and post-cART viral burden (P < 0.0001) was demonstrated. Higher long-term mean CD4 cell counts were associated with lower baseline CD4 cell count and consistently undetectable viral loads. Among patients with consistently detectable viral load, CD4 cell counts seemed to converge for all baseline CD4 levels. Our analysis suggest that the long-term shape of post-cART CD4 cell count changes depends only on a 3-way interaction between baseline CD4 cell count, viral load response, and time.
Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge
2016-09-01
The use of step count and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. To establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52-0.61 for step count and from 0.49-0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8; specificity = 51.8) for male children and 12 371 (sensitivity = 55.6; specificity = 55.5) and 11 292 (sensitivity = 57.7; specificity = 48.6) for female children and adolescents respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8; specificity = 48.6-53.2). This study represents the first step for the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.
Time required for motor activity in lucid dreams.
Erlacher, Daniel; Schredl, Michael
2004-12-01
The present study investigated the relationship between the time required for specific tasks (counting and performing squats) in lucid dreams and in the waking state. Five proficient lucid dreamers (26-34 yr. old, M=29.8, SD=3.0; one woman and four men) participated. Analysis showed that the time needed for counting in a lucid dream is comparable to the time needed for counting in wakefulness, but motor activities required more time in lucid dreams than in the waking state.
Cottenden, Jennielee; Filter, Emily R; Cottreau, Jon; Moore, David; Bullock, Martin; Huang, Weei-Yuarn; Arnason, Thomas
2018-03-01
- Pathologists routinely assess Ki67 immunohistochemistry to grade gastrointestinal and pancreatic neuroendocrine tumors. Unfortunately, manual counts of the Ki67 index are very time consuming and eyeball estimation has been criticized as unreliable. Manual Ki67 counts performed by cytotechnologists could potentially save pathologist time and improve accuracy. - To assess the concordance between manual Ki67 index counts performed by cytotechnologists versus eyeball estimates and manual Ki67 counts by pathologists. - One Ki67 immunohistochemical stain was retrieved from each of 18 archived gastrointestinal or pancreatic neuroendocrine tumor resections. We compared pathologists' Ki67 eyeball estimates on glass slides and printed color images with manual counts performed by 3 cytotechnologists and gold standard manual Ki67 index counts by 3 pathologists. - Tumor grade agreement between pathologist image eyeball estimate and gold standard pathologist manual count was fair (κ = 0.31; 95% CI, 0.030-0.60). In 9 of 20 cases (45%), the mean pathologist eyeball estimate was 1 grade higher than the mean pathologist manual count. There was almost perfect agreement in classifying tumor grade between the mean cytotechnologist manual count and the mean pathologist manual count (κ = 0.910; 95% CI, 0.697-1.00). In 20 cases, there was only 1 grade disagreement between the 2 methods. Eyeball estimation by pathologists required less than 1 minute, whereas manual counts by pathologists required a mean of 17 minutes per case. - Eyeball estimation of the Ki67 index has a high rate of tumor grade misclassification compared with manual counting. Cytotechnologist manual counts are accurate and save pathologist time.
Gasquoine, Philip G; Weimer, Amy A; Amador, Arnoldo
2017-04-01
To measure specificity as failure rates for non-clinical, bilingual, Mexican Americans on three popular performance validity measures: (a) the language format Reliable Digit Span; (b) visual-perceptual format Test of Memory Malingering; and (c) visual-perceptual format Dot Counting, using optimal/suboptimal effort cut scores developed for monolingual, English-speakers. Participants were 61 consecutive referrals, aged between 18 and 65 years, with <16 years of education who were subjectively bilingual (confirmed via formal assessment) and chose the language of assessment, Spanish or English, for the performance validity tests. Failure rates were 38% for Reliable Digit Span, 3% for the Test of Memory Malingering, and 7% for Dot Counting. For Reliable Digit Span, the failure rates for Spanish (46%) and English (31%) languages of administration did not differ significantly. Optimal/suboptimal effort cut scores derived for monolingual English-speakers can be used with Spanish/English bilinguals when using the visual-perceptual format Test of Memory Malingering and Dot Counting. The high failure rate for Reliable Digit Span suggests it should not be used as a performance validity measure with Spanish/English bilinguals, irrespective of the language of test administration, Spanish or English.
NASA Astrophysics Data System (ADS)
Korneta, Wojciech; Gomes, Iacyel
2017-11-01
Traditional bistable sensors use external bias signal to drive its response between states and their detection strategy is based on the output power spectral density or the residence time difference (RTD) in two sensor states. Recently, the noise activated nonlinear dynamic sensors driven only by noise based on RTD technique have been proposed. Here, we present experimental results of dc voltage measurements by noise-driven bistable sensor based on electronic Chua's circuit operating in a chaotic regime where two single scroll attractors coexist. The output of the sensor is quantified by the proportion of the time the sensor stays in one state to the total observation time and by the spike-count rate with spikes defined by crossings between attractors. The relationship between the stimuli and particular observable for different noise intensities is obtained, the usefulness of each coding scheme is discussed, and the optimal noise intensity for detection is indicated. It is shown that the obtained relationship is the same for any observation time when population coding is used. The optimal time window for both detection and the number of units in population coding is found. Our results may be useful for analyses and understanding of the neural activity and in designing bistable storage elements at length scales where thermal fluctuations drastically increase and the effect of noise must be taken into consideration.
NASA Astrophysics Data System (ADS)
Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.
2018-01-01
Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J 2 perturbation and lunar gravity included for a 500 revolution transfer.
NASA Astrophysics Data System (ADS)
Aziz, Jonathan D.; Parker, Jeffrey S.; Scheeres, Daniel J.; Englander, Jacob A.
2018-06-01
Low-thrust trajectories about planetary bodies characteristically span a high count of orbital revolutions. Directing the thrust vector over many revolutions presents a challenging optimization problem for any conventional strategy. This paper demonstrates the tractability of low-thrust trajectory optimization about planetary bodies by applying a Sundman transformation to change the independent variable of the spacecraft equations of motion to an orbit angle and performing the optimization with differential dynamic programming. Fuel-optimal geocentric transfers are computed with the transfer duration extended up to 2000 revolutions. The flexibility of the approach to higher fidelity dynamics is shown with Earth's J 2 perturbation and lunar gravity included for a 500 revolution transfer.
Children's Counting Strategies for Time Quantification and Integration.
ERIC Educational Resources Information Center
Wilkening, Friedrich; And Others
1987-01-01
Investigated whether and how children age 5 to 7 employed counting to measure and integrate the duration of two events, which were accompanied by metronome beats for half the children. The rhythm enhanced use of counting in younger children. By age 7, most counted spontaneously, using sensible counting strategies. (SKC)
Dynamic time-correlated single-photon counting laser ranging
NASA Astrophysics Data System (ADS)
Peng, Huan; Wang, Yu-rong; Meng, Wen-dong; Yan, Pei-qin; Li, Zhao-hui; Li, Chen; Pan, Hai-feng; Wu, Guang
2018-03-01
We demonstrate a photon counting laser ranging experiment with a four-channel single-photon detector (SPD). The multi-channel SPD improve the counting rate more than 4×107 cps, which makes possible for the distance measurement performed even in daylight. However, the time-correlated single-photon counting (TCSPC) technique cannot distill the signal easily while the fast moving targets are submersed in the strong background. We propose a dynamic TCSPC method for fast moving targets measurement by varying coincidence window in real time. In the experiment, we prove that targets with velocity of 5 km/s can be detected according to the method, while the echo rate is 20% with the background counts of more than 1.2×107 cps.
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
Paul, Rick L
2011-01-01
Radiochemical neutron activation analysis (RNAA) with retention on hydrated manganese dioxide (HMD) has played a key role in the certification of As in biological materials at NIST. Although this method provides very high and reproducible yields and detection limits at low microgram/kilogram levels, counting geometry uncertainties may arise from unequal distribution of As in the HMD, and arsenic detection limits may not be optimal due to significant retention of other elements. An alternate RNAA procedure with separation of arsenic by solvent extraction has been investigated. After digestion of samples in nitric and perchloric acids, As(III) is extracted from 2 M sulfuric acid solution into a solution of zinc diethyldithiocarbamate in chloroform. Counting of (76)As allows quantitation of arsenic. Addition of an (77)As tracer solution prior to dissolution allows correction for chemical yield and counting geometries, further improving reproducibility. The HMD and solvent extraction procedures for arsenic were compared through analysis of SRMs 1577c (bovine liver), 1547 (peach leaves), and 1575a (pine needles). Both methods gave As results in agreement with certified values with comparable reproducibility. However, the solvent extraction method yields a factor of 3 improvement in detection limits and is less time-consuming than the HMD method. The new method shows great promise for use in As certification in reference materials.
Optimizing traffic counting procedures.
DOT National Transportation Integrated Search
1986-01-01
Estimates of annual average daily traffic volumes are important in the planning and operations of state highway departments. These estimates are used in the planning of new construction and improvement of existing facilities, and, in some cases, in t...
Optimizing automatic traffic recorders network in Minnesota.
DOT National Transportation Integrated Search
2016-01-01
Accurate traffic counts are important for budgeting, traffic planning, and roadway design. With thousands of : centerline miles of roadways, it is not possible to install continuous counters at all locations of interest (e.g., : intersections). There...
Designing to win in sub-90nm mask production
NASA Astrophysics Data System (ADS)
Zhang, Yuan
2005-11-01
An informal survey conducted with key customers by Photronics indicates that the time gap between technology nodes has accelerated in recent years. Previously the cycle was three years. However, between 130nm and 90nm there was less than a 2 year gap, and between 90nm and 65nm a 1.5 year gap exists. As a result, the technical challenges have increased substantially. In addition, mask costs are rising exponentially due to high capital equipment cost, a shrinking customer base, long write times and increased applications of 193nm EAPSM or AAPSM. Collaboration among EDA companies, mask houses and wafer manufacturers is now more important than ever. This paper will explore avenues for reducing mask costs, mainly in the areas of: write-time reduction through design for manufacturing (DFM), and yield improvement through specification relaxation. Our study conducted through layout vertex modeling suggests that a simple design shape such as a square versus a circle or an angled structure helps reduce shot count and write time. Shot count reduction through mask layout optimization, and advancement in new generation E-beam writers can reduce write time up to 65%. An advanced laser writer can produce those less critical E-beam layers in less than half the time of an e-beam writer. Additionally, the emerging imprint lithography brings new life and new challenges to the photomask industry with applications in many fields outside of the semiconductor industry. As immersion lithography is introduced for 45nm device production, polarization and MEEF effects due to the mask will become severe. Larger magnification not only provides benefits on CD control and MEEF, but also extends the life time of current 90nm/65nm tool sets where 45nm mask sets can be produced at a lower cost.
Covering Resilience: A Recent Development for Binomial Checkpointing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walther, Andrea; Narayanan, Sri Hari Krishna
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, required, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional with the operation count of the underlying function, e.g., if algorithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massive parallel simulationsmore » and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We describe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding implementation and discuss first numerical results.« less
Kynetic resazurin assay (KRA) for bacterial quantification of foodborne pathogens
NASA Astrophysics Data System (ADS)
Arenas, Yaxal; Mandel, Arkady; Lilge, Lothar
2012-03-01
Fast detection of bacterial concentrations is important for the food industry and for healthcare. Early detection of infections and appropriate treatment is essential since, the delay of treatments for bacterial infections tends to be associated with higher mortality rates. In the food industry and in healthcare, standard procedures require the count of colony-forming units in order to quantify bacterial concentrations, however, this method is time consuming and reports require three days to be completed. An alternative is metabolic-colorimetric assays which provide time efficient in vitro bacterial concentrations. A colorimetric assay based on Resazurin was developed as a time kinetic assay (KRA) suitable for bacterial concentration measurements. An optimization was performed by finding excitation and emission wavelengths for fluorescent acquisition. A comparison of two non-related bacteria, foodborne pathogens Escherichia coli and Listeria monocytogenes, was performed in 96 well plates. A metabolic and clonogenic dependence was established for fluorescent kinetic signals.
Optimal Irrigation and Debridement of Infected Joint Implants
Schwechter, Evan M.; Folk, David; Varshney, Avanish K.; Fries, Bettina C.; Kim, Sun Jin; Hirsh, David M.
2014-01-01
Acute postoperative and acute, late hematogenous prosthetic joint infections have been treated with 1-stage irrigation and debridement with polyethylene exchange. Success rates, however, are highly variable. Reported studies demonstrate that detergents are effective at decreasing bacterial colony counts on orthopedic implants. Our hypothesis is that the combination of a detergent and an antiseptic would be more effective than using a detergent alone to decrease colony counts from a methicillin-resistant Staphylococcus aureus biofilm-coated titanium alloy disk simulating an orthopedic implant. In our study of various agents tested, chlorhexidine gluconate scrub (antiseptic and detergent) was the most effective at decreasing bacterial colony counts both prereincubation and postreincubation of the disks; pulse lavage and scrubbing were not more effective than pulse lavage alone. PMID:21641757
Pagels, Peter; Boldemann, Cecilia; Raustorp, Anders
2011-01-01
To compare pedometer steps with accelerometer counts and to analyse minutes of engagement in light, moderate and vigorous physical activity in 3- to 5-year-old children during preschool time. Physical activity was recorded during preschool time for five consecutive days in 55 three- to five-year-old children. The children wore a Yamax SW200 pedometer and an Actigraph GTIM Monitor. The average time spent at preschool was 7.22 h/day with an average step of 7313 (±3042). Steps during preschool time increased with increasing age. The overall correlation between mean step counts and mean accelerometer counts (r = 0.67, p < 0.001), as well as time in light to vigorous activity (r = 0.76, p < 0.001), were moderately high. Step counts and moderate to vigorous physical activity minutes were poorly correlated in 3 years old (r = 0.19, p < 0.191) and moderately correlated (r = 0.50, p < 0.001) for children 4 to 5 years old. Correlation between the preschool children's pedometer-determined step counts and total engagement in physical activity during preschool time was moderately high. Children's step counts at preschool were low, and the time spent in moderate and vigorous physical activity at preschool was very short. © 2010 The Author(s)/Journal Compilation © 2010 Foundation Acta Paediatrica.
TOFPET 2: A high-performance circuit for PET time-of-flight
NASA Astrophysics Data System (ADS)
Di Francesco, Agostino; Bugalho, Ricardo; Oliveira, Luis; Rivetti, Angelo; Rolo, Manuel; Silva, Jose C.; Varela, Joao
2016-07-01
We present a readout and digitization ASIC featuring low-noise and low-power for time-of flight (TOF) applications using SiPMs. The circuit is designed in standard CMOS 110 nm technology, has 64 independent channels and is optimized for time-of-flight measurement in Positron Emission Tomography (TOF-PET). The input amplifier is a low impedance current conveyor based on a regulated common-gate topology. Each channel has quad-buffered analogue interpolation TDCs (time binning 20 ps) and charge integration ADCs with linear response at full scale (1500 pC). The signal amplitude can also be derived from the measurement of time-over-threshold (ToT). Simulation results show that for a single photo-electron signal with charge 200 (550) fC generated by a SiPM with (320 pF) capacitance the circuit has 24 (30) dB SNR, 75 (39) ps r.m.s. resolution, and 4 (8) mW power consumption. The event rate is 600 kHz per channel, with up to 2 MHz dark counts rejection.
Method to determine 226Ra in small sediment samples by ultralow background liquid scintillation.
Sanchez-Cabeza, Joan-Albert; Kwong, Laval Liong Wee; Betti, Maria
2010-08-15
(210)Pb dating of sediment cores is a widely used tool to reconstruct ecosystem evolution and historical pollution during the last century. Although (226)Ra can be determined by gamma spectrometry, this method shows severe limitations which are, among others, sample size requirements and counting times. In this work, we propose a new strategy based on the analysis of (210)Pb through (210)Po in equilibrium by alpha spectrometry, followed by the determination of (226)Ra (base or supported (210)Pb) without any further chemical purification by liquid scintillation and with a higher sample throughput. Although gamma spectrometry might still be required to determine (137)Cs as an independent tracer, the effort can then be focused only on those sections dated around 1963, when maximum activities are expected. In this work, we optimized the counting conditions, calibrated the system for changing quenching, and described the new method to determine (226)Ra in small sediment samples, after (210)Po determination, allowing a more precise determination of excess (210)Pb ((210)Pb(ex)). The method was validated with reference materials IAEA-384, IAEA-385, and IAEA-313.
Spent Fuel Assay with an Ultra-High Rate HPGe Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, James; Fulsom, Bryan; Pitts, Karl
2015-07-01
Traditional verification of spent nuclear fuel (SNF) includes determination of initial enrichment, burnup and cool down time (IE, BU, CT). Along with neutron measurements, passive gamma assay provides important information for determining BU and CT. Other gamma-ray-based assay methods such as passive tomography and active delayed gamma offer the potential to measure the spatial distribution of fission products and the fissile isotopic concentration of the fuel, respectively. All fuel verification methods involving gamma-ray spectroscopy require that the spectrometers manage very high count rates while extracting the signatures of interest. PNNL has developed new digital filtering and analysis techniques to producemore » an ultra-high rate gamma-ray spectrometer from a standard coaxial high-purity germanium (HPGe) crystal. This 37% relative efficiency detector has been operated for SNF measurements at input count rates of 500-1300 kcps and throughput in excess of 150 kcps. Optimized filtering algorithms preserve the spectroscopic capability of the system even at these high rates. This paper will present the results of both passive and active SNF measurement performed with this system at PNNL. (authors)« less
Single-photon imaging in complementary metal oxide semiconductor processes
Charbon, E.
2014-01-01
This paper describes the basics of single-photon counting in complementary metal oxide semiconductors, through single-photon avalanche diodes (SPADs), and the making of miniaturized pixels with photon-counting capability based on SPADs. Some applications, which may take advantage of SPAD image sensors, are outlined, such as fluorescence-based microscopy, three-dimensional time-of-flight imaging and biomedical imaging, to name just a few. The paper focuses on architectures that are best suited to those applications and the trade-offs they generate. In this context, architectures are described that efficiently collect the output of single pixels when designed in large arrays. Off-chip readout circuit requirements are described for a variety of applications in physics, medicine and the life sciences. Owing to the dynamic nature of SPADs, designs featuring a large number of SPADs require careful analysis of the target application for an optimal use of silicon real estate and of limited readout bandwidth. The paper also describes the main trade-offs involved in architecting such chips and the solutions adopted with focus on scalability and miniaturization. PMID:24567470
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; values of sigma inferred from published studies were often 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
NASA Astrophysics Data System (ADS)
Lockhart, M.; Henzlova, D.; Croft, S.; Cutler, T.; Favalli, A.; McGahee, Ch.; Parker, R.
2018-01-01
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli(DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory and implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. The current paper discusses and presents the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. In order to assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. The DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.
[Prognostic value of absolute monocyte count in chronic lymphocytic leukaemia].
Szerafin, László; Jakó, János; Riskó, Ferenc
2015-04-01
The low peripheral absolute lymphocyte and high monocyte count have been reported to correlate with poor clinical outcome in various lymphomas and other cancers. However, a few data known about the prognostic value of absolute monocyte count in chronic lymphocytic leukaemia. The aim of the authors was to investigate the impact of absolute monocyte count measured at the time of diagnosis in patients with chronic lymphocytic leukaemia on the time to treatment and overal survival. Between January 1, 2005 and December 31, 2012, 223 patients with newly-diagnosed chronic lymphocytic leukaemia were included. The rate of patients needing treatment, time to treatment, overal survival and causes of mortality based on Rai stages, CD38, ZAP-70 positivity and absolute monocyte count were analyzed. Therapy was necessary in 21.1%, 57.4%, 88.9%, 88.9% and 100% of patients in Rai stage 0, I, II, III an IV, respectively; in 61.9% and 60.8% of patients exhibiting CD38 and ZAP-70 positivity, respectively; and in 76.9%, 21.2% and 66.2% of patients if the absolute monocyte count was <0.25 G/l, between 0.25-0.75 G/l and >0.75 G/l, respectively. The median time to treatment and the median overal survival were 19.5, 65, and 35.5 months; and 41.5, 65, and 49.5 months according to the three groups of monocyte counts. The relative risk of beginning the therapy was 1.62 (p<0.01) in patients with absolute monocyte count <0.25 G/l or >0.75 G/l, as compared to those with 0.25-0.75 G/l, and the risk of overal survival was 2.41 (p<0.01) in patients with absolute monocyte count lower than 0.25 G/l as compared to those with higher than 0.25 G/l. The relative risks remained significant in Rai 0 patients, too. The leading causes of mortality were infections (41.7%) and the chronic lymphocytic leukaemia (58.3%) in patients with low monocyte count, while tumours (25.9-35.3%) and other events (48.1 and 11.8%) occurred in patients with medium or high monocyte counts. Patients with low and high monocyte counts had a shorter time to treatment compared to patients who belonged to the intermediate monocyte count group. The low absolute monocyte count was associated with increased mortality caused by infectious complications and chronic lymphocytic leukaemia. The absolute monocyte count may give additional prognostic information in Rai stage 0, too.
It Is Time to Count Learning Communities
ERIC Educational Resources Information Center
Henscheid, Jean M.
2015-01-01
As the modern learning community movement turns 30, it is time to determine just how many, and what type, of these programs exist at America's colleges and universities. This article first offers a rationale for counting learning communities followed by a description of how disparate counts and unclear definitions hamper efforts to embed these…
Activity patterns and monitoring numbers of Horned Puffins and Parakeet Auklets
Hatch, Shyla A.
2002-01-01
Nearshore counts of birds on the water and time-lapse photography were used to monitor seasonal activity patterns and interannual variation in numbers of Horned Puffins (Fratercula corniculata) and Parakeet Auklets (Aethia psittacula) at the Semidi Islands, Alaska. The best period for over-water counts was mid egg-laying through hatching in auklets and late prelaying through early hatching in puffins. Daily counts (07.00 h-09.30 h) varied widely, with peak numbers and days with few or no birds present occurring throughout the census period. Variation among annual means in four years amounted to 26% and 72% of total count variation in puffins and auklets, respectively. Time-lapse photography of nesting habitat in early incubation revealed a morning (08.00 h-12.00 h) peak in the number of puffins loitering on study plots. Birds recorded in time-lapse images never comprised more than a third of the estimated breeding population on a plot. Components of variance in the time-lapse study were 29% within hours, 9% among hours (08.00 h-12.00 h), and 62% among days (8-29 June). Variability of overwater and land-based counts is reduced by standardizing the time of day when counts are made, but weather conditions had little influence on either type of count. High international variation of population indices implies low power to detect numerical trends in crevice-nesting auklets and puffins.
Cosmic ray neutron background reduction using localized coincidence veto neutron counting
Menlove, Howard O.; Bourret, Steven C.; Krick, Merlyn S.
2002-01-01
This invention relates to both the apparatus and method for increasing the sensitivity of measuring the amount of radioactive material in waste by reducing the interference caused by cosmic ray generated neutrons. The apparatus includes: (a) a plurality of neutron detectors, each of the detectors including means for generating a pulse in response to the detection of a neutron; and (b) means, coupled to each of the neutrons detectors, for counting only some of the pulses from each of the detectors, whether cosmic ray or fission generated. The means for counting includes a means that, after counting one of the pulses, vetos the counting of additional pulses for a prescribed period of time. The prescribed period of time is between 50 and 200 .mu.s. In the preferred embodiment the prescribed period of time is 128 .mu.s. The veto means can be an electronic circuit which includes a leading edge pulse generator which passes a pulse but blocks any subsequent pulse for a period of between 50 and 200 .mu.s. Alternately, the veto means is a software program which includes means for tagging each of the pulses from each of the detectors for both time and position, means for counting one of the pulses from a particular position, and means for rejecting those of the pulses which originate from the particular position and in a time interval on the order of the neutron die-away time in polyethylene or other shield material. The neutron detectors are grouped in pods, preferably at least 10. The apparatus also includes means for vetoing the counting of coincidence pulses from all of the detectors included in each of the pods which are adjacent to the pod which includes the detector which produced the pulse which was counted.
Optimization and shelf life of a low-lactose yogurt with Lactobacillus rhamnosus HN001.
Ibarra, A; Acha, R; Calleja, M-T; Chiralt-Boix, A; Wittig, E
2012-07-01
Lactose intolerance results in gastrointestinal discomfort and the malabsorption of certain nutrients, such as calcium. The replacement of milk with low-lactose and probiotic-enriched dairy products is an effective strategy of mitigating the symptoms of lactose intolerance. Lactobacillus rhamnosus HN001 (HN001) is a safe, immunity-stimulating probiotic. We have developed a process to increase the hydrolysis of lactose and HN001 growth in yogurt versus β-galactosidase (βG) concentration and enzymatic hydrolysis time (EHT) before bacterial fermentation. The objective of this study was to optimize the conditions by which yogurt is processed as a function of βG and EHT using a multifactorial design, with lactose content, HN001 growth, process time, and sensory quality as dependent variables. Further, the shelf life of the optimized yogurt was evaluated. In the optimization study, polynomials explained the dependent variables. Based on Pearson correlation coefficients, HN001 growth correlated positively with the hydrolysis of lactose. However, low lactose content and high HN001 count increased the fermentation time and lowered the sensory quality. The optimized conditions-using polynomials to obtain yogurt with >1 × 10(7) cfu of HN001/mL, <10 g of lactose/L, and a minimum overall sensory quality of 7 on the Karlsruhe scale-yielded a theoretical value of 910 neutral lactose units/kg for βG and 2.3h for EHT, which were validated in an industrial-scale assay. Based on a shelf-life study at 3 temperatures, the hydrolysis of lactose and the growth of HN001 continue during storage. Arrhenius equations were developed for the variables in the shelf-life study. Our results demonstrate that it is feasible to develop a low-lactose yogurt to which HN001 has been added for lactose-intolerant persons who wish to strengthen their immune system. Copyright © 2012 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M
2017-02-01
The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.
Multifunction audio digitizer for communications systems
NASA Technical Reports Server (NTRS)
Monford, L. G., Jr.
1971-01-01
Digitizer accomplishes both N bit pulse code modulation /PCM/ and delta modulation, and provides modulation indicating variable signal gain and variable sidetone. Other features include - low package count, variable clock rate to optimize bandwidth, and easily expanded PCM output.
Sampling and counting genome rearrangement scenarios
2015-01-01
Background Even for moderate size inputs, there are a tremendous number of optimal rearrangement scenarios, regardless what the model is and which specific question is to be answered. Therefore giving one optimal solution might be misleading and cannot be used for statistical inferring. Statistically well funded methods are necessary to sample uniformly from the solution space and then a small number of samples are sufficient for statistical inferring. Contribution In this paper, we give a mini-review about the state-of-the-art of sampling and counting rearrangement scenarios, focusing on the reversal, DCJ and SCJ models. Above that, we also give a Gibbs sampler for sampling most parsimonious labeling of evolutionary trees under the SCJ model. The method has been implemented and tested on real life data. The software package together with example data can be downloaded from http://www.renyi.hu/~miklosi/SCJ-Gibbs/ PMID:26452124
Development and flight testing of UV optimized Photon Counting CCDs
NASA Astrophysics Data System (ADS)
Hamden, Erika T.
2018-06-01
I will discuss the latest results from the Hamden UV/Vis Detector Lab and our ongoing work using a UV optimized EMCCD in flight. Our lab is currently testing efficiency and performance of delta-doped, anti-reflection coated EMCCDs, in collaboration with JPL. The lab has been set-up to test quantum efficiency, dark current, clock-induced-charge, and read noise. I will describe our improvements to our circuit boards for lower noise, updates from a new, more flexible NUVU controller, and the integration of an EMCCD in the FIREBall-2 UV spectrograph. I will also briefly describe future plans to conduct radiation testing on delta-doped EMCCDs (both warm, unbiased and cold, biased configurations) thus summer and longer term plans for testing newer photon counting CCDs as I move the HUVD Lab to the University of Arizona in the Fall of 2018.
Avalanche photodiode photon counting receivers for space-borne lidars
NASA Technical Reports Server (NTRS)
Sun, Xiaoli; Davidson, Frederic M.
1991-01-01
Avalanche photodiodes (APD) are studied for uses as photon counting detectors in spaceborne lidars. Non-breakdown APD photon counters, in which the APD's are biased below the breakdown point, are shown to outperform: (1) conventional APD photon counters biased above the breakdown point; (2) conventional APD photon counters biased above the breakdown point; and (3) APD's in analog mode when the received optical signal is extremely weak. Non-breakdown APD photon counters were shown experimentally to achieve an effective photon counting quantum efficiency of 5.0 percent at lambda = 820 nm with a dead time of 15 ns and a dark count rate of 7000/s which agreed with the theoretically predicted values. The interarrival times of the counts followed an exponential distribution and the counting statistics appeared to follow a Poisson distribution with no after pulsing. It is predicted that the effective photon counting quantum efficiency can be improved to 18.7 percent at lambda = 820 nm and 1.46 percent at lambda = 1060 nm with a dead time of a few nanoseconds by using more advanced commercially available electronic components.
MEMS PolyMUMPS-Based Miniature Microphone for Directional Sound Sensing
2007-09-01
of the translating mode Phir=-atan((2*wr*er*w)/(wr^2-w^2));% Phase constant rocking Phit =-atan((2*wt*et*w)/(wt^2-w^2));% Phase constant translating...2.5e-6)+1 Yl(count)=8e6*(At*sin(w.*t(count)+ Phit ) + Ar*cos(w.*t(count)+Phir)); %left membrane displacement as a function of time in micrometers...Xl(count)=-(((.5)^2-Yl(count).^2).^.5); Yr(count)=8e6*(At*sin(w.*t(count)+ Phit ) - Ar*cos(w.*t(count)+Phir)); %right membrane displacement
Rodger, Alison J; Lodwick, Rebecca; Schechter, Mauro; Deeks, Steven; Amin, Janaki; Gilson, Richard; Paredes, Roger; Bakowska, Elzbieta; Engsig, Frederik N; Phillips, Andrew
2013-03-27
Due to the success of antiretroviral therapy (ART), it is relevant to ask whether death rates in optimally treated HIV are higher than the general population. The objective was to compare mortality rates in well controlled HIV-infected adults in the SMART and ESPRIT clinical trials with the general population. Non-IDUs aged 20-70 years from the continuous ART control arms of ESPRIT and SMART were included if the person had both low HIV plasma viral loads (≤400 copies/ml SMART, ≤500 copies/ml ESPRIT) and high CD4(+) T-cell counts (≥350 cells/μl) at any time in the past 6 months. Standardized mortality ratios (SMRs) were calculated by comparing death rates with the Human Mortality Database. Three thousand, two hundred and eighty individuals [665 (20%) women], median age 43 years, contributed 12,357 person-years of follow-up. Sixty-two deaths occurred during follow up. Commonest cause of death was cardiovascular disease (CVD) or sudden death (19, 31%), followed by non-AIDS malignancy (12, 19%). Only two deaths (3%) were AIDS-related. Mortality rate was increased compared with the general population with a CD4(+) cell count between 350 and 499 cells/μl [SMR 1.77, 95% confidence interval (CI) 1.17-2.55]. No evidence for increased mortality was seen with CD4(+) cell counts greater than 500 cells/μl (SMR 1.00, 95% CI 0.69-1.40). In HIV-infected individuals on ART, with a recent undetectable viral load, who maintained or had recovery of CD4(+) cell counts to at least 500 cells/μl, we identified no evidence for a raised risk of death compared with the general population.
Point-of-care, portable microfluidic blood analyzer system
NASA Astrophysics Data System (ADS)
Maleki, Teimour; Fricke, Todd; Quesenberry, J. T.; Todd, Paul W.; Leary, James F.
2012-03-01
Recent advances in MEMS technology have provided an opportunity to develop microfluidic devices with enormous potential for portable, point-of-care, low-cost medical diagnostic tools. Hand-held flow cytometers will soon be used in disease diagnosis and monitoring. Despite much interest in miniaturizing commercially available cytometers, they remain costly, bulky, and require expert operation. In this article, we report progress on the development of a battery-powered handheld blood analyzer that will quickly and automatically process a drop of whole human blood by real-time, on-chip magnetic separation of white blood cells (WBCs), fluorescence analysis of labeled WBC subsets, and counting a reproducible fraction of the red blood cells (RBCs) by light scattering. The whole blood (WB) analyzer is composed of a micro-mixer, a special branching/separation system, an optical detection system, and electronic readout circuitry. A droplet of un-processed blood is mixed with the reagents, i.e. magnetic beads and fluorescent stain in the micro-mixer. Valve-less sorting is achieved by magnetic deflection of magnetic microparticle-labeled WBC. LED excitation in combination with an avalanche photodiode (APD) detection system is used for counting fluorescent WBC subsets using several colors of immune-Qdots, while counting a reproducible fraction of red blood cells (RBC) is performed using a laser light scatting measurement with a photodiode. Optimized branching/channel width is achieved using Comsol Multi-Physics™ simulation. To accommodate full portability, all required power supplies (40v, +/-10V, and +3V) are provided via step-up voltage converters from one battery. A simple onboard lock-in amplifier is used to increase the sensitivity/resolution of the pulse counting circuitry.
Dosage optimization in positron emission tomography: state-of-the-art methods and future prospects
Karakatsanis, Nicolas A; Fokou, Eleni; Tsoumpas, Charalampos
2015-01-01
Positron emission tomography (PET) is widely used nowadays for tumor staging and therapy response in the clinic. However, average PET radiation exposure has increased due to higher PET utilization. This study aims to review state-of-the-art PET tracer dosage optimization methods after accounting for the effects of human body attenuation and scan protocol parameters on the counting rate. In particular, the relationship between the noise equivalent count rate (NECR) and the dosage (NECR-dosage curve) for a range of clinical PET systems and body attenuation sizes will be systematically studied to prospectively estimate the minimum dosage required for sufficiently high NECR. The optimization criterion can be determined either as a function of the peak of the NECR-dosage curve or as a fixed NECR score when NECR uniformity across a patient population is important. In addition, the systematic NECR assessments within a controllable environment of realistic simulations and phantom experiments can lead to a NECR-dosage response model, capable of predicting the optimal dosage for every individual PET scan. Unlike conventional guidelines suggesting considerably large dosage levels for obese patients, NECR-based optimization recommends: i) moderate dosage to achieve 90% of peak NECR for obese patients, ii) considerable dosage reduction for slimmer patients such that uniform NECR is attained across the patient population, and iii) prolongation of scans for PET/MR protocols, where longer PET acquisitions are affordable due to lengthy MR sequences, with motion compensation becoming important then. Finally, the need for continuous adaptation of dosage optimization to emerging technologies will be discussed. PMID:26550543
Dose Calibration of the ISS-RAD Fast Neutron Detector
NASA Technical Reports Server (NTRS)
Zeitlin, C.
2015-01-01
The ISS-RAD instrument has been fabricated by Southwest Research Institute and delivered to NASA for flight to the ISS in late 2015 or early 2016. ISS-RAD is essentially two instruments that share a common interface to ISS. The two instruments are the Charged Particle Detector (CPD), which is very similar to the MSL-RAD detector on Mars, and the Fast Neutron Detector (FND), which is a boron-loaded plastic scintillator with readout optimized for the 0.5 to 10 MeV energy range. As the FND is completely new, it has been necessary to develop methodology to allow it to be used to measure the neutron dose and dose equivalent. This talk will focus on the methods developed and their implementation using calibration data obtained in quasi-monoenergetic (QMN) neutron fields at the PTB facility in Braunschweig, Germany. The QMN data allow us to determine an approximate response function, from which we estimate dose and dose equivalent contributions per detected neutron as a function of the pulse height. We refer to these as the "pSv per count" curves for dose equivalent and the "pGy per count" curves for dose. The FND is required to provide a dose equivalent measurement with an accuracy of ?10% of the known value in a calibrated AmBe field. Four variants of the analysis method were developed, corresponding to two different approximations of the pSv per count curve, and two different implementations, one for real-time analysis onboard ISS and one for ground analysis. We will show that the preferred method, when applied in either real-time or ground analysis, yields good accuracy for the AmBe field. We find that the real-time algorithm is more susceptible to chance-coincidence background than is the algorithm used in ground analysis, so that the best estimates will come from the latter.
Changing mortality risk associated with CD4 cell response to antiretroviral therapy in South Africa
Lawn, Stephen D.; Little, Francesca; Bekker, Linda-Gail; Kaplan, Richard; Campbel, Elizabeth; Orrell, Catherine; Wood, Robin
2013-01-01
Objective To determine the relationship between mortality risk and the CD4 cell response to antiretroviral therapy (ART). Design Observational community-based ART cohort in South Africa. Methods CD4 cell counts were measured 4 monthly, and deaths were prospectively ascertained. Cumulative person-time accrued within a range of updated CD4 cell count strata (CD4 cell-strata) was calculated and used to derive CD4 cell-stratified mortality rates. Results Patients (2423) (median baseline CD4 cell count of 105 cells/ml) were observed for up to 5 years of ART. One hundred and ninety-seven patients died during 3155 person years of observation. In multivariate analysis, mortality rate ratios associated with 0–49, 50–99, 100–199, 200–299, 300– 399, 400–499 and at least 500 cells/ml updated CD4 cell-strata were 11.6, 4.9, 2.6, 1.7, 1.5, 1.4 and 1.0, respectively. Analysis of CD4 cell count recovery permitted calculations of person-time accrued within these CD4 cell strata. Despite rapid immune recovery, high mortality in the first year of ART was related to the large proportion of person-time accrued within CD4 cell-strata less than 200 cells/ml. Moreover, patients with baseline CD4 cell counts less than 100 cells/ml had much higher cumulative mortality estimates at 1 and 4 years (11.6 and 16.7%) compared with those of patients with baseline counts of at least 100 cells/ml (5.2 and 9.5%) largely because of greater cumulative person-time at CD4 cell counts less than 200 cells/ml. Conclusion: Updated CD4 cell counts are the variable most strongly associated with mortality risk during ART. High cumulative mortality risk is associated with person-time accrued at low CD4 cell counts. National HIV programmes in resource-limited settings should be designed to minimize the time patients spend with CD4 cell counts less than 200 cells/ml both before and during ART. PMID:19114870
NASA Astrophysics Data System (ADS)
Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.
2017-12-01
We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite, AKARI, using an optimized data analysis toolkit specifically for the processing of extragalactic point sources. The InfaRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8-24 μm filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm where our counts lie above). In the mid-infrared (11 - 18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or indicating the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criteria out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.
Lopez, Derrick; Nedkoff, Lee; Knuiman, Matthew; Hobbs, Michael S T; Briffa, Thomas G; Preen, David B; Hung, Joseph; Beilby, John; Mathur, Sushma; Reynolds, Anna; Sanfilippo, Frank M
2017-11-17
To develop a method for categorising coronary heart disease (CHD) subtype in linked data accounting for different CHD diagnoses across records, and to compare hospital admission numbers and ratios of unlinked versus linked data for each CHD subtype over time, and across age groups and sex. Cohort study. Person-linked hospital administrative data covering all admissions for CHD in Western Australia from 1988 to 2013. Ratios of (1) unlinked admission counts to contiguous admission (CA) counts (accounting for transfers), and (2) 28-day episode counts (accounting for transfers and readmissions) to CA counts stratified by CHD subtype, sex and age group. In all CHD subtypes, the ratios changed in a linear or quadratic fashion over time and the coefficients of the trend term differed across CHD subtypes. Furthermore, for many CHD subtypes the ratios also differed by age group and sex. For example, in women aged 35-54 years, the ratio of unlinked to CA counts for non-ST elevation myocardial infarction admissions in 2000 was 1.10, and this increased in a linear fashion to 1.30 in 2013, representing an annual increase of 0.0148. The use of unlinked counts in epidemiological estimates of CHD hospitalisations overestimates CHD counts. The CA and 28-day episode counts are more aligned with epidemiological studies of CHD. The degree of overestimation of counts using only unlinked counts varies in a complex manner with CHD subtype, time, sex and age group, and it is not possible to apply a simple correction factor to counts obtained from unlinked data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Wolk, D. M.; Johnson, C. H.; Rice, E. W.; Marshall, M. M.; Grahn, K. F.; Plummer, C. B.; Sterling, C. R.
2000-01-01
The microsporidia have recently been recognized as a group of pathogens that have potential for waterborne transmission; however, little is known about the effects of routine disinfection on microsporidian spore viability. In this study, in vitro growth of Encephalitozoon syn. Septata intestinalis, a microsporidium found in the human gut, was used as a model to assess the effect of chlorine on the infectivity and viability of microsporidian spores. Spore inoculum concentrations were determined by using spectrophotometric measurements (percent transmittance at 625 nm) and by traditional hemacytometer counting. To determine quantitative dose-response data for spore infectivity, we optimized a rabbit kidney cell culture system in 24-well plates, which facilitated calculation of a 50% tissue culture infective dose (TCID50) and a minimal infective dose (MID) for E. intestinalis. The TCID50 is a quantitative measure of infectivity and growth and is the number of organisms that must be present to infect 50% of the cell culture wells tested. The MID is as a measure of a system's permissiveness to infection and a measure of spore infectivity. A standardized MID and a standardized TCID50 have not been reported previously for any microsporidian species. Both types of doses are reported in this paper, and the values were used to evaluate the effects of chlorine disinfection on the in vitro growth of microsporidia. Spores were treated with chlorine at concentrations of 0, 1, 2, 5, and 10 mg/liter. The exposure times ranged from 0 to 80 min at 25°C and pH 7. MID data for E. intestinalis were compared before and after chlorine disinfection. A 3-log reduction (99.9% inhibition) in the E. intestinalis MID was observed at a chlorine concentration of 2 mg/liter after a minimum exposure time of 16 min. The log10 reduction results based on percent transmittance-derived spore counts were equivalent to the results based on hemacytometer-derived spore counts. Our data suggest that chlorine treatment may be an effective water treatment for E. intestinalis and that spectrophotometric methods may be substituted for labor-intensive hemacytometer methods when spores are counted in laboratory-based chlorine disinfection studies. PMID:10742198
Automatic, time-interval traffic counts for recreation area management planning
D. L. Erickson; C. J. Liu; H. K. Cordell
1980-01-01
Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...
Swarm Observations: Implementing Integration Theory to Understand an Opponent Swarm
2012-09-01
80 Figure 14 Box counts and local dimension plots for the “Rally” scenario. .....................81 Figure...88 Figure 21 Spatial entropy over time for the “Avoid” scenario.........................................89 Figure 22 Box counts and local...96 Figure 27 Spatial entropy over time for the “Rally-integration” scenario. ......................97 Figure 28 Box counts and
Advantages and challenges in automated apatite fission track counting
NASA Astrophysics Data System (ADS)
Enkelmann, E.; Ehlers, T. A.
2012-04-01
Fission track thermochronometer data are often a core element of modern tectonic and denudation studies. Soon after the development of the fission track methods interest emerged for the developed an automated counting procedure to replace the time consuming labor of counting fission tracks under the microscope. Automated track counting became feasible in recent years with increasing improvements in computer software and hardware. One such example used in this study is the commercial automated fission track counting procedure from Autoscan Systems Pty that has been highlighted through several venues. We conducted experiments that are designed to reliably and consistently test the ability of this fully automated counting system to recognize fission tracks in apatite and a muscovite external detector. Fission tracks were analyzed in samples with a step-wise increase in sample complexity. The first set of experiments used a large (mm-size) slice of Durango apatite cut parallel to the prism plane. Second, samples with 80-200 μm large apatite grains of Fish Canyon Tuff were analyzed. This second sample set is characterized by complexities often found in apatites in different rock types. In addition to the automated counting procedure, the same samples were also analyzed using conventional counting procedures. We found for all samples that the fully automated fission track counting procedure using the Autoscan System yields a larger scatter in the fission track densities measured compared to conventional (manual) track counting. This scatter typically resulted from the false identification of tracks due surface and mineralogical defects, regardless of the image filtering procedure used. Large differences between track densities analyzed with the automated counting persisted between different grains analyzed in one sample as well as between different samples. As a result of these differences a manual correction of the fully automated fission track counts is necessary for each individual surface area and grain counted. This manual correction procedure significantly increases (up to four times) the time required to analyze a sample with the automated counting procedure compared to the conventional approach.
Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela
2017-10-17
To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from simulated low-dose SPECT studies by shortening the acquisition time/projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times/projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scan, respectively) and reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish TM (Philips). Three sets of normal databases were used: (1) full-counts IRR; (2) half-counts IRR; and (3) full-counts traditional reconstruction algorithm database (TRAD). The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA analysis and Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD databases respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) are confirmed compared to those of 50%-counts scan, independently of using the half-counts or the full-counts IRR databases. Astonish TM requires the adoption of the own specific normal databases in order to prevent very high overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases seems not to influence the quantification scores.
NASA Astrophysics Data System (ADS)
Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton
2016-04-01
The article deals with the methods of image segmentation based on color space conversion, and allow the most efficient way to carry out the detection of a single color in a complex background and lighting, as well as detection of objects on a homogeneous background. The results of the analysis of segmentation algorithms of this type, the possibility of their implementation for creating software. The implemented algorithm is very time-consuming counting, making it a limited application for the analysis of the video, however, it allows us to solve the problem of analysis of objects in the image if there is no dictionary of images and knowledge bases, as well as the problem of choosing the optimal parameters of the frame quantization for video analysis.
Investigation and Implementation of Matrix Permanent Algorithms for Identity Resolution
2014-12-01
calculation of the permanent of a matrix whose dimension is a function of target count [21]. However, the optimal approach for computing the permanent is...presently unclear. The primary objective of this project was to determine the optimal computing strategy(-ies) for the matrix permanent in tactical and...solving various combinatorial problems (see [16] for details and appli- cations to a wide variety of problems) and thus can be applied to compute a
Lockhart, M.; Henzlova, D.; Croft, S.; ...
2017-09-20
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory andmore » implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lockhart, M.; Henzlova, D.; Croft, S.
Over the past few decades, neutron multiplicity counting has played an integral role in Special Nuclear Material (SNM) characterization pertaining to nuclear safeguards. Current neutron multiplicity analysis techniques use singles, doubles, and triples count rates because a methodology to extract and dead time correct higher order count rates (i.e. quads and pents) was not fully developed. This limitation is overcome by the recent extension of a popular dead time correction method developed by Dytlewski. This extended dead time correction algorithm, named Dytlewski-Croft-Favalli (DCF), is detailed in reference Croft and Favalli (2017), which gives an extensive explanation of the theory andmore » implications of this new development. Dead time corrected results can then be used to assay SNM by inverting a set of extended point model equations which as well have only recently been formulated. Here, we discuss and present the experimental evaluation of practical feasibility of the DCF dead time correction algorithm to demonstrate its performance and applicability in nuclear safeguards applications. In order to test the validity and effectiveness of the dead time correction for quads and pents, 252Cf and SNM sources were measured in high efficiency neutron multiplicity counters at the Los Alamos National Laboratory (LANL) and the count rates were extracted up to the fifth order and corrected for dead time. To assess the DCF dead time correction, the corrected data is compared to traditional dead time correction treatment within INCC. In conclusion, the DCF dead time correction is found to provide adequate dead time treatment for broad range of count rates available in practical applications.« less
A video-based real-time adaptive vehicle-counting system for urban roads.
Liu, Fei; Zeng, Zhiyuan; Jiang, Rong
2017-01-01
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios.
A video-based real-time adaptive vehicle-counting system for urban roads
2017-01-01
In developing nations, many expanding cities are facing challenges that result from the overwhelming numbers of people and vehicles. Collecting real-time, reliable and precise traffic flow information is crucial for urban traffic management. The main purpose of this paper is to develop an adaptive model that can assess the real-time vehicle counts on urban roads using computer vision technologies. This paper proposes an automatic real-time background update algorithm for vehicle detection and an adaptive pattern for vehicle counting based on the virtual loop and detection line methods. In addition, a new robust detection method is introduced to monitor the real-time traffic congestion state of road section. A prototype system has been developed and installed on an urban road for testing. The results show that the system is robust, with a real-time counting accuracy exceeding 99% in most field scenarios. PMID:29135984
Contrast-enhanced spectral mammography with a photon-counting detector.
Fredenberg, Erik; Hemmendorff, Magnus; Cederström, Björn; Aslund, Magnus; Danielsson, Mats
2010-05-01
Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. The authors have investigated a photon-counting spectral imaging system with two energy bins for contrast-enhanced mammography. System optimization and the potential benefit compared to conventional non-energy-resolved absorption imaging was studied. A framework for system characterization was set up that included quantum and anatomical noise and a theoretical model of the system was benchmarked to phantom measurements. Optimal combination of the energy-resolved images corresponded approximately to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging in the phantom study. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, yielded only a minute improvement. In a simulation of a clinically more realistic case, spectral imaging was predicted to perform approximately 30% better than absorption imaging for an average glandularity breast with an average level of anatomical noise. For dense breast tissue and a high level of anatomical noise, however, a rise in detectability by a factor of 6 was predicted. Another approximately 70%-90% improvement was found to be within reach for an optimized system. Contrast-enhanced spectral mammography is feasible and beneficial with the current system, and there is room for additional improvements. Inclusion of anatomical noise is essential for optimizing spectral imaging systems.
Ding, Huanjun; Molloi, Sabee
2012-08-07
A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility to quantify volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involved polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector has been evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio of the dual energy image with respect to the square root of mean glandular dose, was chosen to optimize the imaging protocols, in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system has been employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which brought the tube voltages used in the imaging tasks as the third variable in dual energy decomposition. For an average sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exam (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated a RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
N plus 2 Supersonic Concept Development and Systems Integration
NASA Technical Reports Server (NTRS)
Wedge, Harry R.; Bonet, John; Magee, Todd; Chen, Daniel; Hollowell, Steve; Kutzmann, Aaron; Mortlock, Alan; Stengle, Josh; Nelson, Chet; Adamson, Eric;
2010-01-01
Supersonic airplanes for two generations into the future (N+2, 2020-2025 EIS) were designed: the 100 passenger 765-072B, and the 30 passenger 765-076E. Both achieve a trans-Atlantic range of about 4000nm. The larger 765-072B meets fuel burn and emissions goals forecast for the 2025 time-frame, and the smaller 765-076E improves the boom and confidence in utilization that accompanies lower seat count. The boom level of both airplanes was reduced until balanced with performance. The final configuration product is two "realistic", non-proprietary future airplane designs, described in sufficient detail for subsequent multi-disciplinary design and optimization, with emphasis on the smaller 765-076E because of its lower boom characteristics. In addition IGES CAD files of the OML lofts of the two example configurations, a non-proprietary parametric engine model, and a first-cycle Finite Element Model are also provided for use in future multi-disciplinary analysis, optimization, and technology evaluation studies.
Goñi, M G; Moreira, M R; Viacava, G E; Roura, S I
2013-01-30
Many studies have focused on seed decontamination but no one has been capable of eliminating all pathogenic bacteria. Two objectives were followed. First, to assess the in vitro antimicrobial activity of chitosan against: (a) Escherichia coli O157:H7, (b) native microflora of lettuce and (c) native microflora of lettuce seeds. Second, to evaluate the efficiency of chitosan on reducing microflora on lettuce seeds. The overall goal was to find a combination of contact time and chitosan concentration that reduces the microflora of lettuce seeds, without affecting germination. After treatment lettuce seeds presented no detectable microbial counts (<10(2)CFU/50 seeds) for all populations. Moreover, chitosan eliminated E. coli. Regardless of the reduction in the microbial load, a 90% reduction on germination makes imbibition with chitosan, uneconomical. Subsequent treatments identified the optimal treatment as 10 min contact with a 10 g/L chitosan solution, which maintained the highest germination percentage. Copyright © 2012 Elsevier Ltd. All rights reserved.
2015-07-17
This figure shows how the Alice instrument count rate changed over time during the sunset and sunrise observations. The count rate is largest when the line of sight to the sun is outside of the atmosphere at the start and end times. Molecular nitrogen (N2) starts absorbing sunlight in the upper reaches of Pluto's atmosphere, decreasing as the spacecraft approaches the planet's shadow. As the occultation progresses, atmospheric methane and hydrocarbons can also absorb the sunlight and further decrease the count rate. When the spacecraft is totally in Pluto's shadow the count rate goes to zero. As the spacecraft emerges from Pluto's shadow into sunrise, the process is reversed. By plotting the observed count rate in the reverse time direction, it is seen that the atmospheres on opposite sides of Pluto are nearly identical. http://photojournal.jpl.nasa.gov/catalog/PIA19716
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-rayimaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region ofmaterial quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m-materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. TheMSE was dominated by the variance for the given material combinations,which demonstrates accurate material quantifications. The BEAMsimulations revealed that the optimization of energy bins was accurate to within 1keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequateprediction of image SNR. The BEAM code system is suitable for generating spectral x- ray imaging simulations. © 2012 American Association of Physicists in Medicine.
Influence of Point Count Length and Repeated Visits on Habitat Model Performance
Randy Dettmers; David A. Buehler; John G. Bartlett; Nathan A. Klaus
1999-01-01
Point counts are commonly used to monitor bird populations, and a substantial amount of research has investigated how conducting counts for different lengths of time affects the accuracy of these counts and the subsequent ability to monitor changes in population trends. However, little work has been done io assess how changes in count duration affect bird-habitat...
Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior
ERIC Educational Resources Information Center
Liu, Hui; Powers, Daniel A.
2007-01-01
This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…
45 CFR 261.60 - What hours of participation may a State report for a work-eligible individual?
Code of Federal Regulations, 2012 CFR
2012-10-01
... Verification Plan. (e) A State may count supervised homework time and up to one hour of unsupervised homework time for each hour of class time. Total homework time counted for participation cannot exceed the hours...
45 CFR 261.60 - What hours of participation may a State report for a work-eligible individual?
Code of Federal Regulations, 2014 CFR
2014-10-01
... Verification Plan. (e) A State may count supervised homework time and up to one hour of unsupervised homework time for each hour of class time. Total homework time counted for participation cannot exceed the hours...
Darling, Aaron E.
2009-01-01
Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186
Single photon detection in a waveguide-coupled Ge-on-Si lateral avalanche photodiode.
Martinez, Nicholas J D; Gehl, Michael; Derose, Christopher T; Starbuck, Andrew L; Pomerene, Andrew T; Lentine, Anthony L; Trotter, Douglas C; Davids, Paul S
2017-07-10
We examine gated-Geiger mode operation of an integrated waveguide-coupled Ge-on-Si lateral avalanche photodiode (APD) and demonstrate single photon detection at low dark count for this mode of operation. Our integrated waveguide-coupled APD is fabricated using a selective epitaxial Ge-on-Si growth process resulting in a separate absorption and charge multiplication (SACM) design compatible with our silicon photonics platform. Single photon detection efficiency and dark count rate is measured as a function of temperature in order to understand and optimize performance characteristics in this device. We report single photon detection of 5.27% at 1310 nm and a dark count rate of 534 kHz at 80 K for a Ge-on-Si single photon avalanche diode. Dark count rate is the lowest for a Ge-on-Si single photon detector in this range of temperatures while maintaining competitive detection efficiency. A jitter of 105 ps was measured for this device.
Weavers, Paul T; Borisch, Eric A; Hulshizer, Tom C; Rossman, Phillip J; Young, Phillip M; Johnson, Casey P; McKay, Jessica; Cline, Christopher C; Riederer, Stephen J
2016-04-01
Three-station stepping-table time-resolved 3D contrast-enhanced magnetic resonance angiography has conflicting demands in the need to limit acquisition time in proximal stations to match the speed of the advancing contrast bolus and in the distal-most station to avoid venous contamination while still providing clinically useful spatial resolution. This work describes improved receiver coil arrays which address this issue by allowing increased acceleration factors, providing increased spatial resolution per unit time. Receiver coil arrays were constructed for each station (pelvis, thigh, calf) and then integrated into a 48-element array for three-station peripheral CE-MRA. Coil element sizes and array configurations for these three stations were designed to improve SENSE-type parallel imaging taking advantage of an increase in coil count for all stations versus the previous 32 channel capability. At each station either acceleration apportionment or optimal CAIPIRINHA selection was used to choose the optimum acceleration parameters for each subject. Results were evaluated in both single- and multi-station studies. Single-station studies showed that SENSE acceleration in the thigh station could be readily increased from R=8 to R=10, allowing reduction of the frame time from 2.5 to 2.1 s to better image the typically rapidly advancing bolus at this station. Similarly, the improved coil array for the calf station permitted acceleration increase from R=8 to R=12, providing a 4.0 vs. 5.2 s frame time. Results in three-station studies suggest an improved ability to track the contrast bolus in peripheral CE-MRA. Modified receiver coil arrays and individualized parameter optimization have been used to provide improved acceleration at all stations in multi-station peripheral CE-MRA and provide high spatial resolution with frame times as short as 2.1 s. Copyright © 2015 Elsevier Inc. All rights reserved.
Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.
Uhl, Wolfgang; Schaule, Gabriela
2004-05-01
Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases or when substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as most probable causes and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient rich agar according to German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and to demonstrate the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were three times less than after 7 days. As the precision of plate count methods is very poor for counts less than 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events HPC(R2A) gives a further margin of about 2 weeks for reaction before HPC(GDWR). Copyright 2003 Elsevier B.V.
29 CFR 778.319 - Paying for but not counting hours worked.
Code of Federal Regulations, 2010 CFR
2010-07-01
... working time under the Act, coupled with a provision that these hours will not be counted as working time... more hours have been worked, the employee must be paid overtime compensation at not less than one and... 29 Labor 3 2010-07-01 2010-07-01 false Paying for but not counting hours worked. 778.319 Section...
Konikoff, Jacob; Brookmeyer, Ron; Longosz, Andrew F.; Cousins, Matthew M.; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Koblin, Beryl A.; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Eshleman, Susan H.; Laeyendecker, Oliver
2013-01-01
Background A limiting antigen avidity enzyme immunoassay (HIV-1 LAg-Avidity assay) was recently developed for cross-sectional HIV incidence estimation. We evaluated the performance of the LAg-Avidity assay alone and in multi-assay algorithms (MAAs) that included other biomarkers. Methods and Findings Performance of testing algorithms was evaluated using 2,282 samples from individuals in the United States collected 1 month to >8 years after HIV seroconversion. The capacity of selected testing algorithms to accurately estimate incidence was evaluated in three longitudinal cohorts. When used in a single-assay format, the LAg-Avidity assay classified some individuals infected >5 years as assay positive and failed to provide reliable incidence estimates in cohorts that included individuals with long-term infections. We evaluated >500,000 testing algorithms, that included the LAg-Avidity assay alone and MAAs with other biomarkers (BED capture immunoassay [BED-CEIA], BioRad-Avidity assay, HIV viral load, CD4 cell count), varying the assays and assay cutoffs. We identified an optimized 2-assay MAA that included the LAg-Avidity and BioRad-Avidity assays, and an optimized 4-assay MAA that included those assays, as well as HIV viral load and CD4 cell count. The two optimized MAAs classified all 845 samples from individuals infected >5 years as MAA negative and estimated incidence within a year of sample collection. These two MAAs produced incidence estimates that were consistent with those from longitudinal follow-up of cohorts. A comparison of the laboratory assay costs of the MAAs was also performed, and we found that the costs associated with the optimal two assay MAA were substantially less than with the four assay MAA. Conclusions The LAg-Avidity assay did not perform well in a single-assay format, regardless of the assay cutoff. MAAs that include the LAg-Avidity and BioRad-Avidity assays, with or without viral load and CD4 cell count, provide accurate incidence estimates. PMID:24386116
Walker, R.S.; Novare, A.J.; Nichols, J.D.
2000-01-01
Estimation of abundance of mammal populations is essential for monitoring programs and for many ecological investigations. The first step for any study of variation in mammal abundance over space or time is to define the objectives of the study and how and why abundance data are to be used. The data used to estimate abundance are count statistics in the form of counts of animals or their signs. There are two major sources of uncertainty that must be considered in the design of the study: spatial variation and the relationship between abundance and the count statistic. Spatial variation in the distribution of animals or signs may be taken into account with appropriate spatial sampling. Count statistics may be viewed as random variables, with the expected value of the count statistic equal to the true abundance of the population multiplied by a coefficient p. With direct counts, p represents the probability of detection or capture of individuals, and with indirect counts it represents the rate of production of the signs as well as their probability of detection. Comparisons of abundance using count statistics from different times or places assume that the p are the same for all times or places being compared (p= pi). In spite of considerable evidence that this assumption rarely holds true, it is commonly made in studies of mammal abundance, as when the minimum number alive or indices based on sign counts are used to compare abundance in different habitats or times. Alternatives to relying on this assumption are to calibrate the index used by testing the assumption of p= pi, or to incorporate the estimation of p into the study design.
Recursive algorithms for phylogenetic tree counting.
Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J
2013-10-28
In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting of resolutions of a constraint tree assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
NASA Technical Reports Server (NTRS)
Degnan, John J.; Smith, David E. (Technical Monitor)
2000-01-01
We consider the optimum design of photon-counting microlaser altimeters operating from airborne and spaceborne platforms under both day and night conditions. Extremely compact Q-switched microlaser transmitters produce trains of low energy pulses at multi-kHz rates and can easily generate subnanosecond pulse-widths for precise ranging. To guide the design, we have modeled the solar noise background and developed simple algorithms, based on Post-Detection Poisson Filtering (PDPF), to optimally extract the weak altimeter signal from a high noise background during daytime operations. Practical technology issues, such as detector and/or receiver dead times, have also been considered in the analysis. We describe an airborne prototype, being developed under NASA's instrument Incubator Program, which is designed to operate at a 10 kHz rate from aircraft cruise altitudes up to 12 km with laser pulse energies on the order of a few microjoules. We also analyze a compact and power efficient system designed to operate from Mars orbit at an altitude of 300 km and sample the Martian surface at rates up to 4.3 kHz using a 1 watt laser transmitter and an 18 cm telescope. This yields a Power-Aperture Product of 0.24 W-square meter, corresponding to a value almost 4 times smaller than the Mars Orbiting Laser Altimeter (0. 88W-square meter), yet the sampling rate is roughly 400 times greater (4 kHz vs 10 Hz) Relative to conventional high power laser altimeters, advantages of photon-counting laser altimeters include: (1) a more efficient use of available laser photons providing up to two orders of magnitude greater surface sampling rates for a given laser power-telescope aperture product; (2) a simultaneous two order of magnitude reduction in the volume, cost and weight of the telescope system; (3) the unique ability to spatially resolve the source of the surface return in a photon counting mode through the use of pixellated or imaging detectors; and (4) improved vertical and transverse spatial resolution resulting from both (1) and (3). Furthermore, because of significantly lower laser pulse energies, the microaltimeter is inherently more eyesafe to observers on the ground and less prone to internal optical damage, which can terminate a space mission prematurely.
Visual tool for estimating the fractal dimension of images
NASA Astrophysics Data System (ADS)
Grossu, I. V.; Besliu, C.; Rusu, M. V.; Jipa, Al.; Bordeianu, C. C.; Felea, D.
2009-10-01
This work presents a new Visual Basic 6.0 application for estimating the fractal dimension of images, based on an optimized version of the box-counting algorithm. Following the attempt to separate the real information from "noise", we considered also the family of all band-pass filters with the same band-width (specified as parameter). The fractal dimension can be thus represented as a function of the pixel color code. The program was used for the study of paintings cracks, as an additional tool which can help the critic to decide if an artistic work is original or not. Program summaryProgram title: Fractal Analysis v01 Catalogue identifier: AEEG_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEG_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29 690 No. of bytes in distributed program, including test data, etc.: 4 967 319 Distribution format: tar.gz Programming language: MS Visual Basic 6.0 Computer: PC Operating system: MS Windows 98 or later RAM: 30M Classification: 14 Nature of problem: Estimating the fractal dimension of images. Solution method: Optimized implementation of the box-counting algorithm. Use of a band-pass filter for separating the real information from "noise". User friendly graphical interface. Restrictions: Although various file-types can be used, the application was mainly conceived for the 8-bit grayscale, windows bitmap file format. Running time: In a first approximation, the algorithm is linear.
Gezie, Lemma Derseh
2016-07-30
The response of HIV patients to antiretroviral therapy could be measured by its strong predictor, the CD4+ T cell (CD4) count for the initiation of antiretroviral therapy and proper management of disease progress. However, in addition to HIV, there are other factors which can influence the CD4 cell count. Patient's socio-economic, demographic, and behavioral variables, accessibility, duration of treatment etc., can be used to predict CD4 count. A retrospective cohort study was conducted to examine the predictors of CD4 count among ART users enrolled in the first 6 months of 2010 and followed upto mid 2014. The covariance components model was employed to determine the predictors of CD4 count over time. A total of 1196 ART attendants were used to analyze their data descriptively. Eight hundred sixty-one of the attendants had two or more CD4 count measurements and were used in modeling their data using the linear mixed method. Thus, the mean rates of incensement of CD4 counts for patients with ambulatory/bedridden and working baseline functional status were 17.4 and 30.6 cells/mm(3) per year, respectively. After adjusting for other variables, for each additional baseline CD4 count, the gain in CD4 count during treatment was 0.818 cells/mm(3) (p value <0.001). Patient's age and baseline functional status were also statistically significantly associated with CD4 count. In this study, higher baseline CD4 count, younger age, working functional status, and time in treatment contributed positively to the increment of the CD4 count. However, the observed increment at 4 year was unsatisfactory as the proportion of ART users who reached the normal range of CD4 count was very low. To see their long term treatment outcome, it requires further research with a sufficiently longer follow up data. In line with this, the local CD4 count for HIV negative persons should also be investigated for better comparison and proper disease management.
Effects of lek count protocols on greater sage-grouse population trend estimates
Monroe, Adrian; Edmunds, David; Aldridge, Cameron L.
2016-01-01
Annual counts of males displaying at lek sites are an important tool for monitoring greater sage-grouse populations (Centrocercus urophasianus), but seasonal and diurnal variation in lek attendance may increase variance and bias of trend analyses. Recommendations for protocols to reduce observation error have called for restricting lek counts to within 30 minutes of sunrise, but this may limit the number of lek counts available for analysis, particularly from years before monitoring was widely standardized. Reducing the temporal window for conducting lek counts also may constrain the ability of agencies to monitor leks efficiently. We used lek count data collected across Wyoming during 1995−2014 to investigate the effect of lek counts conducted between 30 minutes before and 30, 60, or 90 minutes after sunrise on population trend estimates. We also evaluated trends across scales relevant to management, including statewide, within Working Group Areas and Core Areas, and for individual leks. To further evaluate accuracy and precision of trend estimates from lek count protocols, we used simulations based on a lek attendance model and compared simulated and estimated values of annual rate of change in population size (λ) from scenarios of varying numbers of leks, lek count timing, and count frequency (counts/lek/year). We found that restricting analyses to counts conducted within 30 minutes of sunrise generally did not improve precision of population trend estimates, although differences among timings increased as the number of leks and count frequency decreased. Lek attendance declined >30 minutes after sunrise, but simulations indicated that including lek counts conducted up to 90 minutes after sunrise can increase the number of leks monitored compared to trend estimates based on counts conducted within 30 minutes of sunrise. This increase in leks monitored resulted in greater precision of estimates without reducing accuracy. Increasing count frequency also improved precision. These results suggest that the current distribution of count timings available in lek count databases such as that of Wyoming (conducted up to 90 minutes after sunrise) can be used to estimate sage-grouse population trends without reducing precision or accuracy relative to trends from counts conducted within 30 minutes of sunrise. However, only 10% of all Wyoming counts in our sample (1995−2014) were conducted 61−90 minutes after sunrise, and further increasing this percentage may still bias trend estimates because of declining lek attendance.
NASA Astrophysics Data System (ADS)
Di Francesco, A.; Bugalho, R.; Oliveira, L.; Pacher, L.; Rivetti, A.; Rolo, M.; Silva, J. C.; Silva, R.; Varela, J.
2016-03-01
We present a readout and digitization ASIC featuring low-noise and low-power for time-of flight (TOF) applications using SiPMs. The circuit is designed in standard CMOS 110 nm technology, has 64 independent channels and is optimized for time-of-flight measurement in Positron Emission Tomography (TOF-PET). The input amplifier is a low impedance current conveyor based on a regulated common-gate topology. Each channel has quad-buffered analogue interpolation TDCs (time binning 20 ps) and charge integration ADCs with linear response at full scale (1500 pC). The signal amplitude can also be derived from the measurement of time-over-threshold (ToT). Simulation results show that for a single photo-electron signal with charge 200 (550) fC generated by a SiPM with 320 pF capacitance the circuit has 24 (30) dB SNR, 75(39) ps r.m.s. resolution, and 4(8) mW power consumption. The event rate is 600 kHz per channel, with up to 2 MHz dark counts rejection.
Shet, Anita; Kumarasamy, N; Poongulali, Selvamuthu; Shastri, Suresh; Kumar, Dodderi Sunil; Rewari, Bharath B; Arumugam, Karthika; Antony, Jimmy; De Costa, Ayesha; D'Souza, George
2016-01-01
Given the chronic nature of HIV infection and the need for life-long antiretroviral therapy (ART), maintaining long-term optimal adherence is an important strategy for maximizing treatment success. In order to understand better the dynamic nature of adherence behaviors in India where complex cultural and logistic features prevail, we assessed the patterns, trajectories and time-dependent predictors of adherence levels in relation to virological failure among individuals initiating first-line ART in India. Between July 2010 and August 2013, eligible ART-naïve HIV-infected individuals newly initiating first-line ART within the national program at three sites in southern India were enrolled and monitored for two years. ART included zidovudine/stavudine/tenofovir plus lamivudine plus nevirapine/efavirenz. Patients were assessed using clinical, laboratory and adherence parameters. Every three months, medication adherence was measured using pill count, and a structured questionnaire on adherence barriers was administered. Optimal adherence was defined as mean adherence ≥95%. Statistical analysis was performed using a bivariate and a multivariate model of all identified covariates. Adherence trends and determinants were modeled as rate ratios using generalized estimating equation analysis in a Poisson distribution. A total of 599 eligible ART-naïve patients participated in the study, and contributed a total of 921 person-years of observation time. Women constituted 43% and mean CD4 count prior to initiating ART was 192 cells/mm3. Overall mean adherence among all patients was 95.4%. The proportion of patients optimally adherent was 75.6%. Predictors of optimal adherence included older age (≥40 years), high school-level education and beyond, lower drug toxicity-related ART interruption, full disclosure, sense of satisfaction with one's own health and patient's perception of having good access to health-care services. Adherence was inversely proportional to virological failure (IRR 0.55, 95%CI 0.44-0.69 p<0.001). Drug toxicity and stigma-related barriers were significantly associated with virological failure, while forgetfulness was not associated with virological failure. Our study highlights the overall high level of medication adherence among individuals initiating ART within the Indian national program. Primary factors contributing towards poor adherence and subsequent virological failure in the proportion of individuals with poor adherence included drug toxicity, perceived stigma and poor access to health care services. Strategies that may contribute towards improved adherence include minimizing drug interruptions for medical reasons, use of newer ART regimens with better safety profiles and increasing access with more link ART centers that decentralize ART dispensing systems to individuals.
Ziegler, Ildikó; Borbély-Jakab, Judit; Sugó, Lilla; Kovács, Réka J
2017-01-01
In this case study, the principles of quality risk management were applied to review sampling points and monitoring frequencies in the hormonal tableting unit of a formulation development pilot plant. In the cleanroom area, premises of different functions are located. Therefore a general method was established for risk evaluation based on the Hazard Analysis and Critical Control Points (HACCP) method to evaluate these premises (i.e., production area itself and ancillary clean areas) from the point of view of microbial load and state in order to observe whether the existing monitoring program met the emerged advanced monitoring practice. LAY ABSTRACT: In pharmaceutical production, cleanrooms are needed for the manufacturing of final dosage forms of drugs-intended for human or veterinary use-in order to protect the patient's weakened body from further infections. Cleanrooms are premises with a controlled level of contamination that is specified by the number of particles per cubic meter at a specified particle size or number of microorganisms (i.e. microbial count) per surface area. To ensure a low microbial count over time, microorganisms are detected and counted by environmental monitoring methods regularly. It is reasonable to find the easily infected places by risk analysis to make sure the obtained results really represent the state of the whole room. This paper presents a risk analysis method for the optimization of environmental monitoring and verification of the suitability of the method. © PDA, Inc. 2017.
Chappuy, L; Charroin, C; Vételé, F; Durand, T; Quessada, T; Klotz, M-C; Bréant, V; Aulagner, G
2014-01-01
The parenteral nutrition admixtures are manufactured with an automated compounding BAXA(®) Exacta-Mix 2400. A 48-hour assembly has been validated. To optimize time and cost, a weekly assembly was tested. Assembly was made on the first day. Ten identical parenteral nutrition admixtures (different volumes and compositions) were produced each day. A macroscopic examination was done at D0, D7 and D14. Physicochemical controls (electrolytes determinations by atomic absorption spectrophotometry, osmolalities measurements) were performed. Microbiological tests included a filtration membrane sterility test (Steritest(®)) and a plate count agar environmental monitoring. All mixtures were considered stable. The 12 Steritest(®) (H24, H48, D7 and D14) did not show any bacterial or fungal contamination. No microorganism has been detected on the plate count agar at D4 and D7. Concerning the physicochemical parameters of each parental nutrition admixture, no significant difference (Wilcoxon test) with the first day was found. The automated filling system BAXA(®) Exacta-Mix 2400 improves the quality and safety of production. According to these results, the weekly assembly is validated and permit to save time (80hours/year) and cost (40 000 euros on consumable/year). Copyright © 2013 Elsevier Masson SAS. All rights reserved.
van Schie, Mojca K M; Alblas, Eva E; Thijs, Roland D; Fronczek, Rolf; Lammers, Gert Jan; van Dijk, J Gert
2014-01-01
The Sustained Attention to Response Task (SART) helps to quantify vigilance impairments.Previous studies, in which five SART sessions on one day were administered, demonstrated worse performance during the first session than during the others. The present study comprises two experiments to identify a cause of this phenomenon. Experiment 1, counting eighty healthy participants, assessed effects of repetition,napping, and time of day on SART performance through a between-groups design. The SART was performed twice in the morning or twice in the afternoon; half of the participants took a 20-minute nap before the second SART. A strong correlation between error count and reaction time (RT) suggested effects of test instruction. Participants gave equal weight to speed and accuracy in Experiment 1; therefore, results of 20 participants were compared to those of 20 additional participants who were told to prefer accuracy (Experiment 2). The average SART error count in Experiment 1 was 10.1; the median RT was 280 ms. Neither repetition nor napping influenced error count or RT. Time of day did not influence error count, but RT was significantly longer for morning than for afternoon SARTs. The additional participants in Experiment 2 had a 49% lower error count and a 14% higher RT than the participants in Experiment 1. Error counts reduced by 50% from the first to the second session of Experiment 2, irrespective of napping or time of day. Preferring accuracy over speed was associated with a significantly lower error count. The data suggest that a worse performance in the first SART session only occurs when instructing participants to prefer accuracy, which is caused by repetition, not by napping or time of day. We advise that participants are instructed to prefer accuracy over speed when performing the SART and that a full practice session is included.
Cundell, A M; Bean, R; Massimore, L; Maier, C
1998-01-01
To determine the relationship between the sampling time of the environmental monitoring, i.e., viable counts, in aseptic filling areas and the microbial count and frequency of alerts for air, surface and personnel microbial monitoring, statistical analyses were conducted on 1) the frequency of alerts versus the time of day for routine environmental sampling conducted in calendar year 1994, and 2) environmental monitoring data collected at 30-minute intervals during routine aseptic filling operations over two separate days in four different clean rooms with multiple shifts and equipment set-ups at a parenteral manufacturing facility. Statistical analyses showed, except for one floor location that had significantly higher number of counts but no alert or action level samplings in the first two hours of operation, there was no relationship between the number of counts and the time of sampling. Further studies over a 30-day period at the floor location showed no relationship between time of sampling and microbial counts. The conclusion reached in the study was that there is no worst case time for environmental monitoring at that facility and that sampling any time during the aseptic filling operation will give a satisfactory measure of the microbial cleanliness in the clean room during the set-up and aseptic filling operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Podgorsak, A; Bednarek, D; Rudin, S
2016-06-15
Purpose: To successfully implement and operate a photon counting scheme on an electron multiplying charged-coupled device (EMCCD) based micro-CT system. Methods: We built an EMCCD based micro-CT system and implemented a photon counting scheme. EMCCD detectors use avalanche transfer registries to multiply the input signal far above the readout noise floor. Due to intrinsic differences in the pixel array, using a global threshold for photon counting is not optimal. To address this shortcoming, we generated a threshold array based on sixty dark fields (no x-ray exposure). We calculated an average matrix and a variance matrix of the dark field sequence.more » The average matrix was used for the offset correction while the variance matrix was used to set individual pixel thresholds for the photon counting scheme. Three hundred photon counting frames were added for each projection and 360 projections were acquired for each object. The system was used to scan various objects followed by reconstruction using an FDK algorithm. Results: Examination of the projection images and reconstructed slices of the objects indicated clear interior detail free of beam hardening artifacts. This suggests successful implementation of the photon counting scheme on our EMCCD based micro-CT system. Conclusion: This work indicates that it is possible to implement and operate a photon counting scheme on an EMCCD based micro-CT system, suggesting that these devices might be able to operate at very low x-ray exposures in a photon counting mode. Such devices could have future implications in clinical CT protocols. NIH Grant R01EB002873; Toshiba Medical Systems Corp.« less
ERIC Educational Resources Information Center
Jones, Carolyn M.
2010-01-01
Connecting mathematical thinking to the natural world can be as simple as looking up to the sky. Volunteer bird watchers around the world help scientists gather data about bird populations. Counting flying birds might inspire new estimation methods, such as counting the number of birds per unit of time and then timing the whole flock's flight. In…
Spectral anomaly methods for aerial detection using KUT nuisance rejection
NASA Astrophysics Data System (ADS)
Detwiler, R. S.; Pfund, D. M.; Myjak, M. J.; Kulisek, J. A.; Seifert, C. E.
2015-06-01
This work discusses the application and optimization of a spectral anomaly method for the real-time detection of gamma radiation sources from an aerial helicopter platform. Aerial detection presents several key challenges over ground-based detection. For one, larger and more rapid background fluctuations are typical due to higher speeds, larger field of view, and geographically induced background changes. As well, the possible large altitude or stand-off distance variations cause significant steps in background count rate as well as spectral changes due to increased gamma-ray scatter with detection at higher altitudes. The work here details the adaptation and optimization of the PNNL-developed algorithm Nuisance-Rejecting Spectral Comparison Ratios for Anomaly Detection (NSCRAD), a spectral anomaly method previously developed for ground-based applications, for an aerial platform. The algorithm has been optimized for two multi-detector systems; a NaI(Tl)-detector-based system and a CsI detector array. The optimization here details the adaptation of the spectral windows for a particular set of target sources to aerial detection and the tailoring for the specific detectors. As well, the methodology and results for background rejection methods optimized for the aerial gamma-ray detection using Potassium, Uranium and Thorium (KUT) nuisance rejection are shown. Results indicate that use of a realistic KUT nuisance rejection may eliminate metric rises due to background magnitude and spectral steps encountered in aerial detection due to altitude changes and geographically induced steps such as at land-water interfaces.
Okuchi, Sachi; Okada, Tomohisa; Fujimoto, Koji; Fushimi, Yasutaka; Kido, Aki; Yamamoto, Akira; Kanagaki, Mitsunori; Dodo, Toshiki; Mehemed, Taha M; Miyazaki, Mitsue; Zhou, Xiangzhi; Togashi, Kaori
2014-06-01
To optimize visualization of lenticulostriate artery (LSA) by time-of-flight (TOF) magnetic resonance angiography (MRA) with slice-selective off-resonance sinc (SORS) saturation transfer contrast pulses and to compare capability of optimal TOF-MRA and flow-sensitive black-blood (FSBB) MRA to visualize the LSA at 3T. This study was approved by the local ethics committee, and written informed consent was obtained from all the subjects. TOF-MRA was optimized in 20 subjects by comparing SORS pulses of different flip angles: 0, 400°, and 750°. Numbers of LSAs were counted. The optimal TOF-MRA was compared to FSBB-MRA in 21 subjects. Images were evaluated by the numbers and length of visualized LSAs. LSAs were significantly more visualized in TOF-MRA with SORS pulses of 400° than others (P < .003). When the optimal TOF-MRA was compared to FSBB-MRA, the visualization of LSA using FSBB (mean branch numbers 11.1, 95% confidence interval (CI) 10.0-12.1; mean total length 236 mm, 95% CI 210-263 mm) was significantly better than using TOF (4.7, 95% CI 4.1-5.3; 78 mm, 95% CI 67-89 mm) for both numbers and length of the LSA (P < .0001). LSA visualization was best with 400° SORS pulses for TOF-MRA but FSBB-MRA was better than TOF-MRA, which indicates its clinical potential to investigate the LSA on a 3T magnetic resonance imaging. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Tutorial on Using Regression Models with Count Outcomes Using R
ERIC Educational Resources Information Center
Beaujean, A. Alexander; Morgan, Grant B.
2016-01-01
Education researchers often study count variables, such as times a student reached a goal, discipline referrals, and absences. Most researchers that study these variables use typical regression methods (i.e., ordinary least-squares) either with or without transforming the count variables. In either case, using typical regression for count data can…
NASA Astrophysics Data System (ADS)
Chen, Xiang; Li, Jingchao; Han, Hui; Ying, Yulong
2018-05-01
Because of the limitations of the traditional fractal box-counting dimension algorithm in subtle feature extraction of radiation source signals, a dual improved generalized fractal box-counting dimension eigenvector algorithm is proposed. First, the radiation source signal was preprocessed, and a Hilbert transform was performed to obtain the instantaneous amplitude of the signal. Then, the improved fractal box-counting dimension of the signal instantaneous amplitude was extracted as the first eigenvector. At the same time, the improved fractal box-counting dimension of the signal without the Hilbert transform was extracted as the second eigenvector. Finally, the dual improved fractal box-counting dimension eigenvectors formed the multi-dimensional eigenvectors as signal subtle features, which were used for radiation source signal recognition by the grey relation algorithm. The experimental results show that, compared with the traditional fractal box-counting dimension algorithm and the single improved fractal box-counting dimension algorithm, the proposed dual improved fractal box-counting dimension algorithm can better extract the signal subtle distribution characteristics under different reconstruction phase space, and has a better recognition effect with good real-time performance.
Reliability of Wearable Inertial Measurement Units to Measure Physical Activity in Team Handball.
Luteberget, Live S; Holme, Benjamin R; Spencer, Matt
2018-04-01
To assess the reliability and sensitivity of commercially available inertial measurement units to measure physical activity in team handball. Twenty-two handball players were instrumented with 2 inertial measurement units (OptimEye S5; Catapult Sports, Melbourne, Australia) taped together. They participated in either a laboratory assessment (n = 10) consisting of 7 team handball-specific tasks or field assessment (n = 12) conducted in 12 training sessions. Variables, including PlayerLoad™ and inertial movement analysis (IMA) magnitude and counts, were extracted from the manufacturers' software. IMA counts were divided into intensity bands of low (1.5-2.5 m·s -1 ), medium (2.5-3.5 m·s -1 ), high (>3.5 m·s -1 ), medium/high (>2.5 m·s -1 ), and total (>1.5 m·s -1 ). Reliability between devices and sensitivity was established using coefficient of variation (CV) and smallest worthwhile difference (SWD). Laboratory assessment: IMA magnitude showed a good reliability (CV = 3.1%) in well-controlled tasks. CV increased (4.4-6.7%) in more-complex tasks. Field assessment: Total IMA counts (CV = 1.8% and SWD = 2.5%), PlayerLoad (CV = 0.9% and SWD = 2.1%), and their associated variables (CV = 0.4-1.7%) showed a good reliability, well below the SWD. However, the CV of IMA increased when categorized into intensity bands (2.9-5.6%). The reliability of IMA counts was good when data were displayed as total, high, or medium/high counts. A good reliability for PlayerLoad and associated variables was evident. The CV of the previously mentioned variables was well below the SWD, suggesting that OptimEye's inertial measurement unit and its software are sensitive for use in team handball.
A Prescription for List-Mode Data Processing Conventions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef
There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error-values associatedmore » with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we will present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.« less
Irving, David B.; Finn, James E.; Larson, James P.
1995-01-01
We began a three year study in 1987 to test the feasibility of using sonar in the Togiak River to estimate salmon escapements. Current methods rely on periodic aerial surveys and a counting tower at river kilometer 97. Escapement estimates are not available until 10 to 14 days after the salmon enter the river. Water depth and turbidity preclude relocating the tower to the lower river and affect the reliability of aerial surveys. To determine whether an alternative method could be developed to improve the timeliness and accuracy of current escapement monitoring, Bendix sonar units were operated during 1987, 1988, and 1990. Two sonar stations were set up opposite each other at river kilometer 30 and were operated 24 hours per day, seven days per week. Catches from gill nets with 12, 14, and 20 cm stretch mesh, a beach seine, and visual observations were used to estimate species composition. Length and sex data were collected from salmon caught in the nets to assess sampling bias.In 1987, sonar was used to select optimal sites and enumerate coho salmon. In 1988 and 1990, the sites identified in 1987 were used to estimate the escapement of five salmon species. Sockeye salmon escapement was estimated at 512,581 and 589,321, chinook at 7,698 and 15,098, chum at 246,144 and 134,958, coho at 78,588 and 28,290, and pink at 96,167 and 131,484. Sonar estimates of sockeye salmon were two to three times the Alaska Department of Fish and Game's escapement estimate based on aerial surveys and tower counts. The source of error was probably a combination of over-estimating the total number of targets counted by the sonar and by incorrectly estimating species composition.Total salmon escapement estimates using sonar may be feasible but several more years of development are needed. Because of the overlapped salmon run timing, estimating species composition appears the most difficult aspect of using sonar for management. Possible improvements include using a larger beach seine or selecting gill net mesh sizes evenly spaced between 10 and 20 cm stretch mesh.Salmon counts at river kilometer 30 would reduce the lag time between salmon river entry and the escapement estimate to 2-5 days. Any further decrease in lag time, however, would require moving the sonar operations downriver into less desirable braided portions of the river.
Analysis of routine traffic count stations to optimize locations and frequency : final report.
DOT National Transportation Integrated Search
1981-06-01
This report describes a grouping of statewide permanent and key traffic counters on the basis of their geographic variations in traffic flow. Several factors were considered including the distance between clusters and urban versus rural areas. : Traf...
Hallas, Gary; Monis, Paul
2015-01-01
The enumeration of bacteria using plate-based counts is a core technique used by food and water microbiology testing laboratories. However, manual counting of bacterial colonies is both time and labour intensive, can vary between operators and also requires manual entry of results into laboratory information management systems, which can be a source of data entry error. An alternative is to use automated digital colony counters, but there is a lack of peer-reviewed validation data to allow incorporation into standards. We compared the performance of digital counting technology (ProtoCOL3) against manual counting using criteria defined in internationally recognized standard methods. Digital colony counting provided a robust, standardized system suitable for adoption in a commercial testing environment. The digital technology has several advantages:•Improved measurement of uncertainty by using a standard and consistent counting methodology with less operator error.•Efficiency for labour and time (reduced cost).•Elimination of manual entry of data onto LIMS.•Faster result reporting to customers.
Gautam, R; Vanderstichel, R; Boerlage, A S; Revie, C W; Hammell, K L
2017-03-01
Effectiveness of sea lice bath treatment is often assessed by comparing pre- and post-treatment counts. However, in practice, the post-treatment counting window varies from the day of treatment to several days after treatment. In this study, we assess the effect of post-treatment lag time on sea lice abundance estimates after chemical bath treatment using data from the sea lice data management program (Fish-iTrends) between 2010 and 2014. Data on two life stages, (i) adult female (AF) and (ii) pre-adult and adult male (PAAM), were aggregated at the cage level and log-transformed. Average sea lice counts by post-treatment lag time were computed for AF and PAAM and compared relative to treatment day, using linear mixed models. There were 720 observations (treatment events) that uniquely matched pre- and post-treatment counts from 53 farms. Lag time had a significant effect on the estimated sea lice abundance, which was influenced by season and pre-treatment sea lice levels. During summer, sea lice were at a minimum when counted 1 day post-treatment irrespective of pre-treatment sea lice levels, whereas in the spring and autumn, low levels were observed for PAAM over a longer interval of time, provided the pre-treatment sea lice levels were >5-10. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Boutet, J.; Debourdeau, M.; Laidevant, A.; Hervé, L.; Dinten, J.-M.
2010-02-01
Finding a way to combine ultrasound and fluorescence optical imaging on an endorectal probe may improve early detection of prostate cancer. A trans-rectal probe adapted to fluorescence diffuse optical tomography measurements was developed by our team. This probe is based on a pulsed NIR laser source, an optical fiber network and a time-resolved detection system. A reconstruction algorithm was used to help locate and quantify fluorescent prostate tumors. In this study, two different kinds of time-resolved detectors are compared: High Rate Imaging system (HRI) and a photon counting system. The HRI is based on an intensified multichannel plate and a CCD Camera. The temporal resolution is obtained through a gating of the HRI. Despite a low temporal resolution (300ps), this system allows a simultaneous acquisition of the signal from a large number of detection fibers. In the photon counting setup, 4 photomultipliers are connected to a Time Correlated Single Photon Counting (TCSPC) board, providing a better temporal resolution (0.1 ps) at the expense of a limited number of detection fibers (4). At last, we show that the limited number of detection fibers of the photon counting setup is enough for a good localization and dramatically improves the overall acquisition time. The photon counting approach is then validated through the localization of fluorescent inclusions in a prostate-mimicking phantom.
Layton, C; Lawrence, J M
1997-06-01
Black-belt subjects (10 men) were timed on each of the five Heian kata and the scores transformed by count. Trend analyses showed that increased performance time was significantly related to assumed complexity in Heian ranking.
NASA Astrophysics Data System (ADS)
Iwata, Tetsuo; Taga, Takanori; Mizuno, Takahiko
2018-02-01
We have constructed a high-efficiency, photon-counting phase-modulation fluorometer (PC-PMF) using a field-programmable gate array, which is a modified version of the photon-counting fluorometer (PCF) that works in a pulsed-excitation mode (Iwata and Mizuno in Meas Sci Technol 28:075501, 2017). The common working principle for both is the simultaneous detection of the photoelectron pulse train, which covers 64 ns with a 1.0-ns resolution time (1.0 ns/channel). The signal-gathering efficiency was improved more than 100 times over that of conventional time-correlated single-photon-counting at the expense of resolution time depending on the number of channels. The system dead time for building a histogram was eliminated, markedly shortening the measurement time for fluorescent samples with moderately high quantum yields. We describe the PC-PMF and make a brief comparison with the pulsed-excitation PCF in precision, demonstrating the potential advantage of PC-PMF.
Oketič, K; Matijašić, B Bogovič; Obermajer, T; Radulović, Z; Lević, S; Mirković, N; Nedović, V
2015-01-01
The aim of the study was to evaluate real-time PCR coupled with propidium monoazide (PMA) treatment for enumeration of microencapsulated probiotic lactobacilli microencapsulated in calcium alginate beads. Lactobacillus gasseri K7 (CCM 7710) and Lactobacillus delbrueckii subsp. bulgaricus (CCM 7712) were analysed by plate counting and PMA real-time PCR during storage at 4 °C for 90 days. PMA was effective in preventing PCR amplification of the target sequences of DNA released from heat-compromised bacteria. The values obtained by real-time PCR of non-treated samples were in general higher than those obtained by real-time PCR of PMA-treated samples or by plate counting, indicating the presence of sub-lethally injured cells. This study shows that plate count could not be completely replaced by culture independent method PMA real-time PCR for enumeration of probiotics, but may rather complement the well-established plate counting, providing useful information about the ratio of compromised bacteria in the samples.
Bias sputtered NbN and superconducting nanowire devices
NASA Astrophysics Data System (ADS)
Dane, Andrew E.; McCaughan, Adam N.; Zhu, Di; Zhao, Qingyuan; Kim, Chung-Soo; Calandri, Niccolo; Agarwal, Akshay; Bellei, Francesco; Berggren, Karl K.
2017-09-01
Superconducting nanowire single photon detectors (SNSPDs) promise to combine near-unity quantum efficiency with >100 megacounts per second rates, picosecond timing jitter, and sensitivity ranging from x-ray to mid-infrared wavelengths. However, this promise is not yet fulfilled, as superior performance in all metrics is yet to be combined into one device. The highest single-pixel detection efficiency and the widest bias windows for saturated quantum efficiency have been achieved in SNSPDs based on amorphous materials, while the lowest timing jitter and highest counting rates were demonstrated in devices made from polycrystalline materials. Broadly speaking, the amorphous superconductors that have been used to make SNSPDs have higher resistivities and lower critical temperature (Tc) values than typical polycrystalline materials. Here, we demonstrate a method of preparing niobium nitride (NbN) that has lower-than-typical superconducting transition temperature and higher-than-typical resistivity. As we will show, NbN deposited onto unheated SiO2 has a low Tc and high resistivity but is too rough for fabricating unconstricted nanowires, and Tc is too low to yield SNSPDs that can operate well at liquid helium temperatures. By adding a 50 W RF bias to the substrate holder during sputtering, the Tc of the unheated NbN films was increased by up to 73%, and the roughness was substantially reduced. After optimizing the deposition for nitrogen flow rates, we obtained 5 nm thick NbN films with a Tc of 7.8 K and a resistivity of 253 μΩ cm. We used this bias sputtered room temperature NbN to fabricate SNSPDs. Measurements were performed at 2.5 K using 1550 nm light. Photon count rates appeared to saturate at bias currents approaching the critical current, indicating that the device's quantum efficiency was approaching unity. We measured a single-ended timing jitter of 38 ps. The optical coupling to these devices was not optimized; however, integration with front-side optical structures to improve absorption should be straightforward. This material preparation was further used to fabricate nanocryotrons and a large-area imager device, reported elsewhere. The simplicity of the preparation and promising device performance should enable future high-performance devices.
Atom-counting in High Resolution Electron Microscopy:TEM or STEM - That's the question.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2017-03-01
In this work, a recently developed quantitative approach based on the principles of detection theory is used in order to determine the possibilities and limitations of High Resolution Scanning Transmission Electron Microscopy (HR STEM) and HR TEM for atom-counting. So far, HR STEM has been shown to be an appropriate imaging mode to count the number of atoms in a projected atomic column. Recently, it has been demonstrated that HR TEM, when using negative spherical aberration imaging, is suitable for atom-counting as well. The capabilities of both imaging techniques are investigated and compared using the probability of error as a criterion. It is shown that for the same incoming electron dose, HR STEM outperforms HR TEM under common practice standards, i.e. when the decision is based on the probability function of the peak intensities in HR TEM and of the scattering cross-sections in HR STEM. If the atom-counting decision is based on the joint probability function of the image pixel values, the dependence of all image pixel intensities as a function of thickness should be known accurately. Under this assumption, the probability of error may decrease significantly for atom-counting in HR TEM and may, in theory, become lower as compared to HR STEM under the predicted optimal experimental settings. However, the commonly used standard for atom-counting in HR STEM leads to a high performance and has been shown to work in practice. Copyright © 2017 Elsevier B.V. All rights reserved.
Mueller, Sherry A; Anderson, James E; Kim, Byung R; Ball, James C
2009-04-01
Effective bacterial control in cooling-tower systems requires accurate and timely methods to count bacteria. Plate-count methods are difficult to implement on-site, because they are time- and labor-intensive and require sterile techniques. Several field-applicable methods (dipslides, Petrifilm, and adenosine triphosphate [ATP] bioluminescence) were compared with the plate count for two sample matrices--phosphate-buffered saline solution containing a pure culture of Pseudomonas fluorescens and cooling-tower water containing an undefined mixed bacterial culture. For the pure culture, (1) counts determined on nutrient agar and plate-count agar (PCA) media and expressed as colony-forming units (CFU) per milliliter were equivalent to those on R2A medium (p = 1.0 and p = 1.0, respectively); (2) Petrifilm counts were not significantly different from R2A plate counts (p = 0.99); (3) the dipslide counts were up to 2 log units higher than R2A plate counts, but this discrepancy was not statistically significant (p = 0.06); and (4) a discernable correlation (r2 = 0.67) existed between ATP readings and plate counts. For cooling-tower water samples (n = 62), (1) bacterial counts using R2A medium were higher (but not significant; p = 0.63) than nutrient agar and significantly higher than tryptone-glucose yeast extract (TGE; p = 0.03) and PCA (p < 0.001); (2) Petrifilm counts were significantly lower than nutrient agar or R2A (p = 0.02 and p < 0.001, respectively), but not statistically different from TGE, PCA, and dipslides (p = 0.55, p = 0.69, and p = 0.91, respectively); (3) the dipslide method yielded bacteria counts 1 to 3 log units lower than nutrient agar and R2A (p < 0.001), but was not significantly different from Petrifilm (p = 0.91), PCA (p = 1.00) or TGE (p = 0.07); (4) the differences between dipslides and the other methods became greater with a 6-day incubation time; and (5) the correlation between ATP readings and plate counts varied from system to system, was poor (r2 values ranged from < 0.01 to 0.47), and the ATP method was not sufficiently sensitive to measure counts below approximately 10(4) CFU/mL.
7 CFR 1944.258 - Professional assessment committee.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., staff time is counted as its imputed value, and if the members are volunteers, their time is counted as volunteer time, according to sections 1944.145(c)(2) (ii) and (iv). (b) Duties of the PAC. The PAC is.... (3) The dollar value of PAC members' time spent on regular assessments after initial approval of...
7 CFR 1944.258 - Professional assessment committee.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., staff time is counted as its imputed value, and if the members are volunteers, their time is counted as volunteer time, according to sections 1944.145(c)(2) (ii) and (iv). (b) Duties of the PAC. The PAC is.... (3) The dollar value of PAC members' time spent on regular assessments after initial approval of...
24 CFR 700.135 - Professional assessment committee.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., staff time is counted as its imputed value, and if the members are volunteers, their time is counted as volunteer time, according to sections 700.145(c)(2) (ii) and (iv). (b) Duties of the PAC. The PAC is.... (3) The dollar value of PAC members' time spent on regular assessments after initial approval of...
24 CFR 700.135 - Professional assessment committee.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., staff time is counted as its imputed value, and if the members are volunteers, their time is counted as volunteer time, according to sections 700.145(c)(2) (ii) and (iv). (b) Duties of the PAC. The PAC is.... (3) The dollar value of PAC members' time spent on regular assessments after initial approval of...
Development of microcontroller based water flow measurement
NASA Astrophysics Data System (ADS)
Munir, Muhammad Miftahul; Surachman, Arif; Fathonah, Indra Wahyudin; Billah, Muhammad Aziz; Khairurrijal, Mahfudz, Hernawan; Rimawan, Ririn; Lestari, Slamet
2015-04-01
A digital instrument for measuring water flow was developed using an AT89S52 microcontroller, DS1302 real time clock (RTC), and EEPROM for an external memory. The sensor used for probing the current was a propeller that will rotate if immersed in a water flow. After rotating one rotation, the sensor sends one pulse and the number of pulses are counted for a certain time of counting. The measurement data, i.e. the number of pulses per unit time, are converted into water flow velocity (m/s) through a mathematical formula. The microcontroller counts the pulse sent by the sensor and the number of counted pulses are stored into the EEPROM memory. The time interval for counting is provided by the RTC and can be set by the operator. The instrument was tested under various time intervals ranging from 10 to 40 seconds and several standard propellers owned by Experimental Station for Hydraulic Structure and Geotechnics (BHGK), Research Institute for Water Resources (Pusair). Using the same propellers and water flows, it was shown that water flow velocities obtained from the developed digital instrument and those found by the provided analog one are almost similar.
A high dynamic range pulse counting detection system for mass spectrometry.
Collings, Bruce A; Dima, Martian D; Ivosev, Gordana; Zhong, Feng
2014-01-30
A high dynamic range pulse counting system has been developed that demonstrates an ability to operate at up to 2e8 counts per second (cps) on a triple quadrupole mass spectrometer. Previous pulse counting detection systems have typically been limited to about 1e7 cps at the upper end of the systems dynamic range. Modifications to the detection electronics and dead time correction algorithm are described in this paper. A high gain transimpedance amplifier is employed that allows a multi-channel electron multiplier to be operated at a significantly lower bias potential than in previous pulse counting systems. The system utilises a high-energy conversion dynode, a multi-channel electron multiplier, a high gain transimpedance amplifier, non-paralysing detection electronics and a modified dead time correction algorithm. Modification of the dead time correction algorithm is necessary due to a characteristic of the pulse counting electronics. A pulse counting detection system with the capability to count at ion arrival rates of up to 2e8 cps is described. This is shown to provide a linear dynamic range of nearly five orders of magnitude for a sample of aprazolam with concentrations ranging from 0.0006970 ng/mL to 3333 ng/mL while monitoring the m/z 309.1 → m/z 205.2 transition. This represents an upward extension of the detector's linear dynamic range of about two orders of magnitude. A new high dynamic range pulse counting system has been developed demonstrating the ability to operate at up to 2e8 cps on a triple quadrupole mass spectrometer. This provides an upward extension of the detector's linear dynamic range by about two orders of magnitude over previous pulse counting systems. Copyright © 2013 John Wiley & Sons, Ltd.
Todd E. Ristau; Susan L. Stout
2014-01-01
Assessment of regeneration can be time-consuming and costly. Often, foresters look for ways to minimize the cost of doing inventories. One potential method to reduce time required on a plot is use of percent cover data rather than seedling count data to determine stocking. Robust linear regression analysis was used in this report to predict seedling count data from...
The Influence of Time Spent in Outdoor Play on Daily and Aerobic Step Count in Costa Rican Children
ERIC Educational Resources Information Center
Morera Castro, Maria del Rocio
2011-01-01
The purpose of this study is to examine the influence of time spent in outdoor play (i.e., on weekday and weekend days) on daily (i.e., average step count) and aerobic step count (i.e., average moderate to vigorous physical activity [MVPA] during the weekdays and weekend days) in fifth grade Costa Rican children. It was hypothesized that: (a)…
North American Veterinary Licensing Examination pacing study.
Subhiyah, Raja G; Boyce, John R
2010-01-01
The National Board of Veterinary Medical Examiners was interested in the possible effects of word count on the outcomes of the North American Veterinary Licensing Examination. In this study, the authors investigated the effects of increasing word count on the pacing of examinees during each section of the examination and on the performance of examinees on the items. Specifically, the authors analyzed the effect of item word count on the average time spent on each item within a section of the examination, the average number of items omitted at the end of a section, and the average difficulty of items as a function of presentation order. The average word count per item increased from 2001 to 2008. As expected, there was a relationship between word count and time spent on the item. No significant relationship was found between word count and item difficulty, and an analysis of omitted items and pacing patterns showed no indication of overall pacing problems.
Reduction of Energy Intake using Just-In-Time Feedback from a Wearable Sensor System
Farooq, Muhammad; McCrory, Megan A.; Sazonov, Edward
2017-01-01
Objective This work explored the potential use of a wearable sensor system for providing just-in-time (JIT) feedback on the progression of a meal and tested its ability to reduce the total food mass intake. Methods Eighteen participants each consumed three meals in a lab while monitored by a wearable sensor system capable of accurately tracking chew counts. The baseline visit was used to establish the self-determined ingested mass and the associated chew counts. Real-time feedback on chew counts was provided in the next two visits during which the target chew counts was either the same as that at baseline or the baseline chew counts reduced by 25%, in randomized order. The target was concealed from the participant and from the experimenter. Nonparametric repeated-measures ANOVA were performed to compare mass of intake, meal duration, and ratings of hunger, appetite, and thirst across 3 meals. Results JIT feedback targeting a 25% reduction in chew counts resulted in a reduction in mass and energy intake without affecting perceived hunger or fullness. Conclusion JIT feedback on chewing behavior may reduce intake within a meal. This system can be further used to help develop individualized strategies to provide just-in-time adaptive interventions for reducing energy intake. PMID:28233942
NASA Astrophysics Data System (ADS)
Chen, Yuan-Ho
2017-05-01
In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of TDC with a traditional tapped delay line (TDL) architecture, without the need for nonlinearity calibration. When implemented in a Xilinx Vertix-5 FPGA device, the proposed CWD-TDC achieved time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
Alivov, Yahya; Baturin, Pavlo; Le, Huy Q.; Ducote, Justin; Molloi, Sabee
2014-01-01
We investigated the effect of different imaging parameters such as dose, beam energy, energy resolution, and number of energy bins on image quality of K-edge spectral computed tomography (CT) of gold nanoparticles (GNP) accumulated in an atherosclerotic plaque. Maximum likelihood technique was employed to estimate the concentration of GNP, which served as a targeted intravenous contrast material intended to detect the degree of plaque's inflammation. The simulations studies used a single slice parallel beam CT geometry with an X-ray beam energy ranging between 50 and 140 kVp. The synthetic phantoms included small (3 cm in diameter) cylinder and chest (33x24 cm2) phantom, where both phantoms contained tissue, calcium, and gold. In the simulation studies GNP quantification and background (calcium and tissue) suppression task were pursued. The X-ray detection sensor was represented by an energy resolved photon counting detector (e.g., CdZnTe) with adjustable energy bins. Both ideal and more realistic (12% FWHM energy resolution) implementations of photon counting detector were simulated. The simulations were performed for the CdZnTe detector with pixel pitch of 0.5-1 mm, which corresponds to the performance without significant charge sharing and cross-talk effects. The Rose model was employed to estimate the minimum detectable concentration of GNPs. A figure of merit (FOM) was used to optimize the X-ray beam energy (kVp) to achieve the highest signal-to-noise ratio (SNR) with respect to patient dose. As a result, the successful identification of gold and background suppression was demonstrated. The highest FOM was observed at 125 kVp X-ray beam energy. The minimum detectable GNP concentration was determined to be approximately 1.06 μmol/mL (0.21 mg/mL) for an ideal detector and about 2.5 μmol/mL (0.49 mg/mL) for more realistic (12% FWHM) detector. The studies show the optimal imaging parameters at lowest patient dose using an energy resolved photon counting detector to image GNP in an atherosclerotic plaque. PMID:24334301
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, S; Vedantham, S; Karellas, A
Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single energy-window mode to include ≥10 KeV photons. The detector had hexagonal pixels with apothem of 30 microns resulting in pixel spacing of 60 and 51.96 microns along the two orthogonal directions.more » Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function that accounted for the hexagonal sampling. The optimal square pixel size was determined in two ways; the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that from the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at Nyquist frequencies alone resulted in 54 microns square pixels. For the photon-counting CdTe detector and after resampling to 53 microns square pixels using quadratic interpolation, the presampling MTF at Nyquist frequency of 9.434 cycles/mm along the two directions were 0.501 and 0.507. Conclusion: Hexagonal pixel array photon-counting CdTe detector after resampling to square pixels provides high-resolution imaging suitable for fluoroscopy.« less
Moretti, Pierangelo; Probo, Monica; Cantoni, Andrea; Paltrinieri, Saverio; Giordano, Alessia
2016-08-01
Retained placenta (RP) is often diagnosed in high-yielding dairy cows and can negatively affect reproductive performances. The objective of the present study was to investigate the hematological and biochemical profile of cows with RP before and immediately after parturition, with particular emphasis on neutrophil counts, since a previous study demonstrated the presence of peripheral neutropenia in dairy cows with RP sampled a few days after parturition. Results from 12 Holstein cows affected by RP and from 17 clinically healthy controls sampled one week pre-partum, within 12h after calving and between 48 and 72h after parturition were compared between groups and over time. Compared with controls, cows with RP had lower lymphocyte counts before parturition, lower leukocyte and neutrophil counts at parturition, lower monocyte counts at all times, and higher β-hydroxybutyrate before and after parturition. Erythroid and biochemical parameters were similar over time in both groups, whereas RP cows did not show the increase of neutrophil counts that occurs in controls at parturition. Hence, the finding of a lower neutrophil count in a routinely hemogram performed at parturition could be used as an alarm signal suggesting to monitor the affected animals. Moreover, although the underlying pathogenetic mechanism should be better investigated, the present study describes for the first time the association between altered blood leukocyte concentrations at parturition in RP compared to control cows. Copyright © 2016 Elsevier Ltd. All rights reserved.
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency- modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency- modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference- detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photoncounting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used. The demonstrated improvement is a factor of 1.414 over frequency-domainbased estimation. If the target interrogating photons and the local reference field photons are costed equally, the optimal allocation of photons between these two arms is to have them equally distributed. This is different than the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection, nor Fourier-domain peak detection, which are the staples of the state-of-the-art systems, is optimal when a weak local oscillator is employed.
Sequence comparison alignment-free approach based on suffix tree and L-words frequency.
Soares, Inês; Goios, Ana; Amorim, António
2012-01-01
The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts by a computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L-L-words--in each sequence is rapidly calculated. Based on the L-words frequency profile of each sequence, a pairwise standard Euclidean distance is then computed producing a symmetric genetic distance matrix, which can be used to generate a neighbor joining dendrogram or a multidimensional scaling graph. We present an improvement to word counting alignment-free approaches for sequence comparison, by determining a single optimal word length and combining suffix tree structures to the word counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in Python language and is freely available on the web.
Slama, Hichem; Fery, Patrick; Verheulpen, Denis; Vanzeveren, Nathalie; Van Bogaert, Patrick
2015-07-01
Long-acting medications have been developed and approved for use in the treatment of attention-deficit hyperactivity disorder (ADHD). These compounds are intended to optimize and maintain symptoms control throughout the day. We tested prolonged effects of osmotic-release oral system methylphenidate on both attention and inhibition, in the late afternoon. A double-blind, randomized, placebo-controlled study was conducted in 36 boys (7-12 years) with ADHD and 40 typically developing children. The ADHD children received an individualized dose of placebo or osmotic-release oral system methylphenidate. They were tested about 8 hours after taking with 2 continuous performance tests (continuous performance test-X [CPT-X] and continuous performance test-AX [CPT-AX]) and a counting Stroop. A positive effect of osmotic-release oral system methylphenidate was present in CPT-AX with faster and less variable reaction times under osmotic-release oral system methylphenidate than under placebo, and no difference with typically developing children. In the counting Stroop, we found a decreased interference with osmotic-release oral system methylphenidate but no difference between children with ADHD under placebo and typically developing children. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Jakopic, Rozle; Richter, Stephan; Kühn, Heinz; Benedik, Ljudmila; Pihlar, Boris; Aregbe, Yetunde
2009-01-01
A sample preparation procedure for isotopic measurements using thermal ionization mass spectrometry (TIMS) was developed which employs the technique of carburization of rhenium filaments. Carburized filaments were prepared in a special vacuum chamber in which the filaments were exposed to benzene vapour as a carbon supply and carburized electrothermally. To find the optimal conditions for the carburization and isotopic measurements using TIMS, the influence of various parameters such as benzene pressure, carburization current and the exposure time were tested. As a result, carburization of the filaments improved the overall efficiency by one order of magnitude. Additionally, a new "multi-dynamic" measurement technique was developed for Pu isotope ratio measurements using a "multiple ion counting" (MIC) system. This technique was combined with filament carburization and applied to the NBL-137 isotopic standard and samples of the NUSIMEP 5 inter-laboratory comparison campaign, which included certified plutonium materials at the ppt-level. The multi-dynamic measurement technique for plutonium, in combination with filament carburization, has been shown to significantly improve the precision and accuracy for isotopic analysis of environmental samples with low-levels of plutonium.
Gigahertz-gated InGaAs/InP single-photon detector with detection efficiency exceeding 55% at 1550 nm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comandar, L. C.; Engineering Department, Cambridge University, 9 J J Thomson Ave, Cambridge CB3 0FA; Fröhlich, B.
We report on a gated single-photon detector based on InGaAs/InP avalanche photodiodes (APDs) with a single-photon detection efficiency exceeding 55% at 1550 nm. Our detector is gated at 1 GHz and employs the self-differencing technique for gate transient suppression. It can operate nearly dead time free, except for the one clock cycle dead time intrinsic to self-differencing, and we demonstrate a count rate of 500 Mcps. We present a careful analysis of the optimal driving conditions of the APD measured with a dead time free detector characterization setup. It is found that a shortened gate width of 360 ps together with anmore » increased driving signal amplitude and operation at higher temperatures leads to improved performance of the detector. We achieve an afterpulse probability of 7% at 50% detection efficiency with dead time free measurement and a record efficiency for InGaAs/InP APDs of 55% at an afterpulse probability of only 10.2% with a moderate dead time of 10 ns.« less
South Atlantic Anomaly Entry and Exit as Measured by the X-Ray Timing Explorer
NASA Technical Reports Server (NTRS)
Smith, Evan; Stark, Michael; Giles, Barry; Antunes, Sandy; Gawne, Bill
1996-01-01
The Rossi X-ray Timing Explorer (RXTE) carries instruments that must switch off high voltages (HV) when passing through the South Atlantic Anomaly (SAA). The High Energy X-ray Timing Experiment (HEXTE) contains a particle monitor that detects the increased particle flux associated with the SAA and autonomously reduces its voltage. The Proportional Counter Array (PCA) relies on uplinked predictions of SAA entry/exit times based on ephemeris data provided by the Flight Dynamics Facility. A third instrument, the All-Sky Monitor (ASM) also uses a predicted SAA model to reduce voltage when passing through the SAA. Data collected from the HEXTE particle monitor, as well as other instrument readings near the times of SAA entry/exit offer the potential for refining models of the boundaries of the SAA. The SAA has an increased particle flux which causes high rates of detection in the RXTE instruments designed to observe x-rays. The high counting rates could degrade the PCA if HV is not reduced during SAA passages. On the other hand, PCA downtime can be minimized and the science return can be optimized by having the best possible model of the SAA boundary. Thus, the PCA team planned an extensive effort during in-orbit checkout to utilize both the HEXTE particle monitor data and instrument counting rates to refine the model of the SAA boundary. The times of SAA entry and exit are compared with the definitive epemeris to determine the precise location (latitude and longitude) of the SAA boundary. Over time, the SAA and its perimeter were mapped. The RXTE Science Operations Center is continuously working to feed back the results of this effort into the science scheduling process, improving the SAA model as it affects the RXTE instruments, thus obtaining more accurate estimates of the SAA entry/exit times.
Affordable CZT SPECT with dose-time minimization (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hugg, James W.; Harris, Brian W.; Radley, Ian
2017-03-01
PURPOSE Pixelated CdZnTe (CZT) detector arrays are used in molecular imaging applications that can enable precision medicine, including small-animal SPECT, cardiac SPECT, molecular breast imaging (MBI), and general purpose SPECT. The interplay of gamma camera, collimator, gantry motion, and image reconstruction determines image quality and dose-time-FOV tradeoffs. Both dose and exam time can be minimized without compromising diagnostic content. METHODS Integration of pixelated CZT detectors with advanced ASICs and readout electronics improves system performance. Because historically CZT was expensive, the first clinical applications were limited to small FOV. Radiation doses were initially high and exam times long. Advances have significantly improved efficiency of CZT-based molecular imaging systems and the cost has steadily declined. We have built a general purpose SPECT system using our 40 cm x 53 cm CZT gamma camera with 2 mm pixel pitch and characterized system performance. RESULTS Compared to NaI scintillator gamma cameras: intrinsic spatial resolution improved from 3.8 mm to 2.0 mm; energy resolution improved from 9.8% to <4 % at 140 keV; maximum count rate is <1.5 times higher; non-detection camera edges are reduced 3-fold. Scattered photons are greatly reduced in the photopeak energy window; image contrast is improved; and the optimal FOV is increased to the entire camera area. CONCLUSION Continual improvements in CZT detector arrays for molecular imaging, coupled with optimal collimator and image reconstruction, result in minimized dose and exam time. With CZT cost improving, affordable whole-body CZT general purpose SPECT is expected to enable precision medicine applications.
Optimal Allocation of Sampling Effort in Depletion Surveys
We consider the problem of designing a depletion or removal survey as part of estimating animal abundance for populations with imperfect capture or detection rates. In a depletion survey, animals are captured from a given area, counted, and withheld from the population. This proc...
Multiple-Event, Single-Photon Counting Imaging Sensor
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.
2011-01-01
The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count number registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time can t be too short, this will lead to very low dynamic range and make the sensor merely useful for very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register as more than one million (or more) photon-counting events during a frame time. Because of a consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting under ultra-low light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
Designing the X-Ray Microcalorimeter Spectrometer for Optimal Science Return
NASA Technical Reports Server (NTRS)
Ptak, Andrew; Bandler, Simon R.; Bookbinder, Jay; Kelley, Richard L.; Petre, Robert; Smith, Randall K.; Smith, Stephen
2013-01-01
Recent advances in X-ray microcalorimeters enable a wide range of possible focal plane designs for the X-ray Microcalorimeter Spectrometer (XMS) instrument on the future Advanced X-ray Spectroscopic Imaging Observatory (AXSIO) or X-ray Astrophysics Probe (XAP). Small pixel designs (75 microns) oversample a 5-10" PSF by a factor of 3-6 for a 10 m focal length, enabling observations at both high count rates and high energy resolution. Pixel designs utilizing multiple absorbers attached to single transition-edge sensors can extend the focal plane to cover a significantly larger field of view, albeit at a cost in maximum count rate and energy resolution. Optimizing the science return for a given cost and/or complexity is therefore a non-trivial calculation that includes consideration of issues such as the mission science drivers, likely targets, mirror size, and observing efficiency. We present a range of possible designs taking these factors into account and their impacts on the science return of future large effective-area X-ray spectroscopic missions.
Dertli, Enes; Toker, Omer S; Durak, M Zeki; Yilmaz, Mustafa T; Tatlısu, Nevruz Berna; Sagdic, Osman; Cankurt, Hasan
2016-01-20
This study aimed to investigate the role of in situ exopolysaccharide (EPS) production by EPS(+)Streptococcus thermophilus strains on physicochemical, rheological, molecular, microstructural and sensory properties of ice cream in order to develop a fermented and consequently functional ice-cream in which no stabilizers would be required in ice-cream production. For this purpose, the effect of EPS producing strains (control, strain 1, strain 2 and mixture) and fermentation conditions (fermentation temperature; 32, 37 and 42 °C and time; 2, 3 and 4h) on pH, S. thermophilus count, EPS amount, consistency coefficient (K), and apparent viscosity (η50) were investigated and optimized using single and multiple response optimization tools of response surface methodology. Optimization analyses indicated that functional ice-cream should be fermented with strain 1 or strain mixture at 40-42 °C for 4h in order to produce the most viscous ice-cream with maximum EPS content. Optimization analysis results also revealed that strain specific conditions appeared to be more effective factor on in situ EPS production amount, K and η50 parameters than did fermentation temperature and time. The rheological analysis of the ice-cream produced by EPS(+) strains revealed its high viscous and pseudoplastic non-Newtonian fluid behavior, which demonstrates potential of S. thermophilus EPS as thickening and gelling agent in dairy industry. FTIR analysis proved that the EPS in ice-cream corresponded to a typical EPS, as revealed by the presence of carboxyl, hydroxyl and amide groups with additional α-glycosidic linkages. SEM studies demonstrated that it had a web-like compact microstructure with pores in ice-cream, revealing its application possibility in dairy products to improve their rheological properties. Copyright © 2015. Published by Elsevier Ltd.
Dynamic Histogram Analysis To Determine Free Energies and Rates from Biased Simulations.
Stelzl, Lukas S; Kells, Adam; Rosta, Edina; Hummer, Gerhard
2017-12-12
We present an algorithm to calculate free energies and rates from molecular simulations on biased potential energy surfaces. As input, it uses the accumulated times spent in each state or bin of a histogram and counts of transitions between them. Optimal unbiased equilibrium free energies for each of the states/bins are then obtained by maximizing the likelihood of a master equation (i.e., first-order kinetic rate model). The resulting free energies also determine the optimal rate coefficients for transitions between the states or bins on the biased potentials. Unbiased rates can be estimated, e.g., by imposing a linear free energy condition in the likelihood maximization. The resulting "dynamic histogram analysis method extended to detailed balance" (DHAMed) builds on the DHAM method. It is also closely related to the transition-based reweighting analysis method (TRAM) and the discrete TRAM (dTRAM). However, in the continuous-time formulation of DHAMed, the detailed balance constraints are more easily accounted for, resulting in compact expressions amenable to efficient numerical treatment. DHAMed produces accurate free energies in cases where the common weighted-histogram analysis method (WHAM) for umbrella sampling fails because of slow dynamics within the windows. Even in the limit of completely uncorrelated data, where WHAM is optimal in the maximum-likelihood sense, DHAMed results are nearly indistinguishable. We illustrate DHAMed with applications to ion channel conduction, RNA duplex formation, α-helix folding, and rate calculations from accelerated molecular dynamics. DHAMed can also be used to construct Markov state models from biased or replica-exchange molecular dynamics simulations. By using binless WHAM formulated as a numerical minimization problem, the bias factors for the individual states can be determined efficiently in a preprocessing step and, if needed, optimized globally afterward.
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC counting. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infection within human body. The current practice of manual WBC counting is laborious and a very subjective assessment which leads to the invention of computer aided system (CAS) with rigorous image processing solution. In the CAS counting work, segmentation is the crucial step to ensure the accuracy of the counted cell. The optimal segmentation strategy that can work under various blood smeared image acquisition conditions is remain a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis to get the best counting outcome is elaborated. Initially, color space correction is applied to the original blood smeared image to standardize the image color intensity level. Next, white blood cell segmentation is performed by using combination of several color analysis subtraction which are RGB, CMYK and HSV, and Otsu thresholding. Noises and unwanted regions that present after the segmentation process is eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filter. Eventually, Circle Hough Transform (CHT) method is applied to the segmented image to estimate the number of WBC including the one under the clump region. From the experiment, it is found that G-S yields the best performance.
Systematic wavelength selection for improved multivariate spectral analysis
Thomas, Edward V.; Robinson, Mark R.; Haaland, David M.
1995-01-01
Methods and apparatus for determining in a biological material one or more unknown values of at least one known characteristic (e.g. the concentration of an analyte such as glucose in blood or the concentration of one or more blood gas parameters) with a model based on a set of samples with known values of the known characteristics and a multivariate algorithm using several wavelength subsets. The method includes selecting multiple wavelength subsets, from the electromagnetic spectral region appropriate for determining the known characteristic, for use by an algorithm wherein the selection of wavelength subsets improves the model's fitness of the determination for the unknown values of the known characteristic. The selection process utilizes multivariate search methods that select both predictive and synergistic wavelengths within the range of wavelengths utilized. The fitness of the wavelength subsets is determined by the fitness function F=.function.(cost, performance). The method includes the steps of: (1) using one or more applications of a genetic algorithm to produce one or more count spectra, with multiple count spectra then combined to produce a combined count spectrum; (2) smoothing the count spectrum; (3) selecting a threshold count from a count spectrum to select these wavelength subsets which optimize the fitness function; and (4) eliminating a portion of the selected wavelength subsets. The determination of the unknown values can be made: (1) noninvasively and in vivo; (2) invasively and in vivo; or (3) in vitro.
Downsampling Photodetector Array with Windowing
NASA Technical Reports Server (NTRS)
Patawaran, Ferze D.; Farr, William H.; Nguyen, Danh H.; Quirk, Kevin J.; Sahasrabudhe, Adit
2012-01-01
In a photon counting detector array, each pixel in the array produces an electrical pulse when an incident photon on that pixel is detected. Detection and demodulation of an optical communication signal that modulated the intensity of the optical signal requires counting the number of photon arrivals over a given interval. As the size of photon counting photodetector arrays increases, parallel processing of all the pixels exceeds the resources available in current application-specific integrated circuit (ASIC) and gate array (GA) technology; the desire for a high fill factor in avalanche photodiode (APD) detector arrays also precludes this. Through the use of downsampling and windowing portions of the detector array, the processing is distributed between the ASIC and GA. This allows demodulation of the optical communication signal incident on a large photon counting detector array, as well as providing architecture amenable to algorithmic changes. The detector array readout ASIC functions as a parallel-to-serial converter, serializing the photodetector array output for subsequent processing. Additional downsampling functionality for each pixel is added to this ASIC. Due to the large number of pixels in the array, the readout time of the entire photodetector is greater than the time between photon arrivals; therefore, a downsampling pre-processing step is done in order to increase the time allowed for the readout to occur. Each pixel drives a small counter that is incremented at every detected photon arrival or, equivalently, the charge in a storage capacitor is incremented. At the end of a user-configurable counting period (calculated independently from the ASIC), the counters are sampled and cleared. This downsampled photon count information is then sent one counter word at a time to the GA. For a large array, processing even the downsampled pixel counts exceeds the capabilities of the GA. Windowing of the array, whereby several subsets of pixels are designated for processing, is used to further reduce the computational requirements. The grouping of the designated pixel frame as the photon count information is sent one word at a time to the GA, the aggregation of the pixels in a window can be achieved by selecting only the designated pixel counts from the serial stream of photon counts, thereby obviating the need to store the entire frame of pixel count in the gate array. The pixel count se quence from each window can then be processed, forming lower-rate pixel statistics for each window. By having this processing occur in the GA rather than in the ASIC, future changes to the processing algorithm can be readily implemented. The high-bandwidth requirements of a photon counting array combined with the properties of the optical modulation being detected by the array present a unique problem that has not been addressed by current CCD or CMOS sensor array solutions.
Extending the Binomial Checkpointing Technique for Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walther, Andrea; Narayanan, Sri Hari Krishna
In terms of computing time, adjoint methods offer a very attractive alternative to compute gradient information, re- quired, e.g., for optimization purposes. However, together with this very favorable temporal complexity result comes a memory requirement that is in essence proportional with the operation count of the underlying function, e.g., if algo- rithmic differentiation is used to provide the adjoints. For this reason, checkpointing approaches in many variants have become popular. This paper analyzes an extension of the so-called binomial approach to cover also possible failures of the computing systems. Such a measure of precaution is of special interest for massivemore » parallel simulations and adjoint calculations where the mean time between failure of the large scale computing system is smaller than the time needed to complete the calculation of the adjoint information. We de- scribe the extensions of standard checkpointing approaches required for such resilience, provide a corresponding imple- mentation and discuss numerical results.« less
Long-range depth profiling of camouflaged targets using single-photon detection
NASA Astrophysics Data System (ADS)
Tobin, Rachael; Halimi, Abderrahim; McCarthy, Aongus; Ren, Ximing; McEwan, Kenneth J.; McLaughlin, Stephen; Buller, Gerald S.
2018-03-01
We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Airline fleet assignment involves the allocation of aircraft to a set of flights legs in order to meet passenger demand, while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly- constrained problems (129 variables, 184 constraints).
Waiting time distribution revealing the internal spin dynamics in a double quantum dot
NASA Astrophysics Data System (ADS)
Ptaszyński, Krzysztof
2017-07-01
Waiting time distribution and the zero-frequency full counting statistics of unidirectional electron transport through a double quantum dot molecule attached to spin-polarized leads are analyzed using the quantum master equation. The waiting time distribution exhibits a nontrivial dependence on the value of the exchange coupling between the dots and the gradient of the applied magnetic field, which reveals the oscillations between the spin states of the molecule. The zero-frequency full counting statistics, on the other hand, is independent of the aforementioned quantities, thus giving no insight into the internal dynamics. The fact that the waiting time distribution and the zero-frequency full counting statistics give a nonequivalent information is associated with two factors. Firstly, it can be explained by the sensitivity to different timescales of the dynamics of the system. Secondly, it is associated with the presence of the correlation between subsequent waiting times, which makes the renewal theory, relating the full counting statistics and the waiting time distribution, no longer applicable. The study highlights the particular usefulness of the waiting time distribution for the analysis of the internal dynamics of mesoscopic systems.
Aspects of Motor Performance and Preacademic Learning.
ERIC Educational Resources Information Center
Feder, Katya; Kerr, Robert
1996-01-01
The Miller Assessment for Preschoolers (MAP) and a number/counting test were given to 50 4- and 5-year-olds. Low performance on counting was related to significantly slower average response time, overshoot movement time, and reaction time, indicating perceptual-motor difficulty. Low MAP scores indicated difficulty processing visual spatial…
Assessment of Differing Definitions of Accelerometer Nonwear Time
ERIC Educational Resources Information Center
Evenson, Kelly R.; Terry, James W., Jr.
2009-01-01
Measuring physical activity with objective tools, such as accelerometers, is becoming more common. Accelerometers measure acceleration multiple times within a given frequency and summarize this as a count over a pre-specified time period or epoch. The resultant count represents acceleration over the epoch length. Accelerometers eliminate biases…
NASA Technical Reports Server (NTRS)
Smith, S. J.; Adams, J. S.; Bandler, S. R.; Betancourt-Martinez, G. L.; Chervenak, J. A.; Chiao, M. P.; Eckart, M. E.; Finkbeiner, F. M.; Kelley, R. L.; Kilbourne, C. A.;
2016-01-01
The focal plane of the X-ray integral field unit (X-IFU) for ESA's Athena X-ray observatory will consist of approximately 4000 transition edge sensor (TES) x-ray microcalorimeters optimized for the energy range of 0.2 to 12 kiloelectronvolts. The instrument will provide unprecedented spectral resolution of approximately 2.5 electronvolts at energies of up to 7 kiloelectronvolts and will accommodate photon fluxes of 1 milliCrab (90 counts per second) for point source observations. The baseline configuration is a uniform large pixel array (LPA) of 4.28 arcseconds pixels that is read out using frequency domain multiplexing (FDM). However, an alternative configuration under study incorporates an 18 by × 18 small pixel array (SPA) of 2 arcseconds pixels in the central approximately 36 arcseconds region. This hybrid array configuration could be designed to accommodate higher fluxes of up to 10 milliCrabs (900 counts per second) or alternately for improved spectral performance (less than 1.5 electronvolts) at low count-rates. In this paper we report on the TES pixel designs that are being optimized to meet these proposed LPA and SPA configurations. In particular we describe details of how important TES parameters are chosen to meet the specific mission criteria such as energy resolution, count-rate and quantum efficiency, and highlight performance trade-offs between designs. The basis of the pixel parameter selection is discussed in the context of existing TES arrays that are being developed for solar and x-ray astronomy applications. We describe the latest results on DC biased diagnostic arrays as well as large format kilo-pixel arrays and discuss the technical challenges associated with integrating different array types on to a single detector die.
Studying the effect of weather conditions on daily crash counts using a discrete time-series model.
Brijs, Tom; Karlis, Dimitris; Wets, Geert
2008-05-01
In previous research, significant effects of weather conditions on car crashes have been found. However, most studies use monthly or yearly data and only few studies are available analyzing the impact of weather conditions on daily car crash counts. Furthermore, the studies that are available on a daily level do not explicitly model the data in a time-series context, hereby ignoring the temporal serial correlation that may be present in the data. In this paper, we introduce an integer autoregressive model for modelling count data with time interdependencies. The model is applied to daily car crash data, metereological data and traffic exposure data from the Netherlands aiming at examining the risk impact of weather conditions on the observed counts. The results show that several assumptions related to the effect of weather conditions on crash counts are found to be significant in the data and that if serial temporal correlation is not accounted for in the model, this may produce biased results.
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
A Murine Model of Robotic Training to Evaluate Skeletal Muscle Recovery after Injury.
Lai, Stefano; Panarese, Alessandro; Lawrence, Ross; Boninger, Michael L; Micera, Silvestro; Ambrosio, Fabrisia
2017-04-01
In vivo studies have suggested that motor exercise can improve muscle regeneration after injury. Nevertheless, preclinical investigations still lack reliable tools to monitor motor performance over time and to deliver optimal training protocols to maximize force recovery. Here, we evaluated the utility of a murine robotic platform (i) to detect early impairment and longitudinal recovery after acute skeletal muscle injury and (ii) to administer varying intensity training protocols to enhance forelimb motor performance. A custom-designed robotic platform was used to train mice to perform a forelimb retraction task. After an acute injury to bilateral biceps brachii muscles, animals performed a daily training protocol in the platform at high (HL) or low (LL) loading levels over the course of 3 wk. Control animals were not trained (NT). Motor performance was assessed by quantifying force, time, submovement count, and number of movement attempts to accomplish the task. Myofiber number and cross-sectional area at the injury site were quantified histologically. Two days after injury, significant differences in the time, submovement count, number of movement attempts, and exerted force were observed in all mice, as compared with baseline values. Interestingly, the recovery time of muscle force production differed significantly between intervention groups, with HL group showing a significantly accelerated recovery. Three weeks after injury, all groups showed motor performance comparable with baseline values. Accordingly, there were no differences in the number of myofibers or average cross-sectional area among groups after 3 wk. Our findings demonstrate the utility of our custom-designed robotic device for the quantitative assessment of skeletal muscle function in preclinical murine studies. Moreover, we demonstrate that this device may be used to apply varying levels of resistance longitudinally as a means manipulate physiological muscle responses.
Development and test of photon counting lidar
NASA Astrophysics Data System (ADS)
Wang, Chun-hui; Wang, Ao-you; Tao, Yu-liang; Li, Xu; Peng, Huan; Meng, Pei-bei
2018-02-01
In order to satisfy the application requirements of spaceborne three dimensional imaging lidar , a prototype of nonscanning multi-channel lidar based on receiver field of view segmentation was designed and developed. High repetition frequency micro-pulse lasers, optics fiber array and Geiger-mode APD, combination with time-correlated single photon counting technology, were adopted to achieve multi-channel detection. Ranging experiments were carried out outdoors. In low echo photon condition, target photon counting showed time correlated and noise photon counting were random. Detection probability and range precision versus threshold were described and range precision increased from 0.44 to 0.11 when threshold increased from 4 to 8.
Sturrock, R. F.; Ouma, J. H.; Kariuki, H. C.; Thiongo, F. W.; Koech, D. K.; Butterworth, A. E.
1997-01-01
A total of 19 annual or biannual audits were performed over a 12-year period by an independent microscopist on randomized subsamples of Kato slides examined for Schistosoma mansoni eggs by Kenyan microscopists from the Division of Vector-borne Diseases (DVBD). The recounts were invariably lower than the originals owing to some deterioration of the preparations between counts, but the two were strongly correlated: significant regressions of recounts on counts taking up 80-90% of the observed variance. Observer bias differed significantly between microscopists but remained stable over time, whereas repeatability of recounts on counts dropped slightly in periods of maximum work load but did not vary systematically with time. Approximately 7% of the counts and recounts disagreed on the presence or absence of eggs, but less than a third of these were negatives that were found positive on recount. False negatives dropped to 1.3% if duplicate counts were considered. The performance of the Kenyan microscopists was remarkably high and consistent throughout the 12-year period. This form of quality control is suitable for projects where limited funds preclude full-time supervisors using more sophisticated systems. PMID:9447781
Reduced lymphocyte count as an early marker for predicting infected pancreatic necrosis.
Shen, Xiao; Sun, Jing; Ke, Lu; Zou, Lei; Li, Baiqiang; Tong, Zhihui; Li, Weiqin; Li, Ning; Li, Jieshou
2015-10-26
Early occurrence of immunosuppression is a risk factor for infected pancreatic necrosis (IPN) in the patients with acute pancreatitis (AP). However, current measures for the immune systems are too cumbersome and not widely available. Significantly decreased lymphocyte count has been shown in patients with severe but not mild type of AP. Whereas, the correlation between the absolute lymphocyte count and IPN is still unknown. We conduct this study to reveal the exact relationship between early lymphocyte count and the development of IPN in the population of AP patients. One hundred and fifty-three patients with acute pancreatitis admitted to Jinling Hospital during the period of January 2012 to July 2014 were included in this retrospective study. The absolute lymphocyte count and other relevant parameters were measured on admission. The diagnosis of IPN was based on the definition of the revised Atlanta classification. Patients were divided into two groups according to the presence of IPN. Thirty patients developed infected necrotizing pancreatitis during the disease course. The absolute lymphocyte count in patients with IPN was significantly lower on admission (0.62 × 10(9)/L, interquartile range [IQR]: 0.46-0.87 × 10(9)/L vs. 0.91 × 10(9)/L, IQR: 0.72-1.27 × 10(9)/L, p < 0.001) and throughout the whole clinical course than those without IPN. Logistic regression indicated that reduced lymphocyte count was an independent risk factor for IPN. The optimal cut-offs from ROC curve was 0.66 × 10(9)/L giving sensitivity of 83.7 % and specificity of 66.7 %. Reduced lymphocyte count within 48 h of AP onset is significantly and independently associated with the development of IPN.
ERIC Educational Resources Information Center
McGarvey, Robert J.
2010-01-01
It's a riddle faced by virtually every IT director: how to fulfill users' desire for more muscular computing resources while still obliging administrators' commands to keep education spending down. Against long odds, many district technology directors have been fulfilling both counts, optimizing their computing systems with improvements that pay…
Ding, Huanjun; Molloi, Sabee
2012-01-01
Purpose A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility to quantify volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods A computer simulation model involved polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector has been evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), was chosen to optimize the imaging protocols, in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system has been employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which brought the tube voltages used in the imaging tasks as the third variable in dual energy decomposition. Results For an average sized breast of 4.5 cm thick, the FOM was maximized with a tube voltage of 46kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exam (~ 32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated a RMS error of approximately 1.35% for all four studies. Conclusions The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique. PMID:22771941
Gross, Hans J
2011-09-01
Human inborn numerical competence means our ability to recognize object numbers precisely under circumstances which do not allow sequential counting. This archaic process has been called "subitizing," from the Latin "subito" = suddenly, immediately, indicating that the objects in question are presented to test persons only for a fraction of a second in order to prevent counting. In contrast, however, sequential counting, an outstanding cultural achievement of mankind, means to count "1, 2, 3, 4, 5, 6, 7, 8…" without a limit. The following essay will explain how the limit of numerical competence, i.e., the recognition of object numbers without counting, has been determined for humans and how this has been achieved for the first time in case of an invertebrate, the honeybee. Finally, a hypothesis explaining the influence of our limited, inborn numerical competence on counting in our times, e.g., in the Russian language, will be presented. Subitizing versus counting by young Down syndrome infants and autistics and the Savant syndrome will be discussed.
To bee or not to bee, this is the question…
2011-01-01
Human inborn numerical competence means our ability to recognize object numbers precisely under circumstances which do not allow sequential counting. This archaic process has been called “subitizing,” from the Latin “subito” = suddenly, immediately, indicating that the objects in question are presented to test persons only for a fraction of a second in order to prevent counting. In contrast, however, sequential counting, an outstanding cultural achievement of mankind, means to count “1, 2, 3, 4, 5, 6, 7, 8…” without a limit. The following essay will explain how the limit of numerical competence, i.e., the recognition of object numbers without counting, has been determined for humans and how this has been achieved for the first time in case of an invertebrate, the honeybee. Finally, a hypothesis explaining the influence of our limited, inborn numerical competence on counting in our times, e.g., in the Russian language, will be presented. Subitizing versus counting by young Down syndrome infants and autistics and the Savant syndrome will be discussed. PMID:22046473
Atmospheric mold spore counts in relation to meteorological parameters
NASA Astrophysics Data System (ADS)
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature, relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters controlling for seasonallity; this modeling can provide estimates of exposure to fungal aeroallergens.
Counting-loss correction for X-ray spectroscopy using unit impulse pulse shaping.
Hong, Xu; Zhou, Jianbin; Ni, Shijun; Ma, Yingjie; Yao, Jianfeng; Zhou, Wei; Liu, Yi; Wang, Min
2018-03-01
High-precision measurement of X-ray spectra is affected by the statistical fluctuation of the X-ray beam under low-counting-rate conditions. It is also limited by counting loss resulting from the dead-time of the system and pile-up pulse effects, especially in a high-counting-rate environment. In this paper a detection system based on a FAST-SDD detector and a new kind of unit impulse pulse-shaping method is presented, for counting-loss correction in X-ray spectroscopy. The unit impulse pulse-shaping method is evolved by inverse deviation of the pulse from a reset-type preamplifier and a C-R shaper. It is applied to obtain the true incoming rate of the system based on a general fast-slow channel processing model. The pulses in the fast channel are shaped to unit impulse pulse shape which possesses small width and no undershoot. The counting rate in the fast channel is corrected by evaluating the dead-time of the fast channel before it is used to correct the counting loss in the slow channel.
High-rate dead-time corrections in a general purpose digital pulse processing system
Abbene, Leonardo; Gerardi, Gaetano
2015-01-01
Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work the abilities on dead-time correction of a real-time digital pulse processing (DPP) system for high-rate high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs) and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula of a series of two types of dead-times is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error on ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
Variability of Seasonal CO2 Ice Caps on Mars for Mars Years 26 through 29
NASA Astrophysics Data System (ADS)
Feldman, W. C.; Maurice, S.; Prettyman, T. H.
2011-12-01
We have developed an improved thermal, epithermal, and fast neutron counting-rate time series data of the Mars Odyssey Neutron Spectrometer (MONS), optimized to greatly reduce both statistical and systematic uncertainties. This new data set was applied to study temporal and spatial distributions of the growth, decay, and maximum amount of precipitated CO2 ice during Martian years (MY) 26, 27, 28, and 29. For this study, we concentrate on the epithermal counting rate detected using the down-looking prism (P1) of MONS, and a combination of the epithermal and thermal counting rate detected by the forward-looking sensor (P2) of MONS. Although the energy range of neutrons detected by P2 covers both the thermal and epithermal range, it is heavily weighted to the thermal range. We find that the variance of the maximum epithermal counting rate is remarkably small over both north and south seasonal caps, varying by less than 3% over the four-year period. In contrast, although the maximum P2 counting rate over both poles is sensibly the same within error bars (about 2%) during the first three years, it drops by 18% over the north pole and 8% over the south pole during MY 29. The most-likely explanation of this drop is that abundances of the non-condensable gases N2 and Ar, are unusually enhanced during MY 29. Movies were also made of maps of the growth and decay of P2 counting rates summed over the first three years of these data. Careful inspection shows that both the growth and decay in the north were cylindrically symmetric, centered near the geographic north pole. In contrast, both the growth and decay of CO2 buildup in the south were skewed off the geographic pole to the center of the CO2 residual cap, and contained a small, but definitely distinct ring-like annular enhancement centered at a latitude of about 83.5° S spread over a longitude range that extends between about -35° and +35° E. This arc runs parallel to, and overlays, the very steep drop in altitude from the top of the south-polar CO2/water-ice residual cap at about +4.2 km to the surrounding plains at about +2.5 km. Algorithms developed previously to convert counting rates to CO2 and noncondensable gas column abundance will be applied to interpret the data.
The normalization of solar X-ray data from many experiments.
NASA Technical Reports Server (NTRS)
Wende, C. D.
1972-01-01
A conversion factor is used to convert Geiger (GM) tube count rates or ion chamber currents into units of the incident X-ray energy flux in a specified passband. A method is described which varies the passband to optimize these conversion factors such that they are relatively independent of the spectrum of the incident photons. This method was applied to GM tubes flown on Explorers 33 and 35 and Mariner 5 and to ion chambers flown on OSO 3 and OGO 4. Revised conversion factors and passbands are presented, and the resulting absolute solar X-ray fluxes based on these are shown to improve the agreement between the various experiments. Calculations have shown that, although the GM tubes on Explorer 33 viewed the Sun off-axis, the effective passband did not change appreciably, and the simple normalization of the count rates to the count rates of a similar GM tube on Explorer 35 was justified.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
One of the most valuable unique characteristics of the PCA is the high count rates (100,000 c/s) it can record, and the resulting extreme sensitivity to weak variability. Only few sources get this bright. Our Cycle-1 work on Sco X-1 has shown that performing high count rate observations is very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of-the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-7 proposal. - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-8 proposal. - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 10^5 c/s/5PCU levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-9 proposal. The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life og the satallire probably only one black-hole transient (if any) will reach 10^5 cps/5PCU levels. when this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state of the art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-5 proposal. - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2&3 proposal. - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our Cycle 1-3 work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2,3&4 proposal. - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our Cycle 1-3 work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
Asm-Triggered too Observations of 100,000 C/s Black Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
RESUBMISSION ACCEPTED CYCLE 2 PROPOSAL - The PCA is unique by the high count rates (~100,000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our Cycle 1&2 work on Sco X-1 and 1744-28 has shown that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black-hole transient (if any) will reach 100,000 c/s levels. When this occurs, a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state-of- the-art high count rate observations covering all of the most crucial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-10 proposal. The PCA is unique by the high count rates (~100.000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state of the art high count rate observations covering all of the most crusial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates (core Program)
NASA Astrophysics Data System (ADS)
Resubmission accepted Cycle 2-11 proposal. The PCA is unique by the high count rates (~100.000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state of the art high count rate observations covering all of the most crusial aspects of the source variability.
ASM Triggered too Observations of 100,000 C/s Black-Hole Candidates
NASA Astrophysics Data System (ADS)
van der Klis, Michiel
Resubmission accepted Cycle 2-11 proposal. The PCA is unique by the high count rates (~100.000 c/s) it can record, and its resulting extreme sensitivity to weak variability. Only few sources get this bright. Our RXTE work on Sco X-1 and 1744-28 shows that high count rate observations are very rewarding, but also difficult and not without risk. In the life of the satellite probably only one black hole transient (if any) will reach 10^5 cps/5 PCU levels. When this occurs a window of discovery will be opened on black holes, which will nearly certainly close again within a few days. This proposal aims at ensuring that optimal use is made of this opportunity by performing state of the art high count rate observations covering all of the most crusial aspects of the source variability.
NASA Astrophysics Data System (ADS)
Lundqvist, Mats; Danielsson, Mats; Cederstroem, Bjoern; Chmill, Valery; Chuntonov, Alexander; Aslund, Magnus
2003-06-01
Sectra Microdose is the first single photon counting mammography detector. An edge-on crystalline silicon detector is connected to application specific integrated circuits that individually process each photon. The detector is scanned across the breast and the rejection of scattered radiation exceeds 97% without the use of a Bucky. Processing of each x-rays individually enables an optimization of the information transfer from the x-rays to the image in a way previously not possible. Combined with an almost absence of noise from scattered radiation and from electronics we foresee a possibility to reduce the radiation dose and/or increase the image quality. We will discuss fundamental features of the new direct photon counting technique in terms of dose efficiency and present preliminary measurements for a prototype on physical parameters such as Noise Power Spectra (NPS), MTF and DQE.
Observer model optimization of a spectral mammography system
NASA Astrophysics Data System (ADS)
Fredenberg, Erik; Åslund, Magnus; Cederström, Björn; Lundqvist, Mats; Danielsson, Mats
2010-04-01
Spectral imaging is a method in medical x-ray imaging to extract information about the object constituents by the material-specific energy dependence of x-ray attenuation. Contrast-enhanced spectral imaging has been thoroughly investigated, but unenhanced imaging may be more useful because it comes as a bonus to the conventional non-energy-resolved absorption image at screening; there is no additional radiation dose and no need for contrast medium. We have used a previously developed theoretical framework and system model that include quantum and anatomical noise to characterize the performance of a photon-counting spectral mammography system with two energy bins for unenhanced imaging. The theoretical framework was validated with synthesized images. Optimal combination of the energy-resolved images for detecting large unenhanced tumors corresponded closely, but not exactly, to minimization of the anatomical noise, which is commonly referred to as energy subtraction. In that case, an ideal-observer detectability index could be improved close to 50% compared to absorption imaging. Optimization with respect to the signal-to-quantum-noise ratio, commonly referred to as energy weighting, deteriorated detectability. For small microcalcifications or tumors on uniform backgrounds, however, energy subtraction was suboptimal whereas energy weighting provided a minute improvement. The performance was largely independent of beam quality, detector energy resolution, and bin count fraction. It is clear that inclusion of anatomical noise and imaging task in spectral optimization may yield completely different results than an analysis based solely on quantum noise.
A new structure design and the basic radiation characteristics test of the intense current tube
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Ai, Xianyun; Fu, Li; Cui, Hui
2018-02-01
As a kind of special G-M counter, the intense current tube (ICT) is characterized by small ratio of cathode to anode radius, high working current or count rate, and can be used as the detection units of ultra-high range radiation instruments. In this paper, a new design of ICT structure is introduced, not only does it have a minimum ratio of cathode to anode but it also has a cathode which directly sticks out from the sensitive gas. Using COMSOL Multiphysics, we simulated the electric field between the anode and cathode and finalized the optimal structure. The results of processes and experiments show that the structure has better properties, with plateau slope reaching up to 7.4% within 100V, and it also has a wider range of dose rate. The linear data between the bottom limit of 0.2mGy/h and the upper limit of 1Gy/h is quite accurate but it becomes less reliable beyond 1Gy/h. By using Paralyzable model, we deduce that the dead time of the said ICT is less than 13.4 µs, and we will further optimize the readout circuit in order to reduce the resolution time of the circuit in the near future.
NASA Astrophysics Data System (ADS)
Kim, Kiho; Yun, Jiwon; Lee, Donghyuck; Kim, Dohun
2018-02-01
A simple and convenient design enables real-time three-dimensional position tracking of nitrogen-vacancy (NV) centers in diamond. The system consists entirely of commercially available components (a single-photon counter, a high-speed digital-to-analog converter, a phase-sensitive detector-based feedback device, and a piezo stage), eliminating the need for custom programming or rigorous optimization processes. With a large input range of counters and trackers combined with high sensitivity of single-photon counting, high-speed position tracking (upper bound recovery time of 0.9 s upon 250 nm of step-like positional shift) not only of bright ensembles, but also of low-photon-collection-efficiency single to few NV centers (down to 103 s-1) is possible. The tracking requires position modulation of only 10 nm, which allows simultaneous position tracking and pulsed measurements in the long term. Therefore, this tracking system enables measuring a single-spin magnetic resonance and Rabi oscillations at a very high resolution even without photon collection optimization. The system is widely applicable to various fields related to NV center quantum manipulation research such as NV optical trapping, NV tracking in fluid dynamics, and biological sensing using NV centers inside a biological cell.
Focal volume optics and experimental artifacts in confocal fluorescence correlation spectroscopy.
Hess, Samuel T; Webb, Watt W
2002-01-01
Fluorescence correlation spectroscopy (FCS) can provide a wealth of information about biological and chemical systems on a broad range of time scales (<1 micros to >1 s). Numerical modeling of the FCS observation volume combined with measurements has revealed, however, that the standard assumption of a three-dimensional Gaussian FCS observation volume is not a valid approximation under many common measurement conditions. As a result, the FCS autocorrelation will contain significant, systematic artifacts that are most severe with confocal optics when using a large detector aperture and aperture-limited illumination. These optical artifacts manifest themselves in the fluorescence correlation as an apparent additional exponential component or diffusing species with significant (>30%) amplitude that can imply extraneous kinetics, shift the measured diffusion time by as much as approximately 80%, and cause the axial ratio to diverge. Artifacts can be minimized or virtually eliminated by using a small confocal detector aperture, underfilled objective back-aperture, or two-photon excitation. However, using a detector aperture that is smaller or larger than the optimal value (approximately 4.5 optical units) greatly reduces both the count rate per molecule and the signal-to-noise ratio. Thus, there is a tradeoff between optimizing signal-to-noise and reducing experimental artifacts in one-photon FCS. PMID:12324447
Hogg, Abigail
2017-01-01
Objective. To examine how instructor-developed reading material relates to pre-class time spent preparing for the readiness assurance process (RAP) in a team-based learning (TBL) course. Methods. Students within pharmacokinetics and physiology were asked to self-report the amount of time spent studying for the RAP. Correlation analysis and multilevel linear regression techniques were used to identify factors within the pre-class reading material that contribute to self-reported study time. Results. On average students spent 3.2 hours preparing for a section of material in the TBL format. The ratio of predicted reading time, based on reading speed and word count, and self-reported study time was greater than 1:3. Self-reported study time was positively correlated with word count, number of tables and figures, and overall page length. For predictors of self-reported study time, topic difficulty and number of figures were negative predictors whereas word count and number of self-assessments were positive predictors. Conclusion. Factors related to reading material are moderate predictors of self-reported student study time for an accountability assessment. A more significant finding is student self-reported study time is much greater than the time predicted by simple word count. PMID:28970604
Persky, Adam M; Hogg, Abigail
2017-08-01
Objective. To examine how instructor-developed reading material relates to pre-class time spent preparing for the readiness assurance process (RAP) in a team-based learning (TBL) course. Methods. Students within pharmacokinetics and physiology were asked to self-report the amount of time spent studying for the RAP. Correlation analysis and multilevel linear regression techniques were used to identify factors within the pre-class reading material that contribute to self-reported study time. Results. On average students spent 3.2 hours preparing for a section of material in the TBL format. The ratio of predicted reading time, based on reading speed and word count, and self-reported study time was greater than 1:3. Self-reported study time was positively correlated with word count, number of tables and figures, and overall page length. For predictors of self-reported study time, topic difficulty and number of figures were negative predictors whereas word count and number of self-assessments were positive predictors. Conclusion. Factors related to reading material are moderate predictors of self-reported student study time for an accountability assessment. A more significant finding is student self-reported study time is much greater than the time predicted by simple word count.
Berglund, Johan; Johansson, Henrik; Lundqvist, Mats; Cederström, Björn; Fredenberg, Erik
2014-01-01
Abstract. In x-ray imaging, contrast information content varies with photon energy. It is, therefore, possible to improve image quality by weighting photons according to energy. We have implemented and evaluated so-called energy weighting on a commercially available spectral photon-counting mammography system. The technique was evaluated using computer simulations, phantom experiments, and analysis of screening mammograms. The CNR benefit of energy weighting for a number of relevant target-background combinations measured by the three methods fell in the range of 2.2 to 5.2% when using optimal weight factors. This translates to a potential dose reduction at constant CNR in the range of 4.5 to 11%. We expect the choice of weight factor in practical implementations to be straightforward because (1) the CNR improvement was not very sensitive to weight, (2) the optimal weight was similar for all investigated target-background combinations, (3) aluminum/PMMA phantoms were found to represent clinically relevant tasks well, and (4) the optimal weight could be calculated directly from pixel values in phantom images. Reasonable agreement was found between the simulations and phantom measurements. Manual measurements on microcalcifications and automatic image analysis confirmed that the CNR improvement was detectable in energy-weighted screening mammograms. PMID:26158045
Chen, Chien-Yi
2009-01-01
Optimal conditions for the simultaneous determination of As, Sb and Sm in Chinese medicinal herbs using epithermal neutron activation analysis were investigated. The minimum detectable concentrations of 76As, 122Sb and 153Sm in lichen and medicinal herbs depended on the weight of the irradiated sample, and irradiation and decay durations. Optimal conditions were obtained by wrapping the irradiated target with 3.2 mm borated polyethylene neutron filters, which were adopted to screen the original reactor fission neutrons and to reduce the background activities of 38Cl, 24Na and 42K. Twelve medicinal herbs, commonly consumed by Taiwanese children as a diuretic treatment, were analysed since trace elements, such as As and Sb, in these herbs may be toxic when consumed in sufficiently large quantities over a long period. Various amounts of medicinal herbs, standardised powder, lichen and tomato leaves were weighed, packed into polyethylene bags, irradiated and counted under different conditions. The results indicated that about 350 mg of lichen irradiated for 24 h and counted for 20 min following a 30-60 h decay period was optimal for irradiation in a 10(11)n/cm s epithermal neutron flux. The implications of the content of the studied elements in Chinese medicinal herbs are discussed.
NASA Astrophysics Data System (ADS)
Ofek, Eran O.; Zackay, Barak
2018-04-01
Detection of templates (e.g., sources) embedded in low-number count Poisson noise is a common problem in astrophysics. Examples include source detection in X-ray images, γ-rays, UV, neutrinos, and search for clusters of galaxies and stellar streams. However, the solutions in the X-ray-related literature are sub-optimal in some cases by considerable factors. Using the lemma of Neyman–Pearson, we derive the optimal statistics for template detection in the presence of Poisson noise. We demonstrate that, for known template shape (e.g., point sources), this method provides higher completeness, for a fixed false-alarm probability value, compared with filtering the image with the point-spread function (PSF). In turn, we find that filtering by the PSF is better than filtering the image using the Mexican-hat wavelet (used by wavdetect). For some background levels, our method improves the sensitivity of source detection by more than a factor of two over the popular Mexican-hat wavelet filtering. This filtering technique can also be used for fast PSF photometry and flare detection; it is efficient and straightforward to implement. We provide an implementation in MATLAB. The development of a complete code that works on real data, including the complexities of background subtraction and PSF variations, is deferred for future publication.
Real-time bacterial microcolony counting using on-chip microscopy
NASA Astrophysics Data System (ADS)
Jung, Jae Hee; Lee, Jung Eun
2016-02-01
Observing microbial colonies is the standard method for determining the microbe titer and investigating the behaviors of microbes. Here, we report an automated, real-time bacterial microcolony-counting system implemented on a wide field-of-view (FOV), on-chip microscopy platform, termed ePetri. Using sub-pixel sweeping microscopy (SPSM) with a super-resolution algorithm, this system offers the ability to dynamically track individual bacterial microcolonies over a wide FOV of 5.7 mm × 4.3 mm without requiring a moving stage or lens. As a demonstration, we obtained high-resolution time-series images of S. epidermidis at 20-min intervals. We implemented an image-processing algorithm to analyze the spatiotemporal distribution of microcolonies, the development of which could be observed from a single bacterial cell. Test bacterial colonies with a minimum diameter of 20 μm could be enumerated within 6 h. We showed that our approach not only provides results that are comparable to conventional colony-counting assays but also can be used to monitor the dynamics of colony formation and growth. This microcolony-counting system using on-chip microscopy represents a new platform that substantially reduces the detection time for bacterial colony counting. It uses chip-scale image acquisition and is a simple and compact solution for the automation of colony-counting assays and microbe behavior analysis with applications in antibacterial drug discovery.
Real-time bacterial microcolony counting using on-chip microscopy
Jung, Jae Hee; Lee, Jung Eun
2016-01-01
Observing microbial colonies is the standard method for determining the microbe titer and investigating the behaviors of microbes. Here, we report an automated, real-time bacterial microcolony-counting system implemented on a wide field-of-view (FOV), on-chip microscopy platform, termed ePetri. Using sub-pixel sweeping microscopy (SPSM) with a super-resolution algorithm, this system offers the ability to dynamically track individual bacterial microcolonies over a wide FOV of 5.7 mm × 4.3 mm without requiring a moving stage or lens. As a demonstration, we obtained high-resolution time-series images of S. epidermidis at 20-min intervals. We implemented an image-processing algorithm to analyze the spatiotemporal distribution of microcolonies, the development of which could be observed from a single bacterial cell. Test bacterial colonies with a minimum diameter of 20 μm could be enumerated within 6 h. We showed that our approach not only provides results that are comparable to conventional colony-counting assays but also can be used to monitor the dynamics of colony formation and growth. This microcolony-counting system using on-chip microscopy represents a new platform that substantially reduces the detection time for bacterial colony counting. It uses chip-scale image acquisition and is a simple and compact solution for the automation of colony-counting assays and microbe behavior analysis with applications in antibacterial drug discovery. PMID:26902822
Seyoum, Awoke; Ndlovu, Principal; Temesgen, Zewotir
2017-03-16
Adherence and CD4 cell count change measure the progression of the disease in HIV patients after the commencement of HAART. Lack of information about associated factors on adherence to HAART and CD4 cell count reduction is a challenge for the improvement of cells in HIV positive adults. The main objective of adopting joint modeling was to compare separate and joint models of longitudinal repeated measures in identifying long-term predictors of the two longitudinal outcomes: CD4 cell count and adherence to HAART. A longitudinal retrospective cohort study was conducted to examine the joint predictors of CD4 cell count change and adherence to HAART among HIV adult patients enrolled in the first 10 months of the year 2008 and followed-up to June 2012. Joint model was employed to determine joint predictors of two longitudinal response variables over time. Furthermore, the generalized linear mixed effect model had been used for specification of the marginal distribution, conditional to correlated random effect. A total of 792 adult HIV patients were studied to analyze the longitudinal joint model study. The result from this investigation revealed that age, weight, baseline CD4 cell count, ownership of cell phone, visiting times, marital status, residence area and level of disclosure of the disease to family members had significantly affected both outcomes. From the two-way interactions, time * owner of cell phone, time * sex, age * sex, age * level of education as well as time * level of education were significant for CD4 cell count change in the longitudinal data analysis. The multivariate joint model with linear predictor indicates that CD4 cell count change was positively correlated (p ≤ 0.0001) with adherence to HAART. Hence, as adherence to HAART increased, CD4 cell count also increased; and those patients who had significant CD4 cell count change at each visiting time had been encouraged to be good adherents. Joint model analysis was more parsimonious as compared to separate analysis, as it reduces type I error and subject-specific analysis improved its model fit. The joint model operates multivariate analysis simultaneously; and it has great power in parameter estimation. Developing joint model helps validate the observed correlation between the outcomes that have emerged from the association of intercepts. There should be a special attention and intervention for HIV positive adults, especially for those who had poor adherence and with low CD4 cell count change. The intervention may be important for pre-treatment counseling and awareness creation. The study also identified a group of patients who were with maximum risk of CD4 cell count change. It is suggested that this group of patients needs high intervention for counseling.
Yang, Di; Zhao, Hongxin; Gao, Guiju; Wei, Kai; Zhang, Li; Han, Ning; Xiao, Jiang; Li, Xin; Wang, Fang; Liang, Hongyuan; Zhang, Wei; Wu, Liang
2014-12-01
To explore the relationship between CD4(+) T lymphocyte cell count and prognosis as well as healing of the surgical incision in HIV/AIDS patients who had received operation. Data were collected and analysed retrospectively from 234 HIV/AIDS patients hospitalized at the Beijing Ditan hospital who underwent operation between January 2008 and December 2012. Following factors were taken into consideration that including:age, gender, time and where that anti-HIV(+) was diagnosed, CD4(+)T lymphocyte cell count at the time of operation, part of the body that being operated, typology of incision, different levels of healing on the surgical incision, infection at the incision site, post-operative complications and the prognosis, etc. Wilcoxon rank sum test, χ(2) test, Kruskal-Wallis H test and Spearman rank correlation were used for statistical analysis to compare the different levels on healing of the incision in relation to the different CD4(+)T lymphocyte cell counts. Rates of level A healing under different CD4(+)T cell counts were also compared. 1) Among the 234 patients including 125 males and 109 females, the average age was 36.17±11.56 years old. Time after discovery of anti-HIV(+)was between 0 and 204 months. The medium CD4(+)T cell count was 388.5 cell/µl; 23.93% of the patients having CD4(+)T lymphocyte cell counts as <200 cell/µl. 2) 7.26% of the operations were emergent. There were 23 different organs affected at the time of operation, due to 48 different kinds of illness. 21.37% of the operations belonged to class I incision, 49.57% was class II incision and 29.06% was class III incision. 86.32% of the incisions resulted in level A healing, 12.51% resulted in level B and 1.71% in level C. 4.27% of the patients developed post-operative complications. Differences between level A healing and level B or C healing in terms of CD4(+)T lymphocyte cell count were not significant (P > 0.05). There was no statistically significant difference on the CD4(+) T lymphocyte count in patients with or without postoperative complications. Difference of the HIV infection time was also not statistically significant between the two groups of patients. Rate of level A healing for the different CD4(+)T lymphocyte cell count was not significant (P > 0.05). Healing of the incision did not show significant correlation with CD4(+) T lymphocyte cell count, duration of antiretroviral therapy or the time that HIV infection was discovered (P > 0.05). As long as both the in/exclusion criteria were strictly followed, prognosis for operation on HIV/AIDS seemed to be generally good. Low CD4(+)T lymphocyte cell count should not be taken as a exclusion criteria for operation on HIV/AIDS patients.
Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope
2013-01-01
With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.
Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope
2013-01-01
Introduction With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non Gaussian, non stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low. PMID:23785448
Study on ultra-fast single photon counting spectrometer based on PCI
NASA Astrophysics Data System (ADS)
Zhang, Xi-feng
2010-10-01
The time-correlated single photon counting spectrometer developed uses PCI bus technology. We developed the ultrafast data acquisition card based on PCI, replace multi-channel analyzer primary. The system theory and design of the spectrometer are presented in detail, and the process of operation is introduced with the integration of the system. Many standard samples have been measured and the data have been analyzed and contrasted. Experimental results show that the spectrometer, s sensitive is single photon counting, and fluorescence life-span and time resolution is picosecond level. And the instrument could measure time-resolved spectroscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo
2015-07-01
A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to Xray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involvedmore » are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 KeV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rate up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is next confronted to the first experimental results. (authors)« less
CD4+ Cell Count and HIV Load as Predictors of Size of Anal Warts Over Time in HIV-Infected Women
Luu, Hung N.; Amirian, E. Susan; Chan, Wenyaw; Beasley, R. Palmer; Piller, Linda B.
2012-01-01
Background. Little is known about the associations between CD4+ cell counts, human immunodeficiency virus (HIV) load, and human papillomavirus “low-risk” types in noncancerous clinical outcomes. This study examined whether CD4+ count and HIV load predict the size of the largest anal warts in 976 HIV-infected women in an ongoing cohort. Methods. A linear mixed model was used to determine the association between size of anal wart and CD4+ count and HIV load. Results. The incidence of anal warts was 4.15 cases per 100 person-years (95% confidence interval [CI], 3.83–4.77) and 1.30 cases per 100 person-years (95% CI, 1.00–1.58) in HIV-infected and HIV-uninfected women, respectively. There appeared to be an inverse association between size of the largest anal warts and CD4+ count at baseline; however, this was not statistically significant. There was no association between size of the largest anal warts and CD4+ count or HIV load over time. Conclusions. There was no evidence for an association between size of the largest anal warts and CD4+ count or HIV load over time. Further exploration on the role of immune response on the development of anal warts is warranted in a larger study. PMID:22246682
AIC identifies optimal representation of longitudinal dietary variables.
VanBuren, John; Cavanaugh, Joseph; Marshall, Teresa; Warren, John; Levy, Steven M
2017-09-01
The Akaike Information Criterion (AIC) is a well-known tool for variable selection in multivariable modeling as well as a tool to help identify the optimal representation of explanatory variables. However, it has been discussed infrequently in the dental literature. The purpose of this paper is to demonstrate the use of AIC in determining the optimal representation of dietary variables in a longitudinal dental study. The Iowa Fluoride Study enrolled children at birth and dental examinations were conducted at ages 5, 9, 13, and 17. Decayed or filled surfaces (DFS) trend clusters were created based on age 13 DFS counts and age 13-17 DFS increments. Dietary intake data (water, milk, 100 percent-juice, and sugar sweetened beverages) were collected semiannually using a food frequency questionnaire. Multinomial logistic regression models were fit to predict DFS cluster membership (n=344). Multiple approaches could be used to represent the dietary data including averaging across all collected surveys or over different shorter time periods to capture age-specific trends or using the individual time points of dietary data. AIC helped identify the optimal representation. Averaging data for all four dietary variables for the whole period from age 9.0 to 17.0 provided a better representation in the multivariable full model (AIC=745.0) compared to other methods assessed in full models (AICs=750.6 for age 9 and 9-13 increment dietary measurements and AIC=762.3 for age 9, 13, and 17 individual measurements). The results illustrate that AIC can help researchers identify the optimal way to summarize information for inclusion in a statistical model. The method presented here can be used by researchers performing statistical modeling in dental research. This method provides an alternative approach for assessing the propriety of variable representation to significance-based procedures, which could potentially lead to improved research in the dental community. © 2017 American Association of Public Health Dentistry.
Van de Velde, Franco; Vaccari, María Celia; Piagentini, Andrea Marcela; Pirovani, María Élida
2016-09-01
The fogging of strawberries using a environmentally friendly sanitizer mixture of peracetic acid (5%) and hydrogen peroxide (20%) was performed in a model chamber and modeled as a function of the concentration (3.4, 20.0, 60.0, 100.0 and 116.6 µL sanitizer L(-) (1) air chamber) and the treatment time (5.7, 15.0, 37.5, 60.0 and 69.3 min). The sanitizer fogging was adequate for reducing total mesophilic microbial and yeasts and moulds counts of fruits until seven days of storage at 2℃. However, sanitizer oxidant properties adversely affected the content of total anthocyanins, total phenolics, vitamin C, and antioxidant capacity to various degrees, with some deleterious changes in the fruits color, depending on the fogging conditions. A multiple numeric response optimization was developed based on 2.0 log microbiological reduction, maximum phytochemicals and antioxidant capacity retentions, with no changes in the fruits color, being the optimal fogging conditions achieved: 10.1 µL sanitizer L(-1) air chamber and 29.6 min. The fogging of strawberries at these conditions may represent a promising postharvest treatment option for extending their shelf-life without affecting their sensory quality and bioactive properties. © The Author(s) 2016.
Immunogold Nanoparticles for Rapid Plasmonic Detection of C. sakazakii.
Aly, Mohamed A; Domig, Konrad J; Kneifel, Wolfgang; Reimhult, Erik
2018-06-25
Cronobacter sakazakii is a foodborne pathogen that can cause a rare, septicemia, life-threatening meningitis, and necrotizing enterocolitis in infants. In general, standard methods for pathogen detection rely on culture, plating, colony counting and polymerase chain reaction DNA-sequencing for identification, which are time, equipment and skill demanding. Recently, nanoparticle- and surface-based immunoassays have increasingly been explored for pathogen detection. We investigate the functionalization of gold nanoparticles optimized for irreversible and specific binding to C. sakazakii and their use for spectroscopic detection of the pathogen. We demonstrate how 40-nm gold nanoparticles grafted with a poly(ethylene glycol) brush and functionalized with polyclonal antibodies raised against C. sakazakii can be used to specifically target C. sakazakii . The strong extinction peak of the Au nanoparticle plasmon polariton resonance in the optical range is used as a label for detection of the pathogens. Individual binding of the nanoparticles to the C. sakazakii surface is also verified by transmission electron microscopy. We show that a high degree of surface functionalization with anti- C. sakazakii optimizes the detection and leads to a detection limit as low as 10 CFU/mL within 2 h using a simple cuvette-based UV-Vis spectrometric readout that has great potential for further optimization.
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
van Frankenhuyzen, Jessica K; Trevors, Jack T; Flemming, Cecily A; Lee, Hung; Habash, Marc B
2013-11-01
Biosolids result from treatment of sewage sludge to meet jurisdictional standards, including pathogen reduction. Once government regulations are met, materials can be applied to agricultural lands. Culture-based methods are used to enumerate pathogen indicator microorganisms but may underestimate cell densities, which is partly due to bacteria existing in a viable but non-culturable physiological state. Viable indicators can also be quantified by realtime polymerase chain reaction (qPCR) used with propidium monoazide (PMA), a dye that inhibits amplification of DNA found extracellularly or in dead cells. The objectives of this study were to test an optimized PMA-qPCR method for viable pathogen detection in wastewater solids and to validate it by comparing results to data obtained by conventional plating. Reporter genes from genetically marked Pseudomonas sp. UG14Lr and Agrobacterium tumefaciens 542 cells were spiked into samples of primary sludge, and anaerobically digested and Lystek-treated biosolids as cell-free DNA, dead cells, viable cells, and mixtures of live and dead cells, followed by DNA extraction with and without PMA, and qPCR. The protocol was then used for Escherichia coli quantification in the three matrices, and results compared to plate counts. PMA-qPCR selectively detected viable cells, while inhibiting signals from cell-free DNA and DNA found in membrane-compromised cells. PMA-qPCR detected 0.5-1 log unit more viable E. coli cells in both primary solids and dewatered biosolids than plate counts. No viable E. coli was found in Lystek-treated biosolids. These data suggest PMA-qPCR may more accurately estimate pathogen cell numbers than traditional culture methods.
Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery
NASA Astrophysics Data System (ADS)
Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata
2014-09-01
Spatial cross-talk has been discovered in the visible channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface caused either by flawed polishing or a dust contaminant. An image processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on the Gaussian profile. The PSF is described by 4 parameters, which are solved using a maximum likelihood estimator using coincident collocated MTSAT-2 images as truth. A subpixel image matching technique is used to align the MTSAT-2 pixels into the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing geometry differences between the two satellite observations. An optimal set of the PSF parameters is derived by an iterative routine based on the 4-dimensional Powell's conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language incorporating parallel processing. The PSF parameters were found to be consistent over the 5-days of available daytime coincident MTSAT-1R and MTSAT-2 images, and can easily be applied to the MTSAT-1R imager pixel level counts to restore the original quality of the entire MTSAT-1R record.
An optimized OPC and MDP flow for reducing mask write time and mask cost
NASA Astrophysics Data System (ADS)
Yang, Ellyn; Li, Cheng He; Park, Se Jin; Zhu, Yu; Guo, Eric
2010-09-01
In the process of optical proximity correction, layout edge or fragment is migrating to proper position in order to minimize edge placement error (EPE). During this fragment migration, several factors other than EPE can be also taken into account as a part of cost function for optimal fragment displacement. Several factors are devised in favor of OPC stability, which can accommodate room for high mask error enhancement factor (MEEF), lack of process window, catastrophic pattern failure such as pinch/bridge and improper fragmentation. As technology node becomes finer, there happens conflict between OPC accuracy and stability. Especially for metal layers, OPC has focused on the stability by loss of accurate OPC results. On this purpose, several techniques have been introduced, which are target smoothing, process window aware OPC, model-based retargeting and adaptive OPC. By utilizing those techniques, OPC enables more stabilized patterning, instead of realizing design target exactly on wafer. Inevitably, post-OPC layouts become more complicated because those techniques invoke additional edge, or fragments prior to correction or during OPC iteration. As a result, jogs of post OPC layer can be dramatically increased, which results in huge number of shot count after data fracturing. In other words, there is trade-off relationship between data complexity and various methods for OPC stability. In this paper, those relationships have been investigated with respect to several technology nodes. The mask shot count reduction is achieved by reducing the number of jogs with which EPE difference are within pre-specified value. The effect of jog smoothing on OPC output - in view of OPC performance and mask data preparation - was studied quantitatively for respective technology nodes.
Modeling time-series count data: the unique challenges facing political communication studies.
Fogarty, Brian J; Monogan, James E
2014-05-01
This paper demonstrates the importance of proper model specification when analyzing time-series count data in political communication studies. It is common for scholars of media and politics to investigate counts of coverage of an issue as it evolves over time. Many scholars rightly consider the issues of time dependence and dynamic causality to be the most important when crafting a model. However, to ignore the count features of the outcome variable overlooks an important feature of the data. This is particularly the case when modeling data with a low number of counts. In this paper, we argue that the Poisson autoregressive model (Brandt and Williams, 2001) accurately meets the needs of many media studies. We replicate the analyses of Flemming et al. (1997), Peake and Eshbaugh-Soha (2008), and Ura (2009) and demonstrate that models missing some of the assumptions of the Poisson autoregressive model often yield invalid inferences. We also demonstrate that the effect of any of these models can be illustrated dynamically with estimates of uncertainty through a simulation procedure. The paper concludes with implications of these findings for the practical researcher. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
NASA Astrophysics Data System (ADS)
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consists of counts, such as the number of road accidents, the number of patients in a certain hospital, the number of customers waiting for service at a certain time and etc. When the value of the observations are large it is usual to use Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However if the observed counts are small, it is not appropriate to use ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using Non-Negative Integer valued Autoregressive (INAR) process. The modeling of counts data is based on the binomial thinning operator. In this paper we illustrate the modeling of counts data using the monthly number of Poliomyelitis data in United States between January 1970 until December 1983. We applied the AR(1), Poisson regression model and INAR(1) model and the suitability of these models were assessed by using the Index of Agreement(I.A.). We found that INAR(1) model is more appropriate in the sense it had a better I.A. and it is natural since the data are counts.
500-514 N. Peshtigo Ct, May 2018, Lindsay Light Radiological Survey
maximum gamma count rate for each lift was recorded on the attached RadiationSurvey Forms. Count rates in the excavation ranged from 1,800 cpm - 5,000 cpm.No count rates were found at any time that exceeded the instrument specific thresholdlimits.
550 E. Illinois, May 2018, Lindsay Light Radiological Survey
Maximum gamma count rate for each lift was recorded on the attached RadiationSurvey Forms. Count rates in the excavation ranged from 1,250 cpm to 4,880 cpm.No count rates were found at any time that exceeded the instrument specific thresholdlimits.
The Ecological Stewardship Institute at Northern Kentucky University and the U.S. Environmental Protection Agency are collaborating to optimize a harmful algal bloom detection algorithm that estimates the presence and count of cyanobacteria in freshwater systems by image analysis...
Trajectories for Locomotion Systems: A Geometric and Computational Approach via Series Expansions
2004-10-11
speed controller. The model is endowed with a 100 count per revolution optical encoder for odometry. (2) On-board computation is performed by a single...switching networks,” Automatica, July 2003. Submitted. [17] K. M. Passino, Biomimicry for Optimization, Control, and Automation. New York: Springer
Optimizing phonon space in the phonon-coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B (E L ) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei looking also at the dependence of the results when varying the Skyrme parametrization.
Giacomelli, L; Zimbal, A; Reginatto, M; Tittelmeier, K
2011-01-01
A compact NE213 liquid scintillation neutron spectrometer with a new digital data acquisition (DAQ) system is now in operation at the Physikalisch-Technische Bundesanstalt (PTB). With the DAQ system, developed by ENEA Frascati, neutron spectrometry with high count rates in the order of 5×10(5) s(-1) is possible, roughly an order of magnitude higher than with an analog acquisition system. To validate the DAQ system, a new data analysis code was developed and tests were done using measurements with 14-MeV neutrons made at the PTB accelerator. Additional analysis was carried out to optimize the two-gate method used for neutron and gamma (n-γ) discrimination. The best results were obtained with gates of 35 ns and 80 ns. This indicates that the fast and medium decay time components of the NE213 light emission are the ones that are relevant for n-γ discrimination with the digital acquisition system. This differs from what is normally implemented in the analog pulse shape discrimination modules, namely, the fast and long decay emissions of the scintillating light.
Marcinkowski, R; España, S; Van Holen, R; Vandenberghe, S
2014-12-07
The majority of current whole-body PET scanners are based on pixelated scintillator arrays with a transverse pixel size of 4 mm. However, recent studies have shown that decreasing the pixel size to 2 mm can significantly improve image spatial resolution. In this study, the performance of Digital Photon Counter (DPC) from Philips Digital Photon Counting (PDPC) was evaluated to determine their potential for high-resolution whole-body time of flight (TOF) PET scanners. Two detector configurations were evaluated. First, the DPC3200-44-22 DPC array was coupled to a LYSO block of 15 × 15 2 × 2 × 22 mm(3) pixels through a 1 mm thick light guide. Due to light sharing among the dies neighbour logic of the DPC was used. In a second setup the same DPC was coupled directly to a scalable 4 × 4 LYSO matrix of 1.9 × 1.9 × 22 mm(3) crystals with a dedicated reflector arrangement allowing for controlled light sharing patterns inside the matrix. With the first approach an average energy resolution of 14.5% and an average CRT of 376 ps were achieved. For the second configuration an average energy resolution of 11% and an average CRT of 295 ps were achieved. Our studies show that the DPC is a suitable photosensor for a high-resolution TOF-PET detector. The dedicated reflector arrangement allows one to achieve better performances than the light guide approach. The count loss, caused by dark counts, is overcome by fitting the matrix size to the size of DPC single die.
Fully integrated free-running InGaAs/InP single-photon detector for accurate lidar applications.
Yu, Chao; Shangguan, Mingjia; Xia, Haiyun; Zhang, Jun; Dou, Xiankang; Pan, Jian-Wei
2017-06-26
We present a fully integrated InGaAs/InP negative feedback avalanche diode (NFAD) based free-running single-photon detector (SPD) designed for accurate lidar applications. A free-piston Stirling cooler is used to cool down the NFAD with a large temperature range, and an active hold-off circuit implemented in a field programmable gate array is applied to further suppress the afterpulsing contribution. The key parameters of the free-running SPD including photon detection efficiency (PDE), dark count rate (DCR), afterpulse probability, and maximum count rate (MCR) are dedicatedly optimized for lidar application in practice. We then perform a field experiment using a Mie lidar system with 20 kHz pulse repetition frequency to compare the performance between the free-running InGaAs/InP SPD and a commercial superconducting nanowire single-photon detector (SNSPD). Our detector exhibits good performance with 1.6 Mcps MCR (0.6 μs hold-off time), 10% PDE, 950 cps DCR, and 18% afterpulse probability over 50 μs period. Such performance is worse than the SNSPD with 60% PDE and 300 cps DCR. However, after performing a specific algorithm that we have developed for afterpulse and count rate corrections, the lidar system performance in terms of range-corrected signal (Pr 2 ) distribution using our SPD agrees very well with the result using the SNSPD, with only a relative error of ∼2%. Due to the advantages of low-cost and small size of InGaAs/InP NFADs, such detector provides a practical solution for accurate lidar applications.
Effects of Space Missions on the Human Immune System: A Meta-Analysis
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Barger, L. K.; Baldini, F.; Huff, D.
1995-01-01
Future spaceflight will require travelers to spend ever-increasing periods of time in microgravity. Optimal functioning of the immune system is of paramount importance for the health and performance of these travelers. A meta-analysis statistical procedure was used to analyze immune system data from crew members in United States and Soviet space missions from 8.5 to 140 days duration between 1968 and 1985. Ten immunological parameters (immunoglobulins A, G, M, D, white blood cell (WBC) count, number of lymphocytes, percent total lymphocytes, percent B lymphocytes, percent T lymphocytes, and lymphocyte reactivity to mitogen) were investigated using multifactorial, repeated measure analysis of variance. With the preflight level set at 100, WBC count increased to 154 +/- 14% (mean +/- SE; p less than or equal to 0.05) immediately after flight; there was a decrease in lymphocyte count (83 +/- 4%; p less than or equal to 0.05) and percent of total lymphocytes (69 +/- 1%; p less than or equal to 0.05) immediately after flight, with reduction in RNA synthesis to phytohemagglutinin (PHA) to 51 +/- 21% (p less than or equal to 0.05) and DNA synthesis to PHA to 61 +/- 8% (p less than or equal to 0.05) at the first postflight measurement. Thus, some cellular immunological functions are decreased significantly following spaceflight. More data are needed on astronauts' age, aerobic power output, and parameters of their exercise training program to determine if these immune system responses are due solely to microgravity exposure or perhaps to some other aspect of spaceflight.
NASA Astrophysics Data System (ADS)
Bártová, H.; Kučera, J.; Musílek, L.; Trojek, T.
2014-11-01
In order to evaluate the age from the equivalent dose and to obtain an optimized and efficient procedure for thermoluminescence (TL) dating, it is necessary to obtain the values of both the internal and the external dose rates from dated samples and from their environment. The measurements described and compared in this paper refer to bricks from historic buildings and a fine-grain dating method. The external doses are therefore negligible, if the samples are taken from a sufficient depth in the wall. However, both the alpha dose rate and the beta and gamma dose rates must be taken into account in the internal dose. The internal dose rate to fine-grain samples is caused by the concentrations of natural radionuclides 238U, 235U, 232Th and members of their decay chains, and by 40K concentrations. Various methods can be used for determining trace concentrations of these natural radionuclides and their contributions to the dose rate. The dose rate fraction from 238U and 232Th can be calculated, e.g., from the alpha count rate, or from the concentrations of 238U and 232Th, measured by neutron activation analysis (NAA). The dose rate fraction from 40K can be calculated from the concentration of potassium measured, e.g., by X-ray fluorescence analysis (XRF) or by NAA. Alpha counting and XRF are relatively simple and are accessible for an ordinary laboratory. NAA can be considered as a more accurate method, but it is more demanding regarding time and costs, since it needs a nuclear reactor as a neutron source. A comparison of these methods allows us to decide whether the time- and cost-saving simpler techniques introduce uncertainty that is still acceptable.
Sleep Estimates Using Microelectromechanical Systems (MEMS)
te Lindert, Bart H. W.; Van Someren, Eus J. W.
2013-01-01
Study Objectives: Although currently more affordable than polysomnography, actigraphic sleep estimates have disadvantages. Brand-specific differences in data reduction impede pooling of data in large-scale cohorts and may not fully exploit movement information. Sleep estimate reliability might improve by advanced analyses of three-axial, linear accelerometry data sampled at a high rate, which is now feasible using microelectromechanical systems (MEMS). However, it might take some time before these analyses become available. To provide ongoing studies with backward compatibility while already switching from actigraphy to MEMS accelerometry, we designed and validated a method to transform accelerometry data into the traditional actigraphic movement counts, thus allowing for the use of validated algorithms to estimate sleep parameters. Design: Simultaneous actigraphy and MEMS-accelerometry recording. Setting: Home, unrestrained. Participants: Fifteen healthy adults (23-36 y, 10 males, 5 females). Interventions: None. Measurements: Actigraphic movement counts/15-sec and 50-Hz digitized MEMS-accelerometry. Analyses: Passing-Bablok regression optimized transformation of MEMS-accelerometry signals to movement counts. Kappa statistics calculated agreement between individual epochs scored as wake or sleep. Bland-Altman plots evaluated reliability of common sleep variables both between and within actigraphs and MEMS-accelerometers. Results: Agreement between epochs was almost perfect at the low, medium, and high threshold (kappa = 0.87 ± 0.05, 0.85 ± 0.06, and 0.83 ± 0.07). Sleep parameter agreement was better between two MEMS-accelerometers or a MEMS-accelerometer and an actigraph than between two actigraphs. Conclusions: The algorithm allows for continuity of outcome parameters in ongoing actigraphy studies that consider switching to MEMS-accelerometers. Its implementation makes backward compatibility feasible, while collecting raw data that, in time, could provide better sleep estimates and promote cross-study data pooling. Citation: te Lindert BHW; Van Someren EJW. Sleep estimates using microelectromechanical systems (MEMS). SLEEP 2013;36(5):781-789. PMID:23633761
Viani, Rolando M; Alvero, Carmelita; Fenton, Terry; Acosta, Edward P; Hazra, Rohan; Townley, Ellen; Steimers, Debra; Min, Sherene; Wiznia, Andrew
2015-11-01
To assess the pharmacokinetics (PK), safety and efficacy of dolutegravir plus optimized background regimen in HIV-infected treatment-experienced adolescents. Children older than 12 to younger than 18 years received dolutegravir weight-based fixed doses at approximately 1.0 mg/kg once daily in a phase I/II multicenter open label 48-week study. Intensive PK evaluation was done at steady state after dolutegravir was added to a failing regimen or started at the end of a treatment interruption. Safety and HIV RNA and CD4 cell count assessments were performed through week 48. Twenty-three adolescents were enrolled and 22 (96%) completed the 48-week study visit. Median age and weight were 15 years and 52 kg, respectively. Median [interquartile range (IQR)] baseline CD4+ cell count was 466 cells/μL (297, 771). Median (IQR) baseline HIV-1 RNA log10 was 4.3 log10 copies/mL (3.9, 4.6). Dolutegravir geometric mean of the area under the plasma concentration-time curve from time of administration to 24 hours after dosing (AUC0-24) and 24 hour postdose concentration (C24) were 46.0 μg hours/mL and 0.90 μg/mL, respectively, which were within the study targets based on adult PK ranges. Virologic success with an HIV RNA <400 copies/mL was achieved in 74% [95% confidence interval (CI): 52-90%] at week 48. Additionally, 61% (95% CI: 39-80%) had an HIV RNA <50 copies/mL at week 48. Median (IQR) gain in CD4 cell count at week 48 was 84 cells/μL (-81, 238). Dolutegravir was well tolerated, with no grade 4 adverse events, serious adverse events or discontinuations because of serious adverse events. Dolutegravir achieved target PK exposures in adolescents. Dolutegravir was safe and well tolerated, providing good virologic efficacy through week 48.
The clinical significance of platelet counts in the first 24 hours after severe injury.
Stansbury, Lynn G; Hess, Aaron S; Thompson, Kwaku; Kramer, Betsy; Scalea, Thomas M; Hess, John R
2013-04-01
Admission platelet (PLT) counts are known to be associated with all-cause mortality for seriously injured patients admitted to a trauma center. The course of subsequent PLT counts, their implications, and the effects of PLT therapy are less well known. Trauma center patients who were directly admitted from the scene of injury, received 1 or more units of uncrossmatched red blood cells in the first hour of care, survived for at least 15 minutes, and had a PLT count measured in the first hour were analyzed for the association of their admission and subsequent PLT counts in the first 24 hours with injury severity and hemorrhagic and central nervous system (CNS) causes of in-hospital mortality. Over an 8.25-year period, 1292 of 45,849 direct trauma admissions met entry criteria. Admission PLT counts averaged 228×10(9) ±90×10(9) /L and decreased by 104×10(9) /L by the second hour and 1×10(9) /L each hour thereafter. The admission count was not related to time to admission. Each 1-point increase in the injury severity score was associated with a 1×10(9) /L decrease in the PLT count at all times in the first 24 hours of care. Admission PLT counts were strongly associated with hemorrhagic and CNS injury mortality and subsequent PLT counts. Effects of PLT therapy could not be ascertained. Admission PLT counts in critically injured trauma patients are usually normal, decreasing after admission. Low PLT counts at admission and during the course of trauma care are strongly associated with mortality. © 2012 American Association of Blood Banks.
Joshi, V K; Chauhan, Arjun; Devi, Sarita; Kumar, Vikas
2015-08-01
Lactic acid fermentation of radish was conducted using various additive and growth stimulators such as salt (2 %-3 %), lactose, MgSO4 + MnSO4 and Mustard (1 %, 1.5 % and 2 %) to optimize the process. Response surface methodology (Design expert, Trial version 8.0.5.2) was applied to the experimental data for the optimization of process variables in lactic acid fermentation of radish. Out of various treatments studied, only the treatments having ground mustard had an appreciable effect on lactic acid fermentation. Both linear and quadratic terms of the variables studied had a significant effect on the responses studied. The interactions between the variables were found to contribute to the response at a significant level. The best results were obtained in the treatment with 2.5 % salt, 1.5 % lactose, 1.5 % (MgSO4 + MnSO4) and 1.5 % mustard. These optimized concentrations increased titrable acidity and LAB count, but lowered pH. The second-order polynomial regression model determined that the highest titrable acidity (1.69), lowest pH (2.49) and maximum LAB count (10 × 10(8) cfu/ml) would be obtained at these concentrations of additives. Among 30 runs conducted, run 2 has got the optimum concentration of salt- 2.5 %, lactose- 1.5 %, MgSO4 + MnSO4- 1.5 % and mustard- 1.5 % for lactic acid fermentation of radish. The values for different additives and growth stimulators optimized in this study could successfully be employed for the lactic acid fermentation of radish as a postharvest reduction tool and for product development.
Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J
2015-02-01
To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
DOT National Transportation Integrated Search
1983-05-01
The mechanics of automated passenger counting on transit buses and the accuracy of these systems are discussed. Count data needs to be time coded and distance/location information is necessary for effectively correlating data to specific bus stops or...
Analysis of overdispersed count data by mixtures of Poisson variables and Poisson processes.
Hougaard, P; Lee, M L; Whitmore, G A
1997-12-01
Count data often show overdispersion compared to the Poisson distribution. Overdispersion is typically modeled by a random effect for the mean, based on the gamma distribution, leading to the negative binomial distribution for the count. This paper considers a larger family of mixture distributions, including the inverse Gaussian mixture distribution. It is demonstrated that it gives a significantly better fit for a data set on the frequency of epileptic seizures. The same approach can be used to generate counting processes from Poisson processes, where the rate or the time is random. A random rate corresponds to variation between patients, whereas a random time corresponds to variation within patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scarcella, Carmelo; Tosi, Alberto, E-mail: alberto.tosi@polimi.it; Villa, Federica
2013-12-15
We developed a single-photon counting multichannel detection system, based on a monolithic linear array of 32 CMOS SPADs (Complementary Metal-Oxide-Semiconductor Single-Photon Avalanche Diodes). All channels achieve a timing resolution of 100 ps (full-width at half maximum) and a photon detection efficiency of 50% at 400 nm. Dark count rate is very low even at room temperature, being about 125 counts/s for 50 μm active area diameter SPADs. Detection performance and microelectronic compactness of this CMOS SPAD array make it the best candidate for ultra-compact time-resolved spectrometers with single-photon sensitivity from 300 nm to 900 nm.
A statistical approach for inferring the 3D structure of the genome.
Varoquaux, Nelle; Ay, Ferhat; Noble, William Stafford; Vert, Jean-Philippe
2014-06-15
Recent technological advances allow the measurement, in a single Hi-C experiment, of the frequencies of physical contacts among pairs of genomic loci at a genome-wide scale. The next challenge is to infer, from the resulting DNA-DNA contact maps, accurate 3D models of how chromosomes fold and fit into the nucleus. Many existing inference methods rely on multidimensional scaling (MDS), in which the pairwise distances of the inferred model are optimized to resemble pairwise distances derived directly from the contact counts. These approaches, however, often optimize a heuristic objective function and require strong assumptions about the biophysics of DNA to transform interaction frequencies to spatial distance, and thereby may lead to incorrect structure reconstruction. We propose a novel approach to infer a consensus 3D structure of a genome from Hi-C data. The method incorporates a statistical model of the contact counts, assuming that the counts between two loci follow a Poisson distribution whose intensity decreases with the physical distances between the loci. The method can automatically adjust the transfer function relating the spatial distance to the Poisson intensity and infer a genome structure that best explains the observed data. We compare two variants of our Poisson method, with or without optimization of the transfer function, to four different MDS-based algorithms-two metric MDS methods using different stress functions, a non-metric version of MDS and ChromSDE, a recently described, advanced MDS method-on a wide range of simulated datasets. We demonstrate that the Poisson models reconstruct better structures than all MDS-based methods, particularly at low coverage and high resolution, and we highlight the importance of optimizing the transfer function. On publicly available Hi-C data from mouse embryonic stem cells, we show that the Poisson methods lead to more reproducible structures than MDS-based methods when we use data generated using different restriction enzymes, and when we reconstruct structures at different resolutions. A Python implementation of the proposed method is available at http://cbio.ensmp.fr/pastis. © The Author 2014. Published by Oxford University Press.
Sasaki, Hiroyuki; Hattori, Yuta; Ikeda, Yuko; Kamagata, Mayo; Shibata, Shigenobu
2015-06-01
Mice that exercise after meals gain less body weight and visceral fat compared to those that exercised before meals under a one meal/exercise time per day schedule. Humans generally eat two or three meals per day, and rarely have only one meal. To extend our previous observations, we examined here whether a "two meals, two exercise sessions per day" schedule was optimal in terms of maintaining a healthy body weight. In this experiment, "morning" refers to the beginning of the active phase (the "morning" for nocturnal animals). We found that 2-h feeding before 2-h exercise in the morning and evening (F-Ex/F-Ex) resulted in greater attenuation of high fat diet (HFD)-induced weight gain compared to other combinations of feeding and exercise under two daily meals and two daily exercise periods. There were no significant differences in total food intake and total wheel counts, but feeding before exercise in the morning groups (F-Ex/F-Ex and F-Ex/Ex-F) increased the morning wheel counts. These results suggest that habitual exercise after feeding in the morning and evening is more effective for preventing HFD-induced weight gain. We also determined whether there were any correlations between food intake, wheel rotation, visceral fat volume and skeletal muscle volumes. We found positive associations between gastrocnemius muscle volumes and morning wheel counts, as well as negative associations between morning food intake volumes/body weight and morning wheel counts. These results suggest that morning exercise-induced increase of muscle volume may refer to anti-obesity. Evening exercise is negatively associated with fat volume increases, suggesting that this practice may counteract fat deposition. Our multifactorial analysis revealed that morning food intake helps to increase exercise, and that evening exercise reduced fat volumes. Thus, exercise in the morning or evening is important for preventing the onset of obesity.
Landis, Sarah; Suruki, Robert; Maskell, Joe; Bonar, Kerina; Hilton, Emma; Compton, Chris
2018-03-20
Blood eosinophil count may be a useful biomarker for predicting response to inhaled corticosteroids and exacerbation risk in chronic obstructive pulmonary disease (COPD) patients. The optimal cut point for categorizing blood eosinophil counts in these contexts remains unclear. We aimed to determine the distribution of blood eosinophil count in COPD patients and matched non-COPD controls, and to describe demographic and clinical characteristics at different cut points. We identified COPD patients within the UK Clinical Practice Research Database aged ≥40 years with a FEV 1 /FVC <0.7, and ≥1 blood eosinophil count recorded during stable disease between January 1, 2010 and December 31, 2012. COPD patients were matched on age, sex, and smoking status to non-COPD controls. Using all blood eosinophil counts recorded during a 12-month period, COPD patients were categorized as "always above," "fluctuating above and below," and "never above" cut points of 100, 150, and 300 cells/μL. The geometric mean blood eosinophil count was statistically significantly higher in COPD patients versus matched controls (196.6 cells/µL vs. 182.1 cells/µL; mean difference 8%, 95% CI: 6.8, 9.2), and in COPD patients with versus without a history of asthma (205.0 cells/µL vs. 192.2 cells/µL; mean difference 6.7%, 95%, CI: 4.9, 8.5). About half of COPD patients had all blood eosinophil counts above 150 cells/μL; this persistent higher eosinophil phenotype was associated with being male, higher body mass index, and history of asthma. In conclusion, COPD patients demonstrated higher blood eosinophil count than non-COPD controls, although there was substantial overlap in the distributions. COPD patients with a history of asthma had significantly higher blood eosinophil count versus those without.
Optimizing microwave photodetection: input-output theory
NASA Astrophysics Data System (ADS)
Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.
2018-04-01
High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.
A model for HIV/AIDS pandemic with optimal control
NASA Astrophysics Data System (ADS)
Sule, Amiru; Abdullah, Farah Aini
2015-05-01
Human immunodeficiency virus and acquired immune deficiency syndrome (HIV/AIDS) is pandemic. It has affected nearly 60 million people since the detection of the disease in 1981 to date. In this paper basic deterministic HIV/AIDS model with mass action incidence function are developed. Stability analysis is carried out. And the disease free equilibrium of the basic model was found to be locally asymptotically stable whenever the threshold parameter (RO) value is less than one, and unstable otherwise. The model is extended by introducing two optimal control strategies namely, CD4 counts and treatment for the infective using optimal control theory. Numerical simulation was carried out in order to illustrate the analytic results.
HIV Patients Drop Out in Indonesia: Associated Factors and Potential Productivity Loss.
Siregar, Adiatma Ym; Pitriyan, Pipit; Wisaksana, Rudi
2016-07-01
this study reported various factors associated with a higher probability of HIV patients drop out, and potential productivity loss due to HIV patients drop out. we analyzed data of 658 HIV patients from a database in a main referral hospital in Bandung city, West Java, Indonesia from 2007 to 2013. First, we utilized probit regression analysis and included, among others, the following variables: patients' status (active or drop out), CD4 cell count, TB and opportunistic infection (OI), work status, sex, history of injecting drugs, and support from family and peers. Second, we used the drop out data from our database and CD 4 cell count decline rate from another study to estimate the productivity loss due to HIV patients drop out. lower CD4 cell count was associated with a higher probability of drop out. Support from family/peers, living with family, and diagnosed with TB were associated with lower probability of drop out. The productivity loss at national level due to treatment drop out (consequently, due to CD4 cell count decline) can reach US$365 million (using average wage). first, as lower CD 4 cell count was associated with higher probability of drop out, we recommend (to optimize) early ARV initiation at a higher CD 4 cell count, involving scaling up HIV service at the community level. Second, family/peer support should be further emphasized to further ensure treatment success. Third, dropping out from ART will result in a relatively large productivity loss.
Hierarchical winner-take-all particle swarm optimization social network for neural model fitting.
Coventry, Brandon S; Parthasarathy, Aravindakshan; Sommer, Alexandra L; Bartlett, Edward L
2017-02-01
Particle swarm optimization (PSO) has gained widespread use as a general mathematical programming paradigm and seen use in a wide variety of optimization and machine learning problems. In this work, we introduce a new variant on the PSO social network and apply this method to the inverse problem of input parameter selection from recorded auditory neuron tuning curves. The topology of a PSO social network is a major contributor to optimization success. Here we propose a new social network which draws influence from winner-take-all coding found in visual cortical neurons. We show that the winner-take-all network performs exceptionally well on optimization problems with greater than 5 dimensions and runs at a lower iteration count as compared to other PSO topologies. Finally we show that this variant of PSO is able to recreate auditory frequency tuning curves and modulation transfer functions, making it a potentially useful tool for computational neuroscience models.
Performance of the Tachyon Time-of-Flight PET Camera
NASA Astrophysics Data System (ADS)
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 ×25 mm2 side of 6.15 ×6.15 ×25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
Performance of the Tachyon Time-of-Flight PET Camera.
Peng, Q; Choong, W-S; Vu, C; Huber, J S; Janecek, M; Wilson, D; Huesman, R H; Qi, Jinyi; Zhou, Jian; Moses, W W
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm 2 side of 6.15 × 6.15 × 25 mm 3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
Performance of the Tachyon Time-of-Flight PET Camera
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-01-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon’s detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/− ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057
Performance of the Tachyon Time-of-Flight PET Camera
Peng, Q.; Choong, W. -S.; Vu, C.; ...
2015-01-23
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 ×25 mm 2 side of 6.15 ×6.15 ×25 mm 3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according tomore » the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. We find that the results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.« less
Wang, Fang; Jia, Jin-Song; Wang, Jing; Zhao, Ting; Jiang, Qian; Jiang, Hao; Zhu, Hong-Hu
2017-10-01
We aimed to compare the kinetics of white blood cell (WBC) and explore predictive factors of leukocytosis in non-high-risk acute promyelocytic leukemia (APL), with oral arsenic plus all-trans retinoic acid (ATRA) or intravenous arsenic trioxide (ATO) plus ATRA as a first-line treatment. The absolute count, doubling time and peak time of WBC were analyzed in 64 newly diagnosed non-high-risk APL patients who were treated with different induction regimens containing either oral Realgar-indigo naturalis formula (RIF) (n=35) or ATO (n=29). The end points were the dynamic changes of the WBC counts during induction. The time points started at day 1 and were selected over 3-day intervals for 28days. Among the 64 included patients, the median initial and peak WBC counts were 1.78×10 9 /L (range 0.31-9.89) and 12.16×10 9 /L (range 1.56-80.01), respectively. The incidence of differentiation syndrome was 9.38%. The dynamic changes in leukocytosis showed a single peak wave in all the patients, and the median time to peak was 10 (range 2-26) days. A higher WBC count was observed in the RIF group than in the ATO group after 10days of treatment (9.22×10 9 /L vs. 4.10×10 9 /L, p=0.015). Patients with the peak WBC count >10×10 9 /L had a shorter WBC doubling time compared to patients with a lower peak WBC (RIF group 4days vs. 7days, p=0.001; ATO group 4.5days vs. 23days, p=0.002). Univariate and multivariable analyses showed that the doubling time of WBC is an independent factor for the peak WBC count. Different kinetics of WBC proliferation were observed during induction with oral arsenic plus ATRA and ATO plus ATRA. The doubling time of WBC is an important independent factor for predicting the peak WBC count. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reduction of energy intake using just-in-time feedback from a wearable sensor system.
Farooq, Muhammad; McCrory, Megan A; Sazonov, Edward
2017-04-01
This work explored the potential use of a wearable sensor system for providing just-in-time (JIT) feedback on the progression of a meal and tested its ability to reduce the total food mass intake. Eighteen participants consumed three meals each in a lab while monitored by a wearable sensor system capable of accurately tracking chew counts. The baseline visit was used to establish the self-determined ingested mass and the associated chew counts. Real-time feedback on chew counts was provided in the next two visits, during which the target chew count was either the same as that at baseline or the baseline chew count reduced by 25% (in randomized order). The target was concealed from the participant and from the experimenter. Nonparametric repeated-measures ANOVAs were performed to compare mass of intake, meal duration, and ratings of hunger, appetite, and thirst across three meals. JIT feedback targeting a 25% reduction in chew counts resulted in a reduction in mass and energy intake without affecting perceived hunger or fullness. JIT feedback on chewing behavior may reduce intake within a meal. This system can be further used to help develop individualized strategies to provide JIT adaptive interventions for reducing energy intake. © 2017 The Obesity Society.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently such as Winter Wren and Acadian Flycatcher had high detection probabilities (about 90%) and species that call infrequently such as Pileated Woodpecker had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of the bird abundance, density, and population trends.
The determination of the pulse pile-up reject (PUR) counting for X and gamma ray spectrometry
NASA Astrophysics Data System (ADS)
Karabıdak, S. M.; Kaya, S.
2017-02-01
The collection the charged particles produced by the incident radiation on a detector requires a time interval. If this time interval is not sufficiently short compared with the peaking time of the amplifier, a loss in the recovered signal amplitude occurs. Another major constraint on the throughput of modern x or gamma-ray spectrometers is the time required for the subsequent the pulse processing by the electronics. Two above-mentioned limitations are cause of counting losses resulting from the dead time and the pile-up. The pulse pile-up is a common problem in x and gamma ray radiation detection systems. The pulses pile-up in spectroscopic analysis can cause significant errors. Therefore, inhibition of these pulses is a vital step. A way to reduce errors due to the pulse pile-up is a pile-up inspection circuitry (PUR). Such a circuit rejects some of the pulse pile-up. Therefore, this circuit leads to counting losses. Determination of these counting losses is an important problem. In this work, a new method is suggested for the determination of the pulse pile-up reject.
29 CFR 778.318 - Productive and nonproductive hours of work.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Special Problems Effect of Failure to Count Or Pay for Certain Working Hours § 778.318 Productive and... Act; such nonproductive working hours must be counted and paid for. (b) Compensation payable for... which such nonproductive hours are properly counted as working time but no special hourly rate is...
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
Pulse shaping circuit for active counting of superheated emulsion
NASA Astrophysics Data System (ADS)
Murai, Ikuo; Sawamura, Teruko
2005-08-01
A pulse shaping circuit for active counting of superheated emulsions is described. A piezoelectric transducer is used for sensing bubble formation acoustically and the acoustic signal is transformed to a shaping pulse for counting. The circuit has a short signal processing time in the order of 10 ms.
TIME-INTERVAL MEASURING DEVICE
Gross, J.E.
1958-04-15
An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained and means for counting the pulses before and after each stage of delay during the time interval whereby a plurality of totals is obtained which may be averaged and multplied by the pulse period to obtain an accurate time- Interval measurement.
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In themore » absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.« less
NASA Astrophysics Data System (ADS)
Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.
2002-07-01
High precision concerning the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was executed with special dose calibration equipment (dosimeter) as function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable will be discussed. The dead time depends on settings of the front-end electronics and is a function of signal height, which might lead to systematic defects of systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
Adams, Robert; Zboray, Robert; Cortesi, Marco; Prasser, Horst-Michael
2014-04-01
A conceptual design optimization of a fast neutron tomography system was performed. The system is based on a compact deuterium-deuterium fast neutron generator and an arc-shaped array of individual neutron detectors. The array functions as a position sensitive one-dimensional detector allowing tomographic reconstruction of a two-dimensional cross section of an object up to 10 cm across. Each individual detector is to be optically isolated and consists of a plastic scintillator and a Silicon Photomultiplier for measuring light produced by recoil protons. A deterministic geometry-based model and a series of Monte Carlo simulations were used to optimize the design geometry parameters affecting the reconstructed image resolution. From this, it is expected that with an array of 100 detectors a reconstructed image resolution of ~1.5mm can be obtained. Other simulations were performed in order to optimize the scintillator depth (length along the neutron path) such that the best ratio of direct to scattered neutron counts is achieved. This resulted in a depth of 6-8 cm and an expected detection efficiency of 33-37%. Based on current operational capabilities of a prototype neutron generator being developed at the Paul Scherrer Institute, planned implementation of this detector array design should allow reconstructed tomograms to be obtained with exposure times on the order of a few hours. Copyright © 2014 Elsevier Ltd. All rights reserved.
Uchiyama, Kimio; Yamada, Manabu; Tamate, Shusuke; Iwasaki, Konomi; Mitomo, Keisuke; Nakayama, Seiichi
2015-09-01
The time for the neutrophil count to recover after subcutaneous injection of filgrastim BS1 or lenograstim was studied in patients suffering from neutropenia following preoperative combined chemotherapy using docetaxel, nedaplatin, or cisplatin (in divided doses for 5 days)and 5-fluorouracil for oral cancer. 1. There was no significant difference in the minimum leukocyte and neutrophil counts after chemotherapy. 2. There was no significant difference in the maximum leukocyte and neutrophil counts after chemotherapy. 3. Time for leukocytes to recover from their minimum count(>4,000/mm3)or for neutrophils to recover from their minimum count(>2,000/mm3)and the number of days on which treatment was administered tended to be shorter in the filgrastim BS1 group. Thus, it was concluded that filgrastim BS1 is just as effective as other prior G-CSF agents in treating patients suffering from neutropenia following chemotherapy(TPF therapy).
Grigorov, Boyan; Rabilloud, Jessica; Lawrence, Philip; Gerlier, Denis
2011-01-01
Background Measles virus (MV) is a member of the Paramyxoviridae family and an important human pathogen causing strong immunosuppression in affected individuals and a considerable number of deaths worldwide. Currently, measles is a re-emerging disease in developed countries. MV is usually quantified in infectious units as determined by limiting dilution and counting of plaque forming unit either directly (PFU method) or indirectly from random distribution in microwells (TCID50 method). Both methods are time-consuming (up to several days), cumbersome and, in the case of the PFU assay, possibly operator dependent. Methods/Findings A rapid, optimized, accurate, and reliable technique for titration of measles virus was developed based on the detection of virus infected cells by flow cytometry, single round of infection and titer calculation according to the Poisson's law. The kinetics follow up of the number of infected cells after infection with serial dilutions of a virus allowed estimation of the duration of the replication cycle, and consequently, the optimal infection time. The assay was set up to quantify measles virus, vesicular stomatitis virus (VSV), and human immunodeficiency virus type 1 (HIV-1) using antibody labeling of viral glycoprotein, virus encoded fluorescent reporter protein and an inducible fluorescent-reporter cell line, respectively. Conclusion Overall, performing the assay takes only 24–30 hours for MV strains, 12 hours for VSV, and 52 hours for HIV-1. The step-by-step procedure we have set up can be, in principle, applicable to accurately quantify any virus including lentiviral vectors, provided that a virus encoded gene product can be detected by flow cytometry. PMID:21915289
Heinz, M G; Colburn, H S; Carney, L H
2001-10-01
The perceptual significance of the cochlear amplifier was evaluated by predicting level-discrimination performance based on stochastic auditory-nerve (AN) activity. Performance was calculated for three models of processing: the optimal all-information processor (based on discharge times), the optimal rate-place processor (based on discharge counts), and a monaural coincidence-based processor that uses a non-optimal combination of rate and temporal information. An analytical AN model included compressive magnitude and level-dependent-phase responses associated with the cochlear amplifier, and high-, medium-, and low-spontaneous-rate (SR) fibers with characteristic frequencies (CFs) spanning the AN population. The relative contributions of nonlinear magnitude and nonlinear phase responses to level encoding were compared by using four versions of the model, which included and excluded the nonlinear gain and phase responses in all possible combinations. Nonlinear basilar-membrane (BM) phase responses are robustly encoded in near-CF AN fibers at low frequencies. Strongly compressive BM responses at high frequencies near CF interact with the high thresholds of low-SR AN fibers to produce large dynamic ranges. Coincidence performance based on a narrow range of AN CFs was robust across a wide dynamic range at both low and high frequencies, and matched human performance levels. Coincidence performance based on all CFs demonstrated the "near-miss" to Weber's law at low frequencies and the high-frequency "mid-level bump." Monaural coincidence detection is a physiologically realistic mechanism that is extremely general in that it can utilize AN information (average-rate, synchrony, and nonlinear-phase cues) from all SR groups.
Incorporating availability for detection in estimates of bird abundance
Diefenbach, D.R.; Marshall, M.R.; Mattice, J.A.; Brauning, D.W.
2007-01-01
Several bird-survey methods have been proposed that provide an estimated detection probability so that bird-count statistics can be used to estimate bird abundance. However, some of these estimators adjust counts of birds observed by the probability that a bird is detected and assume that all birds are available to be detected at the time of the survey. We marked male Henslow's Sparrows (Ammodramus henslowii) and Grasshopper Sparrows (A. savannarum) and monitored their behavior during May-July 2002 and 2003 to estimate the proportion of time they were available for detection. We found that the availability of Henslow's Sparrows declined in late June to <10% for 5- or 10-min point counts when a male had to sing and be visible to the observer; but during 20 May-19 June, males were available for detection 39.1% (SD = 27.3) of the time for 5-min point counts and 43.9% (SD = 28.9) of the time for 10-min point counts (n = 54). We detected no temporal changes in availability for Grasshopper Sparrows, but estimated availability to be much lower for 5-min point counts (10.3%, SD = 12.2) than for 10-min point counts (19.2%, SD = 22.3) when males had to be visible and sing during the sampling period (n = 80). For distance sampling, we estimated the availability of Henslow's Sparrows to be 44.2% (SD = 29.0) and the availability of Grasshopper Sparrows to be 20.6% (SD = 23.5). We show how our estimates of availability can be incorporated in the abundance and variance estimators for distance sampling and modify the abundance and variance estimators for the double-observer method. Methods that directly estimate availability from bird counts but also incorporate detection probabilities need further development and will be important for obtaining unbiased estimates of abundance for these species.
Rain volume estimation over areas using satellite and radar data
NASA Technical Reports Server (NTRS)
Doneaud, A. A.; Vonderhaar, T. H.
1985-01-01
An investigation of the feasibility of rain volume estimation using satellite data following a technique recently developed with radar data called the Arera Time Integral was undertaken. Case studies were selected on the basis of existing radar and satellite data sets which match in space and time. Four multicell clusters were analyzed. Routines for navigation remapping amd smoothing of satellite images were performed. Visible counts were normalized for solar zenith angle. A radar sector of interest was defined to delineate specific radar echo clusters for each radar time throughout the radar echo cluster lifetime. A satellite sector of interest was defined by applying small adjustments to the radar sector using a manual processing technique. The radar echo area, the IR maximum counts and the IR counts matching radar echo areas were found to evolve similarly, except for the decaying phase of the cluster where the cirrus debris keeps the IR counts high.
45 CFR 400.220 - Counting time-eligibility of refugees.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 2 2010-10-01 2010-10-01 false Counting time-eligibility of refugees. 400.220 Section 400.220 Public Welfare Regulations Relating to Public Welfare OFFICE OF REFUGEE RESETTLEMENT, ADMINISTRATION FOR CHILDREN AND FAMILIES, DEPARTMENT OF HEALTH AND HUMAN SERVICES REFUGEE RESETTLEMENT PROGRAM...
Liu, Yanhong; Kong, Xiangyi; Wang, Wen; Fan, Fangfang; Zhang, Yan; Zhao, Min; Wang, Yi; Wang, Yupeng; Wang, Yu; Qin, Xianhui; Tang, Genfu; Wang, Binyan; Xu, Xiping; Hou, Fan Fan; Gao, Wei; Sun, Ningling; Li, Jianping; Venners, Scott A; Jiang, Shanqun; Huo, Yong
2017-01-01
The aim of the present study was to examine the association between peripheral differential leukocyte counts and dyslipidemia in a Chinese hypertensive population. A total of 10,866 patients with hypertension were enrolled for a comprehensive assessment of cardiovascular risk factors using data from the China Stroke Primary Prevention Trial. Plasma lipid levels and total leukocyte, neutrophil, and lymphocyte counts were determined according to standard methods. Peripheral differential leukocyte counts were consistently and positively associated with serum total cholesterol (TC), LDL cholesterol (LDL-C), and TG levels (all P < 0.001 for trend), while inversely associated with HDL cholesterol levels (P < 0.05 for trend). In subsequent analyses where serum lipids were dichotomized (dyslipidemia/normolipidemia), we found that patients in the highest quartile of total leukocyte count (≥7.6 × 10 9 cells/l) had 1.64 times the risk of high TG [95% confidence interval (CI): 1.46, 1.85], 1.34 times the risk of high TC (95% CI: 1.20, 1.50), and 1.24 times the risk of high LDL-C (95% CI: 1.12, 1.39) compared with their counterparts in the lowest quartile of total leukocyte count. Similar patterns were also observed with neutrophils and lymphocytes. In summary, these findings indicate that elevated differential leukocyte counts are directly associated with serum lipid levels and increased odds of dyslipidemia. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
Fully integrated sub 100ps photon counting platform
NASA Astrophysics Data System (ADS)
Buckley, S. J.; Bellis, S. J.; Rosinger, P.; Jackson, J. C.
2007-02-01
Current state of the art high resolution counting modules, specifically designed for high timing resolution applications, are largely based on a computer card format. This has tended to result in a costly solution that is restricted to the computer it resides in. We describe a four channel timing module that interfaces to a computer via a USB port and operates with a resolution of less than 100 picoseconds. The core design of the system is an advanced field programmable gate array (FPGA) interfacing to a precision time interval measurement module, mass memory block and a high speed USB 2.0 serial data port. The FPGA design allows the module to operate in a number of modes allowing both continuous recording of photon events (time-tagging) and repetitive time binning. In time-tag mode the system reports, for each photon event, the high resolution time along with the chronological time (macro time) and the channel ID. The time-tags are uploaded in real time to a host computer via a high speed USB port allowing continuous storage to computer memory of up to 4 millions photons per second. In time-bin mode, binning is carried out with count rates up to 10 million photons per second. Each curve resides in a block of 128,000 time-bins each with a resolution programmable down to less than 100 picoseconds. Each bin has a limit of 65535 hits allowing autonomous curve recording until a bin reaches the maximum count or the system is commanded to halt. Due to the large memory storage, several curves/experiments can be stored in the system prior to uploading to the host computer for analysis. This makes this module ideal for integration into high timing resolution specific applications such as laser ranging and fluorescence lifetime imaging using techniques such as time correlated single photon counting (TCSPC).
Counter tube window and X-ray fluorescence analyzer study
NASA Technical Reports Server (NTRS)
Hertel, R.; Holm, M.
1973-01-01
A study was performed to determine the best design tube window and X-ray fluorescence analyzer for quantitative analysis of Venusian dust and condensates. The principal objective of the project was to develop the best counter tube window geometry for the sensing element of the instrument. This included formulation of a mathematical model of the window and optimization of its parameters. The proposed detector and instrument has several important features. The instrument will perform a near real-time analysis of dust in the Venusian atmosphere, and is capable of measuring dust layers less than 1 micron thick. In addition, wide dynamic measurement range will be provided to compensate for extreme variations in count rates. An integral pulse-height analyzer and memory accumulate data and read out spectra for detail computer analysis on the ground.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Jean-Pierre L. Savard; Tracey D. Hooper
1995-01-01
We examine the effect of survey length and radius on the results of point count surveys for grassland birds at Williams Lake, British Columbia. Four- and 8-minute counts detected on average 68 percent and 85 percent of the number of birds detected during 12-minute counts. The most efficient sampling duration was 4 minutes, as long as travel time between points was...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koskela, Tuomas S.; Lobet, Mathieu; Deslippe, Jack
In this session we show, in two case studies, how the roofline feature of Intel Advisor has been utilized to optimize the performance of kernels of the XGC1 and PICSAR codes in preparation for Intel Knights Landing architecture. The impact of the implemented optimizations and the benefits of using the automatic roofline feature of Intel Advisor to study performance of large applications will be presented. This demonstrates an effective optimization strategy that has enabled these science applications to achieve up to 4.6 times speed-up and prepare for future exascale architectures. # Goal/Relevance of Session The roofline model [1,2] is amore » powerful tool for analyzing the performance of applications with respect to the theoretical peak achievable on a given computer architecture. It allows one to graphically represent the performance of an application in terms of operational intensity, i.e. the ratio of flops performed and bytes moved from memory in order to guide optimization efforts. Given the scale and complexity of modern science applications, it can often be a tedious task for the user to perform the analysis on the level of functions or loops to identify where performance gains can be made. With new Intel tools, it is now possible to automate this task, as well as base the estimates of peak performance on measurements rather than vendor specifications. The goal of this session is to demonstrate how the roofline feature of Intel Advisor can be used to balance memory vs. computation related optimization efforts and effectively identify performance bottlenecks. A series of typical optimization techniques: cache blocking, structure refactoring, data alignment, and vectorization illustrated by the kernel cases will be addressed. # Description of the codes ## XGC1 The XGC1 code [3] is a magnetic fusion Particle-In-Cell code that uses an unstructured mesh for its Poisson solver that allows it to accurately resolve the edge plasma of a magnetic fusion device. After recent optimizations to its collision kernel [4], most of the computing time is spent in the electron push (pushe) kernel, where these optimization efforts have been focused. The kernel code scaled well with MPI+OpenMP but had almost no automatic compiler vectorization, in part due to indirect memory addresses and in part due to low trip counts of low-level loops that would be candidates for vectorization. Particle blocking and sorting have been implemented to increase trip counts of low-level loops and improve memory locality, and OpenMP directives have been added to vectorize compute-intensive loops that were identified by Advisor. The optimizations have improved the performance of the pushe kernel 2x on Haswell processors and 1.7x on KNL. The KNL node-for-node performance has been brought to within 30% of a NERSC Cori phase I Haswell node and we expect to bridge this gap by reducing the memory footprint of compute intensive routines to improve cache reuse. ## PICSAR is a Fortran/Python high-performance Particle-In-Cell library targeting at MIC architectures first designed to be coupled with the PIC code WARP for the simulation of laser-matter interaction and particle accelerators. PICSAR also contains a FORTRAN stand-alone kernel for performance studies and benchmarks. A MPI domain decomposition is used between NUMA domains and a tile decomposition (cache-blocking) handled by OpenMP has been added for shared-memory parallelism and better cache management. The so-called current deposition and field gathering steps that compose the PIC time loop constitute major hotspots that have been rewritten to enable more efficient vectorization. Particle communications between tiles and MPI domain has been merged and parallelized. All considered, these improvements provide speedups of 3.1 for order 1 and 4.6 for order 3 interpolation shape factors on KNL configured in SNC4 quadrant flat mode. Performance is similar between a node of cori phase 1 and KNL at order 1 and better on KNL by a factor 1.6 at order 3 with the considered test case (homogeneous thermal plasma).« less
NASA Astrophysics Data System (ADS)
Degnan, J. J.
2002-05-01
We have recently demonstrated a scanning, photon-counting, laser altimeter, which is capable of daylight operations from aircraft cruise altitudes. The instrument measures the times-of-flight of individual photons to deduce the distances between the instrument reference and points on the underlying terrain from which the arriving photons were reflected. By imaging the terrain onto a highly pixellated detector followed by a multi-channel timing receiver, one can make multiple spatially-resolved measurements to the surface within a single laser pulse. The horizontal spatial resolution is limited by the optical projection of a single pixel onto the surface. In short, a 3D image of the terrain within the laser ground spot is obtained on each laser fire, assuming at least one signal photon is recorded by each pixel.. In test flights, a prototype airborne system has successfully recorded few kHz rate, single photon returns from clouds, soils, man-made objects, vegetation, and water surfaces at mid-day under conditions of maximum solar illumination. The system has also demonstrated a capability to resolve volumetrically distributed targets, such as tree canopies, and has performed wave height measurements and shallow water bathymetry over the Chesapeake Bay and Atlantic Ocean. The signal photons were reliably extracted from the solar noise background using an optimized Post-Detection Poisson Filter. The passively Q-switched microchip Nd:YAG laser transmitter measures only 2.25 mm in length and is pumped by a single 1.2 Watt laser diode. The output is frequency-doubled to take advantage of higher detector counting efficiencies and narrower spectral filters available at 532 nm. The transmitter produces a few microjoules of green energy in a subnanosecond pulse at several kilohertz rates. The illuminated ground area is imaged by a 14 cm diameter, diffraction-limited, off-axis telescope onto a segmented anode photomultiplier with up to 16 pixels (4 x4). Each anode segment is input to one channel of "fine" range receiver (5 cm detector-limited resolution), which records the times-of-flight of the individual photons. A parallel "coarse" receiver provides a lower resolution (>75 cm) histogram of atmospheric scatterers between the aircraft and ground and centers the "fine" receiver gate on the last set of returns, permitting the fine receiver to lock onto ground features with no a priori range knowledge. Many scientists have expressed a desire for globally contiguous maps of planetary bodies with few meter horizontal spatial resolutions and decimeter vertical resolutions. By sequentially overcoming various technical hurdles to globally contiguous mapping from space, we are led to a conceptual point design for a spaceborne, 3D imaging lidar, which utilizes low energy, high repetition rate lasers, photon-counting detector arrays, multi-channel timing receivers, and a unique optical scanner.
Maestro, M. Luisa; Gómez-España, Auxiliadora; Rivera, Fernando; Valladares, Manuel; Massuti, Bartomeu; Benavides, Manuel; Gallén, Manuel; Marcuello, Eugenio; Abad, Albert; Arrivi, Antonio; Fernández-Martos, Carlos; González, Encarnación; Tabernero, Josep M.; Vidaurreta, Marta; Aranda, Enrique; Díaz-Rubio, Eduardo
2012-01-01
Background. The Maintenance in Colorectal Cancer trial was a phase III study to assess maintenance therapy with single-agent bevacizumab versus bevacizumab plus chemotherapy in patients with metastatic colorectal cancer. An ancillary study was conducted to evaluate the circulating tumor cell (CTC) count as a prognostic and/or predictive marker for efficacy endpoints. Patients and Methods. One hundred eighty patients were included. Blood samples were obtained at baseline and after three cycles. CTC enumeration was carried out using the CellSearch® System (Veridex LLC, Raritan, NJ). Computed tomography scans were performed at cycle 3 and 6 and every 12 weeks thereafter for tumor response assessment. Results. The median progression-free survival (PFS) interval for patients with a CTC count ≥3 at baseline was 7.8 months, versus the 12.0 months achieved by patients with a CTC count <3 (p = .0002). The median overall survival (OS) time was 17.7 months for patients with a CTC count ≥3, compared with 25.1 months for patients with a lower count (p = .0059). After three cycles, the median PFS interval for patients with a low CTC count was 10.8 months, significantly longer than the 7.5 months for patients with a high CTC count (p = .005). The median OS time for patients with a CTC count <3 was significantly longer than for patients with a CTC count ≥3, 25.1 months versus 16.2 months, respectively (p = .0095). Conclusions. The CTC count is a strong prognostic factor for PFS and OS outcomes in metastatic colorectal cancer patients. PMID:22643538
Reyes, Mayra I; Pérez, Cynthia M; Negrón, Edna L
2008-03-01
Consumers increasingly use bottled water and home water treatment systems to avoid direct tap water. According to the International Bottled Water Association (IBWA), an industry trade group, 5 billion gallons of bottled water were consumed by North Americans in 2001. The principal aim of this study was to assess the microbial quality of in-house and imported bottled water for human consumption, by measurement and comparison of the concentration of bacterial endotoxin and standard cultivable methods of indicator microorganisms, specifically, heterotrophic and fecal coliform plate counts. A total of 21 brands of commercial bottled water, consisting of 10 imported and 11 in-house brands, selected at random from 96 brands that are consumed in Puerto Rico, were tested at three different time intervals. The Standard Limulus Amebocyte Lysate test, gel clot method, was used to measure the endotoxin concentrations. The minimum endotoxin concentration in 63 water samples was less than 0.0625 EU/mL, while the maximum was 32 EU/mL. The minimum bacterial count showed no growth, while the maximum was 7,500 CFU/mL. Bacterial isolates like P. fluorescens, Corynebacterium sp. J-K, S. paucimobilis, P. versicularis, A. baumannii, P. chlororaphis, F. indologenes, A. faecalis and P. cepacia were identified. Repeated measures analysis of variance demonstrated that endotoxin concentration did not change over time, while there was a statistically significant (p < 0.05) decrease in bacterial count over time. In addition, multiple linear regression analysis demonstrated that a unit change in the concentration of endotoxin across time was associated with a significant (p < 0.05) reduction in the bacteriological cell count. This analysis evidenced a significant time effect in the average log bacteriological cell count. Although bacterial growth was not detected in some water samples, endotoxin was present. Measurement of Gram-negative bacterial endotoxins is one of the methods that have been suggested as a rapid way of determining bacteriological water quality.
A new approach for measuring the work and quality of histopathology reporting.
Sharma, Vijay; Davey, Jonathan G N; Humphreys, Catherine; Johnston, Peter W
2013-07-01
Cancer datasets drive report quality, but require more work to inform compliant reports. The aim of this study was to correlate the number of words with measures of quality, to examine the impact of the drive for improved quality on the workload of histopathology reporting over time. We examined the first 10 reports of colon, breast, renal, lung and ovarian carcinoma, melanoma resection, nodal lymphoma appendicitis and seborrhoeic keratosis (SK) issued in 1991, 2001 and 2011. Correlations were analysed using Pearson's partial correlation coefficients. Word count increased significantly over time for most specimen types examined. Word count almost always correlated with units of information, indicating that the word count was a good measure of the amount of information contained within the reports; this correlation was preserved following correction for the effect of time. A good correlation with compliance with cancer datasets was also observed, but was weakened or lost following correction for the increase in word count and units of information that occurred between time points. These data indicate that word count could potentially be used as a measure of information content if its integrity and usefulness are continuously validated. Further prospective studies are required to assess and validate this approach. © 2013 John Wiley & Sons Ltd.
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain enormous number of zeroes due to the presence of excessive zero counts in majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
Mallick, Himel; Tiwari, Hemant K
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain enormous number of zeroes due to the presence of excessive zero counts in majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.
Cust, Anne E.; Pickles, Kristen M.; Goumas, Chris; Vu, Thao; Schmid, Helen; Nagore, Eduardo; Kelly, John; Aitken, Joanne F.; Giles, Graham G.; Hopper, John L.; Jenkins, Mark A.; Mann, Graham J.
2015-01-01
Background Awareness of individual risk may encourage improved prevention and early detection of melanoma. Methods We evaluated the accuracy of self-reported pigmentation and nevus phenotype compared to clinical assessment, and examined agreement between nevus counts from selected anatomical regions. The sample included 456 cases with invasive cutaneous melanoma diagnosed between ages 18-39 years and 538 controls from the population-based Australian Melanoma Family Study. Participants completed a questionnaire regarding their pigmentation and nevus phenotype, and attended a dermatologic skin examination. Results There was strong agreement between self-reported and clinical assessment of eye color (kappa, κ, =0.78, 95% confidence interval (CI) 0.74-0.81); and moderate agreement for hair color (κ =0.46, 95% CI 0.42-0.50). Agreement between self-reported skin color and spectrophotometer-derived measurements was poor (κ =0.12, 95% CI 0.08-0.16) to moderate (Spearman correlation rs=-0.37, 95% CI -0.32- to -0.42). Participants tended to under-estimate their nevus counts and pigmentation; men were more likely to under-report their skin color. The rs was 0.43 (95% CI 0.38-0.49) comparing clinical total body nevus counts with self-reported nevus categories. There was good agreement of quartile distributions of total body nevus counts with site-specific nevus counts, particularly on both arms. Conclusions Young adults have sub-optimal accuracy when assessing important risk characteristics including nevus numbers and pigmentation. Measuring nevus count on the arms is a good predictor of full body nevus count. Impact These results have implications for the likely success of targeted public health programs that rely on self-assessment of these factors. PMID:25628333