Analysis of general power counting rules in effective field theory
Gavela, Belen; Jenkins, Elizabeth E.; Manohar, Aneesh V.; ...
2016-09-02
We derive the general counting rules for a quantum effective field theory (EFT) in d dimensions. The rules are valid for strongly and weakly coupled theories, and they predict that all kinetic energy terms are canonically normalized. They determine the energy dependence of scattering cross sections in the range of validity of the EFT expansion. We show that the size of the cross sections is controlled by the Λ power counting of EFT, not by chiral counting, even for chiral perturbation theory (χPT). The relation between Λ and f is generalized to d dimensions. We show that the naive dimensional analysis 4π counting is related to ℏ counting. The EFT counting rules are applied to χPT, low-energy weak interactions, Standard Model EFT and the non-trivial case of Higgs EFT.
Kathryn L. Purcell; Sylvia R. Mori; Mary K. Chase
2005-01-01
We used data from two oak-woodland sites in California to develop guidelines for the design of bird monitoring programs using point counts. We used power analysis to determine sample size adequacy when varying the number of visits, count stations, and years for examining trends in abundance. We assumed an overdispersed Poisson distribution for count data, with...
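The design question described here, how many visits, stations, and years are needed to detect a trend in overdispersed counts, lends itself to simulation. Below is a minimal sketch under assumed parameters (the mean count, dispersion, and trend size are illustrative, not the study's values), using a negative binomial to represent an overdispersed Poisson:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def trend_power(n_stations=30, n_years=10, n_visits=2, mean_count=5.0,
                annual_change=-0.03, dispersion=2.0, alpha=0.05, n_sim=1000):
    """Simulated power to detect a log-linear trend in overdispersed
    (negative binomial) point counts."""
    years = np.arange(n_years)
    mu = mean_count * np.exp(np.log(1 + annual_change) * years)
    p = dispersion / (dispersion + mu)           # NB success probability per year
    rejections = 0
    for _ in range(n_sim):
        counts = rng.negative_binomial(dispersion, p[None, :, None],
                                       size=(n_stations, n_years, n_visits))
        yearly = counts.mean(axis=(0, 2))        # mean count per year
        result = stats.linregress(years, np.log(yearly + 0.5))
        rejections += result.pvalue < alpha
    return rejections / n_sim

print(trend_power())  # e.g. power for a -3%/yr trend over 10 years
```

Varying `n_visits`, `n_stations`, and `n_years` in this sketch mirrors the sample-size trade-offs the study examines.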
Power counting and Wilsonian renormalization in nuclear effective field theory
NASA Astrophysics Data System (ADS)
Valderrama, Manuel Pavón
2016-05-01
Effective field theories are the most general tool for the description of low energy phenomena. They are universal and systematic: they can be formulated for any low energy systems we can think of and offer a clear guide on how to calculate predictions with reliable error estimates, a feature that is called power counting. These properties can be easily understood in Wilsonian renormalization, in which effective field theories are the low energy renormalization group evolution of a more fundamental — perhaps unknown or unsolvable — high energy theory. In nuclear physics they provide the possibility of a theoretically sound derivation of nuclear forces without having to solve quantum chromodynamics explicitly. However there is the problem of how to organize calculations within nuclear effective field theory: the traditional knowledge about power counting is perturbative but nuclear physics is not. Yet power counting can be derived in Wilsonian renormalization and there is already a fairly good understanding of how to apply these ideas to non-perturbative phenomena and in particular to nuclear physics. Here we review a few of these ideas, explain power counting in two-nucleon scattering and reactions with external probes and hint at how to extend the present analysis beyond the two-body problem.
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsmore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
Löffler, Christian; Sattler, Horst; Peters, Lena; Tuleweit, Anika; Löffler, Uta; Wadsack, Daniel; Uppenkamp, Michael; Bergner, Raoul
2016-10-01
Power Doppler ultrasound (PDUS) is used to assess joint vascularity in acute arthritis. PDUS signals have been correlated with synovial histology and bone deterioration, but little is known about the correlation between power Doppler signals and synovial white blood count. In our study, we analyzed power Doppler signals in inflammatory joint diseases including gout, calcium pyrophosphate deposition disease, rheumatoid arthritis, spondyloarthritis and others, and correlated power Doppler signals with synovial white blood count and with serologic markers of inflammation. We retrospectively evaluated 194 patients with arthritis. All patients underwent joint sonography, power Doppler ultrasound, synovial fluid analysis and blood examination of C-reactive protein and erythrocyte sedimentation rate. Correlation analyses (Spearman and Pearson), chi-squared tests, t-tests, a unifactorial ANOVA and regression analyses were applied. Hypervascularisation in power Doppler was most prominent in gout and calcium pyrophosphate deposition disease. Spondyloarthritis and non-inflammatory joint diseases presented with low degrees of hypervascularisation. Mean synovial white blood count did not differ significantly between crystal-related arthritides, rheumatoid arthritis, spondyloarthritis or other inflammatory joint diseases. There was a positive but weak correlation between power Doppler signals and synovial white blood count (P<0.001, rs=0.283), erythrocyte sedimentation rate (P<0.001, rs=0.387) and C-reactive protein (P<0.001, rs=0.373) over all diagnoses. This was especially relevant in rheumatoid arthritis (P<0.01, rs=0.479). Power Doppler degrees 0 and 1 predicted synovial leukocyte counts <5/nL, while degrees 2 and 3 predicted counts ≥5/nL (P<0.001).
NASA Astrophysics Data System (ADS)
Zhang, Guoqing; Liu, Lina
2018-02-01
An ultra-fast photon counting method is proposed based on charge integration of the output electrical pulses of passive-quenching silicon photomultipliers (SiPMs). Numerical analysis with actual SiPM parameters shows that the maximum photon counting rate of a state-of-the-art passive-quenching SiPM can reach ~THz levels, far higher than that of existing photon counting devices. An experimental procedure based on this method is also proposed. This photon counting regime of SiPMs is promising in many fields, such as light power detection over a large dynamic range.
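For orientation, the charge-integration idea reduces to one line of arithmetic: if each fired microcell contributes roughly gain × e of charge, the integrated charge divided by that quantity estimates the number of detected photons. The gain and example charge below are illustrative assumptions, not values from the paper:

```python
# Minimal sketch: infer photon count from integrated SiPM charge.
E_CHARGE = 1.602e-19   # electron charge, C
GAIN = 1.0e6           # assumed SiPM gain (fired microcell -> electrons)

def photons_from_charge(integrated_charge_C):
    """Estimate the number of detected photons from the integrated output
    charge: each avalanche contributes roughly GAIN * e of charge."""
    return integrated_charge_C / (GAIN * E_CHARGE)

# e.g. 16 nC of integrated charge over the integration window:
print(photons_from_charge(16e-9))  # ~1e5 detected photons
```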
Artificial neural network-aided image analysis system for cell counting.
Sjöström, P J; Frydel, B R; Wahlberg, L U
1999-05-01
In histological preparations containing debris and synthetic materials, it is difficult to automate cell counting using standard image analysis tools, i.e., systems that rely on boundary contours, histogram thresholding, etc. In an attempt to mimic manual cell recognition, an automated cell counter was constructed using a combination of artificial intelligence and standard image analysis methods. Artificial neural network (ANN) methods were applied on digitized microscopy fields without pre-ANN feature extraction. A three-layer feed-forward network with extensive weight sharing in the first hidden layer was employed and trained on 1,830 examples using the error back-propagation algorithm on a Power Macintosh 7300/180 desktop computer. The optimal number of hidden neurons was determined and the trained system was validated by comparison with blinded human counts. System performance at 50x and 100x magnification was evaluated. The correlation index at 100x magnification neared person-to-person variability, while 50x magnification was not useful. The system was approximately six times faster than an experienced human. ANN-based automated cell counting in noisy histological preparations is feasible. Consistent histology and computer power are crucial for system performance. The system provides several benefits, such as speed of analysis and consistency, and frees up personnel for other tasks.
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power-laws of critical phenomena are derived asymptotically under the conditions of infinite observations, real world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay and, consequently, manifest as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power-law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and a publicly available spike-in miRNA dataset. First, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Second, they manifest as heteroskedasticity among experimental replicates, bringing about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution to a single exponent value can dramatically reduce data heteroskedasticity, yielding an immediate increase in signal-to-noise ratio of 50% and in statistical/detection sensitivity of as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (4 times difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth instance versus 13% for the lower instance), demonstrating that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line against AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
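As an illustration of the kind of correction described (restoring the rank-abundance curve to a single exponent), here is a hedged sketch: fit one log-log slope to the head of the distribution, where finite-size distortion is weakest, then map every count onto that single-slope line. This is a conceptual sketch, not the published recipe:

```python
import numpy as np

def powerlaw_correct(counts, fit_top=0.1):
    """Fit a single Zipf exponent to the top `fit_top` fraction of the
    rank-abundance curve (least affected by finite-size decay), then
    return counts projected onto that single-exponent line."""
    c = np.sort(np.asarray(counts, dtype=float))[::-1]
    ranks = np.arange(1, len(c) + 1)
    head = max(2, int(len(c) * fit_top))
    slope, intercept = np.polyfit(np.log(ranks[:head]), np.log(c[:head]), 1)
    corrected = np.exp(intercept) * ranks ** slope   # single-exponent counts
    return ranks, c, corrected
```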
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
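A short sketch of the recommended error measure, the standard error of estimation from a regression of log annual means against year (the log transform matching the constant-CV model described above):

```python
import numpy as np

def se_estimation(years, annual_means):
    """S.E.est: residual standard error from a regression of log annual
    mean counts on year, the error input recommended for trend power
    analysis (replicate within-year counts enter only via the means)."""
    x = np.asarray(years, dtype=float)
    y = np.log(np.asarray(annual_means, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return np.sqrt(np.sum(resid**2) / (len(x) - 2))

# toy usage with made-up annual mean counts:
print(se_estimation(range(2000, 2010),
                    [120, 115, 130, 108, 112, 98, 105, 95, 101, 90]))
```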
Cook, Richard J; Wei, Wei
2003-07-01
The design of clinical trials is typically based on marginal comparisons of a primary response under two or more treatments. The considerable gains in efficiency afforded by models conditional on one or more baseline responses have been extensively studied for Gaussian models. The purpose of this article is to present methods for the design and analysis of clinical trials in which the response is a count or a point process, and a corresponding baseline count is available prior to randomization. The methods are based on a conditional negative binomial model for the response given the baseline count and can be used to examine the effect of introducing selection criteria on power and sample size requirements. We show that designs based on this approach are more efficient than those proposed by McMahon et al. (1994).
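A hedged sketch of the kind of conditional model described: a negative binomial GLM for the response count given treatment and the baseline count. The simulated data and effect sizes are illustrative; the paper's design and sample-size formulas are not reproduced here:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
baseline = rng.poisson(10, n)                    # pre-randomization counts
treat = rng.integers(0, 2, n)                    # treatment indicator
# simulate a response that depends on baseline and treatment (assumed effects)
mu = np.exp(0.5 + 0.6 * np.log(baseline + 1) - 0.4 * treat)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

# conditional model: response given treatment and (log) baseline count
X = sm.add_constant(np.column_stack([treat, np.log(baseline + 1)]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.summary())   # treatment coefficient recovers roughly -0.4
```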
Wang, WeiBo; Sun, Wei; Wang, Wei; Szatkiewicz, Jin
2018-03-01
The application of high-throughput sequencing in a broad range of quantitative genomic assays (e.g., DNA-seq, ChIP-seq) has created a high demand for the analysis of large-scale read-count data. Typically, the genome is divided into tiling windows and windowed read-count data is generated for the entire genome, from which genomic signals are detected (e.g. copy number changes in DNA-seq, enrichment peaks in ChIP-seq). For accurate analysis of read-count data, many state-of-the-art statistical methods use generalized linear models (GLM) coupled with the negative-binomial (NB) distribution by leveraging its ability for simultaneous bias correction and signal detection. However, although statistically powerful, the GLM+NB method has a quadratic computational complexity and therefore suffers from slow running time when applied to large-scale windowed read-count data. In this study, we aimed to substantially speed up the GLM+NB method by using a randomized algorithm, and we demonstrate the utility of our approach in detecting copy number variants (CNVs) using a real example. We propose an efficient estimator, the randomized GLM+NB coefficients estimator (RGE), for speeding up the GLM+NB method. RGE samples the read-count data and solves the estimation problem on a smaller scale. We first theoretically validated the consistency and the variance properties of RGE. We then applied RGE to GENSENG, a GLM+NB based method for detecting CNVs. We named the resulting method "R-GENSENG". Based on extensive evaluation using both simulated and empirical data, we concluded that R-GENSENG is ten times faster than the original GENSENG while maintaining GENSENG's accuracy in CNV detection. Our results suggest that the RGE strategy developed here could be applied to other GLM+NB based read-count analyses, e.g. ChIP-seq data analysis, to substantially improve their computational efficiency while preserving the analytic power.
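The core of the speed-up is easy to sketch: solve the GLM+NB estimation on a random subsample of windows rather than the whole genome. The sketch below assumes simple uniform subsampling; the published RGE estimator's sampling scheme and theoretical guarantees may differ:

```python
import numpy as np
import statsmodels.api as sm

def rge_fit(y, X, frac=0.1, rng=None):
    """Sketch of a randomized coefficients estimator in the spirit of RGE:
    fit the NB GLM on a uniform random subsample of the windowed read
    counts. y: counts per window (array); X: per-window covariates."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(y), size=max(1, int(frac * len(y))), replace=False)
    model = sm.GLM(y[idx], X[idx],
                   family=sm.families.NegativeBinomial(alpha=1.0))
    return model.fit()   # coefficients approximate the full-data fit
```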
A system-level view of optimizing high-channel-count wireless biosignal telemetry.
Chandler, Rodney J; Gibson, Sarah; Karkare, Vaibhav; Farshchi, Shahin; Marković, Dejan; Judy, Jack W
2009-01-01
In this paper we perform a system-level analysis of a wireless biosignal telemetry system. We analyze each major system component (e.g., analog front end, analog-to-digital converter, digital signal processor, and wireless link), considering physical, algorithmic, and design limitations. Since there is a wide range of applications for wireless biosignal telemetry systems, each with its own unique set of requirements for key parameters (e.g., channel count, power dissipation, noise level, number of bits), our analysis is equally broad. The net result is a set of plots in which the power dissipation of each component, and of the system as a whole, is plotted as a function of the number of channels for different architectural strategies. These results are also compared to existing implementations of complete wireless biosignal telemetry systems.
Measurements and Analysis of 115 kV Power Line Noise and Its Effect on Pueblo Test Site Radio Links
DOT National Transportation Integrated Search
1972-05-01
Noise measurements were made for 115 kV power lines near the frequencies 166, 217 and 406.8 MHz with a receiver bandwidth of 1 MHz. The measurements consisted of counting the numbers of pulses per minute at preset threshold values and RMS. The variat...
Power counting to better jet observables
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2014-12-01
Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia 8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.
Li, Xiaohong; Brock, Guy N; Rouchka, Eric C; Cooper, Nigel G F; Wu, Dongfeng; O'Toole, Timothy E; Gill, Ryan S; Eteleeb, Abdallah M; O'Brien, Liz; Rai, Shesh N
2017-01-01
Normalization is an essential step with considerable impact on high-throughput RNA sequencing (RNA-seq) data analysis. Although there are numerous methods for read count normalization, it remains a challenge to choose an optimal method due to multiple factors contributing to read count variability that affects the overall sensitivity and specificity. In order to properly determine the most appropriate normalization methods, it is critical to compare the performance and shortcomings of a representative set of normalization routines based on different dataset characteristics. Therefore, we set out to evaluate the performance of the commonly used methods (DESeq, TMM-edgeR, FPKM-CuffDiff, TC, Med, UQ and FQ) and two new methods we propose: Med-pgQ2 and UQ-pgQ2 (per-gene normalization after per-sample median or upper-quartile global scaling). Our per-gene normalization approach allows for comparisons between conditions based on similar count levels. Using the benchmark Microarray Quality Control Project (MAQC) and simulated datasets, we performed differential gene expression analysis to evaluate these methods. When evaluating MAQC2 with two replicates, we observed that Med-pgQ2 and UQ-pgQ2 achieved a slightly higher area under the Receiver Operating Characteristic curve (AUC), a specificity rate > 85%, a detection power > 92% and an actual false discovery rate (FDR) under 0.06 given the nominal FDR (≤0.05). Although the top commonly used methods (DESeq and TMM-edgeR) yield a higher power (>93%) for MAQC2 data, they trade off with a reduced specificity (<70%) and a slightly higher actual FDR than our proposed methods. In addition, the results from an analysis based on the qualitative characteristics of sample distribution for MAQC2 and human breast cancer datasets show that only our gene-wise normalization methods corrected data skewed towards lower read counts. However, when we evaluated MAQC3 with less variation in five replicates, all methods performed similarly. Thus, our proposed Med-pgQ2 and UQ-pgQ2 methods perform slightly better for differential gene analysis of RNA-seq data skewed towards lowly expressed read counts with high variation by improving specificity while maintaining a good detection power with control of the nominal FDR level.
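A plausible reading of the proposed two-stage normalization, per-sample upper-quartile global scaling followed by a per-gene scaling, can be sketched as follows; details of the published Med-pgQ2/UQ-pgQ2 procedures may differ:

```python
import numpy as np

def uq_pgq2(counts):
    """Hedged sketch of UQ-pgQ2-style normalization.
    counts: genes x samples array of raw read counts (assumes every
    sample has some nonzero counts)."""
    c = np.asarray(counts, dtype=float)
    # 1) per-sample upper-quartile (Q3) global scaling over nonzero counts
    uq = np.array([np.percentile(col[col > 0], 75) for col in c.T])
    c = c / uq[None, :] * uq.mean()
    # 2) per-gene scaling so genes are compared on similar count levels
    gene_uq = np.percentile(c, 75, axis=1)
    gene_uq[gene_uq == 0] = 1.0
    return c / gene_uq[:, None]
```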
A Neutron Burst Associated with an Extensive Air Shower?
NASA Astrophysics Data System (ADS)
Alves, Mauro; Martin, Inacio; Shkevov, Rumen; Gusev, Anatoly; De Abreu, Alessandro
2016-07-01
A portable and compact system based on a He-3 tube (LND, USA; model 25311) with an area of approximately 250 cm² is used to record neutron count rates at ground level in the energy range of 0.025 eV to 10 MeV, in São José dos Campos, SP, Brazil (23° 12' 45" S, 45° 52' 00" W; altitude, 660 m). The detector, power supply, digitizer and other hardware are housed in an air-conditioned room. The detector power supply and digitizer are not connected to the main electricity network; a high-capacity 12-V battery is used to power the detector and digitizer. Neutron counts are accumulated at 1-minute intervals continuously. The data are stored in a PC for further analysis. On February 8, 2015, at 12 h 22 min (local time), during a period of fair weather with minimal cloud cover (< 1 okta), the neutron detector recorded a sharp (count rate = 27 neutrons/min) and brief (< 1 min) increase in the count rate. In the days before and after this event, the neutron count rate oscillated between 0 and 3 neutrons/min. Since the occurrence of this event is not related to spurious signals, malfunctioning equipment, oscillations in the mains voltage, etc., we are led to believe that the sharp increase was caused by a physical source such as an extensive air shower that occurred over the detector.
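A quick plausibility check of why 27 counts in one minute stands out: under an assumed Poisson background of about 1.5 counts/min (the middle of the reported 0-3 range), the tail probability is negligible:

```python
from scipy import stats

# Background: 0-3 neutrons/min; take ~1.5/min as an assumed Poisson mean.
background_rate = 1.5
burst = 27

# Probability of observing >= 27 counts in one minute from background alone:
p = stats.poisson.sf(burst - 1, background_rate)
print(f"P(N >= {burst}) = {p:.2e}")   # vanishingly small -> a real transient
```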
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
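A minimal sketch of the simulation approach described: generate daily counts from a Poisson model with a specified pollutant association, refit the GLM many times, and take the rejection fraction as the power. Confounder control is reduced to a single seasonal term here, far simpler than the study's models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

def ts_power(n_days=365, mean_visits=20.0, beta=0.01, alpha=0.05, n_sim=500):
    """Simulation-based power for a time-series Poisson GLM (illustrative
    pollutant series and simplified confounder control)."""
    t = np.arange(n_days)
    pollutant = 10 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, n_days)
    season = np.cos(2 * np.pi * t / 365)           # crude seasonal control
    X = sm.add_constant(np.column_stack([pollutant, season]))
    rejected = 0
    for _ in range(n_sim):
        mu = mean_visits * np.exp(beta * (pollutant - pollutant.mean()))
        y = rng.poisson(mu)                        # simulated daily counts
        res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        rejected += res.pvalues[1] < alpha         # pollutant coefficient
    return rejected / n_sim

print(ts_power())  # longer series or higher mean counts both raise power
```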
Motor run-up system. [power lines
NASA Technical Reports Server (NTRS)
Daeges, J. J. (Inventor)
1975-01-01
A starting system is described for bringing a large synchronous motor up to speed to prevent large power line disturbances at the moment the motor is connected to the power line. The system includes (1) a digital counter which generates a count determined by the difference in frequency between the power line and a small current generated by the synchronous motor; (2) a latch which stores the count; and (3) a comparator which compares the stored count with a newly generated count to determine whether the synchronous motor is accelerating or decelerating. Signals generated by the counter and comparator control the current to a clutch that couples a starting motor to the large synchronous motor.
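The counter/latch/comparator logic translates directly into a few lines of Python (the gate intervals and counts below are illustrative):

```python
def runup_monitor(beat_counts):
    """Sketch of the counter/latch/comparator logic: each entry in
    beat_counts is the number of difference-frequency cycles counted in
    a fixed gate interval. Comparing the new count with the latched
    previous one indicates whether the motor is accelerating toward
    synchronous speed (difference frequency falling) or decelerating."""
    latch = None
    for count in beat_counts:
        if latch is not None:
            if count < latch:
                print(f"{count} < {latch}: accelerating toward sync speed")
            elif count > latch:
                print(f"{count} > {latch}: decelerating")
            else:
                print(f"{count}: holding")
        latch = count   # latch the new count for the next comparison

runup_monitor([40, 33, 27, 22, 18, 18, 15])
```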
Colyer, R.; Siegmund, O.; Tremsin, A.; Vallerga, J.; Weiss, S.; Michalet, X.
2011-01-01
Fluorescence lifetime imaging (FLIM) is a powerful approach to studying the immediate environment of molecules. For example, it is used in biology to study changes in the chemical environment, or to study binding processes, aggregation, and conformational changes by measuring Förster resonance energy transfer (FRET) between donor and acceptor fluorophores. FLIM can be acquired by time-domain measurements (time-correlated single-photon counting) or frequency-domain measurements (with PMT modulation or digital frequency domain acquisition) in a confocal setup, or with wide-field systems (using time-gated cameras). In the best cases, the resulting data is analyzed in terms of multicomponent fluorescence lifetime decays with demanding requirements in terms of signal level (and therefore limited frame rate). Recently, the phasor approach has been proposed as a powerful alternative for fluorescence lifetime analysis of FLIM, ensemble, and single-molecule experiments. Here we discuss the advantages of combining phasor analysis with a new type of FLIM acquisition hardware presented previously, consisting of a high temporal and spatial resolution wide-field single-photon counting device (the H33D detector). Experimental data with live cells and quantum dots will be presented as an illustration of this new approach.
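For reference, the phasor coordinates of a decay histogram are just its cosine and sine transforms at the modulation frequency, normalized by total counts. A self-contained sketch (bin width, lifetime, and repetition rate are illustrative):

```python
import numpy as np

def phasor(decay, dt, f_mod):
    """Phasor coordinates (g, s) of a fluorescence decay histogram.
    decay: photon counts per time bin; dt: bin width (s);
    f_mod: modulation frequency (Hz), typically the laser repetition rate."""
    t = (np.arange(len(decay)) + 0.5) * dt
    w = 2 * np.pi * f_mod
    total = decay.sum()
    g = np.sum(decay * np.cos(w * t)) / total
    s = np.sum(decay * np.sin(w * t)) / total
    return g, s

# single-exponential check: a lifetime tau should land (approximately, given
# the finite window) on the universal semicircle at
# g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2)
tau, f = 2.5e-9, 80e6
t = np.arange(0, 12.5e-9, 10e-12)
print(phasor(np.exp(-t / tau), 10e-12, f))
```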
Extended performance electric propulsion power processor design study. Volume 2: Technical summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor and preliminary estimates of mechanical weight were determined.
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
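The threshold-and-count idea can be made concrete under an assumed distribution: if spectral power samples are exponentially distributed with mean P, the exceedance fraction at threshold T is exp(-T/P), so a single counting pass gives P ≈ T / ln(N/count), and several parallel thresholds cover a wide dynamic range. A sketch under that exponential assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def threshold_and_count(power_samples, thresholds):
    """Single-pass noise-power estimation: count samples above each of
    several parallel thresholds. For exponential power samples with mean
    P, E[count]/N = exp(-T/P), hence P ~ T / ln(N / count). Pick the
    threshold whose count is well away from 0 and N."""
    x = np.asarray(power_samples)
    n = len(x)
    estimates = {}
    for T in thresholds:
        c = np.count_nonzero(x > T)
        if 0 < c < n:                    # thresholds out of range give no estimate
            estimates[T] = T / np.log(n / c)
    return estimates

true_power = 4.0
samples = rng.exponential(true_power, 100_000)
print(threshold_and_count(samples, thresholds=[1.0, 4.0, 16.0, 64.0]))
```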
Chen, Yao-Yi; Dasari, Surendra; Ma, Ze-Qiang; Vega-Montoto, Lorenzo J; Li, Ming; Tabb, David L
2012-09-01
Spectral counting has become a widely used approach for measuring and comparing protein abundance in label-free shotgun proteomics. However, when analyzing complex samples, the ambiguity of matching between peptides and proteins greatly affects the assessment of peptide and protein inventories, differentiation, and quantification. Meanwhile, the configuration of database searching algorithms that assign peptides to MS/MS spectra may produce different results in comparative proteomic analysis. Here, we present three strategies to improve comparative proteomics through spectral counting. We show that comparing spectral counts for peptide groups rather than for protein groups forestalls problems introduced by shared peptides. We demonstrate the advantage and flexibility of this new method in two datasets. We present four models to combine four popular search engines that lead to significant gains in spectral counting differentiation. Among these models, we demonstrate a powerful vote counting model that scales well for multiple search engines. We also show that semi-tryptic searching outperforms tryptic searching for comparative proteomics. Overall, these techniques considerably improve protein differentiation on the basis of spectral count tables.
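Two of the three strategies reduce to simple tabulations. The sketch below tallies spectral counts per peptide group (sidestepping shared peptides) and applies a toy vote-count filter across engines; the paper's vote-counting model is more elaborate, and the engine names here are just example labels:

```python
from collections import defaultdict

def peptide_group_counts(psms):
    """Tally spectral counts per peptide group rather than per protein.
    psms: iterable of (peptide_group_id, engine) pairs, one per
    identified spectrum."""
    counts = defaultdict(lambda: defaultdict(int))
    for group, engine in psms:
        counts[group][engine] += 1
    return counts

def vote_count(group_counts, min_engines=2):
    """Toy vote-counting combination: keep a spectrum total only for
    peptide groups identified by at least `min_engines` engines."""
    return {g: sum(e.values())
            for g, e in group_counts.items() if len(e) >= min_engines}

psms = [("PEPTIDEK", "engineA"), ("PEPTIDEK", "engineB"),
        ("ONEHITR", "engineA"), ("PEPTIDEK", "engineA")]
print(vote_count(peptide_group_counts(psms)))   # {'PEPTIDEK': 3}
```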
Renormalization of a tensorial field theory on the homogeneous space SU(2)/U(1)
NASA Astrophysics Data System (ADS)
Lahoche, Vincent; Oriti, Daniele
2017-01-01
We study the renormalization of a general field theory on the homogeneous space (SU(2)/U(1))^{×d} with tensorial interaction and gauge invariance under the diagonal action of SU(2). We derive the power counting for arbitrary d. For the case d = 4, we prove perturbative renormalizability to all orders via multi-scale analysis, study both the renormalized and effective perturbation series, and establish the asymptotic freedom of the model. We also outline a general power counting for the homogeneous space (SO(D)/SO(D-1))^{×d}, of direct interest for quantum gravity models in arbitrary dimension, and point out the obstructions to the direct generalization of our results to these cases.
Gaburro, Julie; Duchemin, Jean-Bernard; Paradkar, Prasad N; Nahavandi, Saeid; Bhatti, Asim
2016-11-18
Widespread in the tropics, the mosquito Aedes aegypti is an important vector of many viruses, posing a significant threat to human health. Vector monitoring often requires fecundity estimation by counting eggs laid by female mosquitoes. Traditionally, manual data analysis has been used, but this requires considerable effort and the methods are prone to errors. An easy tool to assess the number of eggs laid would facilitate experimentation and vector control operations. This study introduces a purpose-built software tool called ICount that allows automatic egg counting for the mosquito vector Aedes aegypti. ICount egg estimation is statistically equivalent to manual counting, making the software effective for automatic and semi-automatic data analysis. The technique is also rapid compared to manual methods. Finally, the software has been used to assess p-cresol oviposition choices under laboratory conditions in order to test the system with different egg densities. ICount is a powerful tool for fast and precise egg count analysis, freeing experimenters from manual data processing. Software access is free and its user-friendly interface allows easy use by non-experts. Its efficiency has been tested in our laboratory with oviposition dual choices of Aedes aegypti females. The next step will be the development of a mobile application, based on the ICount platform, for vector monitoring surveys in the field.
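ICount's internals are not described here, but a generic egg-counting pipeline of the same flavor (threshold, label connected components, split touching eggs by area) is easy to sketch:

```python
import numpy as np
from scipy import ndimage

def count_eggs(gray_image, threshold=0.5):
    """Generic egg-counting sketch (not ICount's actual algorithm):
    threshold dark eggs on a light substrate (image scaled to 0..1),
    label connected blobs, and split touching eggs by dividing each
    blob's area by the median single-egg area."""
    mask = gray_image < threshold                 # eggs darker than paper
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0
    areas = ndimage.sum_labels(mask, labels, index=np.arange(1, n + 1))
    single = np.median(areas)                     # assumed one-egg area
    return int(np.round(areas / single).clip(min=1).sum())
```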
Analysis of a boron-carbide-drum-controlled critical reactor experiment
NASA Technical Reports Server (NTRS)
Mayo, W. T.
1972-01-01
In order to validate methods and cross sections used in the neutronic design of compact fast-spectrum reactors for generating electric power in space, an analysis of a boron-carbide-drum-controlled critical reactor was made. For this reactor the transport analysis gave generally satisfactory results. The calculated multiplication factor for the most detailed calculation was only 0.7 percent Δk too high. Calculated reactivity worth of the control drums was $11.61, compared to measurements of $11.58 by the inverse kinetics method and $11.98 by the inverse counting method. Calculated radial and axial power distributions were in good agreement with experiment.
Winkelman, James W; Tanasijevic, Milenko J; Zahniser, David J
2017-08-01
A novel automated slide-based approach to the complete blood count and white blood cell differential count is introduced. The objective is to present proof of concept for an image-based approach to the complete blood count, based on a new slide preparation technique; a preliminary data comparison with the current flow-based technology is shown. A prototype instrument uses a proprietary method and technology to deposit a precise volume of undiluted peripheral whole blood in a monolayer onto a glass microscope slide so that every cell can be distinguished, counted, and imaged. The slide is stained, and then multispectral image analysis is used to measure the complete blood count parameters. Images from a 600-cell white blood cell differential count, as well as 5000 red blood cells and a variable number of platelets that are present in 600 high-power fields, are made available for a technologist to view on a computer screen. An initial comparison of the basic complete blood count parameters was performed, comparing 1857 specimens on both the new instrument and a flow-based hematology analyzer. Excellent correlations were obtained between the prototype instrument and a flow-based system. The primary parameters of white blood cell, red blood cell, and platelet counts resulted in correlation coefficients (r) of 0.99, 0.99, and 0.98, respectively. Other indices included hemoglobin (r = 0.99), hematocrit (r = 0.99), mean cellular volume (r = 0.90), mean corpuscular hemoglobin (r = 0.97), and mean platelet volume (r = 0.87). For the automated white blood cell differential counts, r values were calculated for neutrophils (r = 0.98), lymphocytes (r = 0.97), monocytes (r = 0.76), eosinophils (r = 0.96), and basophils (r = 0.63). Quantitative results for components of the complete blood count and automated white blood cell differential count can be developed by image analysis of a monolayer preparation of a known volume of peripheral blood.
Newell, Felicity L.; Sheehan, James; Wood, Petra Bohall; Rodewald, Amanda D.; Buehler, David A.; Keyser, Patrick D.; Larkin, Jeffrey L.; Beachy, Tiffany A.; Bakermans, Marja H.; Boves, Than J.; Evans, Andrea; George, Gregory A.; McDermott, Molly E.; Perkins, Kelly A.; White, Matthew; Wigley, T. Bently
2013-01-01
Point counts are commonly used to assess changes in bird abundance, including analytical approaches such as distance sampling that estimate density. Point-count methods have come under increasing scrutiny because effects of detection probability and field error are difficult to quantify. For seven forest songbirds, we compared fixed-radii counts (50 m and 100 m) and density estimates obtained from distance sampling to known numbers of birds determined by territory mapping. We applied point-count analytic approaches to a typical forest management question and compared results to those obtained by territory mapping. We used a before–after control impact (BACI) analysis with a data set collected across seven study areas in the central Appalachians from 2006 to 2010. Using a 50-m fixed radius, variance in error was at least 1.5 times that of the other methods, whereas a 100-m fixed radius underestimated actual density by >3 territories per 10 ha for the most abundant species. Distance sampling improved accuracy and precision compared to fixed-radius counts, although estimates were affected by birds counted outside 10-ha units. In the BACI analysis, territory mapping detected an overall treatment effect for five of the seven species, and effects were generally consistent each year. In contrast, all point-count methods failed to detect two treatment effects due to variance and error in annual estimates. Overall, our results highlight the need for adequate sample sizes to reduce variance, and skilled observers to reduce the level of error in point-count data. Ultimately, the advantages and disadvantages of different survey methods should be considered in the context of overall study design and objectives, allowing for trade-offs among effort, accuracy, and power to detect treatment effects.
MetaSeq: privacy preserving meta-analysis of sequencing-based association studies.
Singh, Angad Pal; Zafer, Samreen; Pe'er, Itsik
2013-01-01
Human genetics recently transitioned from GWAS to studies based on NGS data. For GWAS, small effects dictated large sample sizes, typically made possible through meta-analysis by exchanging summary statistics across consortia. NGS studies test groupwise for association of multiple potentially-causal alleles along each gene. They are subject to similar power constraints and are therefore likely to resort to meta-analysis as well. The problem arises when considering privacy of the genetic information during the data-exchange process. Many scoring schemes for NGS association rely on the frequency of each variant, thus requiring the exchange of the identity of the sequenced variant. Such variants are often rare, so exchanging them can reveal the identity of their carriers and jeopardize privacy. We have thus developed MetaSeq, a protocol for meta-analysis of genome-wide sequencing data by multiple collaborating parties, scoring association for rare variants pooled per gene across all parties. We tackle the challenge of tallying frequency counts of rare, sequenced alleles for meta-analysis of sequencing data without disclosing the allele identity and counts, thereby protecting sample identity. This apparently paradoxical exchange of information is achieved through cryptographic means. The key idea is that parties encrypt the identity of genes and variants. When they transfer information about frequency counts in cases and controls, the exchanged data does not convey the identity of a mutation and therefore does not expose carrier identity. The exchange relies on a third party, trusted to follow the protocol although not trusted to learn about the raw data. We show the applicability of this method to publicly available exome-sequencing data from multiple studies, simulating phenotypic information for powerful meta-analysis. The MetaSeq software is publicly available as open source.
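The key mechanism, exchanging counts keyed by a blinded variant identity, can be sketched with a keyed hash: parties holding a shared key produce matching tokens for the same variant, while the aggregating third party cannot recover the identity. This is a simplified illustration with a toy variant, not the published protocol's exact cryptography:

```python
import hmac, hashlib
from collections import Counter

SHARED_KEY = b"consortium-secret"   # assumed pre-shared among the parties

def blind(gene, variant):
    """Keyed hash of the variant identity: parties with the shared key
    produce matching tokens for the same variant, but the aggregating
    third party cannot invert the token."""
    msg = f"{gene}:{variant}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def party_submission(variant_case_counts):
    """Each party submits blinded-token -> case-count pairs."""
    return Counter({blind(g, v): c for (g, v), c in variant_case_counts.items()})

# the third party sums counts over matching tokens without seeing identities
a = party_submission({("GENE1", "13:32338917:G>T"): 3})   # toy variant
b = party_submission({("GENE1", "13:32338917:G>T"): 2})
print((a + b).most_common())   # pooled count of 5 under an opaque token
```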
An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets.
Hosseini, Parsa; Tremblay, Arianne; Matthews, Benjamin F; Alkharouf, Nadim W
2010-07-02
The data produced by an Illumina flow cell with all eight lanes occupied amounts to well over a terabyte of images, with gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance. One can very easily be flooded with a great volume of textual, unannotated data, irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, enables INDEL detection, SNP information, and allele calling. Extracting from such analysis not only a measure of gene expression in the form of tag counts, but also annotation of the reads, is therefore of significant value. We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag counts while annotating sequenced reads with the gene's presumed function, from any given CASAVA-build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag-counting and annotation. The end result produces output containing the homology-based functional annotation and respective gene expression measure signifying how many times sequenced reads were found within the genomic ranges of functional annotations. TASE is a powerful tool to facilitate the process of annotating a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deep into a given CASAVA-build and maximize information extraction from a sequencing dataset. TASE is specially designed to translate sequence data in a CASAVA-build into functional annotations while producing corresponding gene expression measurements. Such analysis is executed in an ultrafast and highly efficient manner, whether the analysis is a single-read or paired-end sequencing experiment. TASE is a user-friendly and freely available application, allowing rapid analysis and annotation of any given Illumina Solexa sequencing dataset with ease.
Cosmology Constraints from the Weak Lensing Peak Counts and the Power Spectrum in CFHTLenS
Liu, Jia; May, Morgan; Petri, Andrea; ...
2015-03-04
Lensing peaks have been proposed as a useful statistic, containing cosmological information from non-Gaussianities that is inaccessible from traditional two-point statistics such as the power spectrum or two-point correlation functions. Here we examine constraints on cosmological parameters from weak lensing peak counts, using the publicly available data from the 154 deg² CFHTLenS survey. We utilize a new suite of ray-tracing N-body simulations on a grid of 91 cosmological models, covering broad ranges of the three parameters Ω_m, σ_8, and w, and replicating the galaxy sky positions, redshifts, and shape noise in the CFHTLenS observations. We then build an emulator that interpolates the power spectrum and the peak counts to an accuracy of ≤ 5%, and compute the likelihood in the three-dimensional parameter space (Ω_m, σ_8, w) from both observables. We find that constraints from peak counts are comparable to those from the power spectrum, and somewhat tighter when different smoothing scales are combined. Neither observable can constrain w without external data. When the power spectrum and peak counts are combined, the area of the error "banana" in the (Ω_m, σ_8) plane is reduced by a factor of ≈2, compared to using the power spectrum alone. For a flat Λ cold dark matter model, combining both statistics, we obtain the constraint σ_8(Ω_m/0.27)^0.63 = 0.85 ± 0.03.
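An emulator of this general kind, interpolating a summary statistic over a parameter grid and feeding a Gaussian likelihood, can be sketched in a few lines. The grid values below are random stand-ins for the simulated peak counts or power spectra:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# toy stand-ins: 91 points in (Omega_m, sigma_8, w) and a 10-bin statistic
rng = np.random.default_rng(4)
params = rng.uniform([0.1, 0.5, -1.5], [0.6, 1.1, -0.5], size=(91, 3))
stats_grid = rng.normal(size=(91, 10))           # simulated summary statistics

emulator = RBFInterpolator(params, stats_grid)   # interpolate over the grid

def log_like(theta, observed, cov_inv):
    """Gaussian log-likelihood of the observed statistic given the
    emulated prediction at parameters theta."""
    pred = emulator(np.atleast_2d(theta))[0]
    r = observed - pred
    return -0.5 * r @ cov_inv @ r

obs = stats_grid[0]
print(log_like([0.27, 0.8, -1.0], obs, np.eye(10)))
```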
Differential expression analysis for RNAseq using Poisson mixed models
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny
2017-01-01
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html.
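The generative side of the model is simple to write down: the log-mean combines a fixed effect, a relatedness-structured random effect, and an independent over-dispersion term. The sketch below simulates from that model with a toy kinship matrix; the paper's contribution, scalable inference (implemented in MACAU), is not shown:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_pmm(n=100, beta=0.5, sigma_g=0.6, sigma_e=0.4):
    """Generative sketch of the Poisson mixed model: log-mean has a fixed
    effect plus a kinship-structured random effect (g) and an independent
    over-dispersion term (e). Parameter values are illustrative."""
    x = rng.normal(size=n)                         # predictor of interest
    # toy kinship: pairs of related samples (block structure)
    K = 0.5 * np.kron(np.eye(n // 2), np.ones((2, 2))) + 0.5 * np.eye(n)
    g = rng.multivariate_normal(np.zeros(n), sigma_g**2 * K)
    e = rng.normal(0, sigma_e, n)                  # independent over-dispersion
    counts = rng.poisson(np.exp(1.0 + beta * x + g + e))
    return x, counts

x, y = simulate_pmm()
print(y[:10])
```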
Extended performance electric propulsion power processor design study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply rating of 2.2 kW to 10 kW. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. Preliminary electrical design, mechanical design, and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural, and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined. A program development plan was formulated that outlines the work breakdown structure for the development, qualification and fabrication of the power processor flight hardware.
Eric T. Linder; David A. Buehler
2005-01-01
In 1996, Region 8 of the U. S. Forest Service implemented a program to monitor landbirds on southeastern U.S. national forests. The goal was to develop a monitoring system that could document population trends and bird-habitat relationships. Using power analysis, we examined the ability of the monitoring program to detect population trends (3 percent annual change) at...
Detecting trends in raptor counts: power and type I error rates of various statistical tests
Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.
1996-01-01
We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
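A condensed version of the simulation design described (log-normal sampling error with CV 40%, optional AR(1) autocorrelation, and the regression t-test on the log scale) illustrates both the power estimates and the α-inflation under positive autocorrelation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def sim_trend_test(n_years=10, trend=0.0, cv=0.40, rho=0.0,
                   alpha=0.05, n_rep=1000):
    """Rejection rate of the linear-regression t-test on log counts for
    simulated exponential trends with (optionally autocorrelated)
    log-normal sampling error."""
    sigma = np.sqrt(np.log(1 + cv**2))      # log-scale SD giving CV ~ 40%
    years = np.arange(n_years)
    rejections = 0
    for _ in range(n_rep):
        eps = rng.normal(0, sigma, n_years)
        for t in range(1, n_years):          # impose AR(1) structure if rho > 0
            eps[t] = rho * eps[t - 1] + np.sqrt(1 - rho**2) * eps[t]
        logc = np.log(100.0) + np.log(1 + trend) * years + eps
        res = stats.linregress(years, logc)
        rejections += res.pvalue < alpha
    return rejections / n_rep

print(sim_trend_test(trend=0.0, rho=0.5))   # type I error inflated above 0.05
print(sim_trend_test(trend=-0.05))          # power for -5%/yr over 10 years
```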
Governing the surgical count through communication interactions: implications for patient safety.
Riley, R; Manias, E; Polglase, A
2006-10-01
The incidence of retained surgical items after surgery is reported intermittently in the healthcare literature, usually in the form of case studies, and it is commonly recognised that poor communication practices influence surgical outcomes. This study explored the power relationships in the communication between nurses and surgeons that affect the conduct of the surgical count. A qualitative, ethnographic study was undertaken. Data were collected in three operating room departments in metropolitan Melbourne, Australia. Eleven operating room nurses who worked as anaesthetic, instrument and circulating nurses were individually observed during their interactions with surgeons, anaesthetists, other nurses and patients. Data were generated through 230 h of participant observation, 11 individual and 4 group interviews, and the keeping of a diary by the first author. A deconstructive analysis was undertaken. Results are discussed in terms of the discursive practices in which clinicians engaged to govern and control the surgical count. The three major issues presented in this paper are judging, coping with normalisation and establishing priorities. The findings highlight the power relationships between members of the surgical team and the complexity of striking a balance between organisational policy and professional judgement. Increasing professional accountability may help to deal with the issues of normalisation, whereas greater attention needs to be paid to issues of time management. More sophisticated technological solutions need to be considered to support manual counting techniques.
Seyoum, Awoke; Ndlovu, Principal; Temesgen, Zewotir
2017-03-16
Adherence and CD4 cell count change measure the progression of the disease in HIV patients after the commencement of HAART. Lack of information about factors associated with adherence to HAART and CD4 cell count reduction is a challenge for the improvement of cells in HIV positive adults. The main objective of adopting joint modeling was to compare separate and joint models of longitudinal repeated measures in identifying long-term predictors of the two longitudinal outcomes: CD4 cell count and adherence to HAART. A longitudinal retrospective cohort study was conducted to examine the joint predictors of CD4 cell count change and adherence to HAART among adult HIV patients enrolled in the first 10 months of 2008 and followed up to June 2012. A joint model was employed to determine the joint predictors of the two longitudinal response variables over time, and a generalized linear mixed effects model was used for specification of the marginal distribution, conditional on the correlated random effects. A total of 792 adult HIV patients were studied in the longitudinal joint-model analysis. The results revealed that age, weight, baseline CD4 cell count, ownership of a cell phone, visiting times, marital status, residence area and level of disclosure of the disease to family members significantly affected both outcomes. Among the two-way interactions, time * cell phone ownership, time * sex, age * sex, age * level of education and time * level of education were significant for CD4 cell count change in the longitudinal data analysis. The multivariate joint model with a linear predictor indicated that CD4 cell count change was positively correlated (p ≤ 0.0001) with adherence to HAART: as adherence to HAART increased, CD4 cell count also increased, and patients who showed significant CD4 cell count change at each visit were encouraged to remain good adherents. The joint model was more parsimonious than separate analyses, as it reduced type I error, and subject-specific analysis improved its model fit. The joint model performs the multivariate analyses simultaneously and has greater power in parameter estimation. Developing a joint model helps validate the observed correlation between the outcomes that emerges from the association of intercepts. Special attention and intervention are needed for HIV positive adults, especially those with poor adherence and low CD4 cell count change; the intervention may be important for pre-treatment counseling and awareness creation. The study also identified a group of patients at maximum risk of CD4 cell count change, and it is suggested that this group needs intensive counseling.
Zheng, Han; Kimber, Alan; Goodwin, Victoria A; Pickering, Ruth M
2018-01-01
A common design for a falls prevention trial is to assess falling at baseline, randomize participants into an intervention or control group, and ask them to record the number of falls they experience during a follow-up period of time. This paper addresses how best to include the baseline count in the analysis of the follow-up count of falls in negative binomial (NB) regression. We examine the performance of various approaches in simulated datasets where both counts are generated from a mixed Poisson distribution with shared random subject effect. Including the baseline count after log-transformation as a regressor in NB regression (NB-logged) or as an offset (NB-offset) resulted in greater power than including the untransformed baseline count (NB-unlogged). Cook and Wei's conditional negative binomial (CNB) model replicates the underlying process generating the data. In our motivating dataset, a statistically significant intervention effect resulted from the NB-logged, NB-offset, and CNB models, but not from NB-unlogged, and large, outlying baseline counts were overly influential in NB-unlogged but not in NB-logged. We conclude that there is little to lose by including the log-transformed baseline count in standard NB regression compared to CNB for moderate to larger sized datasets. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
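The NB-logged and NB-offset variants compared above are straightforward to express with standard tools. The following hedged sketch fits both with statsmodels on synthetic data generated with a shared subject frailty; variable names and parameter values are illustrative, and this is not the paper's code or data.

```python
# Two ways of entering a baseline count in NB regression, per the comparison
# above: as a logged regressor (NB-logged) or as an offset (NB-offset).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
frailty = rng.gamma(2.0, 0.5, n)                 # shared random subject effect
baseline = rng.poisson(2.0 * frailty)            # baseline fall count
group = rng.integers(0, 2, n)                    # 0 = control, 1 = intervention
follow = rng.poisson(2.0 * frailty * np.exp(-0.3 * group))  # follow-up count

# NB-logged: log(baseline + 1) enters as a regressor with a free coefficient
X = sm.add_constant(np.column_stack([group, np.log(baseline + 1.0)]))
nb_logged = sm.GLM(follow, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

# NB-offset: log(baseline + 1) enters with its coefficient fixed at 1
X0 = sm.add_constant(group)
nb_offset = sm.GLM(follow, X0, offset=np.log(baseline + 1.0),
                   family=sm.families.NegativeBinomial(alpha=0.5)).fit()

print(nb_logged.params, nb_offset.params)
```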
Lai, Jih-Sheng; Young, Sr., Robert W.; Chen, Daoshen; Scudiere, Matthew B.; Ott, Jr., George W.; White, Clifford P.; McKeever, John W.
1997-01-01
A resonant, snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the main inverter switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter.
Lai, J.S.; Young, R.W. Sr.; Chen, D.; Scudiere, M.B.; Ott, G.W. Jr.; White, C.P.; McKeever, J.W.
1997-06-24
A resonant, snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the main inverter switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter. 14 figs.
Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S
2016-04-15
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
Resolving the Extragalactic γ -Ray Background above 50 GeV with the Fermi Large Area Telescope
Ackermann, M.; Ajello, M.; Albert, A.; ...
2016-04-14
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8×10^{-12} ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12}, 1.5×10^{-11}] ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2} ∈ [1.60, 1.75] and α_{1} = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
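For reference, the broken power law quoted in both records above can be written out explicitly, with the point-source contribution to the background obtained by integrating S·dN/dS over the constrained flux range; the overall normalization A is left implicit in the abstracts and is an assumption here.

```latex
% Broken power-law source count distribution and its flux integral;
% S_b is the break flux, alpha_1 and alpha_2 the indices above and
% below the break, and A an unquoted normalization.
\frac{dN}{dS} = A
\begin{cases}
  \left(S/S_b\right)^{-\alpha_1}, & S \ge S_b,\\[4pt]
  \left(S/S_b\right)^{-\alpha_2}, & S < S_b,
\end{cases}
\qquad
F_{\mathrm{point}} = \int_{S_{\min}}^{S_{\max}} S \, \frac{dN}{dS}\, dS .
```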
A Vacuum-Aspirator for Counting Termites
Susan C. Jones; Joe K. Mauldin
1983-01-01
An aspirator system powered by a vacuum cleaner is described for manually counting termites. It is significantly faster than a mouth-aspirator for counting large numbers of termites, and termite survival is at least as high.
Newell, Matthew R [Los Alamos, NM; Jones, David Carl [Los Alamos, NM
2009-09-01
A portable multiplicity counter has signal input circuitry, processing circuitry and a user/computer interface disposed in a housing. The processing circuitry, which can comprise a microcontroller integrated circuit operably coupled to shift register circuitry implemented in a field programmable gate array, is configured to be operable via the user/computer interface to count input signal pulses receivable at said signal input circuitry and record time correlations thereof in a total counting mode, coincidence counting mode and/or a multiplicity counting mode. The user/computer interface can be, for example, an LCD display/keypad and/or a USB interface. The counter can include a battery pack for powering the counter and low/high voltage power supplies for biasing external detectors so that the counter can be configured as a hand-held device for counting neutron events.
NASA Astrophysics Data System (ADS)
Wolf, R. N.; Atanasov, D.; Blaum, K.; Kreim, S.; Lunney, D.; Manea, V.; Rosenbusch, M.; Schweikhard, L.; Welker, A.; Wienholtz, F.; Zuber, K.
2016-06-01
In-trap decay in ISOLTRAP's radiofrequency quadrupole (RFQ) ion beam cooler and buncher was used to determine the lifetime of short-lived nuclides. After various storage times, the remaining mother nuclides were mass separated from accompanying isobaric contaminations by the multi-reflection time-of-flight mass separator (MR-ToF MS), allowing for background-free ion counting. A feasibility study with several online measurements shows that the applications of the ISOLTRAP setup can be further extended by exploiting the high resolving power of the MR-ToF MS in combination with in-trap decay and single-ion counting.
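The lifetime extraction implied above reduces to fitting an exponential decay to the background-free counts of surviving mother ions at each storage time. The sketch below uses invented numbers purely for illustration.

```python
# Hedged sketch: fit surviving mother-ion counts vs. storage time to
# N(t) = N0 * exp(-t / tau); data points are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0])                   # storage times (s)
counts = np.array([1000.0, 760.0, 580.0, 330.0, 110.0])   # MR-ToF-selected ions

decay = lambda t, n0, tau: n0 * np.exp(-t / tau)
(n0, tau), _ = curve_fit(decay, t, counts, p0=(1000.0, 1.0))
print(f"fitted lifetime: tau = {tau:.2f} s")
```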
Performance in population models for count data, part II: a new SAEM algorithm
Savic, Radojka; Lavielle, Marc
2009-01-01
Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, the algorithms available for parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% for fixed effects and 4.13% for random effects, for all models studied, including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795
Extended operating range of the 30-cm ion thruster with simplified power processor requirements
NASA Technical Reports Server (NTRS)
Rawlin, V. K.
1981-01-01
A two grid 30 cm diameter mercury ion thruster was operated with only six power supplies over the baseline J series thruster power throttle range with negligible impact on thruster performance. An analysis of the functional model power processor showed that the component mass and parts count could be reduced considerably and the electrical efficiency increased slightly by only replacing power supplies with relays. The input power, output thrust, and specific impulse of the thruster were then extended, still using six supplies, from 2660 watts, 0.13 newtons, and 2980 seconds to 9130 watts, 0.37 newtons, and 3820 seconds, respectively. Increases in thrust and power density enable reductions in the number of thrusters and power processors required for most missions. Preliminary assessments of the impact of thruster operation at increased thrust and power density on the discharge characteristics, performance, and lifetime of the thruster were also made.
Garakh, Zhanna; Zaytseva, Yuliya; Kapranova, Alexandra; Fiala, Ondrej; Horacek, Jiri; Shmukler, Alexander; Gurovich, Isaac Ya; Strelets, Valeria B
2015-11-01
To evaluate the spectral power of the cortical bands in patients with first episode schizophrenia and schizoaffective disorder at rest and during the performance of a mental arithmetic task, we analyzed EEG spectral power (SP) in the resting state and subsequently while counting down from 200 in steps of 7, in 32 first episode schizophrenia patients (SZ), 32 patients with first episode schizoaffective disorder (SA) and 40 healthy controls (HC). Behavioral parameters such as accuracy and counting speed were also evaluated. Both SZ and SA patients were slower in counting than HC; no difference between the patient groups was found in accuracy or counting speed. In the resting state, patients showed elevated midline theta power, elevated off-midline anterior beta 2 power and decreased central/posterior alpha power; the SA group occupied an intermediate position between the schizophrenia patients and controls. During task performance, patients lacked the typical increase of midline theta, left anterior beta 2, and anterior gamma power; however, schizoaffective patients demonstrated a growing trend of power in the gamma band in left anterior off-midline sites similar to HC. Moreover, alpha power was less inhibited in schizoaffective patients and more pronounced in schizophrenia patients, indicating distinct inhibitory mechanisms in these psychotic disorders. Patients with SA demonstrate less alteration in the spectral power of bands at rest than SZ, and present spectral power changes during cognitive task performance close to the controls. Our study contributes to the present evidence on the neurophysiological distinction between schizophrenia and schizoaffective disorder. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
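Band-limited spectral power of the kind analyzed above is commonly estimated with Welch's method. The following sketch shows one conventional way to do it; the sampling rate, band edges, and stand-in signal are assumptions, not values taken from the study.

```python
# Integrated band power via Welch's PSD estimate; fs, band edges, and the
# synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate (Hz), assumed
eeg = np.random.randn(int(60 * fs))           # 60 s stand-in for one EEG channel

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

bands = {"theta": (4, 8), "alpha": (8, 13), "beta2": (20, 30), "gamma": (30, 45)}
for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    print(name, np.trapz(psd[mask], f[mask]))  # integrate PSD over the band
```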
An efficient annotation and gene-expression derivation tool for Illumina Solexa datasets
2010-01-01
Background The data produced by an Illumina flow cell with all eight lanes occupied amount to well over a terabyte of images, with gigabytes of reads following sequence alignment. The ability to translate such reads into meaningful annotation is therefore of great concern and importance. One can very easily get flooded with such a great volume of textual, unannotated data, irrespective of read quality or size. CASAVA, an optional analysis tool for Illumina sequencing experiments, enables INDEL detection, SNP information, and allele calling. Extracting from such analysis not only a measure of gene expression in the form of tag-counts, but also annotation of the sequenced reads, is therefore of significant value. Findings We developed TASE (Tag counting and Analysis of Solexa Experiments), a rapid tag-counting and annotation software tool specifically designed for Illumina CASAVA sequencing datasets. Developed in Java and deployed using the jTDS JDBC driver and a SQL Server backend, TASE provides an extremely fast means of calculating gene expression through tag-counts while annotating sequenced reads with each gene's presumed function, from any given CASAVA-build. Such a build is generated for both DNA and RNA sequencing. Analysis is broken into two distinct components: DNA sequence or read concatenation, followed by tag-counting and annotation. The end result is output containing the homology-based functional annotation and the respective gene expression measure, signifying how many times sequenced reads were found within the genomic ranges of functional annotations. Conclusions TASE is a powerful tool that facilitates the process of annotating a given Illumina Solexa sequencing dataset. Our results indicate that both homology-based annotation and tag-count analysis are achieved in very efficient times, allowing researchers to delve deep into a given CASAVA-build and maximize information extraction from a sequencing dataset. TASE is specially designed to translate the sequence data in a CASAVA-build into functional annotations while producing corresponding gene expression measurements, whether the analysis is a single-read or paired-end sequencing experiment. TASE is a user-friendly and freely available application, allowing rapid analysis and annotation of any given Illumina Solexa sequencing dataset with ease. PMID:20598141
Nagy-Balo, Edina; Kiss, Alexandra; Condie, Catherine; Stewart, Mark; Edes, Istvan; Csanadi, Zoltan
2014-06-01
Pulmonary vein isolation with phased radiofrequency current and use of a pulmonary vein ablation catheter (PVAC) has recently been associated with a high incidence of clinically silent brain infarcts on diffusion-weighted magnetic resonance imaging and a high microembolic signal (MES) count detected by transcranial Doppler. The purpose of this study was to investigate the potential correlation between different biophysical parameters of energy delivery (ED) and MES generation during PVAC ablation. MES counts during consecutive PVAC ablations were recorded for each ED and time stamped for correlation with temperature, power, and impedance data from the GENius 14.4 generator. Additionally, catheter-tissue contact was characterized by the template deviation score, calculated by comparing the temperature curve with an ideal template representing good contact, and by the respiratory contact failure score, to quantify temperature variations indicative of intermittent contact due to respiration. A total of 834 EDs during 48 PVAC ablations were analyzed. A significant increase in MES count was associated with a lower average temperature, a temperature integral over 62°C, a higher average power, the total energy delivered, higher respiration and template deviation scores (P <.0001), and simultaneous ED to the most proximal and distal poles of the PVAC (P <.0001). MES generation during ablation is related to different indicators of poor electrode-tissue contact, the total power delivered, and the interaction between the most distal and the most proximal electrodes. Copyright © 2014. Published by Elsevier Inc.
Alignment-free sequence comparison (II): theoretical power of comparison statistics.
Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S
2010-11-01
Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy to use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
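The D2 statistic itself is simple to compute from k-mer count vectors; the toy sketch below (not the authors' program, with made-up sequences) counts matching k-tuples between two sequences.

```python
# D2 = sum over k-words w of X_w * Y_w, where X_w and Y_w are the k-mer
# counts of the two sequences; a toy implementation.
from collections import Counter

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k):
    xa, xb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(xa[w] * xb[w] for w in xa.keys() & xb.keys())

print(d2("ACGTACGTAC", "ACGTTTACGG", k=3))
```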
Zhang, Qingyang
2018-05-16
Differential co-expression analysis, as a complement of differential expression analysis, offers significant insights into the changes in molecular mechanism of different phenotypes. A prevailing approach to detecting differentially co-expressed genes is to compare Pearson's correlation coefficients in two phenotypes. However, due to the limitations of Pearson's correlation measure, this approach lacks the power to detect nonlinear changes in gene co-expression, which are common in gene regulatory networks. In this work, a new nonparametric procedure is proposed to search for differentially co-expressed gene pairs in different phenotypes in large-scale data. Our computational pipeline consists of two main steps, a screening step and a testing step. The screening step reduces the search space by filtering out all the independent gene pairs using the distance correlation measure. In the testing step, we compare the gene co-expression patterns in different phenotypes by a recently developed edge-count test. Both steps are distribution-free and target nonlinear relations. We illustrate the promise of the new approach by analyzing the Cancer Genome Atlas data and the METABRIC data for breast cancer subtypes. Compared with some existing methods, the new method is more powerful in detecting nonlinear types of differential co-expression. The distance correlation screening can greatly improve computational efficiency, facilitating its application to large data sets.
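The screening step rests on the property that population distance correlation is zero exactly when two variables are independent, which is what justifies filtering out independent gene pairs. A minimal numpy implementation of the empirical distance correlation (the V-statistic form of Székely et al.) might look as follows; the example data are synthetic.

```python
# Empirical distance correlation between two vectors; double-centered
# pairwise distance matrices, V-statistic estimator.
import numpy as np

def distance_correlation(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])                  # pairwise distances
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()    # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(size=200)
print(distance_correlation(x, x**2))                  # nonlinear dependence: high
print(distance_correlation(x, rng.normal(size=200))) # independence: near zero
```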
Differential expression analysis for RNAseq using Poisson mixed models.
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang
2017-06-20
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Comparative shotgun proteomics using spectral count data and quasi-likelihood modeling.
Li, Ming; Gray, William; Zhang, Haixia; Chung, Christine H; Billheimer, Dean; Yarbrough, Wendell G; Liebler, Daniel C; Shyr, Yu; Slebos, Robbert J C
2010-08-06
Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography-tandem mass spectrometry (LC-MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher's Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue phenotypes are sufficiently rich in quantitative information and that statistically significant differences in proteins spectral counts reflect the underlying biology of the samples.
Comparative Shotgun Proteomics Using Spectral Count Data and Quasi-Likelihood Modeling
2010-01-01
Shotgun proteomics provides the most powerful analytical platform for global inventory of complex proteomes using liquid chromatography−tandem mass spectrometry (LC−MS/MS) and allows a global analysis of protein changes. Nevertheless, sampling of complex proteomes by current shotgun proteomics platforms is incomplete, and this contributes to variability in assessment of peptide and protein inventories by spectral counting approaches. Thus, shotgun proteomics data pose challenges in comparing proteomes from different biological states. We developed an analysis strategy using quasi-likelihood Generalized Linear Modeling (GLM), included in a graphical interface software package (QuasiTel) that reads standard output from protein assemblies created by IDPicker, an HTML-based user interface to query shotgun proteomic data sets. This approach was compared to four other statistical analysis strategies: Student t test, Wilcoxon rank test, Fisher’s Exact test, and Poisson-based GLM. We analyzed the performance of these tests to identify differences in protein levels based on spectral counts in a shotgun data set in which equimolar amounts of 48 human proteins were spiked at different levels into whole yeast lysates. Both GLM approaches and the Fisher Exact test performed adequately, each with their unique limitations. We subsequently compared the proteomes of normal tonsil epithelium and HNSCC using this approach and identified 86 proteins with differential spectral counts between normal tonsil epithelium and HNSCC. We selected 18 proteins from this comparison for verification of protein levels between the individual normal and tumor tissues using liquid chromatography−multiple reaction monitoring mass spectrometry (LC−MRM-MS). This analysis confirmed the magnitude and direction of the protein expression differences in all 6 proteins for which reliable data could be obtained. Our analysis demonstrates that shotgun proteomic data sets from different tissue phenotypes are sufficiently rich in quantitative information and that statistically significant differences in proteins spectral counts reflect the underlying biology of the samples. PMID:20586475
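A quasi-likelihood comparison of spectral counts between two groups can be sketched with off-the-shelf tools. The example below uses a statsmodels Poisson GLM with a Pearson chi-square scale estimate, in the spirit of the quasi-likelihood approach described above but not the QuasiTel software itself; the counts and spectrum totals are invented.

```python
# Quasi-Poisson group comparison for one protein's spectral counts; the
# offset normalizes for total spectra per run, and scale='X2' estimates
# the dispersion from the Pearson chi-square (quasi-likelihood).
import numpy as np
import statsmodels.api as sm

counts = np.array([12, 9, 15, 11, 25, 31, 22, 28])   # spectral counts, one protein
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])           # normal tonsil vs HNSCC
total = np.full(8, 5000.0)                            # total spectra per run

X = sm.add_constant(group)
fit = sm.GLM(counts, X, family=sm.families.Poisson(),
             offset=np.log(total)).fit(scale='X2')
print(fit.summary().tables[1])
```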
Delta connected resonant snubber circuit
Lai, J.S.; Peng, F.Z.; Young, R.W. Sr.; Ott, G.W. Jr.
1998-01-20
A delta connected, resonant snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the dc supply voltage through the main inverter switches and the auxiliary switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter. 36 figs.
Delta connected resonant snubber circuit
Lai, Jih-Sheng; Peng, Fang Zheng; Young, Sr., Robert W.; Ott, Jr., George W.
1998-01-01
A delta connected, resonant snubber-based, soft switching, inverter circuit achieves lossless switching during dc-to-ac power conversion and power conditioning with minimum component count and size. Current is supplied to the resonant snubber branches solely by the dc supply voltage through the main inverter switches and the auxiliary switches. Component count and size are reduced by use of a single semiconductor switch in the resonant snubber branches. Component count is also reduced by maximizing the use of stray capacitances of the main switches as parallel resonant capacitors. Resonance charging and discharging of the parallel capacitances allows lossless, zero voltage switching. In one embodiment, circuit component size and count are minimized while achieving lossless, zero voltage switching within a three-phase inverter.
Observation weights unlock bulk RNA-seq tools for zero inflation and single-cell applications.
Van den Berge, Koen; Perraudeau, Fanny; Soneson, Charlotte; Love, Michael I; Risso, Davide; Vert, Jean-Philippe; Robinson, Mark D; Dudoit, Sandrine; Clement, Lieven
2018-02-26
Dropout events in single-cell RNA sequencing (scRNA-seq) cause many transcripts to go undetected and induce an excess of zero read counts, leading to power issues in differential expression (DE) analysis. This has triggered the development of bespoke scRNA-seq DE methods to cope with zero inflation. Recent evaluations, however, have shown that dedicated scRNA-seq tools provide no advantage compared to traditional bulk RNA-seq tools. We introduce a weighting strategy, based on a zero-inflated negative binomial model, that identifies excess zero counts and generates gene- and cell-specific weights to unlock bulk RNA-seq DE pipelines for zero-inflated data, boosting performance for scRNA-seq.
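The weighting idea can be made concrete: under a fitted zero-inflated negative binomial, each observed zero receives the posterior probability that it arose from the count component rather than the excess-zero component. The sketch below assumes fixed ZINB parameters instead of estimating them, so it illustrates only the weight computation.

```python
# Observation weights under an assumed ZINB: 1 for positive counts,
# P(NB component | zero) for zeros. Parameters pi0, mu, size are fixed
# here for illustration, not fitted.
import numpy as np
from scipy.stats import nbinom

def zinb_zero_weights(y, pi0, mu, size):
    p = size / (size + mu)                 # scipy's (n, p) parameterization
    p_nb_zero = nbinom.pmf(0, size, p)     # P(zero | NB count component)
    w = np.ones(len(y))
    zero = (np.asarray(y) == 0)
    w[zero] = (1 - pi0) * p_nb_zero / (pi0 + (1 - pi0) * p_nb_zero)
    return w

print(zinb_zero_weights([0, 0, 3, 7, 0, 1], pi0=0.4, mu=5.0, size=2.0))
```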
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak; ...
2015-07-15
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
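The step that follows cycle counting can be sketched as a cycles-to-failure model in which the mean junction temperature of each counted cycle enters explicitly, with damage accumulated by Miner's rule. The Coffin-Manson/Arrhenius form and all constants below are illustrative placeholders, not the paper's model or device data.

```python
# Lifetime consumption from rainflow-counted cycles, using an assumed
# Coffin-Manson/Arrhenius model N_f = A * dT^alpha * exp(Ea / (k_B * T_m));
# the mean temperature T_m of each cycle matters, which is the point the
# equivalent-temperature calculation addresses.
import numpy as np

K_B = 8.617e-5                                   # Boltzmann constant (eV/K)

def cycles_to_failure(dT, Tm, A=3.0e12, alpha=-5.0, Ea=0.1):
    """Cycles to failure for swing dT (K) at mean temperature Tm (K)."""
    return A * dT**alpha * np.exp(Ea / (K_B * Tm))

# (delta-T in K, mean temperature in K, counted cycles) from a rainflow pass
cycles = [(20.0, 350.0, 1.2e4), (35.0, 360.0, 3.0e3), (50.0, 370.0, 4.0e2)]
damage = sum(n / cycles_to_failure(dT, Tm) for dT, Tm, n in cycles)
print("consumed life fraction:", damage)         # failure expected near 1.0
```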
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
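A minimal simulation of such a multiplicative interevent-time process, with illustrative parameters and crude boundary handling, can be written as follows; the counting statistics and the spectrum are then obtained from the accumulated event times. The recursion below is a hedged reading of the model class described above, not a verbatim transcription of the paper's equations.

```python
# Multiplicative interevent-time model (sketch):
# tau_{k+1} = tau_k + gamma*tau_k**(2*mu - 1) + sigma*tau_k**mu * eps_k,
# with tau kept inside [tau_min, tau_max]; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
gamma, sigma, mu = 1e-4, 0.02, 0.5
tau_min, tau_max = 1e-3, 1.0

n_events = 50_000
tau = np.empty(n_events)
tau[0] = 0.1
for k in range(n_events - 1):
    step = gamma * tau[k] ** (2 * mu - 1) + sigma * tau[k] ** mu * rng.normal()
    tau[k + 1] = np.clip(tau[k] + step, tau_min, tau_max)  # crude bounds

events = np.cumsum(tau)                      # event (trade) times
bins = np.arange(0.0, events[-1], 1.0)       # unit counting windows
counts, _ = np.histogram(events, bins)       # counting statistics N(t)
spec = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / len(counts)
freqs = np.fft.rfftfreq(len(counts), d=1.0)  # low-f slope ~ 1/f^beta
```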
Mass spectrometric measurements of atmospheric composition
NASA Technical Reports Server (NTRS)
Hoffman, J. H.
1974-01-01
The development of a magnetic sector field analyzer for continuous sampling and measurement of outer planetary atmospheres is discussed. Special features of the analyzer include a dynamic range of 10 to the minus 7th power, a mass range from 1 to 48 AMU, two ion sensitivities, a special scan time of 35 sec at 14 BPS, and the use of ion counting techniques for analysis.
Egorov, Evgeny S; Merzlyak, Ekaterina M; Shelenkov, Andrew A; Britanova, Olga V; Sharonov, George V; Staroverov, Dmitriy B; Bolotin, Dmitriy A; Davydov, Alexey N; Barsova, Ekaterina; Lebedev, Yuriy B; Shugay, Mikhail; Chudakov, Dmitriy M
2015-06-15
Emerging high-throughput sequencing methods for the analyses of complex structure of TCR and BCR repertoires give a powerful impulse to adaptive immunity studies. However, there are still essential technical obstacles for performing a truly quantitative analysis. Specifically, it remains challenging to obtain comprehensive information on the clonal composition of small lymphocyte populations, such as Ag-specific, functional, or tissue-resident cell subsets isolated by sorting, microdissection, or fine needle aspirates. In this study, we report a robust approach based on unique molecular identifiers that allows profiling Ag receptors for several hundred to thousand lymphocytes while preserving qualitative and quantitative information on clonal composition of the sample. We also describe several general features regarding the data analysis with unique molecular identifiers that are critical for accurate counting of starting molecules in high-throughput sequencing applications. Copyright © 2015 by The American Association of Immunologists, Inc.
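The core of UMI-based counting is collapsing reads that share an identifier, so each starting molecule is counted once regardless of PCR duplication. The toy sketch below, with made-up reads, illustrates the idea.

```python
# Count starting molecules per clonotype by collapsing reads on their UMI;
# reads are invented for illustration.
from collections import defaultdict

reads = [("CDR3_A", "ACGTGT"), ("CDR3_A", "ACGTGT"), ("CDR3_A", "TTGACA"),
         ("CDR3_B", "GGCATC"), ("CDR3_B", "GGCATC")]

umis = defaultdict(set)
for clonotype, umi in reads:
    umis[clonotype].add(umi)

for clonotype, u in sorted(umis.items()):
    print(clonotype, "molecules:", len(u))    # CDR3_A: 2, CDR3_B: 1
```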
2017-01-01
The annual report presents data tables describing the electricity industry in each State. Data include: summary statistics; the 10 largest plants by generating capacity; the top five entities ranked by sector; electric power industry generating capacity by primary energy source; electric power industry generation by primary energy source; utility delivered fuel prices for coal, petroleum, and natural gas; electric power industry emissions estimates; retail sales, revenue, and average retail price by sector; retail electricity sales statistics; supply and disposition of electricity; net metering counts and capacity by technology and customer type; and advanced metering counts by customer type.
Increasing point-count duration increases standard error
Smith, W.P.; Twedt, D.J.; Hamel, P.B.; Ford, R.P.; Wiedenfeld, D.A.; Cooper, R.J.
1998-01-01
We examined data from point counts of varying duration in bottomland forests of west Tennessee and the Mississippi Alluvial Valley to determine if counting interval influenced sampling efficiency. Estimates of standard error increased as point count duration increased both for cumulative number of individuals and species in both locations. Although point counts appear to yield data with standard errors proportional to means, a square root transformation of the data may stabilize the variance. Using long (>10 min) point counts may reduce sample size and increase sampling error, both of which diminish statistical power and thereby the ability to detect meaningful changes in avian populations.
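The suggested square-root transformation is easy to check numerically: for Poisson-like counts the variance of sqrt(X) is roughly constant (about 0.25) across means, while the raw variance grows with the mean. A quick demonstration:

```python
# Variance stabilization of Poisson counts by the square-root transform.
import numpy as np

rng = np.random.default_rng(0)
for lam in (2, 5, 10, 20, 50):
    x = rng.poisson(lam, 100_000)
    print(lam, round(x.var(), 2), round(np.sqrt(x).var(), 3))
```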
NASA Astrophysics Data System (ADS)
Sasamal, Trailokya Nath; Singh, Ashutosh Kumar; Ghanekar, Umesh
2018-04-01
Nanotechnologies, notably Quantum-dot Cellular Automata (QCA), offer an attractive perspective for future computing technologies. In this paper, QCA is investigated as an implementation method for designing area- and power-efficient reversible logic gates. The proposed designs achieve superior performance by incorporating a compact 2-input XOR gate. The proposed designs for the Feynman, Toffoli, and Fredkin gates demonstrate 28.12, 24.4, and 7% reductions in cell count and utilize 46, 24.4, and 7.6% less area, respectively, over the previous best designs. The cell counts (area coverage) of the proposed Peres and Double Feynman gates are 44.32% (21.5%) and 12% (25%) less, respectively, than those of the most compact previous designs. Further, the delay of the Fredkin and Toffoli gates is 0.75 clock cycles, equal to the delay of the previous best designs, while the Feynman and Double Feynman gates achieve a delay of 0.5 clock cycles, matching the lowest previously reported delay. Energy analysis confirms that the average energy dissipation of the developed Feynman, Toffoli, and Fredkin gates is 30.80, 18.08, and 4.3% (at the 1.0 E_k energy level), respectively, less than that of the best reported designs. This emphasizes the beneficial role of the proposed reversible gates in designing complex and power-efficient QCA circuits. The QCADesigner tool is used to validate the layouts of the proposed designs, and the QCAPro tool is used to evaluate their energy dissipation.
Wang, Tianyu; Nabavi, Sheida
2018-04-24
Differential gene expression analysis is one of the significant efforts in single cell RNA sequencing (scRNAseq) analysis to discover the specific changes in expression levels of individual cell types. Since scRNAseq exhibits multimodality, large numbers of zero counts, and sparsity, it differs from traditional bulk RNA sequencing (RNAseq) data. The new challenges of scRNAseq data promote the development of new methods for identifying differentially expressed (DE) genes. In this study, we proposed a new method, SigEMD, that combines a data imputation approach, a logistic regression model and a nonparametric method based on the Earth Mover's Distance, to precisely and efficiently identify DE genes in scRNAseq data. The regression model and data imputation are used to reduce the impact of the large numbers of zero counts, and the nonparametric method is used to improve the sensitivity of detecting DE genes from multimodal scRNAseq data. By additionally employing gene interaction network information to adjust the final states of DE genes, we further reduce the false positives of calling DE genes. We used simulated and real datasets to evaluate the detection accuracy of the proposed method and to compare its performance with those of other differential expression analysis methods. Results indicate that the proposed method performs well overall in terms of detection precision, sensitivity, and specificity. Copyright © 2018 Elsevier Inc. All rights reserved.
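The distributional comparison at the heart of such a method can be sketched with SciPy's one-dimensional earth mover's distance; the permutation test below is a generic stand-in for significance assessment, not the SigEMD pipeline, and the data are synthetic.

```python
# EMD between two expression distributions with a permutation p-value;
# a generic sketch, not the paper's imputation/regression pipeline.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)

def emd_test(x, y, n_perm=1000):
    obs = wasserstein_distance(x, y)
    pooled = np.concatenate([x, y])
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        exceed += wasserstein_distance(pooled[:len(x)], pooled[len(x):]) >= obs
    return obs, (exceed + 1) / (n_perm + 1)

x = rng.negative_binomial(2, 0.2, 80).astype(float)       # group 1 counts
y = np.concatenate([np.zeros(40),                         # zero-heavy group 2
                    rng.negative_binomial(2, 0.1, 40)]).astype(float)
print(emd_test(x, y))
```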
Acta Aeronautica et Astronautica Sinica,
1983-03-04
power spectrum and counting methods [1,2,3]. If the stochastic load-time mechanism (such as gusts of wind, random vibrations, etc.) ..., then we can use the power spectrum technique, and we can also use the counting method. However, the ... simplification for treatment so that the differences in obtained results are very minute, and are also closest to the random spectrum. This then tells us ...
A flexible count data regression model for risk analysis.
Guikema, Seth D; Coffelt, Jeremy P
2008-02-01
In many cases, risk and reliability analyses involve estimating the probabilities of discrete events such as hardware failures and occurrences of disease or death. There is often additional information in the form of explanatory variables that can be used to help estimate the likelihood of different numbers of events in the future through the use of an appropriate regression model, such as a generalized linear model. However, existing generalized linear models (GLM) are limited in their ability to handle the types of variance structures often encountered in using count data in risk and reliability analysis. In particular, standard models cannot handle both underdispersed data (variance less than the mean) and overdispersed data (variance greater than the mean) in a single coherent modeling framework. This article presents a new GLM based on a reformulation of the Conway-Maxwell Poisson (COM) distribution that is useful for both underdispersed and overdispersed count data and demonstrates this model by applying it to the assessment of electric power system reliability. The results show that the proposed COM GLM can provide fits to data as good as those of the commonly used existing models for overdispersed data sets while outperforming these commonly used models for underdispersed data sets.
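The COM distribution underlying the proposed GLM has probability mass proportional to λ^y/(y!)^ν, with ν < 1 giving overdispersion and ν > 1 underdispersion (ν = 1 recovers the Poisson). A small sketch of the pmf, with the normalizing constant truncated at an assumed upper limit:

```python
# Conway-Maxwell Poisson pmf: P(Y=y) proportional to lam^y / (y!)^nu.
# The truncation y_max for the normalizing constant is an assumption.
import numpy as np
from scipy.special import gammaln

def com_pmf(y, lam, nu, y_max=200):
    ys = np.arange(y_max + 1)
    logw = ys * np.log(lam) - nu * gammaln(ys + 1)   # unnormalized log-weights
    logz = np.logaddexp.reduce(logw)                 # log normalizing constant
    y = np.asarray(y)
    return np.exp(y * np.log(lam) - nu * gammaln(y + 1) - logz)

ys = np.arange(201)
for nu in (0.5, 1.0, 1.5):
    p = com_pmf(ys, 4.0, nu)
    mean = np.sum(ys * p)
    var = np.sum((ys - mean) ** 2 * p)
    print(nu, round(mean, 2), round(var, 2))   # var/mean brackets 1 as nu varies
```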
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Nishimichi, Takahiro; Li, Baojiu; Higuchi, Yuichi
2017-04-01
We investigate the information content of various cosmic shear statistics on the theory of gravity. Focusing on the Hu-Sawicki-type f(R) model, we perform a set of ray-tracing simulations and measure the convergence bispectrum, peak counts and Minkowski functionals. We first show that while the convergence power spectrum does have sensitivity to the current value of the extra scalar degree of freedom |f_{R0}|, it is largely compensated by a change in the present density amplitude parameter σ_8 and the matter density parameter Ω_{m0}. With accurate covariance matrices obtained from 1000 lensing simulations, we then examine the constraining power of the three additional statistics. We find that these probes are indeed helpful to break the parameter degeneracy, which cannot be resolved from the power spectrum alone. We show that especially the peak counts and Minkowski functionals have the potential to rigorously (marginally) detect the signature of modified gravity with the parameter |f_{R0}| as small as 10^{-5} (10^{-6}) if we can properly model them on small (~1 arcmin) scales in a future survey with a sky coverage of 1500 deg^2. We also show that the signal level is similar among the additional three statistics and all of them provide complementary information to the power spectrum. These findings indicate the importance of combining multiple probes beyond the standard power spectrum analysis to detect possible modifications to general relativity.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq (contact: yzhou@bios.unc.edu; fwright@bios.unc.edu). Supplementary data are available at Bioinformatics online.
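The beta-binomial likelihood of approach (i) can be written down directly. The sketch below parameterizes it by a mean proportion p and an overdispersion parameter rho, both illustrative; it is not the BBSeq package itself.

```python
# Beta-binomial log-pmf with mean n*p; rho in (0,1) sets overdispersion
# (rho -> 0 recovers the binomial). Values are illustrative.
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_logpmf(k, n, p, rho):
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    logcomb = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return logcomb + betaln(k + a, n - k + b) - betaln(a, b)

# e.g. 30 reads for one gene out of a million mapped, lane proportion 3e-5
print(betabinom_logpmf(k=30, n=1_000_000, p=3e-5, rho=1e-4))
```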
NASA Astrophysics Data System (ADS)
Elsas, José Hugo; Szalay, Alexander S.; Meneveau, Charles
2018-04-01
Motivated by interest in the geometry of high intensity events of turbulent flows, we examine the spatial correlation functions of sets where turbulent events are particularly intense. These sets are defined using indicator functions on excursion and iso-value sets. Their geometric scaling properties are analysed by examining possible power-law decay of their radial correlation function. We apply the analysis to enstrophy, dissipation and velocity gradient invariants Q and R and their joint spatial distributions, using data from a direct numerical simulation of isotropic turbulence at Re_λ ≈ 430. While no fractal scaling is found in the inertial range using box-counting in the finite Reynolds number flow considered here, power-law scaling in the inertial range is found in the radial correlation functions. Thus, a geometric characterisation in terms of these sets' correlation dimension is possible. Strong dependence on the enstrophy and dissipation threshold is found, consistent with multifractal behaviour. Nevertheless, the lack of scaling of the box-counting analysis precludes direct quantitative comparisons with earlier work based on multifractal formalism. Surprising trends, such as a lower correlation dimension for strong dissipation events compared to strong enstrophy events, are observed and interpreted in terms of spatial coherence of vortices in the flow.
NASA Astrophysics Data System (ADS)
Liao, Yi; Ma, Xiao-Dong
2018-03-01
We study two aspects of higher dimensional operators in standard model effective field theory. We first introduce a perturbative power counting rule for the entries in the anomalous dimension matrix of operators with equal mass dimension. The power counting is determined by the number of loops and the difference of the indices of the two operators involved, which in turn is defined by assuming that all terms in the standard model Lagrangian have an equal perturbative power. Then we show that the operators with the lowest index are unique at each mass dimension d, i.e., (H†H)^{d/2} for even d ≥ 4, and (L^T εH)C(L^T εH)^T (H†H)^{(d-5)/2} for odd d ≥ 5. Here H, L are the Higgs and lepton doublets, and ε, C are the antisymmetric matrix of rank two and the charge conjugation matrix, respectively. The renormalization group running of these operators can be studied separately from other operators of equal mass dimension at the leading order in power counting. We compute their anomalous dimensions at one loop for general d and find that they are enhanced quadratically in d due to combinatorics. We also make connections with the classification of operators in terms of their holomorphic and anti-holomorphic weights. Supported by the National Natural Science Foundation of China under Grant Nos. 11025525, 11575089, and by the CAS Center for Excellence in Particle Physics (CCEPP)
PBF (PER-620) interior. Counting room, main floor. Date: May 2004. INEEL negative no. HD-41-6-1 - Idaho National Engineering Laboratory, SPERT-I & Power Burst Facility Area, Scoville, Butte County, ID
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example, the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern in time, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
Marien, Koen M.; Andries, Luc; De Schepper, Stefanie; Kockx, Mark M.; De Meyer, Guido R.Y.
2015-01-01
Tumor angiogenesis is measured by counting microvessels in tissue sections at high power magnification as a potential prognostic or predictive biomarker. Until now, regions of interest (ROIs) were selected by manual operations within a tumor by using a systematic uniform random sampling (SURS) approach. Although SURS is the most reliable sampling method, it implies a high workload. However, SURS can be semi-automated and in this way contribute to the development of a validated quantification method for microvessel counting in the clinical setting. Here, we report a method to use semi-automated SURS for microvessel counting: • Whole slide imaging with Pannoramic SCAN (3DHISTECH) • Computer-assisted sampling in Pannoramic Viewer (3DHISTECH) extended by two self-written AutoHotkey applications (AutoTag and AutoSnap) • The use of digital grids in Photoshop® and Bridge® (Adobe Systems) This rapid procedure allows traceability essential for high throughput protein analysis of immunohistochemically stained tissue. PMID:26150998
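A minimal sketch of the sampling scheme being semi-automated here: systematic uniform random sampling lays a regular grid of fields over the slide with a single random offset, so every location has the same inclusion probability. Image dimensions and grid spacing below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def surs_fields(width_px, height_px, step_px):
    """Systematic uniform random sampling: one random grid offset,
    then candidate fields at regular intervals across the slide."""
    x0 = rng.integers(0, step_px)   # random start in [0, step)
    y0 = rng.integers(0, step_px)
    xs = np.arange(x0, width_px, step_px)
    ys = np.arange(y0, height_px, step_px)
    return [(int(x), int(y)) for y in ys for x in xs]

# e.g. a 60k x 40k px scan sampled every 5k px -> ~96 candidate ROIs
fields = surs_fields(60_000, 40_000, 5_000)
print(len(fields), fields[:3])
```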
Marcheselli, Luigi; Bari, Alessia; Anastasia, Antonella; Botto, Barbara; Puccini, Benedetta; Dondi, Alessandra; Carella, Angelo M; Alvarez, Isabel; Chiarenza, Annalisa; Arcari, Annalisa; Salvi, Flavia; Federico, Massimo
2015-05-01
Recently, in an attempt to improve the discrimination power of the international prognostic index (IPI), patients with diffuse large B-cell lymphoma were evaluated to determine the prognostic roles of peripheral blood absolute monocyte count (AMC) and absolute lymphocyte count (ALC). Here, we analysed data from 428 patients with follicular lymphoma (FL) enrolled in a prospective, randomized trial (FOLL05 study) conducted by Fondazione Italiana Linfomi, to assess the impact of AMC and ALC on progression-free survival (PFS). All patients had been treated with one of three treatment combinations: (i) rituximab (R) plus cyclophosphamide, vincristine and prednisone; (ii) R plus cyclophosphamide, doxorubicin, vincristine and prednisone or (iii) R plus mitoxantrone and fludarabine. We showed that only AMC was a powerful predictor of PFS, and possibly overall survival, in patients with FL treated with combination chemotherapy regimens that contained R. The AMC can be used alone as a novel, simple factor that can predict survival outcome in patients with FL, independent of the immunochemotherapy regimen. It may therefore be widely used by clinicians, due to its simplicity and broad applicability. Additionally, it can be combined with other factors that determine the IPI or FLIPI, to increase the discriminating ability of these indices. © 2015 John Wiley & Sons Ltd.
Shultzaberger, Ryan K.; Paddock, Mark L.; Katsuki, Takeo; Greenspan, Ralph J.; Golden, Susan S.
2016-01-01
The temporal measurement of a bioluminescent reporter has proven to be one of the most powerful tools for characterizing circadian rhythms in the cyanobacterium Synechococcus elongatus. Primarily, two approaches have been used to automate this process: (1) detection of cell culture bioluminescence in 96-well plates by a photomultiplier tube-based plate-cycling luminometer (TopCount Microplate Scintillation and Luminescence Counter, Perkin Elmer) and (2) detection of individual colony bioluminescence by iteratively rotating a Petri dish under a cooled CCD camera using a computer-controlled turntable. Each approach has distinct advantages. The TopCount provides a more quantitative measurement of bioluminescence, enabling the direct comparison of clock output levels among strains. The computer-controlled turntable approach has a shorter set-up time and greater throughput, making it a more powerful phenotypic screening tool. While the latter approach is extremely useful, only a few labs have been able to build such an apparatus because of technical hurdles involved in coordinating and controlling both the camera and the turntable, and in processing the resulting images. This protocol provides instructions on how to construct, use, and process data from a computer-controlled turntable to measure the temporal changes in bioluminescence of individual cyanobacterial colonies. Furthermore, we describe how to prepare samples for use with the TopCount to minimize experimental noise, and generate meaningful quantitative measurements of clock output levels for advanced analysis. PMID:25662451
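As a hedged illustration of the downstream analysis such recordings feed (not part of the protocol itself), the sketch below estimates a colony's circadian period from an hourly bioluminescence trace via the peak of its FFT power spectrum; the sampling interval and synthetic rhythm are assumptions.

```python
import numpy as np

def circadian_period_hours(trace, dt_hours=1.0):
    """Estimate the dominant period of a detrended bioluminescence trace
    from the peak of its FFT power spectrum."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=dt_hours)
    k = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# synthetic ~24 h rhythm sampled hourly for 5 days
t = np.arange(120)
trace = 100 + 20 * np.sin(2 * np.pi * t / 24) \
        + np.random.default_rng(0).normal(0, 2, t.size)
print(f"{circadian_period_hours(trace):.1f} h")
```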
X-ray fluorescence analysis of alloy and stainless steels using a mercuric iodide detector
NASA Technical Reports Server (NTRS)
Kelliher, Warren C.; Maddox, W. Gene
1988-01-01
A mercuric iodide detector was used for the XRF analysis of a number of NBS standard steels, applying a specially developed correction method for interelemental effects. It is shown that, using this method and a good peak-deconvolution technique, the HgI2 detector is capable of achieving resolutions and count rates needed in the XRF analysis of multielement samples. The freedom from cryogenic cooling and from power supplies necessary for an electrically cooled device makes this detector a very good candidate for a portable instrument.
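A toy version of the peak-deconvolution step mentioned above: overlapping Gaussian fluorescence lines are fitted to a spectrum so that counts from neighboring element lines can be apportioned. The line energies, width and flat background are placeholders, not the paper's calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(e, a1, mu1, a2, mu2, sigma):
    """Two overlapping XRF lines sharing one detector resolution sigma."""
    g = lambda a, mu: a * np.exp(-0.5 * ((e - mu) / sigma) ** 2)
    return g(a1, mu1) + g(a2, mu2)

# synthetic overlapping doublet near 5.4 / 5.9 keV (illustrative energies)
e = np.linspace(5.0, 6.5, 300)
rng = np.random.default_rng(2)
truth = two_gaussians(e, 900, 5.41, 400, 5.90, 0.12)
spectrum = rng.poisson(truth + 10)       # Poisson counting noise on a flat background

p0 = [800, 5.4, 300, 5.9, 0.1]
popt, _ = curve_fit(two_gaussians, e, spectrum - 10, p0=p0)
print("fitted line areas ~",
      popt[0] * popt[4] * np.sqrt(2 * np.pi),
      popt[2] * popt[4] * np.sqrt(2 * np.pi))
```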
Zito, G.V.
1959-04-21
This patent relates to high voltage supply circuits adapted for providing operating voltages for Geiger-Mueller counter tubes, and is especially directed to an arrangement for maintaining uniform voltage under changing conditions of operation. In the usual power supply arrangement for counter tubes the counter voltage is taken from across the power supply output capacitor. If the count rate exceeds the current delivering capacity of the capacitor, the capacitor voltage will drop, decreasing the counter voltage. The present invention provides a multivibrator which has its output voltage controlled by a signal proportional to the counting rate. As the counting rate increases beyond the current delivering capacity of the capacitor, the rectified voltage output from the multivibrator is increased to maintain uniform counter voltage.
Evaluation of Pulse Counting for the Mars Organic Mass Analyzer (MOMA) Ion Trap Detection Scheme
NASA Technical Reports Server (NTRS)
Van Amerom, Friso H.; Short, Tim; Brinckerhoff, William; Mahaffy, Paul; Kleyner, Igor; Cotter, Robert J.; Pinnick, Veronica; Hoffman, Lars; Danell, Ryan M.; Lyness, Eric I.
2011-01-01
The Mars Organic Mass Analyzer is being developed at Goddard Space Flight Center to identify organics and possible biological compounds on Mars. In the process of characterizing mass spectrometer size, weight, and power consumption, the use of pulse counting was considered for ion detection. Pulse counting has advantages over analog-mode amplification of the electron multiplier signal. Some advantages are reduced size of electronic components, low power consumption, ability to remotely characterize detector performance, and avoidance of analog circuit noise. The use of pulse counting as a detection method with ion trap instruments is relatively rare. However, with the recent development of high performance electrical components, this detection method is quite suitable and can demonstrate significant advantages over analog methods. Methods: A prototype quadrupole ion trap mass spectrometer with an internal electron ionization source was used as a test setup to develop and evaluate the pulse-counting method. The anode signal from the electron multiplier was preamplified. The amplified signal was fed into a fast comparator for pulse-level discrimination. The output of the comparator was fed directly into a Xilinx FPGA development board. Verilog HDL software was written to bin the counts at user-selectable intervals. This system was able to count pulses at rates in the GHz range. The stored ion count number per bin was transferred to custom ion trap control software. Pulse-counting mass spectra were compared with mass spectra obtained using the standard analog-mode ion detection. Preliminary Data: Preliminary mass spectra have been obtained for both analog mode and pulse-counting mode under several sets of instrument operating conditions. Comparison of the spectra revealed better peak shapes for pulse-counting mode. Noise levels are as good as, or better than, analog-mode detection noise levels. To artificially force ion pile-up conditions, the ion trap was overfilled and ions were ejected at very high scan rates. Pile-up of ions was not significant for the ion trap under investigation even though the ions are ejected in so-called 'ion micro packets'. It was found that pulse-counting mode had higher dynamic range than analog mode, and that the first amplification stage in analog mode can distort mass peaks. The inherent speed of the pulse counting method also proved to be beneficial to ion trap operation and ion ejection characterization. Very high scan rates were possible with pulse counting since the digital circuitry response time is so much smaller than with the analog method. Careful investigation of the pulse-counting data also allowed observation of the applied resonant ejection frequency during mass analysis. Ejection of ion micro packets could be clearly observed in the binned data. A second oscillation frequency, much lower than the secular frequency, was also observed. Such an effect was earlier attributed to the oscillation of the total plasma cloud in the ion trap. While the components used to implement pulse counting are quite advanced, due to their prevalence in consumer electronics the cost of this detection system is no more than that of an analog-mode system. Total pulse-counting detection system electronics cost is under $250.
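A software analogue of the FPGA binning step described above, assuming pulse arrival timestamps have already passed the comparator: discriminated pulses are accumulated into user-selectable time bins to form the spectrum. Bin width and the synthetic timestamps are illustrative.

```python
import numpy as np

def bin_pulses(timestamps_ns, bin_ns, span_ns):
    """Accumulate discriminated pulse timestamps into fixed-width bins,
    mirroring what the Verilog counter does in hardware."""
    edges = np.arange(0, span_ns + bin_ns, bin_ns)
    counts, _ = np.histogram(timestamps_ns, bins=edges)
    return counts

# e.g. pulses from one ion-ejection scan, binned at 100 ns
rng = np.random.default_rng(3)
stamps = np.sort(rng.uniform(0, 1_000_000, size=50_000))  # 50k pulses over 1 ms
spectrum = bin_pulses(stamps, bin_ns=100, span_ns=1_000_000)
print(spectrum.shape, spectrum[:5])
```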
Electric prototype power processor for a 30cm ion thruster
NASA Technical Reports Server (NTRS)
Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.
1977-01-01
An electrical prototype power processor unit was designed, fabricated and tested with a 30 cm mercury ion engine for primary space propulsion. The power processor unit used the thyristor series resonant inverter as the basic power stage for the high power beam and discharge supplies. A transistorized series resonant inverter processed the remaining power for the low power outputs. The power processor included a digital interface unit to process all input commands and internal telemetry signals so that electric propulsion systems could be operated with a central computer system. The electrical prototype unit included design improvements in power components such as thyristors, transistors, filters, resonant capacitors, power transformers and inductors, in order to reduce component weight, minimize losses, and control the component temperature rise. A design analysis of the electrical prototype, covering component weight, losses, part count and a reliability estimate, is also presented. The electrical prototype was tested in a thermal vacuum environment. Integration tests were performed with a 30 cm ion engine and demonstrated operational compatibility. Electromagnetic interference data were also recorded to provide information for spacecraft integration.
Optimizing Controlling-Value-Based Power Gating with Gate Count and Switching Activity
NASA Astrophysics Data System (ADS)
Chen, Lei; Kimura, Shinji
In this paper, a new heuristic algorithm is proposed to optimize power domain clustering in controlling-value-based (CV-based) power gating technology. In this algorithm, both the switching activity of the sleep signals (p) and the overall number of sleep gates (gate count, N) are considered, and the sum of the products of p and N is minimized. The algorithm effectively exploits the total power reduction obtainable from CV-based power gating. Even when the maximum depth is kept the same, the proposed algorithm still achieves approximately 10% more power reduction than prior algorithms. Furthermore, a detailed comparison between the proposed heuristic algorithm and other possible heuristic algorithms is also presented. HSPICE simulation results show that over 26% of total power reduction can be obtained by using the new heuristic algorithm. In addition, the effect of dynamic power reduction through the CV-based power gating method and the delay overhead caused by the switching of sleep transistors are also shown in this paper.
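The abstract's objective, minimizing the sum over power domains of sleep-signal switching activity p times gate count N, can be sketched with a simple greedy merge. This is an illustrative reconstruction under assumed cost rules (merged activity approximated by min(1, p1 + p2)), not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Domain:
    p: float      # switching activity of the domain's sleep signal
    gates: int    # number of gates it puts to sleep

def cluster_cost(domains):
    """Objective from the abstract: sum over domains of p * N."""
    return sum(d.p * d.gates for d in domains)

def greedy_merge(domains, max_domains):
    """Repeatedly merge the pair whose merge increases the cost least.
    Merging ORs the sleep conditions, so the merged activity can only
    grow; we approximate it by min(1.0, p1 + p2)."""
    doms = list(domains)
    while len(doms) > max_domains:
        best = None
        for i in range(len(doms)):
            for j in range(i + 1, len(doms)):
                merged = Domain(min(1.0, doms[i].p + doms[j].p),
                                doms[i].gates + doms[j].gates)
                delta = (merged.p * merged.gates
                         - doms[i].p * doms[i].gates
                         - doms[j].p * doms[j].gates)
                if best is None or delta < best[0]:
                    best = (delta, i, j, merged)
        _, i, j, merged = best
        doms = [d for k, d in enumerate(doms) if k not in (i, j)] + [merged]
    return doms

doms = greedy_merge([Domain(0.1, 40), Domain(0.3, 10), Domain(0.05, 80)], 2)
print(doms, cluster_cost(doms))
```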
Phasor imaging with a widefield photon-counting detector
Siegmund, Oswald H. W.; Tremsin, Anton S.; Vallerga, John V.; Weiss, Shimon
2012-01-01
Fluorescence lifetime can be used as a contrast mechanism to distinguish fluorophores for localization or tracking, for studying molecular interactions, binding, assembly, and aggregation, or for observing conformational changes via Förster resonance energy transfer (FRET) between donor and acceptor molecules. Fluorescence lifetime imaging microscopy (FLIM) is thus a powerful technique but its widespread use has been hampered by demanding hardware and software requirements. FLIM data is often analyzed in terms of multicomponent fluorescence lifetime decays, which requires large signals for a good signal-to-noise ratio. This confines the approach to very low frame rates and limits the number of frames which can be acquired before bleaching the sample. Recently, a computationally efficient and intuitive graphical representation, the phasor approach, has been proposed as an alternative method for FLIM data analysis at the ensemble and single-molecule level. In this article, we illustrate the advantages of combining phasor analysis with a widefield time-resolved single photon-counting detector (the H33D detector) for FLIM applications. In particular we show that phasor analysis allows real-time subsecond identification of species by their lifetimes and rapid representation of their spatial distribution, thanks to the parallel acquisition of FLIM information over a wide field of view by the H33D detector. We also discuss possible improvements of the H33D detector’s performance made possible by the simplicity of phasor analysis and its relaxed timing accuracy requirements compared to standard time-correlated single-photon counting (TCSPC) methods. PMID:22352658
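At its core, the phasor approach reduces each pixel's decay histogram to two Fourier coefficients (g, s) at the excitation repetition frequency; species with different lifetimes then separate as clusters in the (g, s) plane. A minimal sketch, with bin count and period chosen for illustration:

```python
import numpy as np

def phasor(decay, period):
    """Map a fluorescence decay histogram to phasor coordinates (g, s)
    at the fundamental frequency omega = 2*pi/period."""
    t = (np.arange(decay.size) + 0.5) * (period / decay.size)
    omega = 2 * np.pi / period
    total = decay.sum()
    g = np.sum(decay * np.cos(omega * t)) / total
    s = np.sum(decay * np.sin(omega * t)) / total
    return g, s

# a single-exponential lifetime tau lands on the universal semicircle:
# g = 1/(1+(w*tau)^2), s = w*tau/(1+(w*tau)^2)
period, tau = 50.0, 4.0                    # ns, illustrative
t = (np.arange(256) + 0.5) * (period / 256)
decay = np.exp(-t / tau)
print(phasor(decay, period))               # ~ (0.80, 0.40) for w*tau ~ 0.50
```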
Heavy-quark meson spectrum tests of the Oktay–Kronfeld action
Bailey, Jon A.; DeTar, Carleton; Jang, Yong-Chull; ...
2017-11-15
The Oktay-Kronfeld (OK) action extends the Fermilab improvement program for massive Wilson fermions to higher order in suitable power-counting schemes. It includes dimension-six and -seven operators necessary for matching to QCD through order $\mathrm{O}(\Lambda^3/m_Q^3)$ in HQET power counting, for applications to heavy-light systems, and $\mathrm{O}(v^6)$ in NRQCD power counting, for applications to quarkonia. In the Symanzik power counting of lattice gauge theory near the continuum limit, the OK action includes all $\mathrm{O}(a^2)$ and some $\mathrm{O}(a^3)$ terms. To assess whether the theoretical improvement is realized in practice, we study combinations of heavy-strange and quarkonia masses and mass splittings, designed to isolate heavy-quark discretization effects. We find that, with one exception, the results obtained with the tree-level-matched OK action are significantly closer to the continuum limit than the results obtained with the Fermilab action. The exception is the hyperfine splitting of the bottom-strange system, for which our statistical errors are too large to draw a firm conclusion. Lastly, these studies are carried out with data generated with the tadpole-improved Fermilab and OK actions on 500 gauge configurations from one of MILC's $a \approx 0.12$ fm, $N_f = 2+1$-flavor, asqtad-staggered ensembles.
Low-Power, 8-Channel EEG Recorder and Seizure Detector ASIC for a Subdermal Implantable System.
Do Valle, Bruno G; Cash, Sydney S; Sodini, Charles G
2016-12-01
EEG remains the mainstay test for the diagnosis and treatment of patients with epilepsy. Unfortunately, ambulatory EEG systems are far from ideal for patients who have infrequent seizures. These systems only last up to 3 days and if a seizure is not captured during the recordings, a definite diagnosis of the patient's condition cannot be given. This work aims to address this need by proposing a subdermal implantable, eight-channel EEG recorder and seizure detector that has two modes of operation: diagnosis and seizure counting. In the diagnosis mode, EEG is continuously recorded until a number of seizures are recorded. In the seizure counting mode, the system uses a low-power algorithm to track the number of seizures a patient has, providing doctors with a reliable count to help determine medication efficacy or other clinical endpoint. An ASIC that implements the EEG recording and seizure detection algorithm was designed and fabricated in a 0.18 μm CMOS process. The ASIC includes eight EEG channels and is designed to minimize the system's power and size. The result is a power-efficient analog front end that requires 2.75 μW per channel in diagnosis mode and 0.84 μW per channel in seizure counting mode. Both modes have an input referred noise of approximately 1.1 μVrms.
NASA Astrophysics Data System (ADS)
Moralis-Pegios, M.; Terzenidis, N.; Mourgias-Alexandris, G.; Vyrsokinos, K.; Pleros, N.
2018-02-01
Disaggregated Data Centers (DCs) have emerged as a powerful architectural framework towards increasing resource utilization and system power efficiency, requiring, however, a networking infrastructure that can ensure low-latency and high-bandwidth connectivity between a high number of interconnected nodes. This reality has been the driving force towards high-port-count and low-latency optical switching platforms, with recent efforts concluding that the use of distributed control architectures as offered by Broadcast-and-Select (BS) layouts can lead to sub-μsec latencies. However, almost all high-port-count optical switch designs proposed so far rely either on electronic buffering and associated SerDes circuitry for resolving contention or on buffer-less designs with packet drop and re-transmit procedures, unavoidably increasing latency or limiting throughput. In this article, we demonstrate a 256x256 optical switch architecture for disaggregated DCs that employs small-size optical delay line buffering in a distributed control scheme, exploiting FPGA-based header processing over a hybrid BS/wavelength-routing topology that is implemented by a 16x16 BS design and a 16x16 AWGR. Simulation-based performance analysis reveals that even the use of a 2-packet optical buffer can yield <620 nsec latency with >85% throughput for up to 100% loads. The switch has been experimentally validated with 10 Gb/s optical data packets using 1:16 optical splitting and a SOA-MZI wavelength converter (WC) along with fiber delay lines for the 2-packet buffer implementation at every BS outgoing port, followed by an additional SOA-MZI tunable WC and the 16x16 AWGR. Error-free performance in all different switch input/output combinations has been obtained with a power penalty of <2.5 dB.
NASA Astrophysics Data System (ADS)
Danielsson, Anna T.; Berge, Maria; Lidar, Malena
2018-03-01
The purpose of this paper is to develop and illustrate an analytical framework for exploring how relations between knowledge and power are constituted in science and technology classrooms. In addition, the empirical purpose of this paper is to explore how disciplinary knowledge and knowledge-making are constituted in teacher-student interactions. In our analysis we focus on how instances of teacher-student interaction can be understood as simultaneously contributing to meaning-making and producing power relations. The analytical framework we have developed makes use of practical epistemological analysis in combination with a Foucauldian conceptualisation of power, assuming that the privileging of educational content needs to be understood as integral to the execution of power in the classroom. The empirical data consist of video-recorded teaching episodes, taken from a teaching sequence of three 1-h lessons in one Swedish technology classroom with sixteen 13-14-year-old students. In the analysis we have identified how different epistemological moves contribute to the normalisation and exclusion of knowledge as well as of ways of knowledge-making. Further, by looking at how the teacher communicates what counts as (ir)relevant knowledge or (ir)relevant ways of acquiring knowledge, we are able to describe what kind of technology student is made desirable in the analysed classroom.
Point-of-care, portable microfluidic blood analyzer system
NASA Astrophysics Data System (ADS)
Maleki, Teimour; Fricke, Todd; Quesenberry, J. T.; Todd, Paul W.; Leary, James F.
2012-03-01
Recent advances in MEMS technology have provided an opportunity to develop microfluidic devices with enormous potential for portable, point-of-care, low-cost medical diagnostic tools. Hand-held flow cytometers will soon be used in disease diagnosis and monitoring. Despite much interest in miniaturizing commercially available cytometers, they remain costly, bulky, and require expert operation. In this article, we report progress on the development of a battery-powered handheld blood analyzer that will quickly and automatically process a drop of whole human blood by real-time, on-chip magnetic separation of white blood cells (WBCs), fluorescence analysis of labeled WBC subsets, and counting a reproducible fraction of the red blood cells (RBCs) by light scattering. The whole blood (WB) analyzer is composed of a micro-mixer, a special branching/separation system, an optical detection system, and electronic readout circuitry. A droplet of unprocessed blood is mixed with the reagents, i.e. magnetic beads and fluorescent stain, in the micro-mixer. Valve-less sorting is achieved by magnetic deflection of magnetic microparticle-labeled WBCs. LED excitation in combination with an avalanche photodiode (APD) detection system is used for counting fluorescent WBC subsets using several colors of immuno-Qdots, while counting of a reproducible fraction of red blood cells (RBCs) is performed using a laser light scattering measurement with a photodiode. Optimized branching/channel width was achieved using Comsol Multi-Physics™ simulation. To accommodate full portability, all required power supplies (40 V, ±10 V, and +3 V) are provided via step-up voltage converters from one battery. A simple onboard lock-in amplifier is used to increase the sensitivity/resolution of the pulse counting circuitry.
Slusarewicz, Paul; Pagano, Stefanie; Mills, Christopher; Popa, Gabriel; Chow, K Martin; Mendenhall, Michael; Rodgers, David W; Nielsen, Martin K
2016-07-01
Intestinal parasites are a concern in veterinary medicine worldwide and for human health in the developing world. Infections are identified by microscopic visualisation of parasite eggs in faeces, which is time-consuming, requires technical expertise and is impractical for use on-site. For these reasons, recommendations for parasite surveillance are not widely adopted and parasite control is based on administration of rote prophylactic treatments with anthelmintic drugs. This approach is known to promote anthelmintic resistance, so there is a pronounced need for a convenient egg counting assay to promote good clinical practice. Using a fluorescent chitin-binding protein, we show that this structural carbohydrate is present and accessible in shells of ova of strongyle, ascarid, trichurid and coccidian parasites. Furthermore, we show that a cellular smartphone can be used as an inexpensive device to image fluorescent eggs and, by harnessing the computational power of the phone, to perform image analysis to count the eggs. Strongyle egg counts generated by the smartphone system had a significant linear correlation with manual McMaster counts (R² = 0.98), but with a significantly lower coefficient of variation (P = 0.0177). Furthermore, the system was capable of differentiating equine strongyle and ascarid eggs similar to the McMaster method, but with significantly lower coefficients of variation (P < 0.0001). This demonstrates the feasibility of a simple, automated on-site test to detect and/or enumerate parasite eggs in mammalian faeces without the need for a laboratory microscope, and highlights the potential of smartphones as relatively sophisticated, inexpensive and portable medical diagnostic devices. Copyright © 2016 Australian Society for Parasitology. Published by Elsevier Ltd. All rights reserved.
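The abstract does not publish the app's actual pipeline, but fluorescent egg counting reduces in essence to thresholding and labeling connected components; a toy sketch under that assumption:

```python
import numpy as np
from scipy import ndimage

def count_eggs(image, threshold, min_pixels=20):
    """Count bright fluorescent blobs: threshold, label connected
    components, and discard specks smaller than min_pixels."""
    mask = image > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(sizes >= min_pixels))

# synthetic frame: dark background with two bright 6x6 "eggs"
img = np.zeros((100, 100))
img[10:16, 10:16] = 200
img[50:56, 70:76] = 180
img += np.random.default_rng(4).normal(0, 5, img.shape)
print(count_eggs(img, threshold=100))   # -> 2
```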
Higher order relativistic galaxy number counts: dominating terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Jeppe Trøst; Durrer, Ruth, E-mail: Jeppe.Trost@nbi.dk, E-mail: Ruth.Durrer@unige.ch
2017-03-01
We review the number counts to second order, concentrating on the terms which dominate on sub-horizon scales. We re-derive the result for these terms and compare it with the different versions found in the literature. We generalize our derivation to higher-order terms, especially the third-order number counts which are needed to compute the 1-loop contribution to the power spectrum.
Prognostic health monitoring in switch-mode power supplies with voltage regulation
NASA Technical Reports Server (NTRS)
Hofmeister, James P (Inventor); Judkins, Justin B (Inventor)
2009-01-01
The system includes a current injection device in electrical communication with the switch mode power supply. The current injection device is positioned to alter the initial, non-zero load current when activated. A prognostic control is in communication with the current injection device, controlling activation of the current injection device. A frequency detector is positioned to receive an output signal from the switch mode power supply and is able to count cycles in a sinusoidal wave within the output signal. An output device is in communication with the frequency detector. The output device outputs a result of the counted cycles, which is indicative of damage to, and the remaining useful life of, the switch mode power supply.
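A rough sketch of what such a frequency detector does, under our illustrative assumption that cycles are counted as rising zero crossings of the ringing superimposed on the supply output:

```python
import numpy as np

def count_cycles(signal):
    """Count full cycles of a (roughly) sinusoidal record by counting
    rising zero crossings of the mean-removed waveform."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    rising = (x[:-1] < 0) & (x[1:] >= 0)
    return int(rising.sum())

# damped ringing, e.g. the response after a load-current perturbation
t = np.linspace(0, 1e-3, 10_000)                 # 1 ms record
ring = np.exp(-t / 3e-4) * np.sin(2 * np.pi * 8e3 * t)
print(count_cycles(ring))                        # ~8 cycles of an 8 kHz ring
```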
Blocking Losses With a Photon Counter
NASA Technical Reports Server (NTRS)
Moision, Bruce E.; Piazzolla, Sabino
2012-01-01
It was not known how to assess accurately losses in a communications link due to photodetector blocking, a phenomenon wherein a detector is rendered inactive for a short time after the detection of a photon. When used to detect a communications signal, blocking leads to losses relative to an ideal detector, which may be measured as a reduction in the communications rate for a given received signal power, or an increase in the signal power required to support the same communications rate. This work involved characterizing blocking losses for single detectors and arrays of detectors. Blocking may be mitigated by spreading the signal intensity over an array of detectors, reducing the count rate on any one detector. A simple approximation was made to the blocking loss as a function of the probability that a detector is unblocked at a given time, essentially treating the blocking probability as a scaling of the detection efficiency. An exact statistical characterization was derived for a single detector, and an approximation for multiple detectors. This allowed derivation of several accurate approximations to the loss. Methods were also derived to account for a rise time in recovery, and for non-uniform illumination due to diffraction and atmospheric distortion of the phase front. It was assumed that the communications signal is intensity modulated and received by an array of photon-counting photodetectors. For the purpose of this analysis, it was assumed that the detectors are ideal, in that they produce a signal that allows one to reproduce exactly the arrival times of electrons, produced either as photoelectrons or from dark noise. For single detectors, the performance in blocking of the maximum-likelihood (ML) receiver is illustrated, as well as that of a maximum-count (MC) receiver that, when receiving a pulse-position-modulated (PPM) signal, selects the symbol corresponding to the slot with the largest electron count. Whereas the MC receiver saturates at high count rates, the ML receiver may not. The losses in capacity, symbol-error-rate (SER), and count-rate were numerically computed. It was shown that the capacity and symbol-error-rate losses track each other, whereas the count-rate loss does not generally reflect the SER or capacity loss, as the slot statistics at the detector output are no longer Poisson. It is also shown that the MC receiver loss may be accurately predicted for dead times on the order of a slot.
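A small Monte Carlo sketch of the blocking effect itself, assuming a non-paralyzable detector with a fixed dead time: Poisson arrivals are dropped if they land within the dead window of the previous detection. Rates and dead time are illustrative; the classic closed form is shown for comparison.

```python
import numpy as np

def detected_fraction(rate_hz, dead_s, duration_s, seed=0):
    """Fraction of Poisson arrivals that a non-paralyzable detector
    with dead time dead_s actually registers."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(rate_hz * duration_s)
    arrivals = np.sort(rng.uniform(0, duration_s, n))
    detected, last = 0, -np.inf
    for t in arrivals:
        if t - last >= dead_s:
            detected += 1
            last = t
    return detected / max(n, 1)

# 10 Mcps onto a detector with 50 ns dead time
sim = detected_fraction(1e7, 50e-9, 1e-2)
theory = 1 / (1 + 1e7 * 50e-9)   # classic non-paralyzable dead-time formula
print(f"simulated {sim:.3f}, theory {theory:.3f}")
```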
Measuring the lensing potential with tomographic galaxy number counts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montanari, Francesco; Durrer, Ruth, E-mail: francesco.montanari@unige.ch, E-mail: ruth.durrer@unige.ch
2015-10-01
We investigate how the lensing potential can be measured tomographically with future galaxy surveys using their number counts. Such a measurement is an independent test of the standard ΛCDM framework and can be used to discern modified theories of gravity. We perform a Fisher matrix forecast based on galaxy angular-redshift power spectra, assuming specifications consistent with future photometric Euclid-like surveys and spectroscopic SKA-like surveys. For the Euclid-like survey we derive a fitting formula for the magnification bias. Our analysis suggests that the cross correlation between different redshift bins is very sensitive to the lensing potential, such that the survey can measure the amplitude of the lensing potential at the same level of precision as other standard ΛCDM cosmological parameters.
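Schematically, a Fisher forecast of this kind sums, over multipoles, the parameter derivatives of the angular power spectra weighted by the inverse spectrum covariance. The sketch below uses random stand-in derivatives and an identity covariance purely to show the mechanics, not the paper's survey specifications.

```python
import numpy as np

def fisher_matrix(dcl_dtheta, cov_inv):
    """F_ab = sum_l dC_l/dtheta_a . Cov^-1 . dC_l/dtheta_b.
    dcl_dtheta: (n_params, n_ell, n_spectra); cov_inv: (n_ell, n_spectra, n_spectra)."""
    n_p = dcl_dtheta.shape[0]
    F = np.zeros((n_p, n_p))
    for a in range(n_p):
        for b in range(n_p):
            F[a, b] = np.einsum('ls,lst,lt->', dcl_dtheta[a], cov_inv, dcl_dtheta[b])
    return F

# toy: 2 parameters, 100 multipoles, 3 redshift-bin-pair spectra
rng = np.random.default_rng(5)
derivs = rng.normal(size=(2, 100, 3))
cov_inv = np.tile(np.eye(3), (100, 1, 1))
F = fisher_matrix(derivs, cov_inv)
print("1-sigma errors:", np.sqrt(np.diag(np.linalg.inv(F))))
```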
Satoh, K; Noguchi, M; Higuchi, H; Kitamura, K
1984-12-01
Liquid scintillation counting of alpha rays with pulse shape discrimination was applied to the analysis of 226Ra and 239+240Pu in environmental samples and of alpha-emitters in/on a filter paper. The instrument used in this study was either a specially designed detector or a commercial liquid scintillation counter with an automatic sample changer, both of which were connected to the pulse shape discrimination circuit. The background counting rate in the alpha energy region of 5-7 MeV was 0.01 or 0.04 cpm/MeV, respectively. The figure of merit, indicating the resolving power for alpha- and beta-particles in the time spectrum, was found to be 5.7 for the commercial liquid scintillation counter.
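A hedged note on the figure of merit quoted: in pulse-shape discrimination work it is conventionally the separation of the alpha and beta peaks in the time spectrum divided by the sum of their FWHMs; whether this paper uses exactly that definition is not stated in the abstract. Illustrative numbers below are chosen simply to reproduce the quoted value.

```python
# Conventional PSD figure of merit (assumed definition, see note above):
# FOM = |centroid_alpha - centroid_beta| / (FWHM_alpha + FWHM_beta)
def psd_fom(mu_alpha, mu_beta, fwhm_alpha, fwhm_beta):
    return abs(mu_alpha - mu_beta) / (fwhm_alpha + fwhm_beta)

# illustrative channel positions and widths giving FOM = 5.7
print(psd_fom(mu_alpha=420.0, mu_beta=135.0, fwhm_alpha=30.0, fwhm_beta=20.0))
```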
Winter Habitat Preferences for Florida Manatees and Vulnerability to Cold
Laist, David W.; Taylor, Cynthia; Reynolds, John E.
2013-01-01
To survive cold winter periods most, if not all, Florida manatees rely on warm-water refuges in the southern two-thirds of the Florida peninsula. Most refuges are either warm-water discharges from power plants and natural springs, or passive thermal basins that temporarily trap relatively warm water for a week or more. Strong fidelity to one or more refuges has created four relatively discrete Florida manatee subpopulations. Using statewide winter counts of manatees from 1999 to 2011, we provide the first attempt to quantify the proportion of animals using the three principal refuge types (power plants, springs, and passive thermal basins) statewide and for each subpopulation. Statewide across all years, 48.5% of all manatees were counted at power plant outfalls, 17.5% at natural springs, and 34.9% at passive thermal basins or sites with no known warm-water features. Atlantic Coast and Southwest Florida subpopulations comprised 82.2% of all manatees counted (45.6% and 36.6%, respectively), with each subpopulation relying principally on power plants (66.6% and 47.4%, respectively). The upper St. Johns River and Northwest Florida subpopulations comprised 17.8% of all manatees counted, with almost all animals relying entirely on springs (99.2% and 88.6% of those subpopulations, respectively). A record high count of 5,076 manatees in January 2010 revealed minimum sizes for the four subpopulations of: 230 manatees in the upper St. Johns River; 2,548 on the Atlantic Coast; 645 in Northwest Florida; and 1,774 in Southwest Florida. Based on a comparison of carcass recovery locations for 713 manatees killed by cold stress between 1999 and 2011 and the distribution of known refuges, it appears that springs offer manatees the best protection against cold stress. Long-term survival of Florida manatees will require improved efforts to enhance and protect manatee access to and use of warm-water springs as power plant outfalls are shut down. PMID:23527063
Karulin, Alexey Y; Caspell, Richard; Dittrich, Marcus; Lehmann, Paul V
2015-03-02
Accurate assessment of positive ELISPOT responses for low frequencies of antigen-specific T-cells is controversial. In particular, it is still unknown whether ELISPOT counts within replicate wells follow a theoretical distribution function, and thus whether high-power parametric statistics can be used to discriminate between positive and negative wells. We studied experimental distributions of spot counts for up to 120 replicate wells of IFN-γ production by CD8+ T-cells responding to EBV LMP2A (426-434) peptide in human PBMC. The cells were tested in serial dilutions covering a wide range of average spot counts per condition, from just a few to hundreds of spots per well. Statistical analysis of the data using diagnostic Q-Q plots and the Shapiro-Wilk normality test showed that, over the entire dynamic range of ELISPOT, spot counts within replicate wells followed a normal distribution. This result implies that the Student t-test and ANOVA are suited to identify positive responses. We also show experimentally that borderline responses can be reliably detected by involving more replicate wells, plating higher numbers of PBMC, addition of IL-7, or a combination of these. Furthermore, we have experimentally verified that the number of replicates needed for detection of weak responses can be calculated using parametric statistics.
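A minimal sketch of the workflow this result licenses: check normality of replicate-well spot counts with Shapiro-Wilk, then compare antigen wells against medium controls with a t-test. The counts below are synthetic, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
control = rng.normal(12, 4, size=24).round().clip(min=0)  # medium-only wells
antigen = rng.normal(19, 5, size=24).round().clip(min=0)  # stimulated wells

# 1) normality of replicate counts justifies parametric testing
print("Shapiro-Wilk p:", stats.shapiro(control).pvalue, stats.shapiro(antigen).pvalue)

# 2) one-sided t-test: is the antigen response above background?
t, p = stats.ttest_ind(antigen, control, alternative='greater')
print(f"t = {t:.2f}, p = {p:.4f}")
```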
Kang, Guangliang; Du, Li; Zhang, Hong
2016-06-22
The growing complexity of biological experiment design based on high-throughput RNA sequencing (RNA-seq) is calling for more accommodating statistical tools. We focus on differential expression (DE) analysis using RNA-seq data in the presence of multiple treatment conditions. We propose a novel method, multiDE, for facilitating DE analysis using RNA-seq read count data with multiple treatment conditions. The read count is assumed to follow a log-linear model incorporating two factors (i.e., condition and gene), where an interaction term is used to quantify the association between gene and condition. The number of degrees of freedom is reduced to one through a first-order decomposition of the interaction, leading to a dramatic improvement in power for testing DE genes when the number of conditions is greater than two. In our simulations, multiDE outperformed the benchmark methods (i.e., edgeR and DESeq2) even when the underlying model was severely misspecified, and the power gain increased with the number of conditions. In the application to two real datasets, multiDE identified more biologically meaningful DE genes than the benchmark methods. An R package implementing multiDE is available publicly at http://homepage.fudan.edu.cn/zhangh/softwares/multiDE . When the number of conditions is two, multiDE performs comparably with the benchmark methods. When the number of conditions is greater than two, multiDE outperforms the benchmark methods.
Energy harvesting using AC machines with high effective pole count
NASA Astrophysics Data System (ADS)
Geiger, Richard Theodore
In this thesis, ways to improve the power conversion of rotating generators at low rotor speeds in energy harvesting applications were investigated. One method is to increase the pole count, which increases the generator back-emf without also increasing the I²R losses, thereby increasing both torque density and conversion efficiency. One machine topology that has a high effective pole count is a hybrid "stepper" machine. However, the large self inductance of these machines decreases their power factor and hence the maximum power that can be delivered to a load. This effect can be cancelled by the addition of capacitors in series with the stepper windings. A circuit was designed and implemented to automatically vary the series capacitance over the entire speed range investigated. The addition of the series capacitors improved the power output of the stepper machine by up to 700%. At low rotor speeds, with the addition of series capacitance, the power output of the hybrid "stepper" was more than 200% that of a similarly sized PMDC brushed motor. Finally, in this thesis a hybrid lumped parameter / finite element model was used to investigate the impact of number, shape and size of the rotor and stator teeth on machine performance. A typical off-the-shelf hybrid stepper machine has significant cogging torque by design. This cogging torque is a major problem in most small energy harvesting applications. In this thesis it was shown that the cogging and ripple torque can be dramatically reduced. These findings confirm that high-pole-count topologies, and specifically the hybrid stepper configuration, are an attractive choice for energy harvesting applications.
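The series-capacitor trick is resonance compensation: at the electrical frequency f_e, a capacitance C = 1/(omega^2 L) cancels the winding reactance, so more of the back-emf reaches the load. A worked instance with assumed machine values (not the thesis's measured parameters):

```python
import math

def series_cap(L_winding_h, rotor_hz, pole_pairs):
    """Capacitance that cancels winding reactance at the electrical
    frequency f_e = pole_pairs * rotor speed."""
    f_e = pole_pairs * rotor_hz
    omega = 2 * math.pi * f_e
    return 1.0 / (omega ** 2 * L_winding_h)

# assumed values: 50-pole-pair stepper, 5 rev/s rotor, 40 mH winding
C = series_cap(0.040, rotor_hz=5.0, pole_pairs=50)
print(f"{C*1e6:.2f} uF at f_e = {50*5.0:.0f} Hz")   # ~10.13 uF at 250 Hz
```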
Cosmology from galaxy clusters as observed by Planck
NASA Astrophysics Data System (ADS)
Pierpaoli, Elena
We propose to use current all-sky data on galaxy clusters in the radio/infrared bands in order to constrain cosmology. This will be achieved by performing parameter estimation with number counts and power spectra for galaxy clusters detected by Planck through their Sunyaev-Zel'dovich signature. The ultimate goal of this proposal is to use clusters as tracers of matter density in order to provide information about fundamental properties of our Universe, such as the law of gravity on large scales, early Universe phenomena, structure formation and the nature of dark matter and dark energy. We will leverage the availability of a larger and deeper cluster catalog from the latest Planck data release in order to include, for the first time, the cluster power spectrum in the cosmological parameter determination analysis. Furthermore, we will extend the cluster analysis to cosmological models not yet investigated by the Planck collaboration. These aims require a diverse set of activities, ranging from the characterization of the clusters' selection function, the choice of the cosmological cluster sample to be used for parameter estimation, and the construction of mock samples in the various cosmological models with correct correlation properties in order to produce reliable selection functions and noise covariance matrices, to the construction of the appropriate likelihood for number counts and power spectra. We plan to make the final code available to the community and compatible with the most widely used cosmological parameter estimation code. This research makes use of data from the NASA satellites Planck and, less directly, Chandra, in order to constrain cosmology, and therefore perfectly fits the NASA objectives and the specifications of this solicitation.
Behera, G; Sutar, P P; Aditya, S
2017-11-01
The commercially available dry turmeric powder at 10.34% d.b. moisture content was decontaminated using microwaves at high power density for a short time. To avoid moisture loss from the turmeric due to high microwave power, the drying kinetics were modelled and considered during optimization of the microwave decontamination process. The effect of microwave power density (10, 33.5 and 57 W g⁻¹), exposure time (10, 20 and 30 s) and thickness of the turmeric layer (1, 2 and 3 mm) on total plate count, total yeast and mold count (YMC), color change (∆E), average final temperature of the product (T_af), water activity (a_w), Page model rate constant (k) and total moisture loss (ML) was studied. A perturbation analysis was carried out for all variables. It was found that achieving more than a one-log reduction in yeast and mold count entails a substantial reduction in moisture content, leading to reduced output. The microwave power density significantly affected the YMC, T_af and a_w of the turmeric powder, but the thickness of the sample and the microwave exposure time affected only T_af, a_w and ML. The color of the turmeric and the Page model rate constant were not significantly changed during the process, as anticipated. Numerical optimization was done at 57.00 W g⁻¹ power density, 1.64 mm sample layer thickness and 30 s exposure time. It resulted in 1.6 × 10⁷ CFU g⁻¹ YMC, 82.71 °C T_af, 0.383 a_w and 8.41% (d.b.) final moisture content.
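The Page model used for the drying kinetics is the standard thin-layer form MR = exp(-k t^n); a small sketch of fitting its rate constant k and exponent n to moisture-ratio data, with invented data points rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Thin-layer drying: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# illustrative moisture-ratio observations over exposure time (s)
t = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
mr = np.array([0.93, 0.84, 0.75, 0.66, 0.58, 0.51])

(k, n), _ = curve_fit(page_model, t, mr, p0=[0.01, 1.0])
print(f"k = {k:.4f}, n = {n:.2f}")
```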
NASA Technical Reports Server (NTRS)
Degnan, John J.; Smith, David E. (Technical Monitor)
2000-01-01
We consider the optimum design of photon-counting microlaser altimeters operating from airborne and spaceborne platforms under both day and night conditions. Extremely compact Q-switched microlaser transmitters produce trains of low energy pulses at multi-kHz rates and can easily generate subnanosecond pulse-widths for precise ranging. To guide the design, we have modeled the solar noise background and developed simple algorithms, based on Post-Detection Poisson Filtering (PDPF), to optimally extract the weak altimeter signal from a high noise background during daytime operations. Practical technology issues, such as detector and/or receiver dead times, have also been considered in the analysis. We describe an airborne prototype, being developed under NASA's Instrument Incubator Program, which is designed to operate at a 10 kHz rate from aircraft cruise altitudes up to 12 km with laser pulse energies on the order of a few microjoules. We also analyze a compact and power efficient system designed to operate from Mars orbit at an altitude of 300 km and sample the Martian surface at rates up to 4.3 kHz using a 1 watt laser transmitter and an 18 cm telescope. This yields a Power-Aperture Product of 0.24 W-square meter, a value almost 4 times smaller than that of the Mars Orbiting Laser Altimeter (0.88 W-square meter), yet the sampling rate is roughly 400 times greater (4 kHz vs 10 Hz). Relative to conventional high power laser altimeters, advantages of photon-counting laser altimeters include: (1) a more efficient use of available laser photons, providing up to two orders of magnitude greater surface sampling rates for a given laser power-telescope aperture product; (2) a simultaneous two order of magnitude reduction in the volume, cost and weight of the telescope system; (3) the unique ability to spatially resolve the source of the surface return in a photon-counting mode through the use of pixellated or imaging detectors; and (4) improved vertical and transverse spatial resolution resulting from both (1) and (3). Furthermore, because of significantly lower laser pulse energies, the microaltimeter is inherently more eyesafe to observers on the ground and less prone to internal optical damage, which can terminate a space mission prematurely.
Elucidating Proteoform Families from Proteoform Intact-Mass and Lysine-Count Measurements
2016-01-01
Proteomics is presently dominated by the “bottom-up” strategy, in which proteins are enzymatically digested into peptides for mass spectrometric identification. Although this approach is highly effective at identifying large numbers of proteins present in complex samples, the digestion into peptides renders it impossible to identify the proteoforms from which they were derived. We present here a powerful new strategy for the identification of proteoforms and the elucidation of proteoform families (groups of related proteoforms) from the experimental determination of the accurate proteoform mass and number of lysine residues contained. Accurate proteoform masses are determined by standard LC–MS analysis of undigested protein mixtures in an Orbitrap mass spectrometer, and the lysine count is determined using the NeuCode isotopic tagging method. We demonstrate the approach in analysis of the yeast proteome, revealing 8637 unique proteoforms and 1178 proteoform families. The elucidation of proteoforms and proteoform families afforded here provides an unprecedented new perspective upon proteome complexity and dynamics. PMID:26941048
NASA Astrophysics Data System (ADS)
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-01
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm, w, σ8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ωm, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
Wildhaber, M.L.; Holan, S.H.; Bryan, J.L.; Gladish, D.W.; Ellersieck, M.
2011-01-01
In 2003, the US Army Corps of Engineers initiated the Pallid Sturgeon Population Assessment Program (PSPAP) to monitor pallid sturgeon and the fish community of the Missouri River. The power analysis of PSPAP presented here was conducted to guide sampling design and effort decisions. The PSPAP sampling design has a nested structure with multiple gear subsamples within a river bend. Power analyses were based on a normal linear mixed model, using a mixed cell means approach, with variance estimates from the original data. It was found that, at current effort levels, at least 20 years for pallid sturgeon and 10 years for shovelnose sturgeon are needed to detect a 5% annual decline. Modified bootstrap simulations suggest power estimates from the original data are conservative due to excessive zero fish counts. In general, the approach presented is applicable to a wide array of animal monitoring programs.
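A stripped-down version of this kind of power analysis: simulate overdispersed annual counts declining 5% per year, test for trend by regressing log counts on year, and report the detection rate. The variance structure here is deliberately much simpler than PSPAP's nested gear-within-bend design, and all parameters are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def trend_power(n_years, decline=0.05, mean0=50, dispersion=3.0,
                n_sims=2000, alpha=0.05):
    """Power to detect an exponential decline from yearly count indices,
    using negative binomial noise and regression of log(count+1) on year."""
    years = np.arange(n_years)
    mu = mean0 * (1 - decline) ** years
    hits = 0
    for _ in range(n_sims):
        p = dispersion / (dispersion + mu)
        counts = rng.negative_binomial(dispersion, p)
        res = stats.linregress(years, np.log(counts + 1))
        if res.pvalue < alpha and res.slope < 0:
            hits += 1
    return hits / n_sims

for n in (5, 10, 20):
    print(n, "years:", trend_power(n))
```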
No control genes required: Bayesian analysis of qRT-PCR data.
Matz, Mikhail V; Wright, Rachel M; Scott, James G
2013-01-01
Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the "classic" analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R.
Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang
2014-10-16
Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R² of 0.69, a mean prediction error of 2.75 µg/m³, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m³, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m³, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
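A sketch of how such a buffer-zone fire predictor can be constructed: count fire detections within 75 km of each monitor using great-circle distance. The coordinates below are invented; the study's actual satellite data handling is more involved.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def fire_count(site_lat, site_lon, fire_lats, fire_lons, radius_km=75.0):
    """Number of fire detections inside the buffer around one monitor."""
    d = haversine_km(site_lat, site_lon, fire_lats, fire_lons)
    return int(np.sum(d <= radius_km))

# illustrative: one Georgia monitor and a handful of fire pixels
fires_lat = np.array([31.2, 31.9, 30.8, 33.0])
fires_lon = np.array([-83.5, -83.1, -84.4, -83.9])
print(fire_count(31.5, -83.6, fires_lat, fires_lon))
```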
Relativistic Transformations of Light Power.
ERIC Educational Resources Information Center
McKinley, John M.
1979-01-01
Using a photon-counting technique, finds the angular distribution of emitted and detected power and the total radiated power of an arbitrary moving source, and uses the technique to verify the predicted effect of the earth's motion through the cosmic blackbody radiation. (Author/GA)
Impacts of geothermal energy developments on hydrological environment in hot spring areas
NASA Astrophysics Data System (ADS)
Taniguchi, M.
2015-12-01
The water-energy nexus, such as geothermal energy development and its impacts on groundwater, river water, and coastal water, is one of the key issues for a sustainable society. This is because the demand for both water and energy resources will increase in the near future, and tradeoffs between the two resources and conflicts between stakeholders will arise. Geothermal power generation, hot spring heat power generation, and steam power generation are being developed in hot spring areas of Ring of Fire countries, including Japan, as renewable and sustainable energy. Impacts on river ecosystems of the hot water discharged after use for hot spring heat and steam power generation have been observed in Beppu, Oita Prefecture, Japan. The number of fish species in the Hirata river, which receives discharged hot water, is much lower than in the Hiyakawa river, which does not, although the dominant species of tilapia was found in the Hirata river. The water temperature of the Hirata river is raised by the discharged hot water by about 10 degrees C. The impacts of the development of steam power generation on hot spring water and downstream groundwater are also evaluated in Beppu; decreases in the temperature and volume of the hot spring water and groundwater after development are a concern. Stakeholder analysis related to hot spa and power generation businesses and others in Beppu showed common interests in community development among stakeholders, as well as gaps in prerequisite knowledge and in recognition of the geothermal resource in terms of its economic/non-economic value and its utilization for power generation versus hot springs. We screened stakeholders in four categories (hot spring resort inhabitants, industries, supporters, environmentalists) and set up three communities consisting of 50 persons from the above categories. One remarkable result regarding the pros and cons of geothermal power in general terms was that the supporter count increased greatly while the neutral count decreased greatly after deliberation, suggesting a response to the provision of scientific evidence on the issue.
Clement, Matthew; O'Keefe, Joy M; Walters, Brianne
2015-01-01
While numerous methods exist for estimating abundance when detection is imperfect, these methods may not be appropriate due to logistical difficulties or unrealistic assumptions. In particular, if highly mobile taxa are frequently absent from survey locations, methods that estimate a probability of detection conditional on presence will generate biased abundance estimates. Here, we propose a new estimator for the abundance of mobile populations using telemetry and counts of unmarked animals. The estimator assumes that the target population conforms to a fission-fusion grouping pattern, in which the population is divided into groups that frequently change in size and composition. If the assumptions are met, it is not necessary to locate all groups in the population to estimate abundance. We derive the estimator, perform a simulation study, conduct a power analysis, and apply the method to field data. The simulation study confirmed that, given a modest survey effort, our estimator is asymptotically unbiased, with low bias, narrow confidence intervals, and good coverage. The power analysis provided initial guidance on survey effort. When applied to small data sets obtained by radio-tracking Indiana bats, abundance estimates were reasonable, although imprecise. The proposed method has the potential to improve abundance estimates for mobile species with a fission-fusion social structure, such as Indiana bats, because it does not condition detection on presence at survey locations and because it avoids certain restrictive assumptions.
A powerful and flexible approach to the analysis of RNA sequence count data
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.
2011-01-01
Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data, and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with alternative methods for handling RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
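For readers unfamiliar with the first of these two ingredients, the sketch below writes out a beta-binomial log-likelihood of the kind such a generalized linear model is built on. The (p, phi) mean/overdispersion parameterization is a common convention assumed here, not necessarily BBSeq's own internal one.

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabinom_logpmf(k, n, p, phi):
    """log P(k | n) for a beta-binomial with mean proportion p and
    overdispersion phi in (0, 1); phi -> 0 recovers the binomial."""
    a = p * (1.0 - phi) / phi
    b = (1.0 - p) * (1.0 - phi) / phi
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + a, n - k + b) - betaln(a, b))

# Example: 30 reads for one gene out of 1e6 sequenced, at relative
# abundance 2e-5, with mild overdispersion.
print(betabinom_logpmf(30, 1_000_000, 2e-5, 0.01))
```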
Reddy, P J; Bhade, S P D; Kolekar, R V; Singh, Rajvir; Pradeepkumar, K S
2014-01-01
The measurement of tritium in environmental samples requires the highest possible sensitivity. In the present study, the authors optimised the counting window for the analysis of (3)H in environmental samples using the recently installed ultra-low-level Quantulus 1220 liquid scintillation counting system at BARC, to improve the detection limit of the system. The optimised counting window, corresponding to the highest figure of merit of 883.8, was found to be channels 20-162. Different brands of packaged drinking water were analysed to select a blank that would define the system background. The minimum detectable activity (MDA) achieved was 1.5 Bq l(-1) for a total counting time of 500 min. The concentration of tritium in well and borewell water samples, collected from villages of Pune, villages located 1.8 km from Tarapur Atomic Power Station, Kolhapur and Ratnagiri, was analysed. The activity concentration ranged from 0.55 to 3.66 Bq l(-1). The associated age-dependent dose from water ingestion in the study area was estimated. The committed effective dose recorded for different age classes is negligible compared with World Health Organization and US Environmental Protection Agency dose guidelines.
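Two quantities drive this kind of window optimisation: the figure of merit used to rank candidate channel windows and the Currie-style minimum detectable activity. The sketch below shows both; all input values are illustrative, not those of the study.

```python
import numpy as np

def figure_of_merit(efficiency_pct, background_cpm):
    """FOM = E^2 / B; the optimal counting window maximises this."""
    return efficiency_pct ** 2 / background_cpm

def mda_bq_per_l(background_cpm, count_time_min, efficiency, volume_l):
    """Currie-type MDA in Bq/L for a paired sample/background count."""
    ld_counts = 2.71 + 4.65 * np.sqrt(background_cpm * count_time_min)
    return ld_counts / (efficiency * count_time_min * 60.0 * volume_l)

# e.g. a 500 min count at 25% efficiency, 2 cpm background, 8 mL sample:
print(mda_bq_per_l(2.0, 500.0, 0.25, 0.008))   # ~2.5 Bq/L
```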
Handheld 2-channel impedimetric cell counting system with embedded real-time processing
NASA Astrophysics Data System (ADS)
Rottigni, A.; Carminati, M.; Ferrari, G.; Vahey, M. D.; Voldman, J.; Sampietro, M.
2011-05-01
Lab-on-a-chip systems have been attracting growing attention for the prospect of miniaturized, portable bio-chemical assays. Here we present the design and characterization of a miniaturized, USB-powered, self-contained, 2-channel instrument for impedance sensing, suitable for label-free tracking and real-time detection of cells flowing in microfluidic channels. This original circuit features a signal generator based on a direct digital synthesizer, a transimpedance amplifier, an integrated square-wave lock-in coupled to a ΣΔ ADC, and a digital processing platform. Real-time automatic peak detection on two channels is implemented in an FPGA. System functionality has been tested with an electronic resistance modulator simulating the 1% impedance variation produced by cells, reaching a time resolution of 50 µs (enabling a count rate of 2000 events/s) with an applied voltage as low as 200 mV. Biological experiments have been carried out counting yeast cells. Statistical analysis of events is in agreement with the expected amplitude and time distributions. 2-channel yeast counting has been performed with concomitant dielectrophoretic cell separation, showing that this novel and ultra-compact sensing system, thanks to the selectivity of the lock-in detector, is compatible with other AC electric fields applied to the device.
Power counting and modes in SCET
NASA Astrophysics Data System (ADS)
Goerke, Raymond; Luke, Michael
2018-02-01
We present a formulation of soft-collinear effective theory (SCET) in the two-jet sector as a theory of decoupled sectors of QCD coupled to Wilson lines. The formulation is manifestly boost-invariant, does not require the introduction of ultrasoft modes at the hard matching scale Q, and has manifest power counting in inverse powers of Q. The spurious infrared divergences which arise in SCET when ultrasoft modes are not included in loops disappear when the overlap between the sectors is correctly subtracted, in a manner similar to the familiar zero-bin subtraction of SCET. We illustrate this approach by analyzing deep inelastic scattering in the endpoint region in SCET and comment on other applications.
Bouwman, Aniek C; Hayes, Ben J; Calus, Mario P L
2017-10-30
Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of allele counts results in less shrinkage towards the mean for low minor allele frequency (MAF) variants. Scaling may become relevant for estimating ASE as more low-MAF variants will be used in genomic evaluations. We show the impact of scaling on estimates of ASE using real data and a theoretical framework, and in terms of power, model fit and predictive performance. In a dairy cattle dataset with 630K SNP genotypes, the correlation between DGV for stature from a random regression model using centered allele counts (RRc) and centered and scaled allele counts (RRcs) was 0.9988, whereas the overall correlation between ASE using RRc and RRcs was 0.27. The main difference in ASE between both methods was found for SNPs with a MAF lower than 0.01. Both the ratio (ASE from RRcs/ASE from RRc) and the regression coefficient (regression of ASE from RRcs on ASE from RRc) were much higher than 1 for low-MAF SNPs. Derived equations showed that scenarios with a high heritability, a large number of individuals and a small number of variants have lower ratios between ASE from RRc and RRcs. We also investigated the optimal scaling parameter [from -1 (RRcs) to 0 (RRc) in steps of 0.1] in the bovine stature dataset. We found that the log-likelihood was maximized with a scaling parameter of -0.8, while the mean squared error of prediction was minimized with a scaling parameter of -1, i.e., RRcs. Large differences in estimated ASE were observed for low-MAF SNPs when allele counts were scaled or not scaled, because there is less shrinkage towards the mean for scaled allele counts. We derived a theoretical framework that shows that the difference in ASE due to shrinkage is heavily influenced by the power of the data. Increasing the power results in smaller differences in ASE whether allele counts are scaled or not.
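The centering-versus-scaling effect is easy to reproduce in a small ridge-type random regression, which may make the RRc/RRcs contrast above more concrete. Everything here (data, effect sizes, ridge penalty) is simulated and illustrative; only the back-transformation logic mirrors the setup described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 2000
maf = rng.uniform(0.001, 0.5, m)                  # include low-MAF variants
X = rng.binomial(2, maf, size=(n, m)).astype(float)
y = X[:, :50] @ rng.normal(0, 0.1, 50) + rng.normal(0, 1, n)
sd = np.sqrt(2 * maf * (1 - maf))                 # per-SNP allele-count SD

def ridge_ase(scale, lam=1e3):
    """ASE from the ridge normal equations, on the allele-count scale."""
    Z = X - X.mean(0)                             # RRc: center only
    if scale:
        Z = Z / sd                                # RRcs: center and scale
    b = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
    return b / sd if scale else b                 # back-transform RRcs

ase_c, ase_cs = ridge_ase(False), ridge_ase(True)
low = maf < 0.01
print(np.corrcoef(ase_c, ase_cs)[0, 1])                     # overall agreement
print(np.median(np.abs(ase_cs[low]) / np.abs(ase_c[low])))  # inflated at low MAF
```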
27. EXTERIOR VIEW LOOKING INTO THE FIRST TAILRACE (COUNTING FROM ...
27. EXTERIOR VIEW LOOKING INTO THE FIRST TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
NASA Astrophysics Data System (ADS)
Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.
2018-04-01
The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
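The quoted break and indexes define a broken power law dN/dS, and the blazar fraction follows from a flux-weighted integral of it above the analysis sensitivity. A minimal sketch, with an arbitrary normalisation A rather than the fitted one:

```python
import numpy as np
from scipy.integrate import quad

S_b, n1, n2, A = 3.5e-11, 2.09, 1.07, 1.0   # break flux and fitted indexes

def dNdS(S):
    """Broken power law: index n1 above the break, n2 below, continuous."""
    return A * np.where(S >= S_b, (S / S_b) ** -n1, (S / S_b) ** -n2)

# Total flux from sources between the sensitivity and a bright-end cutoff:
flux, _ = quad(lambda S: S * dNdS(S), 7.5e-12, 1e-8, points=[S_b])
print(flux)   # with the true A, compare to the total background intensity
```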
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses
Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.
2015-01-01
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
DOE Office of Scientific and Technical Information (OSTI.GOV)
Durrer, Ruth; Tansella, Vittorio, E-mail: ruth.durrer@unige.ch, E-mail: vittorio.tansella@unige.ch
We derive the contribution to relativistic galaxy number count fluctuations from vector and tensor perturbations within linear perturbation theory. Our result is consistent with the relativistic corrections to number counts from scalar perturbations, with the Bardeen potentials replaced by line-of-sight projections of vector and tensor quantities. Since vector and tensor perturbations do not lead to density fluctuations, the standard density term in the number counts is absent. We apply our results to vector perturbations induced by scalar perturbations at second order and give numerical estimates of their contributions to the power spectrum of relativistic galaxy number counts.
Clinical use of photodynamic antimicrobial chemotherapy for the treatment of deep carious lesions
NASA Astrophysics Data System (ADS)
Guglielmi, Camila De Almeida B.; Simionato, Maria Regina L.; Ramalho, Karen Müller; Imparato, José Carlos P.; Pinheiro, Sérgio Luiz; Luz, Maria A. A. C.
2011-08-01
The purpose of this study was to assess photodynamic antimicrobial chemotherapy (PACT), irradiation with a low-power laser combined with a photosensitizing dye, as an alternative to removing cariogenic microorganisms by drilling. Remaining dentinal samples in deep carious lesions on permanent molars (n = 26) were treated with 0.01% methylene blue dye and irradiated with a low-power laser (InGaAlP, indium gallium aluminum phosphide; λ = 660 nm; 100 mW; 320 J cm-2; 90 s; 9 J). Samples of dentin from the pulpal wall region were collected with a micropunch before and immediately after PACT and kept in a transport medium for microbiological analysis. Samples were cultured on plates of Brucella blood agar, Mitis Salivarius Bacitracin agar and Rogosa SL agar to determine the total viable bacteria, mutans streptococci and Lactobacillus spp. counts, respectively. After incubation, colony-forming units were counted and the microbial reduction was calculated for each group of bacteria. PACT led to statistically significant reductions in mutans streptococci (1.38 log), Lactobacillus spp. (0.93 log), and total viable bacteria (0.91 log). This therapy may be an appropriate approach for the treatment of deep carious lesions using minimally invasive procedures.
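The reported reductions are log10 drops in colony-forming units, computed as the difference of log counts before and after treatment. A trivial sketch with made-up CFU numbers chosen to reproduce a 1.38-log reduction:

```python
import numpy as np

def log_reduction(cfu_before, cfu_after):
    """log10 reduction in colony-forming units after treatment."""
    return np.log10(cfu_before) - np.log10(cfu_after)

print(round(log_reduction(2.4e5, 1.0e4), 2))   # -> 1.38
```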
The Chandra Source Catalog 2.0: Spectral Properties
NASA Astrophysics Data System (ADS)
McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team
2018-01-01
The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. The sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and a Bayesian draws method to determine errors. Three models were fit to each source: absorbed power-law, blackbody, and Bremsstrahlung emission. The fitted parameter values for the power-law, blackbody, and Bremsstrahlung models were included in the catalog, with the calculated flux for each model. The CSC also provides the source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, Bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis will have been performed (see the Primini et al. poster), and a joint fit of the spectral models mentioned above will have been performed for the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Programmable random interval generator
NASA Technical Reports Server (NTRS)
Lindsey, R. S., Jr.
1973-01-01
Random pulse generator can supply constant-amplitude randomly distributed pulses with average rate ranging from a few counts per second to more than one million counts per second. Generator requires no high-voltage power supply or any special thermal cooling apparatus. Device is uniquely versatile and provides wide dynamic range of operation.
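In software, a pulse train with these statistics can be generated from exponential inter-arrival times, which is perhaps the simplest way to see what "randomly distributed pulses at a chosen average rate" means; this sketch is a numerical analogue, not the hardware design described above.

```python
import numpy as np

def random_pulse_times(mean_rate_cps, n_pulses, seed=None):
    """Arrival times (s) of a Poisson pulse train at a given mean rate."""
    rng = np.random.default_rng(seed)
    return np.cumsum(rng.exponential(1.0 / mean_rate_cps, n_pulses))

t = random_pulse_times(1e6, 100_000, seed=0)
print(1.0 / np.diff(t).mean())   # recovered rate, close to 1e6 cps
```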
USDA-ARS's Scientific Manuscript database
Ultrasonography is a powerful technology that can be used to improve reproductive management in heifers. By counting the number of antral follicles observed on an ultrasound screen the practitioner can gather additional information when reproductive tract scoring, because the number of antral folli...
SGR 1822-1606 (Swift J1822.3-1606): X-ray spectrum and refined spin period from Swift XRT analysis
NASA Astrophysics Data System (ADS)
Esposito, P.; Rea, N.; Israel, G. L.
2011-07-01
We have analysed 1.6 ks of Photon Counting (PC) mode XRT data of the new SGR/magnetar candidate Swift J1822.3-1606 (Cummings et al. GCN #12159), including the first 0.6 ks on which Pagani et al. reported in GCN #12163. We found that the source spectrum is well described by a power-law plus blackbody model, modified for the interstellar absorption (reduced chi-squared = 1.04 for 97 degrees of freedom).
Counting your chickens before they're hatched: power analysis.
Jupiter, Daniel C
2014-01-01
How does an investigator know that he has enough subjects in his study design to have the predicted outcomes appear statistically significant? In this Investigators' Corner I discuss why such planning is necessary, give an intuitive introduction to the calculations needed to determine required sample sizes, and hint at some of the more technical difficulties inherent in this aspect of study planning. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
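A minimal worked version of the calculation the column introduces, using statsmodels to solve for the per-group sample size of a two-sample t-test; the effect size, alpha and target power are illustrative choices, not values from the article.

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,        # Cohen's d for the expected difference
    alpha=0.05,             # significance level
    power=0.80,             # desired probability of detecting the effect
    alternative='two-sided')
print(round(n_per_group))   # ~64 subjects per group for a medium effect
```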
Generating Discrete Power-Law Distributions from a Death- Multiple Immigration Population Process
NASA Astrophysics Data System (ADS)
Matthews, J. O.; Jakeman, E.; Hopcraft, K. I.
2003-04-01
We consider the evolution of a simple population process governed by deaths and multiple immigrations that arrive with rates particular to their order. For a particular choice of rates, the equilibrium solution has a discrete power-law form. The model is a generalization of a process investigated previously where immigrants arrived in pairs [1]. The general properties of this model are discussed in a companion paper. The population is initiated with precisely M individuals present and evolves to an equilibrium distribution with a power-law tail. However, the power-law tail of the equilibrium distribution is established immediately, so that moments and correlation properties of the population are undefined for any non-zero time. The technique we develop to characterize this process utilizes external monitoring that counts the emigrants leaving the population in specified time intervals. This counting distribution also possesses a power-law tail for all sampling times, and the resulting time series exhibits two features worthy of note: first, a large variation in the strength of the signal, reflecting the power-law PDF; and second, intermittency of the emissions. We show that counting with a detector of finite dynamic range naturally regularizes the fluctuations, in effect `clipping' the events. All previously undefined characteristics, such as the mean, the autocorrelation, and the distributions of the time to the first event and of the times between events, are then well defined and are derived. These properties, although obtained by discarding much data, nevertheless possess embedded power-law regimes that characterize the population in a way that is analogous to the box-averaging determination of fractal dimension.
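A death/multiple-immigration process of this kind is straightforward to simulate with a Gillespie-style algorithm, which may help make the model concrete. The immigration rates eps_r used below are an arbitrary illustrative choice, not the particular set that yields the paper's discrete power-law equilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0                                   # per-individual death rate
r_max = 50
eps = 1.0 / np.arange(1, r_max + 1) ** 2   # rate of arrivals in groups of r

def step(N):
    """One Gillespie step: a death (event 0) or an r-fold immigration."""
    rates = np.concatenate(([mu * N], eps))
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    event = rng.choice(len(rates), p=rates / total)
    return (N - 1 if event == 0 else N + event), dt

N, t, samples = 10, 0.0, []                # start with M = 10 individuals
while t < 5000.0:
    N, dt = step(N)
    t += dt
    samples.append(N)
print(np.mean(samples), np.max(samples))   # snapshot of the population
```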
A comprehensive simulation study on classification of RNA-Seq data.
Zararsız, Gökmen; Goksuluk, Dincer; Korkmaz, Selcuk; Eldem, Vahap; Zararsiz, Gozde Erturk; Duru, Izzet Parug; Ozturk, Ahmet
2017-01-01
RNA sequencing (RNA-Seq) is a powerful technique for the gene-expression profiling of organisms that uses the capabilities of next-generation sequencing technologies. Developing gene-expression-based classification algorithms is an emerging and powerful approach for diagnosis, disease classification and monitoring at the molecular level, as well as for providing potential markers of disease. Most of the statistical methods proposed for the classification of gene-expression data are either based on a continuous scale (e.g. microarray data) or require a normal distribution assumption. Hence, these methods cannot be directly applied to RNA-Seq data, since they violate both the data structure and the distributional assumptions. However, it is possible to apply these algorithms to RNA-Seq data with appropriate modifications. One way is to develop count-based classifiers, such as Poisson linear discriminant analysis (PLDA) and negative binomial linear discriminant analysis (NBLDA). Another way is to bring the data closer to microarrays and apply microarray-based classifiers. In this study, we compared several classifiers, including PLDA with and without power transformation, NBLDA, single support vector machines (SVM), bagging SVM (bagSVM), classification and regression trees (CART), and random forests (RF). We also examined the effect of several parameters, such as overdispersion, sample size, number of genes, number of classes, differential-expression rate, and the transformation method, on model performance. A comprehensive simulation study was conducted, and the results were compared with the results of two miRNA and two mRNA experimental datasets. The results revealed that increasing the sample size and differential-expression rate and decreasing the dispersion parameter and number of groups lead to an increase in classification accuracy. As with differential-expression studies, the classification of RNA-Seq data requires careful attention when handling data overdispersion. We conclude that, as a count-based classifier, the power-transformed PLDA and, as microarray-based classifiers, vst- or rlog-transformed RF and SVM may be good choices for classification. An R/BIOCONDUCTOR package, MLSeq, is freely available at https://www.bioconductor.org/packages/release/bioc/html/MLSeq.html.
Cirigliano, V.; Dekens, W.; de Vries, J.; ...
2017-12-15
Here, we analyze neutrinoless double beta decay (0νββ) within the framework of the Standard Model Effective Field Theory. Apart from the dimension-five Weinberg operator, the first contributions appear at dimension seven. We classify the operators and evolve them to the electroweak scale, where we match them to effective dimension-six, -seven, and -nine operators. In the next step, after renormalization group evolution to the QCD scale, we construct the chiral Lagrangian arising from these operators. We then develop a power-counting scheme and derive the two-nucleon 0νββ currents up to leading order in the power counting for each lepton-number-violating operator. We argue that the leading-order contribution to the decay rate depends on a relatively small number of nuclear matrix elements. We test our power counting by comparing nuclear matrix elements obtained by various methods and by different groups. We find that the power counting works well for nuclear matrix elements calculated from a specific method, while, as in the case of light Majorana neutrino exchange, the overall magnitude of the matrix elements can differ by factors of two to three between methods. We also calculate the constraints that can be set on dimension-seven lepton-number-violating operators from 0νββ experiments and study the interplay between dimension-five and -seven operators, discussing how dimension-seven contributions affect the interpretation of 0νββ in terms of the effective Majorana mass m_ββ.
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-30
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
Kim, Hee Geun; Kong, Tae Young
2012-08-01
During a maintenance period at a Korean nuclear power plant, internal exposure of radiation workers occurred through the inhalation of (131)I that was released into the reactor building from a primary system opening due to defective fuels. The internal activity in radiation workers contaminated by (131)I was immediately measured using a whole body counter (WBC). Whole body counting was performed again a few days later, taking into account equilibrium factors in the body. The intake and the committed effective dose were estimated based on the WBC results. The intake was also calculated by hand, based on both the entrance records to the reactor building and the measured air concentrations of (131)I, and the results were compared with the whole body counting results.
Fluorescence lifetime imaging system with nm-resolution and single-molecule sensitivity
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Juergen; Ortmann, Uwe; Erdmann, Rainer; Boehmer, Martin; Enderlein, Joerg
2002-03-01
Fluorescence lifetime measurement of organic fluorophores is a powerful tool for distinguishing molecules of interest from background or other species. This is of interest in sensitive analysis and Single Molecule Detection (SMD). A demand in many applications is to provide 2-D imaging together with lifetime information. The method of choice is then Time-Correlated Single Photon Counting (TCSPC). We have developed a compact system on a single PC board that can perform TCSPC at high throughput while synchronously driving a piezo scanner holding the immobilized sample. The system allows count rates up to 3 MHz and a resolution down to 30 ps. An overall Instrument Response Function down to 300 ps is achieved with inexpensive detectors and diode lasers. The board is designed for the PCI bus, permitting high throughput without loss of counts. It is reconfigurable to operate in different modes. The Time-Tagged Time-Resolved (TTTR) mode permits the recording of all photon events with a real-time tag, allowing data analysis with unlimited flexibility. We use the Time-Tag clock for an external piezo scanner that moves the sample. As the clock source is common for scanning and tagging, the individual photons can be matched to pixels. Demonstrating the capabilities of the system, we studied single-molecule solutions. Lifetime imaging can be performed at high resolution with as few as 100 photons per pixel.
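Because the scanner and the photon tags share one clock, matching photons to pixels reduces to integer arithmetic on the time tags. A minimal sketch of that bookkeeping, with illustrative tick rate, dwell time and image geometry (not the board's actual parameters):

```python
import numpy as np

TICK_HZ = 10_000_000                       # shared time-tag clock
PIXEL_DWELL_S = 1e-3                       # scanner dwell time per pixel
NX, NY = 256, 256
TICKS_PER_PIXEL = int(TICK_HZ * PIXEL_DWELL_S)

def tags_to_image(photon_tags):
    """Histogram photon time-tags (in clock ticks) into scan pixels."""
    pixel_index = photon_tags // TICKS_PER_PIXEL
    counts = np.bincount(pixel_index, minlength=NX * NY)[: NX * NY]
    return counts.reshape(NY, NX)

rng = np.random.default_rng(0)
tags = np.sort(rng.integers(0, TICKS_PER_PIXEL * NX * NY, 100_000))
print(tags_to_image(tags).sum())           # all 100000 photons assigned
```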
Xu, Maoqi; Chen, Liang
2018-01-01
The individual sample heterogeneity is one of the biggest obstacles in biomarker identification for complex diseases such as cancers. Current statistical models to identify differentially expressed genes between disease and control groups often overlook the substantial human sample heterogeneity. Meanwhile, traditional nonparametric tests lose detailed data information and sacrifice analysis power, although they are distribution free and robust to heterogeneity. Here, we propose an empirical likelihood ratio test with a mean-variance relationship constraint (ELTSeq) for the differential expression analysis of RNA sequencing (RNA-seq). As a distribution-free nonparametric model, ELTSeq handles individual heterogeneity by estimating an empirical probability for each observation without making any assumption about the read-count distribution. It also incorporates a constraint for the read-count overdispersion, which is widely observed in RNA-seq data. ELTSeq demonstrates a significant improvement over existing methods such as edgeR, DESeq, t-tests, Wilcoxon tests and the classic empirical likelihood-ratio test when handling heterogeneous groups. It will significantly advance the transcriptomic studies of cancers and other complex diseases. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
High-resolution simulation of deep pencil beam surveys - analysis of quasi-periodicity
NASA Astrophysics Data System (ADS)
Weiss, A. G.; Buchert, T.
1993-07-01
We carry out pencil beam constructions in a high-resolution simulation of the large-scale structure of galaxies. The initial density fluctuations are taken to have a truncated power spectrum. All the models have Ω = 1. As an example we present the results for the case of "Hot-Dark-Matter" (HDM) initial conditions with a scale-free n = 1 power index on large scales, as a representative of models with sufficient large-scale power. We use an analytic approximation for particle trajectories of a self-gravitating dust continuum and apply a local dynamical biasing of volume elements to identify luminous matter in the model. Using this method, we are able to formally resolve a simulation box of 1200 h^-1 Mpc (e.g. for HDM initial conditions) down to the scale of galactic halos using 2160^3 particles. We consider this the minimal resolution necessary for a sensible simulation of deep pencil beam data. Pencil beam probes are taken for a given epoch using the parameters of observed beams. In particular, our analysis concentrates on the detection of quasi-periodicity in the beam probes using several different methods. The resulting beam ensembles are analyzed statistically using number distributions, pair-count histograms, unnormalized pair-counts, power spectrum analysis and trial-period folding. Periodicities are classified according to their significance level in the power spectrum of the beams. The simulation is designed for application to parameter studies in preparation for future observational projects. We find that a large percentage of the beams show quasi-periodicities with periods that cluster at a certain length scale. The periods found range between one and eight times the cutoff length in the initial fluctuation spectrum. At significance levels similar to those of the data of Broadhurst et al. (1990), we find about 15% of the pencil beams to show periodicities, about 30% of which are around the mean separation of rich clusters, while the distribution of scales reaches values of more than 200 h^-1 Mpc. The detection of periodicities larger than the typical void size need not be due to missing "walls" (like the so-called "Great Wall" seen in the CfA catalogue of galaxies), but can be due to different clustering properties of galaxies along the beams.
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2 (+0.7, -0.3) in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83 (+7, -13)% (81 (+52, -19)%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Counting pollen grains using readily available, free image processing and analysis software.
Costa, Clayton M; Yang, Suann
2009-10-01
Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
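The same pipeline is easy to reproduce outside ImageJ; the scikit-image sketch below follows the described sequence (denoise, threshold, remove specks, label, count). The minimum-object-area parameter is illustrative and, as in the paper, would need tuning per species and magnification.

```python
import numpy as np
from skimage import filters, measure, morphology

def count_pollen(gray_image, min_area=30):
    """Count bright pollen grains in a grayscale photomicrograph."""
    smooth = filters.gaussian(gray_image, sigma=1)       # suppress noise
    binary = smooth > filters.threshold_otsu(smooth)     # segment grains
    binary = morphology.remove_small_objects(binary, min_area)
    return int(measure.label(binary).max())              # grain count

# Synthetic check: three bright disks on a dark background.
img = np.zeros((200, 200))
yy, xx = np.ogrid[:200, :200]
for cx, cy in [(50, 50), (120, 80), (160, 160)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 < 100] = 1.0
print(count_pollen(img))   # -> 3
```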
Educators' Expectations and Aspirations around Young Children's Mathematical Knowledge
ERIC Educational Resources Information Center
Perry, Bob; MacDonald, Amy
2015-01-01
Let's Count is a mathematics professional learning programme for preschool educators in Australia, managed by a prominent non-government organisation and sponsored by industry. It has been implemented in both face-to-face and online modes over 2013/14. Let's Count is based on the constructs that all young children are powerful mathematicians and…
A superconducting focal plane array for ultraviolet, optical, and near-infrared astrophysics.
Mazin, Benjamin A; Bumble, Bruce; Meeker, Seth R; O'Brien, Kieran; McHugh, Sean; Langman, Eric
2012-01-16
Microwave Kinetic Inductance Detectors, or MKIDs, have proven to be a powerful cryogenic detector technology due to their sensitivity and the ease with which they can be multiplexed into large arrays. An MKID is an energy sensor based on a photon-variable superconducting inductance in a lithographed microresonator, and is capable of functioning as a photon detector across the electromagnetic spectrum as well as a particle detector. Here we describe the first successful effort to create a photon-counting, energy-resolving ultraviolet, optical, and near-infrared MKID focal plane array. These new Optical Lumped Element (OLE) MKID arrays have significant advantages over semiconductor detectors like charge-coupled devices (CCDs). They can count individual photons with essentially no false counts and determine the energy and arrival time of every photon with good quantum efficiency. Their physical pixel size and maximum count rate are well matched with large telescopes. These capabilities enable powerful new astrophysical instruments usable from the ground and space. MKIDs could eventually supplant semiconductor detectors for most astronomical instrumentation, and will be useful for other disciplines such as quantum optics and biological imaging.
Neutron Resonance Densitometry for Particle-like Debris of Melted Fuel
NASA Astrophysics Data System (ADS)
Harada, H.; Kitatani, F.; Koizumi, M.; Takamine, J.; Kureta, M.; Tsutiya, H.; Iimura, H.; Seya, M.; Becker, B.; Kopecky, S.; Schillebeeckx, P.
2014-04-01
Neutron Resonance Densitometry (NRD) is proposed for the quantification of nuclear materials in particle-like debris of melted fuel from the reactors of the Fukushima Daiichi nuclear power plant. The method is based on a combination of neutron resonance transmission analysis (NRTA) and neutron resonance capture analysis (NRCA). It uses the neutron time-of-flight (TOF) technique with a pulsed white neutron source and a neutron flight path as short as 5 m. The spectrometer for NRCA is made of LaBr3(Ce) detectors. The achievable uncertainty due to counting statistics alone is less than 1% for the determination of Pu and U isotopes.
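The TOF technique rests on a simple kinematic relation: a neutron's energy follows from its flight time over a known path, so resonance structure in the TOF spectrum maps onto known resonance energies. A sketch of that conversion, non-relativistic and with illustrative numbers:

```python
M_N_EV = 939.565e6        # neutron rest mass energy, eV
C = 299_792_458.0         # speed of light, m/s

def tof_to_energy_ev(t_us, path_m=5.0):
    """Non-relativistic neutron kinetic energy (eV) from TOF (microseconds)."""
    v = path_m / (t_us * 1e-6)           # neutron speed, m/s
    return 0.5 * M_N_EV * (v / C) ** 2

# e.g. a 175 us flight over the 5 m path corresponds to ~4.3 eV,
# in the resolved-resonance region exploited by NRTA/NRCA:
print(tof_to_energy_ev(175.0))
```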
Rearranging Pionless Effective Field Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin Savage; Silas Beane
2001-11-19
We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, np → dγ, and Compton scattering from the deuteron. Higher dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.
Design and construction of portable survey meter
NASA Astrophysics Data System (ADS)
Singseeta, W.; Thong-aram, D.; Pencharee, S.
2017-09-01
This work aimed at the design and construction of a portable survey meter for radiation dose measurement. The designed system consists of four main parts: a low-voltage power supply, radiation detection, radiation measurement, and data display on an Android phone. The test results show that the ripple voltage of the low-voltage power supply is less than 1%, the maximum integral count rate is found to be 10^4 counts per second, and the maximum distance for wireless communication between the server and the client is about 10 meters. The developed system was found to be small and lightweight, suitable for a portable instrument.
Quantification of differential gene expression by multiplexed targeted resequencing of cDNA
Arts, Peer; van der Raadt, Jori; van Gestel, Sebastianus H.C.; Steehouwer, Marloes; Shendure, Jay; Hoischen, Alexander; Albers, Cornelis A.
2017-01-01
Whole-transcriptome or RNA sequencing (RNA-Seq) is a powerful and versatile tool for functional analysis of different types of RNA molecules, but sample reagent and sequencing cost can be prohibitive for hypothesis-driven studies where the aim is to quantify differential expression of a limited number of genes. Here we present an approach for quantification of differential mRNA expression by targeted resequencing of complementary DNA using single-molecule molecular inversion probes (cDNA-smMIPs) that enable highly multiplexed resequencing of cDNA target regions of ∼100 nucleotides and counting of individual molecules. We show that accurate estimates of differential expression can be obtained from molecule counts for hundreds of smMIPs per reaction and that smMIPs are also suitable for quantification of relative gene expression and allele-specific expression. Compared with low-coverage RNA-Seq and a hybridization-based targeted RNA-Seq method, cDNA-smMIPs are a cost-effective high-throughput tool for hypothesis-driven expression analysis in large numbers of genes (10 to 500) and samples (hundreds to thousands). PMID:28474677
Analysis of high resolution satellite data for cosmic gamma ray bursts
NASA Technical Reports Server (NTRS)
Imhof, W. L.; Nakano, G. H.; Reagan, J. B.
1976-01-01
Cosmic gamma ray bursts detected by a germanium spectrometer on the low altitude satellite 1972-076B were surveyed. Several bursts with durations ranging from approximately 0.032 to 15 seconds were found and are tabulated. The frequency-of-occurrence/intensity distribution of these events was compared with the S^-3/2 curve of confirmed events. The longer-duration events fall above the S^-3/2 curve of confirmed events, suggesting that they are perhaps not all true cosmic gamma-ray bursts. The narrow-duration events fall closely on the S^-3/2 curve. The survey also revealed several counting-rate spikes, with durations comparable to confirmed gamma-ray bursts, which were shown to be of magnetospheric origin. Confirmation that energetic electrons were responsible for these bursts was achieved from analysis of all data from the complete payload of gamma-ray and energetic particle detectors on board the satellite. The analyses also revealed that the narrowness of the spikes was primarily spatial rather than temporal in character.
Lehr, Hans-Anton; Rochat, Candice; Schaper, Cornelia; Nobile, Antoine; Shanouda, Sherien; Vijgen, Sandrine; Gauthier, Arnaud; Obermann, Ellen; Leuba, Susana; Schmidt, Marcus; Ruegg, Curzio; Delaloye, Jean-Francois; Simiantonaki, Nectaria; Schaefer, Stephan C
2013-03-01
Several authors have demonstrated an increased number of mitotic figures in breast cancer resection specimens when compared with biopsy material. This has been ascribed to a sampling artifact whereby biopsies are (i) either too small to allow formal mitotic figure counting or (ii) not necessarily taken from the proliferating tumor periphery. Herein, we propose a different explanation for this phenomenon. Biopsy and resection material of 52 invasive ductal carcinomas was studied. We counted mitotic figures in 10 representative high power fields and quantified MIB-1 immunohistochemistry by visual estimation, counting and image analysis. We found that mitotic figures were elevated by more than three-fold on average in resection specimens over biopsy material from the same tumors (20±6 vs 6±2 mitoses per 10 high power fields, P=0.008), and that this resulted in a relative diminution of post-metaphase figures (anaphase/telophase), which made up 7% of all mitotic figures in biopsies but only 3% in resection specimens (P<0.005). At the same time, the percentages of MIB-1 immunostained tumor cells among total tumor cells were comparable in biopsy and resection material, irrespective of the mode of MIB-1 quantification. Finally, we found no association between the size of the biopsy material and the relative increase of mitotic figures in resection specimens. We propose that the increase in mitotic figures in resection specimens and the significant shift towards metaphase figures are not due to a sampling artifact, but reflect ongoing cell-cycle activity in the resected tumor tissue due to fixation delay. The dwindling energy supply will eventually arrest tumor cells in metaphase, where they are readily identified by the diagnostic pathologist. Taken together, we suggest that the rapidly fixed biopsy material better represents true tumor biology and should be privileged as a predictive marker of putative response to cytotoxic chemotherapy.
Breast tumor angiogenesis analysis using 3D power Doppler ultrasound
NASA Astrophysics Data System (ADS)
Chang, Ruey-Feng; Huang, Sheng-Fang; Lee, Yu-Hau; Chen, Dar-Ren; Moon, Woo Kyung
2006-03-01
Angiogenesis is a process that correlates with tumor growth, invasion, and metastasis. Breast cancer angiogenesis has been the most extensively studied and now serves as a paradigm for understanding the biology of angiogenesis and its effects on tumor outcome and patient prognosis. Most studies on the characterization of angiogenesis focus on pixel/voxel counts rather than morphological analysis. Nevertheless, in cancer, blood flow is greatly affected by morphological changes, such as the number of vessels, branching pattern, length, and diameter. This paper presents a computer-aided diagnostic (CAD) system that can quantify vascular morphology using 3-D power Doppler ultrasound (US) of breast tumors. We propose a scheme to extract morphological information from the angiography and to relate it to the tumor diagnosis outcome. First, a 3-D thinning algorithm reduces the vessels to their skeletons. The measurements of vascular morphology rely on traversing the vascular trees produced from the skeletons. Our study of 3-D vascular morphological features covers vessel count, length, bifurcation, and diameter. In investigations of 221 solid breast tumors, including 110 benign and 111 malignant cases, the p values from Student's t-test for all features were less than 0.05, indicating that the proposed features are statistically significant. Our scheme focuses on the vascular architecture without involving tumor segmentation. The results show that the proposed method is feasible and agrees well with the diagnoses of the pathologists.
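In 2-D, the skeleton-based measurements described above can be sketched with scikit-image and scipy; the bifurcation rule (a centreline pixel with three or more skeleton neighbours) and the area-over-length diameter estimate are common simplifications assumed here, not the paper's exact 3-D definitions.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def vessel_skeleton_stats(vessel_mask):
    """Crude morphology numbers from a binary (2-D) vessel mask."""
    skel = skeletonize(vessel_mask)
    length_px = int(skel.sum())                     # total centreline length
    nbrs = convolve(skel.astype(int), np.ones((3, 3), int), mode='constant')
    bifurcations = int(np.sum(skel & (nbrs >= 4)))  # self + >=3 neighbours
    mean_diam = vessel_mask.sum() / max(length_px, 1)  # area / length
    return length_px, bifurcations, mean_diam

mask = np.zeros((64, 64), bool)
mask[30:34, 5:60] = True     # a horizontal vessel...
mask[10:55, 30:34] = True    # ...crossed by a vertical one
print(vessel_skeleton_stats(mask))
```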
Digital pathology: elementary, rapid and reliable automated image analysis.
Bouzin, Caroline; Saini, Monika L; Khaing, Kyi-Kyi; Ambroise, Jérôme; Marbaix, Etienne; Grégoire, Vincent; Bol, Vanesa
2016-05-01
Slide digitalization has brought pathology into a new era, with powerful image analysis possibilities. However, although it is a powerful prognostic tool, automated immunostaining analysis on digital images is still not implemented worldwide in routine clinical practice. Digitalized biopsy sections from two independent cohorts of patients, immunostained for membrane or nuclear markers, were quantified with two automated methods. The first was based on stained-cell counting through tissue segmentation, while the second relied upon the stained-area proportion within tissue sections. Different steps of image preparation, such as automated tissue detection, fold exclusion and scanning magnification, were also assessed and validated. Quantifications of stained cells and of stained area were highly correlated for all tested markers. Both methods were also correlated with visual scoring performed by a pathologist. For equivalent reliability, quantification of the stained area is, however, faster and easier to fine-tune, and is therefore more compatible with the time constraints of prognosis. This work provides an incentive for implementing automated immunostaining analysis with a stained-area method in routine laboratory practice. © 2015 John Wiley & Sons Ltd.
Cosmic ray scintillations in the frequency range from 0.00001 to 0.01 Hz
NASA Technical Reports Server (NTRS)
Gehrels, N.; Lheureux, J.
1978-01-01
Power spectra of the flux variations in cosmic rays of energy greater than a few GeV are presented. The data were obtained at balloon altitudes (40-45 km) from two scintillation-type detectors flown for six hours from Palestine, Texas, on November 4, 1972. The large-area detectors had effective count rates up to 2000 cps, setting the Poisson noise level in the power spectra of the relative fluctuations at 0.001/Hz. The analysis was made on the singles rate of each of the counters as well as on the coincidence rates between them. In all cases, the spectra between 0.0001 and 0.002 Hz are power laws in frequency of the form f^-γ, where γ is between 1.5 and 2.0. No significant peaks in the range 0.0001 to 0.01 Hz are observed.
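The quoted 0.001/Hz noise floor is the white level 2/R expected for Poisson counting at rate R = 2000 cps, which a Welch-periodogram sketch on simulated binned counts reproduces; all parameters below are illustrative.

```python
import numpy as np
from scipy.signal import welch

rate, dt, n = 2000.0, 0.1, 2 ** 18           # cps, bin width (s), n bins
counts = np.random.default_rng(0).poisson(rate * dt, n)
rel = counts / counts.mean() - 1.0           # relative flux fluctuations

f, psd = welch(rel, fs=1.0 / dt, nperseg=2 ** 14)
band = (f > 1e-4) & (f < 2e-3)
print(psd[band].mean())                      # ~ 2 / rate = 0.001 per Hz
```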
Uesugi, Masaki; Watanabe, Ryosuke; Sakai, Hiroaki; Yokoyama, Akihiko
2018-02-01
A rapid determination method for 90Sr is developed for the monitoring of seawater around the Fukushima Daiichi Nuclear Power Plant (FDNPP). Three ideas for chemical separation and measurement to accelerate 90Sr analysis are investigated. Strontium is co-precipitated in a two-step procedure with hydroxyapatite after the removal of magnesium phosphate in the presence of citric acid. The purification of strontium is combined with solid-phase extraction disks. One or two sheets of Sr Rad disk and cyclic operations are examined to eliminate interfering substances and secure the exchange capacity. The suitable conditions for adsorption and stripping are determined with an 85Sr tracer. Seawater samples of up to 1 L can be analyzed within 4 h. Additionally, the appropriate pH conditions for extracting strontium into the scintillator are studied, and the 90Sr activity is assessed via liquid scintillation counting using an extractive scintillator based on the di-(2-ethylhexyl)phosphoric acid (HDEHP) extraction method. The new scintillation counting method involves a small quenching effect and a low background compared with the conventional emulsion scintillator method. The minimum detectable activity (MDA) is 35 mBq/L of 90Sr for 180 min of counting. The proposed method provides analytical results within a day of receipt of the samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Effects of Space Missions on the Human Immune System: A Meta-Analysis
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Barger, L. K.; Baldini, F.; Huff, D.
1995-01-01
Future spaceflight will require travelers to spend ever-increasing periods of time in microgravity. Optimal functioning of the immune system is of paramount importance for the health and performance of these travelers. A meta-analysis statistical procedure was used to analyze immune system data from crew members in United States and Soviet space missions from 8.5 to 140 days duration between 1968 and 1985. Ten immunological parameters (immunoglobulins A, G, M, D, white blood cell (WBC) count, number of lymphocytes, percent total lymphocytes, percent B lymphocytes, percent T lymphocytes, and lymphocyte reactivity to mitogen) were investigated using multifactorial, repeated measure analysis of variance. With the preflight level set at 100, WBC count increased to 154 +/- 14% (mean +/- SE; p less than or equal to 0.05) immediately after flight; there was a decrease in lymphocyte count (83 +/- 4%; p less than or equal to 0.05) and percent of total lymphocytes (69 +/- 1%; p less than or equal to 0.05) immediately after flight, with reduction in RNA synthesis to phytohemagglutinin (PHA) to 51 +/- 21% (p less than or equal to 0.05) and DNA synthesis to PHA to 61 +/- 8% (p less than or equal to 0.05) at the first postflight measurement. Thus, some cellular immunological functions are decreased significantly following spaceflight. More data are needed on astronauts' age, aerobic power output, and parameters of their exercise training program to determine if these immune system responses are due solely to microgravity exposure or perhaps to some other aspect of spaceflight.
Thorsen, Jonathan; Brejnrod, Asker; Mortensen, Martin; Rasmussen, Morten A; Stokholm, Jakob; Al-Soud, Waleed Abu; Sørensen, Søren; Bisgaard, Hans; Waage, Johannes
2016-11-25
There is an immense scientific interest in the human microbiome and its effects on human physiology, health, and disease. A common approach for examining bacterial communities is high-throughput sequencing of 16S rRNA gene hypervariable regions, aggregating sequence-similar amplicons into operational taxonomic units (OTUs). Strategies for detecting differential relative abundance of OTUs between sample conditions include classical statistical approaches as well as a plethora of newer methods, many borrowing from the related field of RNA-seq analysis. This effort is complicated by unique data characteristics, including sparsity, sequencing depth variation, and nonconformity of read counts to theoretical distributions, which is often exacerbated by exploratory and/or unbalanced study designs. Here, we assess the robustness of available methods for (1) inference in differential relative abundance analysis and (2) beta-diversity-based sample separation, using a rigorous benchmarking framework based on large clinical 16S microbiome datasets from different sources. Running more than 380,000 full differential relative abundance tests on real datasets with permuted case/control assignments and in silico-spiked OTUs, we identify large differences in method performance on a range of parameters, including false positive rates, sensitivity to sparsity and case/control balances, and spike-in retrieval rate. In large datasets, methods with the highest false positive rates also tend to have the best detection power. For beta-diversity-based sample separation, we show that library size normalization has very little effect and that the distance metric is the most important factor in terms of separation power. Our results, generalizable to datasets from different sequencing platforms, demonstrate how the choice of method considerably affects analysis outcome. Here, we give recommendations for tools that exhibit low false positive rates, have good retrieval power across effect sizes and case/control proportions, and have low sparsity bias. Result output from some commonly used methods should be interpreted with caution. We provide an easily extensible framework for benchmarking of new methods and future microbiome datasets.
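The permutation benchmark described here is straightforward to reproduce in outline. A minimal sketch, assuming a generic `test_method(counts, labels)` that returns one p-value per OTU (a stand-in for any of the benchmarked tools):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def false_positive_rate(counts, labels, test_method, n_perm=100, alpha=0.05):
    """Shuffle case/control labels; any 'significant' OTU is a false
    positive, because permuted labels carry no real signal."""
    fprs = []
    for _ in range(n_perm):
        permuted = rng.permutation(labels)
        pvals = test_method(counts, permuted)
        fprs.append(np.mean(pvals < alpha))
    return np.mean(fprs)

# Example with a naive per-OTU test (two-sample t-test on relative abundances):
def t_test_method(counts, labels):
    rel = counts / counts.sum(axis=0, keepdims=True)   # crude normalization
    a, b = rel[:, labels == 0], rel[:, labels == 1]
    return stats.ttest_ind(a, b, axis=1).pvalue

counts = rng.negative_binomial(2, 0.01, size=(200, 40))  # OTUs x samples
labels = np.array([0] * 20 + [1] * 20)
print(false_positive_rate(counts, labels, t_test_method))
```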
The Chandra Source Catalog: Spectral Properties
NASA Astrophysics Data System (ADS)
Doe, Stephen; Siemiginowska, Aneta L.; Refsdal, Brian L.; Evans, Ian N.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Glotfelty, Kenny J.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula
2009-09-01
The first release of the Chandra Source Catalog (CSC) contains all sources identified from eight years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium, and hard) using a Bayesian algorithm (BEHR; Park et al. 2006). Sources with high signal-to-noise ratios (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package, developed by the Chandra X-ray Center; see Freeman et al. 2001). Two models were fit to each source: an absorbed power law and blackbody emission. The fitted parameter values for the power-law and blackbody models were included in the catalog, with the calculated flux for each model. The CSC also provides the source energy flux computed from the normalizations of predefined power-law and blackbody models needed to match the observed net X-ray counts. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. This work is supported by NASA contract NAS8-03060 (CXC).
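For orientation, the hardness ratio itself is a simple quantity; the sketch below shows only the basic definition, not the Bayesian BEHR estimator the catalog actually uses, which handles low counts and background properly:

```python
# Hardness ratio between two energy bands, HR = (H - S) / (H + S),
# ranging from -1 (all soft counts) to +1 (all hard counts).
def hardness_ratio(soft_counts, hard_counts):
    total = soft_counts + hard_counts
    if total == 0:
        raise ValueError("no counts in either band")
    return (hard_counts - soft_counts) / total

print(hardness_ratio(soft_counts=30, hard_counts=90))  # 0.5
```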
Rapid enumeration of viable bacteria by image analysis
NASA Technical Reports Server (NTRS)
Singh, A.; Pyle, B. H.; McFeters, G. A.
1989-01-01
A direct viable counting method for enumerating viable bacteria was modified and made compatible with image analysis. A comparison was made between viable cell counts determined by the spread plate method and direct viable counts obtained using epifluorescence microscopy either manually or by automatic image analysis. Cultures of Escherichia coli, Salmonella typhimurium, Vibrio cholerae, Yersinia enterocolitica and Pseudomonas aeruginosa were incubated at 35 °C in a dilute nutrient medium containing nalidixic acid. Filtered samples were stained for epifluorescence microscopy and analysed manually as well as by image analysis. Cells enlarged after incubation were considered viable. The viable cell counts determined using image analysis were higher than those obtained by either the direct manual count of viable cells or the spread plate method. The volume of sample filtered or the number of cells in the original sample did not influence the efficiency of the method. However, the optimal concentration of nalidixic acid (2.5-20 µg ml⁻¹) and the length of incubation (4-8 h) varied with the culture tested. The results of this study showed that under optimal conditions, the modification of the direct viable count method in combination with image analysis microscopy provided an efficient and quantitative technique for counting viable bacteria in a short time.
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments, so additional issues must be carefully addressed, including the false discovery rate for multiple statistical tests and the widely varying read counts and dispersions across genes. To address these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous, similar experiments, such as the Cancer Genome Atlas (TCGA), can be used as a point of reference: read counts and their dispersions are estimated from the reference's distribution, and from that information we estimate and summarize the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphic interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
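A minimal simulation-based sketch of single-gene power under a negative binomial model; the t-test on log counts is a crude stand-in for an exact NB test, and in a genome-wide design alpha would be an FDR-adjusted threshold, as the abstract stresses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def nb_sample(mean, dispersion, size):
    # scipy/numpy parameterize NB by (n, p); with Var = mu + dispersion*mu^2,
    # n = 1/dispersion and p = n / (n + mu).
    n = 1.0 / dispersion
    p = n / (n + mean)
    return rng.negative_binomial(n, p, size)

def simulated_power(mean, dispersion, fold_change, n_per_group,
                    alpha=0.05, n_sim=2000):
    hits = 0
    for _ in range(n_sim):
        a = nb_sample(mean, dispersion, n_per_group)
        b = nb_sample(mean * fold_change, dispersion, n_per_group)
        # crude test on log counts, standing in for an exact NB test
        p = stats.ttest_ind(np.log1p(a), np.log1p(b)).pvalue
        hits += p < alpha
    return hits / n_sim

print(simulated_power(mean=50, dispersion=0.2, fold_change=2, n_per_group=5))
```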
NASA Astrophysics Data System (ADS)
Béthermin, M.; Dole, H.; Beelen, A.; Aussel, H.
2010-03-01
Aims: We aim to place stronger lower limits on the cosmic infrared background (CIB) brightness at 24 μm, 70 μm and 160 μm and to measure the extragalactic number counts at these wavelengths in a homogeneous way from various surveys. Methods: Using Spitzer legacy data over 53.6 deg² of various depths, we build catalogs with the same extraction method at each wavelength. Completeness and photometric accuracy are estimated with Monte Carlo simulations. Number count uncertainties are estimated with a counts-in-cells moment method to take galaxy clustering into account. Furthermore, we use a stacking analysis to estimate number counts of sources not detected at 70 μm and 160 μm. This method is validated by simulations. The integration of the number counts gives new CIB lower limits. Results: Number counts reach 35 μJy, 3.5 mJy and 40 mJy at 24 μm, 70 μm, and 160 μm, respectively. We reach deeper flux densities of 0.38 mJy at 70 μm and 3.1 mJy at 160 μm with a stacking analysis. We confirm the number count turnover at 24 μm and 70 μm, and observe it for the first time at 160 μm at about 20 mJy, together with a power-law behavior below 10 mJy. These mid- and far-infrared counts: 1) are homogeneously built by combining fields of different depths and sizes, providing a legacy over about three orders of magnitude in flux density; 2) are the deepest to date at 70 μm and 160 μm; 3) agree with previously published results in the common measured flux density range; 4) globally agree with the Lagache et al. (2004) model, except at 160 μm, where the model slightly overestimates the counts around 20 and 200 mJy. Conclusions: These counts are integrated to estimate new CIB firm lower limits of $2.29^{+0.09}_{-0.09}$, $5.4^{+0.4}_{-0.4}$, and $8.9^{+1.1}_{-1.1}$ nW m⁻² sr⁻¹ at 24 μm, 70 μm, and 160 μm, respectively, and extrapolated to give new estimates of the CIB due to galaxies of $2.86^{+0.19}_{-0.16}$, $6.6^{+0.7}_{-0.6}$, and $14.6^{+7.1}_{-2.9}$ nW m⁻² sr⁻¹, respectively. Products (point spread function, counts, CIB contributions, software) are publicly available for download at
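The stacking idea is worth a sketch: averaging map cutouts at the positions of sources detected at a shorter wavelength beats the noise down as roughly 1/√N and recovers the mean flux of individually undetected sources. A toy version, with all array and pixel conventions assumed:

```python
import numpy as np

def stack_cutouts(image, positions, half_size):
    """Mean of square cutouts centered on (x, y) source positions.
    Noise averages down roughly as 1/sqrt(N), revealing the mean flux
    of sources individually below the detection threshold."""
    cutouts = []
    for x, y in positions:
        c = image[y - half_size:y + half_size + 1,
                  x - half_size:x + half_size + 1]
        if c.shape == (2 * half_size + 1, 2 * half_size + 1):
            cutouts.append(c)
    return np.mean(cutouts, axis=0)

# toy example: faint 0.5-sigma sources buried in unit Gaussian noise
rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (512, 512))
pos = rng.integers(20, 492, size=(300, 2))
for x, y in pos:
    img[y, x] += 0.5
stacked = stack_cutouts(img, pos, half_size=10)
print(stacked[10, 10], stacked.mean())   # central pixel ~0.5, background ~0
```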
Trumbo, D.E.
1959-02-10
A transistorized pulse-counting circuit adapted for use with nuclear radiation detecting devices to provide a small, lightweight portable counter is reported. The small size and low power requirements of the transistor are of particular value in this instance. The circuit provides an adjustable count scale with a single transistor, which is triggered by the accumulated charge on a storage capacitor.
Constraints from thermal Sunyaev-Zel'dovich cluster counts and power spectrum combined with CMB
NASA Astrophysics Data System (ADS)
Salvati, Laura; Douspis, Marian; Aghanim, Nabila
2018-06-01
The thermal Sunyaev-Zel'dovich (tSZ) effect is one of the recent probes of cosmology and large-scale structure. We update constraints on cosmological parameters from galaxy clusters observed by the Planck satellite in a first attempt to combine cluster number counts and the power spectrum of hot gas, using a new value of the optical depth and, at the same time, sampling over both cosmological and scaling-relation parameters. We find that in the ΛCDM model, the addition of the tSZ power spectrum provides small improvements with respect to number counts alone, leading to the 68% c.l. constraints Ωm = 0.32 ± 0.02, σ8 = 0.76 ± 0.03, and σ8(Ωm/0.3)^{1/3} = 0.78 ± 0.03, and lowering the discrepancy with results for cosmic microwave background (CMB) primary anisotropies (updated with the new value of τ) to ≃1.8σ on σ8. We analysed extensions to the standard model, considering the effect of massive neutrinos and a varying equation-of-state parameter for dark energy. In the first case, we find that the addition of the tSZ power spectrum helps improve the cosmological constraints relative to number counts alone, leading to the 95% upper limit ∑mν < 1.88 eV. For the varying dark energy equation-of-state scenario, we find no important improvement when adding the tSZ power spectrum, but the combination of tSZ probes is still able to provide constraints, producing w = -1.0 ± 0.2. In all cosmological scenarios, the mass bias needed to reconcile CMB and tSZ probes remains low, at (1 - b) ≲ 0.67, as compared to estimates from weak lensing and X-ray mass comparisons or numerical simulations.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
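The voom/limma machinery is R software; the sketch below only illustrates, in Python, the underlying idea of combining observation-level precision weights with per-sample variance factors in a weighted least-squares fit. All names and data are illustrative:

```python
import numpy as np

# Combine observation-level precision weights (from a mean-variance trend,
# as in 'voom') with per-sample variance factors, then use them in weighted
# least squares. A schematic stand-in for the limma/voom machinery.
def weighted_fit(logcpm, design, obs_weights, sample_factors):
    w = obs_weights / sample_factors[None, :]   # down-weight noisy samples
    betas = []
    for y, wi in zip(logcpm, w):
        W = np.diag(wi)
        X = design
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        betas.append(beta)
    return np.array(betas)

rng = np.random.default_rng(3)
G, S = 100, 6
design = np.column_stack([np.ones(S), [0, 0, 0, 1, 1, 1]])  # intercept + group
logcpm = rng.normal(5, 1, (G, S))
obs_w = np.ones((G, S))
sample_var = np.array([1, 1, 4, 1, 1, 1], float)  # one degraded sample
print(weighted_fit(logcpm, design, obs_w, sample_var)[:3])
```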
Measures of large-scale structure in the CfA redshift survey slices
NASA Technical Reports Server (NTRS)
De Lapparent, Valerie; Geller, Margaret J.; Huchra, John P.
1991-01-01
Variations of the counts-in-cells with cell size are used here to define two statistical measures of large-scale clustering in three 6° slices of the CfA redshift survey. A percolation criterion is used to estimate the filling factor f, which measures the fraction of the total volume in the survey occupied by the large-scale structures. For the full 18° slice of the CfA redshift survey, f ≈ 0.25 ± 0.05. After removing groups with more than five members from two of the slices, variations of the counts in occupied cells with cell size have a power-law behavior with a slope β ≈ 2.2 on scales of 1-10 h⁻¹ Mpc. Application of both this statistic and the percolation analysis to simulations suggests that a network of two-dimensional structures is a better description of the geometry of the clustering in the CfA slices than a network of one-dimensional structures. Counts-in-cells are also used to estimate the average galaxy surface density in sheets like the Great Wall at about 0.3 h² galaxies/Mpc².
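The counts-in-cells statistic is simple to reproduce in outline. A minimal sketch on a toy 2D catalog: for an unclustered (Poisson) field the variance-to-mean ratio stays near 1, while clustering inflates it with increasing cell size.

```python
import numpy as np

rng = np.random.default_rng(4)

def counts_in_cells(positions, box, cell_size):
    """Histogram 2D galaxy positions into square cells of a given size and
    return the counts; clustering shows up as excess variance over Poisson."""
    bins = int(box / cell_size)
    counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                  bins=bins, range=[[0, box], [0, box]])
    return counts.ravel()

pos = rng.uniform(0, 100.0, (5000, 2))          # unclustered toy catalog
for cell in (2.0, 5.0, 10.0):
    n = counts_in_cells(pos, 100.0, cell)
    print(cell, n.mean(), n.var() / n.mean())   # var/mean ~ 1 when unclustered
```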
The Proposed 2009 War Powers Consultation Act
2009-03-19
Both political branches of government participate in matters of national security. Subject terms: Separation of Powers, National Security Law. Strategy Research Project, 19 March 2009. Word count: 8,090; pages: 46. Key terms: Separation of Powers, National Security Law, Constitution. References include Arthur Bestor, "Separation of Powers in the Domain of Foreign Affairs: The Intent of the Constitution Historically Examined," Seton Hall L. Rev. 5 (1974).
Tutorial on X-ray photon counting detector characterization.
Ren, Liqiang; Zheng, Bin; Liu, Hong
2018-01-01
Recent advances in photon counting detection technology have led to significant research interest in X-ray imaging. As a tutorial-level review, this paper covers a wide range of aspects related to X-ray photon counting detector characterization. The tutorial begins with a detailed description of the working principle and operating modes of a pixelated X-ray photon counting detector, including its basic architecture and detection mechanism. Currently available methods and techniques for characterizing major aspects, including energy response, noise floor, energy resolution, count rate performance (detector efficiency), and the charge sharing effect of photon counting detectors, are comprehensively reviewed. Other characterization aspects such as point spread function (PSF), line spread function (LSF), contrast transfer function (CTF), modulation transfer function (MTF), noise power spectrum (NPS), detective quantum efficiency (DQE), bias voltage, radiation damage, and polarization effect are also remarked upon. A cadmium telluride (CdTe) pixelated photon counting detector is employed for part of the characterization demonstration, and the results are presented. This review can serve as a tutorial for X-ray imaging researchers and investigators to understand, operate, characterize, and optimize photon counting detectors for a variety of applications.
Photon-Counting Kinetic Inductance Detectors for the Origins Space Telescope
NASA Astrophysics Data System (ADS)
Noroozian, Omid
We propose to develop photon-counting Kinetic Inductance Detectors (KIDs) for the Origins Space Telescope (OST) and any predecessor missions, with the goal of producing background-limited photon-counting sensitivity, and with a preliminary technology demonstration in time to inform the Decadal Survey planning process. The OST, a mid- to far-infrared observatory concept, is being developed as a major NASA mission to be considered by the next Decadal Survey with support from NASA Headquarters. The objective of such a facility is to allow rapid spectroscopic surveys of the high-redshift universe at 420-800 μm, using arrays of integrated spectrometers with moderate resolutions (R = λ/Δλ ≈ 1000), to create a powerful new data set for exploring galaxy evolution and the growth of structure in the Universe. A second objective of OST is to perform higher-resolution (R ≈ 10,000-100,000) spectroscopic surveys at 20-300 μm, a uniquely powerful tool for exploring the evolution of protoplanetary disks into fledgling solar systems. Finally, the OST aims to obtain sensitive mid-infrared (5-40 μm) spectroscopy of thermal emission from rocky planets in the habitable zone using the transit method. These OST science objectives are very exciting and represent a well-organized community agreement. However, they are all impossible to reach without new detector technology, and the OST can't be recommended or approved if suitable detectors do not exist. In all of the above instrument concepts, photon-counting direct detectors are mission-enabling and essential for reaching the sensitivity permitted by the cryogenic Origins Space Telescope and the performance required for its important science programs. Our group has developed an innovative design for an optically coupled KID that can reach the photon-counting sensitivity required by the ambitious science goals of the OST mission. A KID is a planar microwave resonator patterned from a superconducting thin film, which responds to incident photons with a change in its resonance frequency and dissipation. This detector response is intrinsically frequency multiplexed, and consequently KIDs at different resonance frequencies can be read out using standard digital radio techniques, which enables multiplexing of 10,000s of detectors. In our photon-counting KID design we employ a small-volume (and thin) superconducting Al inductor to enhance the per-photon responsivity, and large parallel-plate NbTiN capacitors on single-crystal silicon-on-insulator (SOI) substrates to eliminate frequency noise. We have developed a comprehensive design demonstrating that photon-counting sensitivity is possible in a small-volume Al KID. In addition, we have already demonstrated ultra-high quality factors in resonators made of very thin (~10 nm) Al films with long electron lifetimes. These are the critical material parameters for reaching photon-counting sensitivity levels. In our proposed work plan our objective is to implement these high-quality films into our optically coupled small-volume KID design and demonstrate photon-counting sensitivity. The successful development of our photon-counting technology will significantly increase the sensitivity of the OST mission, making it more scientifically competitive than one based on power detectors. Photon counting at the background limit provides a ×4 increase in observation speed over that of background-limited power detection, since there is no need to measure and subtract a zero point.
Photon-counting detectors will enable an instrument on the OST to observe the fine-structure lines of galaxies, which are currently only observable at redshifts of z ≈ 1, out to redshifts of z = 6, probing the early stages of galaxy, star, and planet formation. Our photon-counting detectors will also enable entirely new science, including the mapping of the composition and evolution of water and other key volatiles in planet-forming materials around large samples of nearby young stars.
A miniaturized 4 K platform for superconducting infrared photon counting detectors
NASA Astrophysics Data System (ADS)
Gemmell, Nathan R.; Hills, Matthew; Bradshaw, Tom; Rawlings, Tom; Green, Ben; Heath, Robert M.; Tsimvrakidis, Konstantinos; Dobrovolskiy, Sergiy; Zwiller, Val; Dorenbos, Sander N.; Crook, Martin; Hadfield, Robert H.
2017-11-01
We report on a miniaturized platform for superconducting infrared photon counting detectors. We have implemented a fibre-coupled superconducting nanowire single photon detector in a Stirling/Joule-Thomson platform with a base temperature of 4.2 K. We have verified a cooling power of 4 mW at 4.7 K. We report 20% system detection efficiency at 1310 nm wavelength at a dark count rate of 1 kHz. We have carried out compelling application demonstrations in single photon depth metrology and singlet oxygen luminescence detection.
Harding, A.M.A.; Piatt, John F.; Byrd, G.V.; Hatch, Shyla A.; Konyukhov, N.B.; Golubova, E.U.; Williams, J.C.
2005-01-01
It is difficult to survey crevice-nesting seabirds because nest-sites are hard to identify and count, and the number of adult birds attending a colony can be extremely variable within and between days. There is no standardized method for surveying crevice-nesting horned puffins (Fratercula corniculata), and consequently little is known about abundance or changes in their numbers. We examined the variability in colony attendance of horned puffins at 5 breeding colonies in the North Pacific to assess whether variation in count data can be reduced to a level that would allow us to detect changes in the number of birds attending a colony. We used within-year measures of variation in attendance to examine the power to detect a change in numbers between 2 years, and we used measures of among-year variation to examine the power to detect trends over multiple years. Diurnal patterns of attendance differed among colonies, and among-day variation in attendance was generally lowest from mid- to late-incubation to early chick rearing. Within-year variation in water counts was lower than in land counts, and variation was lower using a daily index based on 5 counts per day than it was using 1 count per day. Measures of among-year variation in attendance also were higher for land-based than water-based counts, and they were higher when we used a 10-day survey period than when we used a 30-day period. The use of either 1 or 5 counts a day during the colony-specific diurnal peak of attendance had little influence on levels of among-year variation. Overall, our study suggests that variation in count data may be reduced to a level that allows detection of trends in numbers. However, more studies of interannual variability in horned puffin attendance are needed. Further, the relationship between count data and breeding population size needs more study before the number of birds present at the colony can be used with confidence as an index of population trend.
No Control Genes Required: Bayesian Analysis of qRT-PCR Data
Matz, Mikhail V.; Wright, Rachel M.; Scott, James G.
2013-01-01
Background Model-based analysis of data from quantitative reverse-transcription PCR (qRT-PCR) is potentially more powerful and versatile than traditional methods. Yet existing model-based approaches cannot properly deal with the higher sampling variances associated with low-abundant targets, nor do they provide a natural way to incorporate assumptions about the stability of control genes directly into the model-fitting process. Results In our method, raw qPCR data are represented as molecule counts, and described using generalized linear mixed models under Poisson-lognormal error. A Markov Chain Monte Carlo (MCMC) algorithm is used to sample from the joint posterior distribution over all model parameters, thereby estimating the effects of all experimental factors on the expression of every gene. The Poisson-based model allows for the correct specification of the mean-variance relationship of the PCR amplification process, and can also glean information from instances of no amplification (zero counts). Our method is very flexible with respect to control genes: any prior knowledge about the expected degree of their stability can be directly incorporated into the model. Yet the method provides sensible answers without such assumptions, or even in the complete absence of control genes. We also present a natural Bayesian analogue of the “classic” analysis, which uses standard data pre-processing steps (logarithmic transformation and multi-gene normalization) but estimates all gene expression changes jointly within a single model. The new methods are considerably more flexible and powerful than the standard delta-delta Ct analysis based on pairwise t-tests. Conclusions Our methodology expands the applicability of the relative-quantification analysis protocol all the way to the lowest-abundance targets, and provides a novel opportunity to analyze qRT-PCR data without making any assumptions concerning target stability. These procedures have been implemented as the MCMC.qpcr package in R. PMID:23977043
Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.
Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran
2016-05-01
In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.
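ReFACTor itself implements a sparse variant of PCA; the sketch below shows only the generic idea of inferring cell-composition axes from the most variable methylation sites and returning them as covariates for an EWAS regression. Plain PCA stands in for the sparse version, and all names and data are illustrative:

```python
import numpy as np

def refactor_like_components(meth, n_sites=500, n_components=5):
    """meth: (sites x samples) methylation beta values.
    Select the most variable sites, run PCA on them, and return per-sample
    components to include as covariates in an EWAS regression. This is a
    plain-PCA stand-in for ReFACTor's sparse PCA."""
    var = meth.var(axis=1)
    top = meth[np.argsort(var)[-n_sites:]]
    centered = top - top.mean(axis=1, keepdims=True)
    # SVD over samples: right singular vectors are per-sample components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_components].T            # (samples x components)

rng = np.random.default_rng(5)
meth = rng.beta(2, 5, size=(2000, 50))
covs = refactor_like_components(meth)
print(covs.shape)                          # (50, 5)
```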
Environmental impacts of cooling system on Abou Qir Bay.
Mohamed, Manal A; Abd-Elaty, Magda M; El-Shall, Wafaa I; Ramadan, Abou Bakr; Tawfik, Mohamed S
2005-01-01
This study was conducted to evaluate the impacts of cooling water on the cooling system of the Abou Qir Power Plant and on the receiving Abou Qir Bay. The Abou Qir Power Plant is a conventional steam electric power plant located in Alexandria Governorate, Egypt. Water and biota samples were collected monthly from the cooling water and Abou Qir Bay over a year. Heavy metals, radionuclides, anions and total hydrocarbons were analyzed in the samples using Instrumental Neutron Activation Analysis (INAA), Gamma-ray Spectrometry (GS), Ion Selective Electrodes (ISE) and Gas Chromatography (GC). The results revealed that the characteristics of the inlet cooling water had a tendency to be corrosive to the cooling system. The outlet cooling water complied with Environmental Law 4/1994 in all measured parameters except phosphate, ammonia and total petroleum hydrocarbons. On the other hand, samples from all sites had the lowest annual total count of algae in winter and the highest count during summer. There are negative correlations between algae and heavy metals, hydrocarbons, and radioactivity. Algae correlated highly significantly (p < 0.01) with Pb, Cu, Ni, total petroleum hydrocarbons, dissolved petroleum hydrocarbons and uranium. Anabaena sp. (blue-green algae) and Euglena sp. (flagellate) had highly significant (p < 0.01) negative correlations with heavy metals and natural radioactivity. The accumulation percentage of heavy metals by algae ranged from 22% to 37%; the highest percentage was for uranium and the lowest was for chromium. It is recommended to optimize the addition of polyphosphate inhibitor to the inlet cooling water to inhibit corrosion in the cooling system, to avoid an increase of Anabaena sp. in the outlet, to avoid enhancing the growth of algae that have a great tendency to accumulate heavy metals, and to practice good housekeeping to prevent oil spills containing hydrocarbons from reaching the seawater from the power plant.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $${2.2}_{-0.3}^{+0.7}$$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain $${83}_{-13}^{+7}$$% ($${81}_{-19}^{+52}$$%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
ChIP-PaM: an algorithm to identify protein-DNA interaction using ChIP-Seq data.
Wu, Song; Wang, Jianmin; Zhao, Wei; Pounds, Stanley; Cheng, Cheng
2010-06-03
ChIP-Seq is a powerful tool for identifying the interaction between genomic regulators and their bound DNAs, especially for locating transcription factor binding sites. However, high cost and high rate of false discovery of transcription factor binding sites identified from ChIP-Seq data significantly limit its application. Here we report a new algorithm, ChIP-PaM, for identifying transcription factor target regions in ChIP-Seq datasets. This algorithm makes full use of a protein-DNA binding pattern by capitalizing on three lines of evidence: 1) the tag count modelling at the peak position, 2) pattern matching of a specific tag count distribution, and 3) motif searching along the genome. A novel data-based two-step eFDR procedure is proposed to integrate the three lines of evidence to determine significantly enriched regions. Our algorithm requires no technical controls and efficiently discriminates falsely enriched regions from regions enriched by true transcription factor (TF) binding on the basis of ChIP-Seq data only. An analysis of real genomic data is presented to demonstrate our method. In a comparison with other existing methods, we found that our algorithm provides more accurate binding site discovery while maintaining comparable statistical power.
Fractal analysis as a potential tool for surface morphology of thin films
NASA Astrophysics Data System (ADS)
Soumya, S.; Swapna, M. S.; Raj, Vimal; Mahadevan Pillai, V. P.; Sankararaman, S.
2017-12-01
Fractal geometry, developed by Mandelbrot, has emerged as a potential tool for analyzing complex systems in the diversified fields of science, social science, and technology. Self-similar objects having the same details at different scales are referred to as fractals and are analyzed using the mathematics of non-Euclidean geometry. The present work is an attempt to use the fractal dimension for surface characterization by atomic force microscopy (AFM). Taking AFM images of zinc sulphide (ZnS) thin films prepared by the pulsed laser deposition (PLD) technique at different annealing temperatures, the effect of annealing temperature and surface roughness on the fractal dimension is studied. The annealing temperature and surface roughness show a strong correlation with the fractal dimension. From the set of regression equations, the surface roughness at a given annealing temperature can be calculated from the fractal dimension. The AFM images are processed using Photoshop, and the fractal dimension is calculated by the box-counting method. The fractal dimension decreases from 1.986 to 1.633 while the surface roughness increases from 1.110 to 3.427 as the annealing temperature changes from 30 °C to 600 °C. The images are also analyzed by the power spectrum method to find the fractal dimension. The study reveals that the box-counting method gives better results compared with the power spectrum method.
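A minimal sketch of the box-counting estimate of fractal dimension on a binary image; as a sanity check, a filled square should return D ≈ 2:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image: count
    occupied boxes N(s) at each box size s and fit
    log N(s) = -D log s + c."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

mask = np.zeros((256, 256), bool)
mask[64:192, 64:192] = True
print(box_counting_dimension(mask))   # ~2.0 for a filled square
```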
Life Cycle analysis data and results for geothermal and other electricity generation technologies
Sullivan, John
2013-06-04
Life cycle analysis (LCA) is an environmental assessment method that quantifies the environmental performance of a product system over its entire lifetime, from cradle to grave. Based on a set of relevant metrics, the method is aptly suited for comparing the environmental performance of competing product systems. This file contains LCA data and results for electric power production, including geothermal power. The LCA for electric power has been broken down into two life cycle stages, namely the plant and fuel cycles. Relevant metrics include the energy ratio and greenhouse gas (GHG) ratio, where the former is the ratio of system input energy to total lifetime electrical energy output and the latter is the ratio of the sum of all incurred greenhouse gases (in CO2 equivalents) to the same energy output. Specific information included herein comprises material-to-power ratios (MPRs) for a range of power technologies: conventional thermoelectric, renewables (including three geothermal power technologies), and coproduced natural gas/geothermal power. For the geothermal power scenarios, the MPRs include the casing, cement, diesel, and water requirements for drilling wells and topside piping. Also included herein are energy and GHG ratios for the plant and fuel cycle stages for the range of considered electricity generating technologies. Some of this information is MPR data extracted directly from the literature or from models (e.g., ICARUS, a subset of ASPEN models); the rest (energy and GHG ratios) are results calculated using GREET models and the MPR data. The MPR data for wells included herein were based on the Argonne well materials model and GETEM well count results.
Development of an automated asbestos counting software based on fluorescence microscopy.
Alexandrov, Maxym; Ichida, Etsuko; Nishimura, Tomoki; Aoki, Kousuke; Ishida, Takenori; Hirota, Ryuichi; Ikeda, Takeshi; Kawasaki, Tetsuo; Kuroda, Akio
2015-01-01
An emerging alternative to the commonly used analytical methods for asbestos analysis is fluorescence microscopy (FM), which relies on highly specific asbestos-binding probes to distinguish asbestos from interfering non-asbestos fibers. However, all types of microscopic asbestos analysis require laborious examination of a large number of fields of view and are prone to subjective errors and large variability between asbestos counts by different analysts and laboratories. A possible solution to these problems is automated counting of asbestos fibers by image analysis software, which would lower the cost and increase the reliability of asbestos testing. This study seeks to develop fiber recognition and counting software for FM-based asbestos analysis. We discuss the main features of the developed software and the results of its testing. Software testing showed good correlation between automated and manual counts for samples with medium and high fiber concentrations. At low fiber concentrations, the automated counts were less accurate, leading us to implement a correction mode for automated counts. While the full automation of asbestos analysis would require further improvements in the accuracy of fiber identification, the developed software can already assist professional asbestos analysts and record detailed fiber dimensions for use in epidemiological research.
28. EXTERIOR VIEW LOOKING INTO THE SECOND TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). NOTE THE STEEL BULKHEAD OF THE FLUME AND THE DRAFT TUBE EXTENDING BENEATH THE SILT DEPOSITS. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
Technology Counts 2010: Powering Up--Mobile Learning Seeks the Spotlight in K-12 Education
ERIC Educational Resources Information Center
Education Week, 2010
2010-01-01
Much like the shifting landscape in K-12 educational technology, this year's "Technology Counts" issue is changing to address the challenges of covering schools in the digital age. The 2010 report does not issue state report cards or state policy reports. Instead, the report takes a more district- and school-level look at educational…
GT-MHR Start-up Reactivity Insertion Transient Analysis Using Simulink
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fard, Mehdi Reisi; Blue, Thomas E.; Miller, Don W.
2006-07-01
As a part of a Department of Energy-Nuclear Engineering Research Initiative (NERI) project, we at OSU are investigating SiC semiconductor detectors as neutron power monitors for Generation IV power reactors. As a part of this project, we are investigating the power monitoring requirements for a specific type of Generation IV reactor, namely the GT-MHR. To evaluate the power monitoring requirements for the GT-MHR that are most demanding for a SiC diode power monitor, we have developed a Simulink model to study the transient behavior of the GT-MHR. In this paper, we describe the application of the Simulink code to the analysis of a series of Start-up Reactivity Insertion Transients (SURITs). The SURIT is considered to be a limiting protectable accident in terms of establishing the dynamic range of a SiC power monitor because of the low count rate of the detector during start-up and the absence of reactivity feedback mechanisms at the beginning of the transient. The SURIT is studied with the ultimate goal of identifying combinations of 1) reactor power scram setpoints and 2) scram initiation times (the time within which a scram must be initiated once the setpoint is exceeded) for which the GT-MHR core is protected in the event of a continuous withdrawal of a control rod bank from the core from low powers. The SURIT is initiated by withdrawing a rod bank when the reactor is cold (300 K) and sub-critical at the BOEC (Beginning of Equilibrium Cycle) condition. Various initial power levels have been considered, corresponding to various degrees of sub-criticality and various source strengths. An envelope of response is determined to establish which initial powers correspond to the worst-case SURIT.
Zhu, Lizhi
2007-11-13
A power converter architecture interleaves full bridge converters to alleviate thermal management problems in high current applications, and may, for example, double the output power capability while reducing parts count and costs. For example, one phase of a three phase inverter is shared between two transformers, which provide power to a rectifier such as a current doubler rectifier to provide two full bridge DC/DC converters with three rather than four high voltage inverter legs.
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculations and simulation show better performance of rOP compared with the classical Fisher's method, Stouffer's method, minimum p-value method and maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found to be connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
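Under the joint null, study p-values are i.i.d. uniform, so the rth order statistic across k studies follows a Beta(r, k − r + 1) distribution; that gives a simple reference distribution for an rOP-style test. A minimal sketch (not the authors' full procedure, which also estimates r and applies a one-sided correction):

```python
import numpy as np
from scipy import stats

def rop_pvalue(pvals, r):
    """p-value for the r-th smallest of k study p-values under the null
    that all studies are null (p-values i.i.d. Uniform(0, 1)), using the
    Beta(r, k - r + 1) distribution of the r-th order statistic."""
    k = len(pvals)
    p_r = np.sort(pvals)[r - 1]
    return stats.beta.cdf(p_r, r, k - r + 1)

# detecting a gene DE "in the majority" of 7 studies: take r = 4
print(rop_pvalue([0.001, 0.003, 0.02, 0.04, 0.3, 0.6, 0.9], r=4))
```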
Poor prognostic role of the pretreatment platelet counts in colorectal cancer: A meta-analysis.
Rao, Xu-Dong; Zhang, Hua; Xu, Zheng-Shui; Cheng, Hua; Shen, Wei; Wang, Xin-Ping
2018-06-01
Recently, a wide variety of studies have suggested that elevated platelet counts are associated with survival in patients with colorectal cancer. On the one hand, several studies suggest a negative association with outcome in colorectal cancer patients with preoperative thrombocytosis; on the other hand, other studies contradict this. It thus remains unclear whether elevated platelet counts are associated with survival in colorectal cancer patients. We therefore conducted this meta-analysis to evaluate the prognostic role of platelet counts in colorectal cancer. PubMed, Embase, and the Cochrane Library databases were searched from their inception to October 15, 2016 to identify relevant studies that explored the prognostic role of platelet counts in colorectal cancer. Studies that examined the association between platelet counts and prognosis in colorectal cancer and that provided a hazard ratio (HR) and 95% confidence interval (CI) for overall survival (OS) and/or disease-free survival (DFS) were included. This meta-analysis included 9 retrospective cohort studies involving 3413 patients with colorectal cancer. OS was shorter in patients with elevated platelet counts than in patients with normal counts (HR 2.11, 95% CI: 1.68-2.65). For DFS, an elevated platelet count was also a poor predictor (HR 2.51, 95% CI: 1.84-3.43). In this meta-analysis, we suggest that an elevated platelet count is a negative predictor of survival in both primary colorectal cancer and resectable colorectal liver metastases.
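Pooled hazard ratios like those quoted are typically obtained by inverse-variance weighting of log-HRs. A minimal fixed-effect sketch, with the per-study values invented for illustration rather than taken from the included studies:

```python
import numpy as np

def pooled_hr(hrs, ci_lowers, ci_uppers):
    """Fixed-effect inverse-variance pooling of log hazard ratios.
    The SE of each log-HR is recovered from its 95% CI width."""
    log_hr = np.log(hrs)
    se = (np.log(ci_uppers) - np.log(ci_lowers)) / (2 * 1.96)
    w = 1.0 / se**2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp([pooled, lo, hi])       # HR, CI lower, CI upper

# hypothetical per-study HRs and CIs, not the studies in the meta-analysis
print(pooled_hr(np.array([2.3, 1.8, 2.6]),
                np.array([1.5, 1.1, 1.7]),
                np.array([3.5, 2.9, 4.0])))
```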
Image-based red cell counting for wild animals blood.
Mauricio, Claudio R M; Schneider, Fabio K; Dos Santos, Leonilda Correia
2010-01-01
An image-based red blood cell (RBC) automatic counting system is presented for wild animal blood analysis. Images with 2048×1536-pixel resolution, acquired on an optical microscope using Neubauer chambers, are used to evaluate RBC counting for three animal species (Leopardus pardalis, Cebus apella and Nasua nasua); the error found using the proposed method is similar to that obtained with the inter-observer visual counting method, i.e., around 10%. Smaller errors (e.g., 3%) can be obtained in regions with fewer grid artifacts. These promising results allow the use of the proposed method either as a complete automatic counting tool in laboratories for wild animal blood analysis or as a first counting stage in a semi-automatic counting tool.
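A minimal sketch of the threshold-and-label counting step such a system builds on, using scipy; the parameter values are illustrative, and a real pipeline would add segmentation of touching cells (e.g., a watershed step) and grid-artifact rejection:

```python
import numpy as np
from scipy import ndimage

def count_cells(gray, threshold=0.5, min_area=30):
    """Threshold a grayscale microscope image, label connected components,
    and count those above a minimum area (to reject debris and grid
    artifacts)."""
    mask = gray > threshold
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))

rng = np.random.default_rng(6)
img = rng.normal(0.2, 0.05, (512, 512))
for _ in range(40):                       # paint 40 bright blobs
    y, x = rng.integers(20, 492, 2)
    img[y - 4:y + 4, x - 4:x + 4] = 0.9
print(count_cells(img))                   # ~40 unless blobs overlap
```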
Andersen, Line Holm; Sunde, Peter; Pellegrino, Irene; Loeschcke, Volker; Pertoldi, Cino
2017-12-01
The agricultural scene has changed over the past decades, resulting in declining population trends in many species. It is therefore important to determine the factors that individual species depend on in order to understand their decline. The landscape changes have also resulted in habitat fragmentation, turning once-continuous populations into metapopulations. It is thus increasingly important to estimate both the number of individuals it takes to create a genetically viable population and the population trend. Here, population viability analysis and habitat suitability modeling were used to estimate population viability and future prospects across Europe of the Little Owl Athene noctua, a widespread species associated with agricultural landscapes. The results show a high risk of population declines over the coming 100 years, especially toward the north of Europe, whereas populations toward the southeastern part of Europe have a greater probability of persistence. In order to be considered genetically viable, individual populations must count 1,000-30,000 individuals. As the Little Owl populations of several countries count fewer than 30,000 individuals, and many isolated populations in northern Europe fewer than 1,000, management actions resulting in the exchange of individuals between populations or even countries are probably necessary to keep the loss of genetic diversity below 1% over a 100-year period. At a continental scale, a habitat suitability analysis suggested that the Little Owl is affected positively by increasing temperatures and urban areas, whereas increased tree cover, increasing annual rainfall, grassland, and sparsely vegetated areas affect the presence of the owl negatively. However, the low predictive power of the habitat suitability model suggests that habitat suitability might be better explained at a smaller scale.
Zugaza, J L; Casabiell, X A; Bokser, L; Eiras, A; Beiras, A; Casanueva, F F
1995-07-01
We have previously demonstrated that pretreatment of several cell lines with cis-unsaturated fatty acids, such as oleic acid, blocks epidermal growth factor (EGF)-induced early ionic signals, in particular the [Ca2+]i rise. In the present work we show that this blockade does not alter EGF-stimulated cellular proliferation evaluated by direct cell counting, but induces a powerful enhancement in the pulsed thymidine incorporation assay. The lack of effect of oleic acid on EGF-stimulated cellular proliferation was confirmed by repeated cell counts, cumulative thymidine incorporation, and protein synthesis, but a clear synergistic effect between oleic acid and EGF was again obtained by means of time-course experiments with pulsed thymidine. Combined flow cytometry analysis and cell counts at earlier times in EGF-stimulated cells showed that oleic acid accelerates the entrance of cells into the replicative cycle, leading to an earlier cell division. Afterward, these oleic acid-pretreated cells became delayed by an unknown compensatory mechanism, in such a way that at 48 h post-EGF the cell counts in control and oleic acid-pretreated cells were equal. In conclusion, (a) oleic acid accelerates or enhances the EGF mitogenic action, and (b) in the long term the cells compensate for the initial perturbation with respect to untreated cells. As a side observation, the widely employed pulsed thymidine incorporation method as a measure of cell division could be extremely misleading unless experimental conditions are well controlled.
30. EXTERIOR VIEW LOOKING INTO THE THIRD TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). NOTE THE STEEL BULKHEAD OF THE FLUME AND THE DRAFT TUBE EXTENDING BENEATH THE SILT DEPOSITS AND WATER LINE. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
The Case for an Open Data Model
1998-08-01
Microsoft Word, PageMaker, and FrameMaker, and the drawing programs MacDraw, Adobe Illustrator, and Microsoft PowerPoint, use their own proprietary ... needs a custom word-counting tool, since no utility could work in Word and other word processors. FrameMaker for Windows does not have a word-counting ... supplied. (At least none that I could find in FrameMaker 5.5 for Windows.) Another problem with ...
Li, Xue-Ying; Li, Bin; Sun, Xing-Li
2014-04-15
The effects of a thermal discharge from a coastal power plant on phytoplankton were determined in Zhanjiang Bay. Monthly cruises were undertaken at four tide times during April-October 2011. There were significant differences in dominant species among the seven sampling months and four sampling tides. Species diversity (H') and evenness showed a distinct increasing gradient from the heated water source to the control zone and fluctuated over the four tides with no visible patterns. Species richness, cell count and Chl a at the mixed and control zones were significantly higher than at the heated zones, and showed tidal changes with no obvious patterns. The threshold temperature of phytoplankton species can be regarded as that of the phytoplankton community at ebb slack. The average threshold temperature over phytoplankton species, cell count and Chl a, and the threshold temperature of cell count, can be regarded as that of the phytoplankton community at flood slack during spring and neap tides, respectively.
ERIC Educational Resources Information Center
Voutsina, Chronoula
2016-01-01
Empirical research has documented how children's early counting develops into an increasingly abstract process, and initial counting procedures are reified as children develop and use more sophisticated counting. In this development, the learning of different oral counting sequences that allow children to count in steps bigger than one is seen as…
Ito, Yukiko; Hattori, Reiko; Mase, Hiroki; Watanabe, Masako; Shiotani, Itaru
2008-12-01
Pollen information is indispensable for allergic individuals and clinicians. This study aimed to develop forecasting models for the total annual count of airborne pollen grains based on data monitored over the last 20 years at the Mie Chuo Medical Center, Tsu, Mie, Japan. Airborne pollen grains were collected using a Durham sampler. Total annual pollen count and pollen count from October to December (OD pollen count) of the previous year were transformed to logarithms. Regression analysis of the total pollen count was performed using variables such as the OD pollen count and the maximum temperature for mid-July of the previous year. Time series analysis revealed an alternate rhythm of the series of total pollen count. The alternate rhythm consisted of a cyclic alternation of an "on" year (high pollen count) and an "off" year (low pollen count). This rhythm was used as a dummy variable in regression equations. Of the three models involving the OD pollen count, a multiple regression equation that included the alternate rhythm variable and the interaction of this rhythm with OD pollen count showed a high coefficient of determination (0.844). Of the three models involving the maximum temperature for mid-July, those including the alternate rhythm variable and the interaction of this rhythm with maximum temperature had the highest coefficient of determination (0.925). An alternate pollen dispersal rhythm represented by a dummy variable in the multiple regression analysis plays a key role in improving forecasting models for the total annual sugi pollen count.
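A sketch of the kind of regression model described, with an alternate-rhythm dummy variable and its interaction with the previous year's OD count; the data below are synthetic placeholders, not the Mie monitoring data:

```python
import numpy as np

# Regress log total pollen count on the previous year's log OD count, an
# on/off "alternate rhythm" dummy, and their interaction.
rng = np.random.default_rng(7)
years = 20
od = rng.uniform(1.0, 3.0, years)             # log OD pollen count
rhythm = np.tile([1.0, 0.0], years // 2)      # alternating on/off years
y = 1.0 + 0.8 * od + 0.5 * rhythm + 0.3 * rhythm * od \
    + rng.normal(0, 0.1, years)               # log total count (synthetic)

X = np.column_stack([np.ones(years), od, rhythm, rhythm * od])
beta, res, _, _ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - res[0] / np.sum((y - y.mean()) ** 2)
print("coefficients:", beta.round(2), "R^2:", round(r2, 3))
```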
Space and power efficient hybrid counters array
Gara, Alan G [Mount Kisco, NY; Salapura, Valentina [Chappaqua, NY
2009-05-12
A hybrid counter array device for counting events. The hybrid counter array includes a first counter portion comprising N counter devices, each counter device for receiving signals representing occurrences of events from an event source and providing a first count value corresponding to a lower order bits of the hybrid counter array. The hybrid counter array includes a second counter portion comprising a memory array device having N addressable memory locations in correspondence with the N counter devices, each addressable memory location for storing a second count value representing higher order bits of the hybrid counter array. A control device monitors each of the N counter devices of the first counter portion and initiates updating a value of a corresponding second count value stored at the corresponding addressable memory location in the second counter portion. Thus, a combination of the first and second count values provide an instantaneous measure of number of events received.
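The patent text describes splitting each counter into fast low-order bits in registers and high-order bits in a memory array. A Python sketch of that scheme, purely illustrative of the carry mechanism rather than of the hardware itself:

```python
class HybridCounterArray:
    """N counters whose low-order bits live in small per-event 'register'
    counters and whose high-order bits live in a backing memory array.
    When a low-order counter overflows, a control step carries into the
    memory word; the full count is (memory_word << LOW_BITS) | register."""
    LOW_BITS = 8

    def __init__(self, n):
        self.low = [0] * n          # fast per-event counters
        self.high = [0] * n         # memory array for higher-order bits

    def count_event(self, i):
        self.low[i] += 1
        if self.low[i] == 1 << self.LOW_BITS:   # overflow: carry to memory
            self.low[i] = 0
            self.high[i] += 1

    def read(self, i):
        return (self.high[i] << self.LOW_BITS) | self.low[i]

c = HybridCounterArray(4)
for _ in range(300):
    c.count_event(2)
print(c.read(2))    # 300
```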
Adrion, Christine; Mansmann, Ulrich
2012-09-10
A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
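The mean logarithmic score used above for model ranking can be illustrated with a short Python sketch; it assumes the leave-one-out predictive probabilities (the CPO values that R-INLA reports) are already available, and the numbers below are invented.

```python
import numpy as np

# Hedged sketch: the mean logarithmic score is the average negative log of
# the leave-one-out predictive probability of each observed count. Lower
# scores indicate better predictive performance; CPO values are invented.
cpo_model_a = np.array([0.21, 0.15, 0.33, 0.08, 0.27])  # hypothetical CPOs
cpo_model_b = np.array([0.12, 0.10, 0.29, 0.03, 0.18])

def mean_log_score(cpo):
    return -np.mean(np.log(cpo))   # lower is better

print(mean_log_score(cpo_model_a), mean_log_score(cpo_model_b))
```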
Power counting in peripheral partial waves: The singlet channels
NASA Astrophysics Data System (ADS)
Valderrama, M. Pavón; Sánchez, M. Sánchez; Yang, C.-J.; Long, Bingwei; Carbonell, J.; van Kolck, U.
2017-05-01
We analyze the power counting of the peripheral singlet partial waves in nucleon-nucleon scattering. In agreement with conventional wisdom, we find that pion exchanges are perturbative in the peripheral singlets. We quantify from the effective field theory perspective the well-known suppression induced by the centrifugal barrier in the pion-exchange interactions. By exploring perturbation theory up to fourth order, we find that the one-pion-exchange potential in these channels is demoted from leading to subleading order by a given power of the expansion parameter that grows with the orbital angular momentum. We discuss the implications of these demotions for few-body calculations: though higher partial waves have been known for a long time to be irrelevant in these calculations (and are hence ignored), here we explain how to systematize the procedure in a way that is compatible with the effective field theory expansion.
Heijnen, Ingmar A F M; Barnett, David; Arroz, Maria J; Barry, Simon M; Bonneville, Marc; Brando, Bruno; D'hautcourt, Jean-Luc; Kern, Florian; Tötterman, Thomas H; Marijt, Erik W A; Bossy, David; Preijers, Frank W M B; Rothe, Gregor; Gratama, Jan W
2004-11-01
HLA class I peptide tetramers represent powerful diagnostic tools for detection and monitoring of antigen-specific CD8(+) T cells. The impetus for the current multicenter study is the critical need to standardize tetramer flow cytometry if it is to be implemented as a routine diagnostic assay. Hence, the European Working Group on Clinical Cell Analysis set out to develop and evaluate a single-platform tetramer-based method that used cytomegalovirus (CMV) as the antigenic model. Absolute numbers of CMV-specific CD8(+) T cells were obtained by combining the percentage of tetramer-binding cells with the absolute CD8(+) T-cell count. Six send-outs of stabilized blood from healthy individuals or CMV-carrying donors with CMV-specific CD8(+) T-cell counts of 3 to 10 cells/microl were distributed to 7 to 16 clinical sites. These sites were requested to enumerate CD8(+) T cells and, in the case of CMV-positive donors, CMV-specific subsets on three separate occasions using the standard method. Between-site coefficients of variation of less than 10% (absolute CD8(+) T-cell counts) and approximately 30% (percentage and absolute numbers of CMV-specific CD8(+) T cells) were achieved. Within-site coefficients of variation were approximately 5% (absolute CD8(+) T-cell counts), approximately 9% (percentage CMV-specific CD8(+) T cells), and approximately 17% (absolute CMV-specific CD8(+) T-cell counts). The degree of variation tended to correlate inversely with the proportion of CMV-specific CD8(+) T-cell subsets. The single-platform MHC tetramer-based method for antigen-specific CD8(+) T-cell counting has been evaluated by a European group of laboratories and can be considered a reproducible assay for routine enumeration of antigen-specific CD8(+) T cells. (c) 2004 Wiley-Liss, Inc.
A new stratification of mourning dove call-count routes
Blankenship, L.H.; Humphrey, A.B.; MacDonald, D.
1971-01-01
The mourning dove (Zenaidura macroura) call-count survey is a nationwide audio-census of breeding mourning doves. Recent analyses of the call-count routes have utilized a stratification based upon physiographic regions of the United States. An analysis of 5 years of call-count data, based upon stratification using potential natural vegetation, has demonstrated that this new stratification results in strata with greater homogeneity than the physiographic strata, provides lower error variance, and hence generates greater precision in the analysis without an increase in call-count routes. Error variance was reduced approximately 30 percent for the contiguous United States. This indicates that future analysis based upon the new stratification will result in an increased ability to detect significant year-to-year changes.
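A hedged sketch of the variance comparison behind this result: for a fixed set of routes, a stratification is judged by the pooled within-stratum variance of route counts. The data, strata, and habitat structure below are synthetic, not the survey's data.

```python
import numpy as np

# Compare pooled within-stratum variance under two stratifications of the
# same synthetic routes: strata aligned with habitat reduce error variance.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=np.repeat([5, 20, 50], 40))   # 120 routes, 3 habitats

old_strata = rng.integers(0, 3, size=counts.size)      # poorly aligned strata
new_strata = np.repeat([0, 1, 2], 40)                  # vegetation-based strata

def pooled_within_variance(y, strata):
    return np.mean([y[strata == s].var(ddof=1) for s in np.unique(strata)])

v_old = pooled_within_variance(counts, old_strata)
v_new = pooled_within_variance(counts, new_strata)
print(f"error variance reduced by {100 * (1 - v_new / v_old):.0f}%")
```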
New gene functions in megakaryopoiesis and platelet formation
Gieger, Christian; Radhakrishnan, Aparna; Cvejic, Ana; Tang, Weihong; Porcu, Eleonora; Pistis, Giorgio; Serbanovic-Canic, Jovana; Elling, Ulrich; Goodall, Alison H.; Labrune, Yann; Lopez, Lorna M.; Mägi, Reedik; Meacham, Stuart; Okada, Yukinori; Pirastu, Nicola; Sorice, Rossella; Teumer, Alexander; Voss, Katrin; Zhang, Weihua; Ramirez-Solis, Ramiro; Bis, Joshua C.; Ellinghaus, David; Gögele, Martin; Hottenga, Jouke-Jan; Langenberg, Claudia; Kovacs, Peter; O’Reilly, Paul F.; Shin, So-Youn; Esko, Tõnu; Hartiala, Jaana; Kanoni, Stavroula; Murgia, Federico; Parsa, Afshin; Stephens, Jonathan; van der Harst, Pim; van der Schoot, C. Ellen; Allayee, Hooman; Attwood, Antony; Balkau, Beverley; Bastardot, François; Basu, Saonli; Baumeister, Sebastian E.; Biino, Ginevra; Bomba, Lorenzo; Bonnefond, Amélie; Cambien, François; Chambers, John C.; Cucca, Francesco; D’Adamo, Pio; Davies, Gail; de Boer, Rudolf A.; de Geus, Eco J. C.; Döring, Angela; Elliott, Paul; Erdmann, Jeanette; Evans, David M.; Falchi, Mario; Feng, Wei; Folsom, Aaron R.; Frazer, Ian H.; Gibson, Quince D.; Glazer, Nicole L.; Hammond, Chris; Hartikainen, Anna-Liisa; Heckbert, Susan R.; Hengstenberg, Christian; Hersch, Micha; Illig, Thomas; Loos, Ruth J. F.; Jolley, Jennifer; Khaw, Kay Tee; Kühnel, Brigitte; Kyrtsonis, Marie-Christine; Lagou, Vasiliki; Lloyd-Jones, Heather; Lumley, Thomas; Mangino, Massimo; Maschio, Andrea; Leach, Irene Mateo; McKnight, Barbara; Memari, Yasin; Mitchell, Braxton D.; Montgomery, Grant W.; Nakamura, Yusuke; Nauck, Matthias; Navis, Gerjan; Nöthlings, Ute; Nolte, Ilja M.; Porteous, David J.; Pouta, Anneli; Pramstaller, Peter P.; Pullat, Janne; Ring, Susan M.; Rotter, Jerome I.; Ruggiero, Daniela; Ruokonen, Aimo; Sala, Cinzia; Samani, Nilesh J.; Sambrook, Jennifer; Schlessinger, David; Schreiber, Stefan; Schunkert, Heribert; Scott, James; Smith, Nicholas L.; Snieder, Harold; Starr, John M.; Stumvoll, Michael; Takahashi, Atsushi; Tang, W. H. Wilson; Taylor, Kent; Tenesa, Albert; Thein, Swee Lay; Tönjes, Anke; Uda, Manuela; Ulivi, Sheila; van Veldhuisen, Dirk J.; Visscher, Peter M.; Völker, Uwe; Wichmann, H.-Erich; Wiggins, Kerri L.; Willemsen, Gonneke; Yang, Tsun-Po; Zhao, Jing Hua; Zitting, Paavo; Bradley, John R.; Dedoussis, George V.; Gasparini, Paolo; Hazen, Stanley L.; Metspalu, Andres; Pirastu, Mario; Shuldiner, Alan R.; van Pelt, L. Joost; Zwaginga, Jaap-Jan; Boomsma, Dorret I.; Deary, Ian J.; Franke, Andre; Froguel, Philippe; Ganesh, Santhi K.; Jarvelin, Marjo-Riitta; Martin, Nicholas G.; Meisinger, Christa; Psaty, Bruce M.; Spector, Timothy D.; Wareham, Nicholas J.; Akkerman, Jan-Willem N.; Ciullo, Marina; Deloukas, Panos; Greinacher, Andreas; Jupe, Steve; Kamatani, Naoyuki; Khadake, Jyoti; Kooner, Jaspal S.; Penninger, Josef; Prokopenko, Inga; Stemple, Derek; Toniolo, Daniela; Wernisch, Lorenz; Sanna, Serena; Hicks, Andrew A.; Rendon, Augusto; Ferreira, Manuel A.; Ouwehand, Willem H.; Soranzo, Nicole
2012-01-01
Platelets are the second most abundant cell type in blood and are essential for maintaining haemostasis. Their count and volume are tightly controlled within narrow physiological ranges, but there is only limited understanding of the molecular processes controlling both traits. Here we carried out a high-powered meta-analysis of genome-wide association studies (GWAS) in up to 66,867 individuals of European ancestry, followed by extensive biological and functional assessment. We identified 68 genomic loci reliably associated with platelet count and volume mapping to established and putative novel regulators of megakaryopoiesis and platelet formation. These genes show megakaryocyte-specific gene expression patterns and extensive network connectivity. Using gene silencing in Danio rerio and Drosophila melanogaster, we identified 11 of the genes as novel regulators of blood cell formation. Taken together, our findings advance understanding of novel gene functions controlling fate-determining events during megakaryopoiesis and platelet formation, providing a new example of successful translation of GWAS to function. PMID:22139419
High-Sensitivity Fast Neutron Detector KNK-2-8M
NASA Astrophysics Data System (ADS)
Koshelev, A. S.; Dovbysh, L. Ye.; Ovchinnikov, M. A.; Pikulina, G. N.; Drozdov, Yu. M.; Chuklyaev, S. V.; Pepyolyshev, Yu. N.
2017-12-01
The design of the fast neutron detector KNK-2-8M is outlined. The results of the detector study in the pulse counting mode, with pulses from 238U nuclei fission in the radiator of the neutron-sensitive section, and in the current mode, with separation of functional section currents, are presented. The possibilities of determination of the effective number of 238U nuclei in the radiator of the neutron-sensitive section are considered. The diagnostic capabilities of the detector in the counting mode are demonstrated, as exemplified by the analysis of reference data on characteristics of neutron fields in the BR-1 reactor hall. The diagnostic capabilities of the detector in the current mode are demonstrated, as exemplified by the results of measurements of 238U fission intensity in the power startup of the BR-K1 reactor in the fission pulse generation mode with delayed neutrons and the detector placed in the reactor cavity in conditions of large-scale variation of the reactor radiation fields.
Update on Heavy-Meson Spectrum Tests of the Oktay--Kronfeld Action
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Jon A.; Jang, Yong-Chull; Lee, Weonjong
2016-01-18
We present updated results of a numerical improvement test with the heavy-meson spectrum for the Oktay--Kronfeld (OK) action. The OK action is an extension of the Fermilab improvement program for massive Wilson fermions including all dimension-six and some dimension-seven bilinear terms. Improvement terms are truncated by HQET power counting at $\mathrm{O}(\Lambda^3/m_Q^3)$ for heavy-light systems, and by NRQCD power counting at $\mathrm{O}(v^6)$ for quarkonium. They suffice for tree-level matching to QCD to the given order in the power-counting schemes. To assess the improvement, we generate new data with the OK and Fermilab actions that cover both charm and bottom quark mass regions on a MILC coarse $(a \approx 0.12~\text{fm})$ $2+1$ flavor, asqtad-staggered ensemble. We update the analyses of the inconsistency quantity and the hyperfine splittings for the rest and kinetic masses. With one exception, the results clearly show that the OK action significantly reduces heavy-quark discretization effects in the meson spectrum. The exception is the hyperfine splitting of the heavy-light system near the $B_s$ meson mass, where statistics are too low to draw a firm conclusion, despite promising results.
Bacteriological evaluation of Allium sativum oil as a new medicament for pulpotomy of primary teeth
Mohammad, Shukry Gamal; Baroudi, Kusai
2015-01-01
Objective: To compare the effects of Allium sativum oil and formocresol on the pulp tissue of the pulpotomized teeth. Materials and Methods: Twenty children were selected for this study. All children had a pair of non-vital primary molars. A sterile paper point was dipped in the root canals prior to the mortal pulpotomy. These paper points were collected in transfer media and immediately transported to the microbiological lab to be investigated microbiologically (for Streptococcus mutans and Lactobacillus acidophilus). Then the procedure of mortal pulpotomy was performed. After 2 weeks, the cotton pellets were removed and sterile paper points were dipped in the root canals for microbiological examination. Then comparison between the count of bacteria before and after treatment was conducted. Statistical analysis was performed using independent t-test and paired t-test at the significance level of α = 0.05. Results: After application of both medicaments, there was a marked decrease in S. mutans and L. acidophilus counts. The difference between the mean of log values of the count before and after the application was highly significant for both medicaments (P < 0.05); however, better results were obtained when A. sativum oil was used. Conclusion: A. sativum oil had more powerful antimicrobial effects than formocresol on the bacteria of the infected root canals. PMID:25992338
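The before/after comparison described above can be sketched in a few lines of Python; the counts below are invented, and the paired t-test is applied to log-transformed values as in the study.

```python
import numpy as np
from scipy import stats

# Hedged sketch of the statistical comparison: a paired t-test on log10
# bacterial counts before and after treatment. Counts are invented for
# illustration, not the study's data.
before = np.log10([5.2e5, 8.1e5, 3.3e5, 9.0e5, 4.4e5, 6.7e5])
after  = np.log10([1.1e3, 4.0e3, 9.5e2, 2.2e3, 1.8e3, 3.1e3])

t, p = stats.ttest_rel(before, after)
print(f"t = {t:.2f}, p = {p:.4f}")   # significant if p < alpha = 0.05
```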
Robustly detecting differential expression in RNA sequencing data using observation weights
Zhou, Xiaobei; Lindsay, Helen; Robinson, Mark D.
2014-01-01
A popular approach for comparing gene expression levels between (replicated) conditions of RNA sequencing data relies on counting reads that map to features of interest. Within such count-based methods, many flexible and advanced statistical approaches now exist and offer the ability to adjust for covariates (e.g. batch effects). Often, these methods include some sort of ‘sharing of information’ across features to improve inferences in small samples. It is important to achieve an appropriate tradeoff between statistical power and protection against outliers. Here, we study the robustness of existing approaches for count-based differential expression analysis and propose a new strategy based on observation weights that can be used within existing frameworks. The results suggest that outliers can have a global effect on differential analyses. We demonstrate the effectiveness of our new approach with real data and simulated data that reflects properties of real datasets (e.g. dispersion-mean trend) and develop an extensible framework for comprehensive testing of current and future methods. In addition, we explore the origin of such outliers, in some cases highlighting additional biological or technical factors within the experiment. Further details can be downloaded from the project website: http://imlspenticton.uzh.ch/robinson_lab/edgeR_robust/. PMID:24753412
A Streaming PCA VLSI Chip for Neural Data Compression.
Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi
2017-12-01
Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction error and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction error and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
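A software analogue of streaming PCA, in the spirit of (but much simpler than) the chip described above, can be sketched with Oja's rule, which updates the leading principal component one sample at a time without ever storing a data matrix; all data and the learning rate below are illustrative.

```python
import numpy as np

# Oja's rule: a classic streaming estimator of the leading principal
# component. Each 2-channel "sample" updates the weight vector in place.
rng = np.random.default_rng(0)
true_dir = np.array([0.8, 0.6])                    # dominant direction
samples = rng.normal(size=(5000, 2)) * [3.0, 0.3] @ np.array(
    [[0.8, 0.6], [-0.6, 0.8]])                     # anisotropic recording

w = rng.normal(size=2)
eta = 1e-3                                         # learning rate (assumed)
for x in samples:
    y = w @ x
    w += eta * y * (x - y * w)                     # keeps ||w|| near 1

print(np.abs(w @ true_dir) / np.linalg.norm(w))    # ~1 when aligned
```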
Link, W.A.; Sauer, J.R.; Helbig, Andreas J.; Flade, Martin
1999-01-01
Count survey data are commonly used for estimating temporal and spatial patterns of population change. Since count surveys are not censuses, counts can be influenced by 'nuisance factors' related to the probability of detecting animals but unrelated to the actual population size. The effects of systematic changes in these factors can be confounded with patterns of population change. Thus, valid analysis of count survey data requires the identification of nuisance factors and flexible models for their effects. We illustrate using data from the Christmas Bird Count (CBC), a midwinter survey of bird populations in North America. CBC survey effort has substantially increased in recent years, suggesting that unadjusted counts may overstate population growth (or understate declines). We describe a flexible family of models for the effect of effort that includes models in which increasing effort leads to diminishing returns in terms of the number of birds counted.
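One simple member of such a family of effort models is a log-linear count model with log(effort) as a covariate, so a coefficient below 1 captures diminishing returns. The sketch below illustrates the idea with simulated data and is not the authors' model.

```python
import numpy as np
import statsmodels.api as sm

# Simulated counts with no real trend, but effort growing over time: a
# model that includes log(effort) separates the effort effect from trend.
rng = np.random.default_rng(2)
years = np.arange(30)
effort = np.exp(0.03 * years + rng.normal(0, 0.1, 30))   # effort grows
mu = np.exp(2.0 + 0.0 * years + 0.7 * np.log(effort))    # no true trend
counts = rng.poisson(mu)

X = sm.add_constant(np.column_stack([years, np.log(effort)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # a naive trend-only model would absorb effort growth
```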
Ghost-free, finite, fourth-order D = 3 gravity.
Deser, S
2009-09-04
Canonical analysis of a recently proposed linear + quadratic curvature gravity model in D = 3 establishes its pure, irreducibly fourth derivative, quadratic curvature limit as both ghost-free and power-counting UV finite, thereby maximally violating standard folklore. This limit is representative of a generic class whose kinetic terms are conformally invariant in any dimension, but it is unique in simultaneously avoiding the transverse-traceless graviton ghosts plaguing D > 3 quadratic actions as well as double pole propagators in its other variables. While the two-term model is also unitary, its additional mode's second-derivative nature forfeits finiteness.
Design and Performance of the Astro-E/XRS Signal Processing System
NASA Technical Reports Server (NTRS)
Boyce, Kevin R.; Audley, M. D.; Baker, R. G.; Dumonthier, J. J.; Fujimoto, R.; Gendreau, K. C.; Ishisaki, Y.; Kelley, R. L.; Stahle, C. K.; Szymkowiak, A. E.
1999-01-01
We describe the signal processing system of the Astro-E XRS instrument. The Calorimeter Analog Processor (CAP) provides bias and power for the detectors and amplifies the detector signals by a factor of 20,000. The Calorimeter Digital Processor (CDP) performs the digital processing of the calorimeter signals, detecting X-ray pulses and analyzing them by optimal filtering. We describe the operation of pulse detection, pulse-height analysis, and risetime determination. We also discuss performance, including the three event grades (hi-res, mid-res, and low-res), anticoincidence detection, counting-rate dependence, and noise rejection.
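Optimal filtering of pulses, as used by the CDP, can be sketched generically: weight the pulse template by the inverse noise power in the frequency domain and normalize, so that applying the filter to a record yields an amplitude (pulse-height) estimate. The template, noise level, and amplitude below are invented, not XRS parameters.

```python
import numpy as np

# Generic optimal-filter pulse-height estimate (illustrative values only).
n = 1024
t = np.arange(n)
template = np.exp(-t / 80.0) - np.exp(-t / 10.0)      # model X-ray pulse
noise_psd = np.full(n, 0.05)                           # flat noise (assumed)

s = np.fft.rfft(template)
h = np.conj(s) / noise_psd[: s.size]                   # optimal filter
norm = np.real(np.sum(h * s))                          # unit-gain normalization

record = 3.7 * template + np.random.default_rng(3).normal(0, 0.2, n)
amplitude = np.real(np.sum(h * np.fft.rfft(record))) / norm
print(amplitude)    # ~3.7: the fitted pulse height
```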
29. EXTERIOR VIEW LOOKING INTO THE FOURTH TAILRACE (COUNTING FROM ...
29. EXTERIOR VIEW LOOKING INTO THE FOURTH TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). NOTE THE DRIVE SHAFT, GEAR AND WATER LINE EXTENDING FROM THE STEEL BULKHEAD. THIS EQUIPMENT IS EXTANT FROM THE ERA OF PULP MILL OPERATIONS. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
Uncertainties in internal gas counting
NASA Astrophysics Data System (ADS)
Unterweger, M.; Johansson, L.; Karam, L.; Rodrigues, M.; Yunoki, A.
2015-06-01
The uncertainties in internal gas counting will be broken down into counting uncertainties and gas handling uncertainties. Counting statistics, spectrum analysis, and electronic uncertainties will be discussed with respect to the actual counting of the activity. The effects of the gas handling and quantities of counting and sample gases on the uncertainty in the determination of the activity will be included when describing the uncertainties arising in the sample preparation.
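A hedged sketch of such an uncertainty budget: if the counting and gas-handling components are expressed as relative standard uncertainties and treated as independent, they combine in quadrature. The component values below are placeholders, not evaluated uncertainties.

```python
import numpy as np

# Quadrature combination of relative standard uncertainty components for
# an internal gas counting measurement (all values are placeholders).
u_counting = 0.004        # counting statistics
u_spectrum = 0.003        # spectrum analysis
u_electronics = 0.002     # electronic effects (e.g. dead time, gain)
u_gas_quantity = 0.005    # quantity of sample gas introduced
u_gas_handling = 0.004    # transfers and mixing with counting gas

u_total = np.sqrt(sum(u**2 for u in [u_counting, u_spectrum, u_electronics,
                                     u_gas_quantity, u_gas_handling]))
print(f"combined relative standard uncertainty: {u_total:.4f}")
```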
Atmospheric mold spore counts in relation to meteorological parameters
NASA Astrophysics Data System (ADS)
Katial, R. K.; Zhang, Yiming; Jones, Richard H.; Dyer, Philip D.
Fungal spore counts of Cladosporium, Alternaria, and Epicoccum were studied during 8 years in Denver, Colorado. Fungal spore counts were obtained daily during the pollinating season by a Rotorod sampler. Weather data were obtained from the National Climatic Data Center. Daily averages of temperature, relative humidity, daily precipitation, barometric pressure, and wind speed were studied. A time series analysis was performed on the data to mathematically model the spore counts in relation to weather parameters. Using SAS PROC ARIMA software, a regression analysis was performed, regressing the spore counts on the weather variables assuming an autoregressive moving average (ARMA) error structure. Cladosporium was found to be positively correlated (P<0.02) with average daily temperature and relative humidity, and negatively correlated with precipitation. Alternaria and Epicoccum did not show increased predictability with weather variables. A mathematical model was derived for Cladosporium spore counts using the annual seasonal cycle and significant weather variables. The model for Alternaria and Epicoccum incorporated the annual seasonal cycle. Fungal spore counts can be modeled by time series analysis and related to meteorological parameters while controlling for seasonality; this modeling can provide estimates of exposure to fungal aeroallergens.
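The regression-with-ARMA-errors analysis described above (SAS PROC ARIMA in the study) can be sketched with statsmodels in Python; the weather series and coefficients below are simulated for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Regression of daily spore counts on weather covariates with ARMA(1,1)
# errors (statsmodels fits this as ARIMA with exogenous regressors).
rng = np.random.default_rng(4)
days = 365
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365) \
       + rng.normal(0, 2, days)
humidity = rng.uniform(20, 90, days)
spores = 50 + 3.0 * temp + 0.5 * humidity + rng.normal(0, 10, days)

exog = np.column_stack([temp, humidity])
fit = ARIMA(spores, exog=exog, order=(1, 0, 1)).fit()
print(fit.params[1:3])   # weather coefficients, roughly 3.0 and 0.5
```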
NASA Technical Reports Server (NTRS)
Haskin, Larry A.; Wang, Alian; Rockow, Kaylynn M.; Jolliff, Bradley L.; Korotev, Randy L.; Viskupic, Karen M.
1997-01-01
Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.
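The point-count estimate is binomial at heart: the modal proportion of a mineral is the fraction of spectra showing its peaks, and a binomial interval gives its precision. The sketch below uses invented hit counts, not the paper's measurements.

```python
from scipy import stats

# Raman point-count proportions with binomial 95% intervals
# (hit counts are illustrative, not from samples 14161,7062 or 15273,7039).
n_spectra = 100
hits = {"plagioclase": 46, "pyroxene": 41, "cristobalite": 7,
        "K-feldspar": 4, "whitlockite": 2}

for mineral, k in hits.items():
    p = k / n_spectra
    lo, hi = stats.binom.interval(0.95, n_spectra, p)
    print(f"{mineral}: {p:.2f} "
          f"(95% CI {lo / n_spectra:.2f}-{hi / n_spectra:.2f})")
```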
Aguzzi, Jacopo; Costa, Corrado; Robert, Katleen; Matabos, Marjolaine; Antonucci, Francesca; Juniper, S. Kim; Menesatti, Paolo
2011-01-01
The development and deployment of sensors for undersea cabled observatories is presently biased toward the measurement of habitat variables, while sensor technologies for biological community characterization through species identification and individual counting are less common. The VENUS cabled multisensory network (Vancouver Island, Canada) deploys seafloor camera systems at several sites. Our objective in this study was to implement new automated image analysis protocols for the recognition and counting of benthic decapods (i.e., the galatheid squat lobster, Munida quadrispina), as well as for the evaluation of changes in bacterial mat coverage (i.e., Beggiatoa spp.), using a camera deployed in Saanich Inlet (103 m depth). For the counting of Munida we remotely acquired 100 digital photos at hourly intervals from 2 to 6 December 2009. In the case of bacterial mat coverage estimation, images were taken from 2 to 8 December 2009 at the same time frequency. The automated image analysis protocols for both study cases were created in MatLab 7.1. Automation for Munida counting incorporated the combination of both filtering and background correction (Median- and Top-Hat Filters) with Euclidean Distances (ED) on Red-Green-Blue (RGB) channels. The Scale-Invariant Feature Transform (SIFT) features and Fourier Descriptors (FD) of tracked objects were then extracted. Animal classifications were carried out with the tools of morphometric multivariate statistics (i.e., Partial Least Square Discriminant Analysis; PLSDA) on Mean RGB (RGBv) value for each object and Fourier Descriptors (RGBv+FD) matrices plus SIFT and ED. The SIFT approach returned the best results: higher percentages of images were correctly classified and fewer misclassification errors (an animal is present but not detected) occurred. In contrast, RGBv+FD and ED resulted in a high incidence of records being generated for non-present animals. Bacterial mat coverage was estimated in terms of Percent Coverage and Fractal Dimension. A constant Region of Interest (ROI) was defined and background extraction by a Gaussian Blurring Filter was performed. Image subtraction within ROI was followed by the sum of the RGB channels matrices. Percent Coverage was calculated on the resulting image. Fractal Dimension was estimated using the box-counting method. The images were then resized to a dimension in pixels equal to a power of 2, allowing subdivision into sub-multiple quadrants. In comparisons of manual and automated Percent Coverage and Fractal Dimension estimates, the former showed an overestimation tendency for both parameters. The primary limitations on the automatic analysis of benthic images were habitat variations in sediment texture and water column turbidity. The application of filters for background corrections is a required preliminary step for the efficient recognition of animals and bacterial mat patches. PMID:22346657
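The box-counting estimate of Fractal Dimension mentioned above can be sketched directly: count occupied boxes at a series of power-of-two box sizes and fit the slope of log(count) against log(1/size). The binary mask below is synthetic, standing in for a segmented bacterial-mat image.

```python
import numpy as np

# Box-counting fractal dimension of a binary mask (synthetic stand-in).
rng = np.random.default_rng(5)
mask = rng.random((256, 256)) < 0.02          # stand-in for a mat mask

def box_count(mask, size):
    h, w = mask.shape
    s = mask[: h - h % size, : w - w % size]
    blocks = s.reshape(h // size, size, w // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

sizes = [2, 4, 8, 16, 32]                     # power-of-two box sizes
counts = [box_count(mask, s) for s in sizes]
slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
print(f"estimated fractal dimension: {slope:.2f}")
```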
In vivo cell tracking and quantification method in adult zebrafish
NASA Astrophysics Data System (ADS)
Zhang, Li; Alt, Clemens; Li, Pulin; White, Richard M.; Zon, Leonard I.; Wei, Xunbin; Lin, Charles P.
2012-03-01
Zebrafish have become a powerful vertebrate model organism for drug discovery, cancer and stem cell research. A recently developed transparent adult zebrafish, a double-pigmentation mutant called casper, provides unparalleled imaging power for in vivo longitudinal analysis of biological processes at an anatomic resolution not readily achievable in murine or other systems. In this paper we introduce an optical method for simultaneous visualization and cell quantification, which combines laser scanning confocal microscopy (LSCM) and in vivo flow cytometry (IVFC). The system is designed specifically for non-invasive tracking of both stationary and circulating cells in the adult zebrafish casper, under physiological conditions in the same fish over time. The confocal imaging part of this system serves the dual purposes of imaging fish tissue microstructure and acting as a 3D navigation tool to locate a suitable vessel for circulating cell counting. The multi-color, multi-channel instrument allows the detection of multiple cell populations or different tissues or organs simultaneously. We demonstrate initial testing of this novel instrument by imaging vasculature and tracking circulating cells in CD41:GFP/Gata1:DsRed transgenic casper fish whose thrombocytes/erythrocytes express the green and red fluorescent proteins. Circulating fluorescent cell incidents were recorded and counted repeatedly over time and in different types of vessels. Great application opportunities in cancer and stem cell research are discussed.
Arraycount, an algorithm for automatic cell counting in microwell arrays.
Kachouie, Nezamoddin; Kang, Lifeng; Khademhosseini, Ali
2009-09-01
Microscale technologies have emerged as a powerful tool for studying and manipulating biological systems and miniaturizing experiments. However, the lack of software complementing these techniques has made it difficult to apply them for many high-throughput experiments. This work establishes Arraycount, an approach to automatically count cells in microwell arrays. The procedure consists of fluorescent microscope imaging of cells that are seeded in microwells of a microarray system and then analyzing images via computer to recognize the array and count cells inside each microwell. To start counting, green and red fluorescent images (representing live and dead cells, respectively) are extracted from the original image and processed separately. A template-matching algorithm is proposed in which pre-defined well and cell templates are matched against the red and green images to locate microwells and cells. Subsequently, local maxima in the correlation maps are determined and local maxima maps are thresholded. At the end, the software records the cell counts for each detected microwell on the original image in high-throughput. The automated counting was shown to be accurate compared with manual counting, with a difference of approximately 1-2 cells per microwell: based on cell concentration, the absolute difference between manual and automatic counting measurements was 2.5-13%.
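A hedged sketch of the template-matching core of such a counter: correlate the image with a cell-sized template, then keep thresholded local maxima of the correlation map as detections. The image, template, and threshold below are invented, not Arraycount's actual parameters.

```python
import numpy as np
from scipy import ndimage

# Correlate a synthetic fluorescence image with a cell template, then
# count thresholded local maxima of the correlation map as cells.
rng = np.random.default_rng(6)
image = rng.normal(0, 0.05, (128, 128))
yx = rng.integers(8, 120, size=(30, 2))
for y, x in yx:                                   # paste 30 fake "cells"
    image[y - 2 : y + 3, x - 2 : x + 3] += 1.0

template = np.ones((5, 5))
corr = ndimage.correlate(image, template / template.sum())

local_max = (corr == ndimage.maximum_filter(corr, size=7)) & (corr > 0.5)
print("cells counted:", np.count_nonzero(local_max))   # ~30
```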
Cubison, M. J.; Jimenez, J. L.
2015-06-05
Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively for χ < 1.6 but lower ion counts (e.g. 10–100) the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
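The statistical simulation approach described above can be sketched as follows: generate two overlapping Gaussian peaks with Poisson counting noise at fixed, known positions, fit only the two intensities, and read the precision off the scatter of the fitted values. Peak widths, separation, and intensities below are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Monte Carlo precision of fitted intensities for two overlapping peaks
# with fixed positions (all values illustrative).
x = np.linspace(-5, 5, 200)
hwhm = 1.0
sigma = hwhm / np.sqrt(2 * np.log(2))
sep = 1.5 * hwhm                                   # peak separation chi

def two_peaks(x, a1, a2):
    return (a1 * np.exp(-0.5 * (x / sigma) ** 2)
            + a2 * np.exp(-0.5 * ((x - sep) / sigma) ** 2))

rng = np.random.default_rng(7)
fits = []
for _ in range(500):
    noisy = rng.poisson(two_peaks(x, 1000, 300))   # ion counting noise
    (a1, a2), _ = curve_fit(two_peaks, x, noisy, p0=(800, 200))
    fits.append(a1)
print("relative precision of peak 1:", np.std(fits) / np.mean(fits))
```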
NASA Astrophysics Data System (ADS)
Jiang, Shyh-Biau; Yeh, Tse-Liang; Chen, Li-Wu; Liu, Jann-Yenq; Yu, Ming-Hsuan; Huang, Yu-Qin; Chiang, Chen-Kiang; Chou, Chung-Jen
2018-05-01
In this study, we construct a photomultiplier calibration system. This calibration system helps scientists measure and establish the characteristic curve of photon count versus light intensity. The system uses an innovative 10-fold optical attenuator to enable an optical power meter to calibrate photomultiplier tubes whose resolution is much greater than that of the optical power meter. A simulation is first conducted to validate the feasibility of the system, and then the system construction, including optical design, circuit design, and software algorithm, is realized. The simulation generally agrees with measurement data of the constructed system, which are further used to establish the characteristic curve of photon count versus light intensity.
Reliable enumeration of malaria parasites in thick blood films using digital image analysis.
Frean, John A
2009-09-23
Quantitation of malaria parasite density is an important component of laboratory diagnosis of malaria. Microscopy of Giemsa-stained thick blood films is the conventional method for parasite enumeration. Accurate and reproducible parasite counts are difficult to achieve, because of inherent technical limitations and human inconsistency. Inaccurate parasite density estimation may have adverse clinical and therapeutic implications for patients, and for endpoints of clinical trials of anti-malarial vaccines or drugs. Digital image analysis provides an opportunity to improve performance of parasite density quantitation. Accurate manual parasite counts were done on 497 images of a range of thick blood films with varying densities of malaria parasites, to establish a uniformly reliable standard against which to assess the digital technique. By utilizing descriptive statistical parameters of parasite size frequency distributions, particle counting algorithms of the digital image analysis programme were semi-automatically adapted to variations in parasite size, shape and staining characteristics, to produce optimum signal/noise ratios. A reliable counting process was developed that requires no operator decisions that might bias the outcome. Digital counts were highly correlated with manual counts for medium to high parasite densities, and slightly less well correlated with conventional counts. At low densities (fewer than 6 parasites per analysed image) signal/noise ratios were compromised and correlation between digital and manual counts was poor. Conventional counts were consistently lower than both digital and manual counts. Using open-access software and avoiding custom programming or any special operator intervention, accurate digital counts were obtained, particularly at high parasite densities that are difficult to count conventionally. The technique is potentially useful for laboratories that routinely perform malaria parasite enumeration. The requirements of a digital microscope camera, personal computer and good quality staining of slides are potentially reasonably easy to meet.
Golwala, Zainab Mohammedi; Shah, Hardik; Gupta, Neeraj; Sreenivas, V; Puliyel, Jacob M
2016-06-01
Thrombocytopenia has been shown to predict mortality. We hypothesized that platelet indices may be more useful prognostic indicators. Our study subjects were children one month to 14 years old admitted to our hospital. The aim was to determine whether platelet count, plateletcrit (PCT), mean platelet volume (MPV), and platelet distribution width (PDW), and their ratios, can predict mortality in hospitalised children. Children who died during the hospital stay were the cases. Controls were age-matched children admitted contemporaneously. The first blood sample after admission was used for analysis. A receiver operating characteristic (ROC) curve was used to identify the best threshold for the measured variables and the ratios studied. Multiple regression analysis was done to identify independent predictors of mortality. Forty cases and forty controls were studied. Platelet count, PCT, and the ratios MPV/platelet count, MPV/PCT, PDW/platelet count, PDW/PCT, and (MPV × PDW)/(platelet count × PCT) were significantly different between children who survived and those who died. On multiple regression analysis, the ratios MPV/PCT, PDW/platelet count, and MPV/platelet count were risk factors for mortality, with odds ratios of 4.31 (95% CI, 1.69-10.99), 3.86 (95% CI, 1.53-9.75), and 3.45 (95% CI, 1.38-8.64), respectively. In 67% of the patients who died, the MPV/PCT ratio was above 41.8 and PDW/platelet count was above 3.86. In 65% of patients who died, MPV/platelet count was above 3.45. The ratios MPV/PCT, PDW/platelet count, and MPV/platelet count in the first sample after admission in this case-control study were predictors of mortality and could predict 65% to 67% of deaths accurately.
Sauer, J.R.
1999-01-01
The North American Breeding Bird Survey was started in 1966, and provides information on population change for >400 species of birds. It covers the continental United States, Canada, and Alaska, and is conducted once each year, in June, by volunteer observers. A 39.4 km roadside survey route is driven starting 30 min before sunrise, and a 3 min point count is conducted at each of 50 stops spaced every 0.8 km. Existing analyses of the data are internet-based (http://www.mbr-pwrc.usgs.gov/bbs/bbs.html), and include maps of relative abundance, estimates of population change including trends (%/yr), composite annual indices (pattern in time), and maps of population trend (pattern in space). At least 36 species of marsh birds are encountered on the BBS, and the survey provides estimates with greatly varying levels of efficiency for the species. It is often difficult to understand how well the BBS surveys a species. Often, efficiency is judged by estimating trend and its variance for a species, then by calculating power and needed samples to detect a prespecified trend over some time period (e.g., a 2%/yr trend over 31 yr). Unfortunately, this approach is not always valid, as estimated trends and variances can be of little use if the population is poorly sampled. Lurking concerns with BBS data include (1) incomplete coverage of species range; (2) undersampling of habitats; and (3) low and variable visibility of birds during point counts. It is difficult to evaluate these concerns, because known populations do not exist for comparison with counts, and detection rates are time-consuming and costly to estimate. I evaluated the efficiency of the BBS for selected rails (Rallidae) and snipes (Scolopacidae), presenting estimates of population trend over 1966-1996 (T), power to detect a 2%/yr trend over 31 yr, needed samples to achieve power of 0.75 with α = 0.1, number of survey routes with data for the species (N), average abundance on survey routes (RA), and maps of relative abundance. Examples include Yellow Rail (Coturnicops noveboracensis) (T=12%/yr; P=0.0085; N=28 routes; RA=0.05; Power=0.37; Needed samples=85), Black Rail (Laterallus jamaicensis) (no trend data or power information available, N=8), Clapper Rail (Rallus longirostris) (T=1.9%/yr; P=0.55; N=64; RA=0.31; Power=0.35; Needed samples=590), King Rail (Rallus elegans) (T=-4.2%/yr; P=0.03; N=76; Power=0.41; Needed samples=159), Sora (Porzana carolina) (T=0.98%/yr; P=0.24; N=720; RA=0.92; Power=0.69; Needed samples=377), and Common Snipe (Gallinago gallinago) (T=-0.24%/yr; P=0.54; N=1412; RA=2.19; Power=0.98; Needed samples=205). With regard to quality of BBS data, marsh birds fall into 3 categories: (1) almost never encountered on BBS routes; (2) encountered at extremely low abundances on BBS routes; and (3) probably fairly well sampled by BBS roadside counts. BBS data can provide useful information for many marsh bird species, but users should be aware of the limitations of the BBS sample for monitoring species that have low visibility from point counts and prefer habitats not often encountered on roadsides.
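The power calculations quoted above can be approximated by simulation: generate 31 years of route totals with a -2%/yr trend, fit a log-linear trend, and count how often it is detected at α = 0.1. The route number and abundance below are illustrative, not the BBS values.

```python
import numpy as np
from scipy import stats

# Monte Carlo power to detect a -2%/yr trend over 31 yr of Poisson counts
# (route count and mean abundance are illustrative assumptions).
rng = np.random.default_rng(8)
years = np.arange(31)
n_routes, mean_abund, trend = 80, 0.9, -0.02

detected = 0
for _ in range(1000):
    lam = n_routes * mean_abund * (1 + trend) ** years
    totals = rng.poisson(lam)
    slope, _, _, p, _ = stats.linregress(years, np.log(totals + 0.5))
    detected += (p < 0.1) and (slope < 0)
print("power:", detected / 1000)
```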
31. EXTERIOR VIEW LOOKING INTO THE SIXTH TAILRACE (COUNTING FROM ...
31. EXTERIOR VIEW LOOKING INTO THE SIXTH TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). THIS AREA IS THE PORTION OF THE PULP MILL THAT WAS NEVER REBUILT AFTER A DEVASTATING FIRE IN 1925 AND SUBSEQUENT END TO PULP PRODUCTION AT THIS SITE. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
33. EXTERIOR VIEW LOOKING INTO THE FIFTH TAILRACE (COUNTING FROM ...
33. EXTERIOR VIEW LOOKING INTO THE FIFTH TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END SOUTHEAST TO NORTHWEST). THIS AREA IS THE PORTION OF THE PULP MILL THAT WAS NEVER REBUILT AFTER A DEVASTATING FIRE IN 1925 AND SUBSEQUENT END TO PULP PRODUCTION AT THIS SITE. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
Accurate live and dead bacterial cell enumeration using flow cytometry (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ou, Fang; McGoverin, Cushla; Swift, Simon; Vanholsbeeck, Frédérique
2017-03-01
Flow cytometry (FCM) is based on the detection of scattered light and fluorescence to identify cells with particular characteristics of interest. However, most flow cytometers cannot precisely control the flow through the interrogation point, and hence the volume and concentration of the sample cannot be immediately obtained. The easiest, most reliable and inexpensive way of obtaining absolute counts with FCM is by using reference beads. We investigated a method of using FCM with reference beads to measure live and dead bacterial concentration over the range of 10^6 to 10^8 cells/mL, with the live:dead ratio varying from 0 to 100%. We believe we are the first to use this method for such a large cell concentration range while also establishing the effect of varying the live/dead bacteria ratios. Escherichia coli solutions with differing ratios of live:dead cells were stained with the fluorescent dyes SYTO 9 and propidium iodide (PI), which label live and dead cells, respectively. Samples were measured using a LSR II Flow Cytometer (BD Biosciences), using 488 nm excitation with 20 mW power. Both SYTO 9 and PI fluorescence were collected, and the threshold was set on side scatter. A traditional culture-based plate count was done in parallel to the FCM analysis. The concentration of live bacteria from FCM was compared to that obtained by plate counts. Preliminary results show that the concentrations of live bacteria obtained by FCM and plate counts correlate well with each other, and this indicates the method may be extended to a wider concentration range or to studying other cell characteristics.
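The reference-bead arithmetic is simple enough to state as code: because beads and cells are acquired from the same analysed volume, the cell concentration follows from the event ratio times the known bead concentration. All numbers below are invented.

```python
# Bead-based absolute counting (all values illustrative).
bead_stock_conc = 1.0e5      # beads/mL added to the tube (assumed known)
events_beads = 2_000         # bead events acquired
events_live = 8_400          # SYTO 9-positive (live) cell events
events_dead = 3_600          # PI-positive (dead) cell events

live_conc = events_live / events_beads * bead_stock_conc
dead_conc = events_dead / events_beads * bead_stock_conc
print(f"live: {live_conc:.2e} cells/mL, "
      f"dead fraction: {events_dead / (events_live + events_dead):.0%}")
```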
Sohn, Won; Lee, Oh Young; Lee, Sang Pyo; Lee, Kang Nyeong; Jun, Dae Won; Lee, Hang Lak; Yoon, Byung Chul; Choi, Ho Soon; Sim, Jongmin; Jang, Ki-Seok
2014-01-01
Recent studies have shown that mast cells play an important role in irritable bowel syndrome (IBS). We investigated the relationship between mast cells and the gut hormones substance P and vasoactive intestinal peptide (VIP) in irritable bowel syndrome with diarrhea (IBS-D). Colonoscopic biopsies were performed on the rectal mucosa of 43 subjects (IBS-D patients: 22, healthy volunteers: 21) diagnosed according to the Rome III criteria. Mast cells and substance P and VIP were evaluated by quantitative immunohistology and image analysis. Mast cells were counted as tryptase-positive cells in the lamina propria, and substance P and VIP levels were expressed as percentages of total areas of staining. Mast cell counts were higher in IBS-D patients than healthy volunteers (9.6 ± 3.3 vs. 5.7 ± 2.5 per high power field (HPF), p < 0.01). Substance P was also elevated (0.11 ± 0.08% vs. 0.03 ± 0.02%, p < 0.01), while VIP was only high in women with IBS-D. Mast cell counts were positively correlated with levels of substance P and VIP in women but not men (women: r = 0.625, p < 0.01 for substance P and r = 0.651, p < 0.01 for VIP). However, mast cell counts were not correlated with IBS symptoms, including abdominal pain. Mast cells are activated, leading to raised levels of substance P and VIP in IBS-D patients. However, the correlation between mast cells and levels of substance P and VIP differs according to gender.
Molenaar, Heike; Glawe, Martin; Boehm, Robert; Piepho, Hans-Peter
2017-01-01
Ornamental plant variety improvement is limited by current phenotyping approaches and neglected use of experimental designs. The present study was conducted to show the benefits of using an experimental design and corresponding analysis in ornamental breeding regarding simulated response to selection in Pelargonium zonale for production-related traits. This required establishment of phenotyping protocols for root formation and stem cutting counts, with which 974 genotypes were assessed in a two-phase experimental design. The present paper evaluates this protocol. The possibility of varietal improvement through indirect selection on secondary traits such as branch count and flower count was assessed by genetic correlations. Simulated response to selection varied greatly, depending on the genotypic variances of the breeding population and traits. A varietal improvement of over 20% is possible for stem cutting count, root formation, branch count and flower count. In contrast, indirect selection of stem cutting count by branch count or flower count was found to be ineffective. The established phenotypic protocols and two-phase experimental designs are valuable tools for breeding of P. zonale. PMID:28243453
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven
2016-01-01
The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from Boundary-Layer Ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan
2016-01-01
This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown and any improvements between subsequent BLI designs presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to increase the benefit from boundary-layer ingestion further, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
Optimizing the duration of point counts for monitoring trends in bird populations
Jared Verner
1988-01-01
Minute-by-minute analysis of point counts of birds in mixed-conifer forests in the Sierra National Forest, central California, showed that cumulative counts of species and individuals increased in a curvilinear fashion but did not reach asymptotes after 10 minutes of counting. Comparison of the expected number of individuals counted per hour with various combinations...
Adaptive Search through Constraint Violations
1990-01-01
underly counting (Gelman & Gallistel, 1978; Gelman & Meck, 1986). Modifying slightly the analysis by Gelman and Gallistel (1978), we identify three ... understanding of counting, Gelman and Gallistel (1978) invented two non-standard counting tasks, ordered counting, in which the objects are counted in ... (Gelman & Gallistel, 1978; Gelman & Meck, 1986). The most plausible explanation for this flexibility is that children can derive the control knowledge for the ...
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source was present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source data are averaged with the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
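A hedged sketch of the time-interval idea (not the authors' R implementation): with a known background rate and a postulated source rate, each inter-pulse interval multiplies the posterior odds that a source is present, so a decision can be reached pulse by pulse rather than after a fixed count time. Rates, prior, and threshold below are illustrative.

```python
import numpy as np

# Sequential Bayesian test on exponential inter-pulse intervals.
rng = np.random.default_rng(9)
b, s = 5.0, 10.0                                # background, source rates (cps)
intervals = rng.exponential(1 / (b + s), 50)    # data: source actually present

log_odds = 0.0                                  # uniform prior (odds 1:1)
for n, dt in enumerate(intervals, start=1):
    # log likelihood ratio of the two exponential interval densities
    log_odds += np.log((b + s) / b) - s * dt
    if log_odds > np.log(99):                   # ~99% posterior probability
        print(f"source declared present after {n} pulses")
        break
```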
Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C
2018-05-01
Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs and particle count by a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test whether image attributes, features captured within the image itself, can provide measurable guidelines for assessing the accuracy of particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area of 4 pixels² results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
NO TIME FOR DEAD TIME: TIMING ANALYSIS OF BRIGHT BLACK HOLE BINARIES WITH NuSTAR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bachetti, Matteo; Barret, Didier; Harrison, Fiona A.
Timing of high-count-rate sources with the NuSTAR Small Explorer Mission requires specialized analysis techniques. NuSTAR was primarily designed for spectroscopic observations of sources with relatively low count rates rather than for timing analysis of bright objects. The instrumental dead time per event is relatively long (∼2.5 msec) and varies event-to-event by a few percent. The most obvious effect is a distortion of the white noise level in the power density spectrum (PDS) that cannot be easily modeled with standard techniques due to the variable nature of the dead time. In this paper, we show that it is possible to exploit the presence of two completely independent focal planes and use the cospectrum, the real part of the cross PDS, to obtain a good proxy of the white-noise-subtracted PDS. Thereafter, one can use a Monte Carlo approach to estimate the remaining effects of dead time, namely, a frequency-dependent modulation of the variance and a frequency-independent drop of the sensitivity to variability. In this way, most of the standard timing analysis can be performed, albeit with a sacrifice in signal-to-noise ratio relative to what would be achieved using more standard techniques. We apply this technique to NuSTAR observations of the black hole binaries GX 339–4, Cyg X-1, and GRS 1915+105.
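A sketch of the cospectrum idea: the real part of the cross power spectrum of two simultaneous, independent light curves averages away uncorrelated noise, including dead-time-distorted white noise. The Leahy-like normalization and the synthetic test data are simplifying assumptions, not the paper's pipeline.

```python
import numpy as np

def cospectrum(lc_a: np.ndarray, lc_b: np.ndarray, dt: float):
    """Cospectrum (real part of the cross PDS) of two light curves."""
    n = len(lc_a)
    fa, fb = np.fft.rfft(lc_a), np.fft.rfft(lc_b)
    freqs = np.fft.rfftfreq(n, d=dt)
    n_phot = lc_a.sum() + lc_b.sum()
    cospec = 4.0 * (fa * np.conj(fb)).real / n_phot   # assumed normalization
    return freqs[1:], cospec[1:]                      # drop the zero frequency

# Example: two Poisson light curves sharing a common 10 Hz sinusoidal signal.
rng = np.random.default_rng(0)
t = np.arange(2**16) * 0.001
rate = 50.0 * (1.0 + 0.1 * np.sin(2 * np.pi * 10.0 * t)) * 0.001
lc1, lc2 = rng.poisson(rate), rng.poisson(rate)
freqs, cs = cospectrum(lc1, lc2, dt=0.001)
```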
Advanced electrical power, distribution and control for the Space Transportation System
NASA Astrophysics Data System (ADS)
Hansen, Irving G.; Brandhorst, Henry W., Jr.
1990-08-01
High-frequency power distribution and management is at a technology-ready state of development. Such a system employs the fewest power conversion steps and uses zero-current switching for those steps. It yields the highest efficiency and the lowest total system parts count when equivalent systems are compared. The operating voltage and frequency are application-specific trade-off parameters; however, a 20 kHz system is suitable for a wide range of systems.
Liu, Xiao; Demosthenous, Andreas; Vanhoestenberghe, Anne; Jiang, Dai; Donaldson, Nick
2012-06-01
This paper presents an integrated stimulator that can be embedded in implantable electrode books for interfacing with nerve roots at the cauda equina. The Active Book overcomes the limitation of conventional nerve root stimulators, which can only support a small number of stimulating electrodes due to cable count restrictions through the dura. Instead, a distributed stimulation system with many tripole electrodes can be configured using several Active Books which are addressed sequentially. The stimulator was fabricated in a 0.6-μm high-voltage CMOS process and occupies a silicon area of 4.2 × 6.5 mm². The circuit was designed to deliver up to 8 mA of stimulus current to tripole electrodes from an 18 V power supply. The input pad count is limited to five (two power and three control lines), hence requiring a specific procedure for downloading stimulation commands to the chip and extracting information from it. Supported commands include adjusting the amplitude of the stimulus current, varying the current ratio at the two anodes in each channel, and measuring relative humidity inside the chip package. In addition to stimulation mode, the chip supports a quiescent mode, dissipating less than 100 nA from the power supply. The performance of the stimulator chip was verified with bench tests, including measurements using tripoles in saline.
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
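A minimal sketch of the kind of evaluation described, not the authors' exact statistical analysis: precision is summarized as the coefficient of variation of replicate counts at each dilution, and proportionality as a least-squares fit through the origin of counts versus dilution fraction. The replicate data and the R² interpretation are illustrative assumptions.

```python
import numpy as np

dilution = np.repeat([1.0, 0.5, 0.25, 0.125], 4)          # 4 replicates each
counts = np.array([98, 102, 95, 101, 52, 49, 50, 47,
                   26, 24, 27, 25, 13, 12, 14, 12], float)

# Precision: coefficient of variation per dilution level.
for d in np.unique(dilution):
    c = counts[dilution == d]
    print(f"dilution {d:5.3f}: CV = {100 * c.std(ddof=1) / c.mean():.1f}%")

# Proportionality: least-squares slope through the origin and R^2.
slope = (dilution * counts).sum() / (dilution * dilution).sum()
resid = counts - slope * dilution
r2 = 1.0 - (resid**2).sum() / ((counts - counts.mean())**2).sum()
print(f"slope = {slope:.1f} counts per unit dilution, R^2 = {r2:.3f}")
```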
NASA Astrophysics Data System (ADS)
Teymoori, Gholamhasan; Pahari, Bholanath; Viswanathan, Elumalai; Edén, Mattias
2017-03-01
The authors regret that inappropriate NMR data processing, not known to all authors at the time of publication, was used to produce the multiple-quantum coherence (MQC) spin counting data presented in our article: this led to artificially enhanced results, particularly concerning those obtained at long MQC excitation intervals (τexc). Here we reproduce Figs. 4-7 with correctly processed data.
Test Operations Procedure (TOP) 01-2-603 Rotorcraft Laboratory Vibration Test Schedules
2017-06-12
for all rotary wing aircraft platforms. Tonal amplitudes are tabulated based solely on engine revolutions per minute (RPM) and blade count. (4...Power Spectral Density (PSD) format with superimposed sinusoidal components that are associated with the rotor speeds and blade count of each...harmonics are not limited to the 3rd harmonic of the blade passage as in MIL-STD-810. In addition, attempts were
Comparative studies of silicon photomultipliers and traditional vacuum photomultiplier tubes
NASA Astrophysics Data System (ADS)
Shi, Feng; Lü, Jun-Guang; Lu, Hong; Wang, Huan-Yu; Ma, Yu-Qian; Hu, Tao; Zhou, Li; Cai, Xiao; Sun, Li-Jun; Yu, Bo-Xiang; Fang, Jian; Xie, Yu-Guang; An, Zheng-Hua; Wang, Zhi-Gang; Gao, Min; Li, Xin-Qiao; Xu, Yan-Bing; Wang, Ping; Sun, Xi-Lei; Zhang, Ai-Wu; Xue, Zhen; Liu, Hong-Bang; Wang, Xiao-Dong; Zhao, Xiao-Yun; Zheng, Yang-Heng; Meng, Xiang-Cheng; Wang, Hui
2011-01-01
Silicon photomultipliers (SiPMs) are a new generation of semiconductor-based photon counting devices with the merits of low weight, low power consumption and low voltage operation, promising to meet the needs of space particle physics experiments. In this paper, comparative studies of SiPMs and traditional vacuum photomultiplier tubes (PMTs) have been performed regarding the basic properties of dark currents, dark counts and excess noise factors. The intrinsic optical crosstalk effect of SiPMs was evaluated.
Fractal analysis of scatter imaging signatures to distinguish breast pathologies
NASA Astrophysics Data System (ADS)
Eguizabal, Alma; Laughney, Ashley M.; Krishnaswamy, Venkataramanan; Wells, Wendy A.; Paulsen, Keith D.; Pogue, Brian W.; López-Higuera, José M.; Conde, Olga M.
2013-02-01
Fractal analysis combined with a label-free scattering technique is proposed for describing the pathological architecture of tumors. Clinicians and pathologists are conventionally trained to classify abnormal features such as structural irregularities or high indices of mitosis. The potential of fractal analysis lies in its being a morphometric measure of irregular structures, providing a measure of an object's complexity and self-similarity. As cancer is characterized by disorder and irregularity in tissues, this measure could be related to tumor growth. Fractal analysis has previously been applied to the understanding of the tumor vasculature network. This work addresses the feasibility of applying fractal analysis to the scattering power map (as a physical model) and principal components (as a statistical model) provided by a localized reflectance spectroscopic system. Disorder, irregularity, and cell size variation in tissue samples are translated into the scattering power and principal component magnitudes, and their fractal dimension is correlated with the pathologist's assessment of the samples. The fractal dimension is computed using the box-counting technique. Results show that fractal analysis of ex-vivo fresh tissue samples exhibits separated ranges of fractal dimension, which could help a classifier that combines the fractal results with other morphological features. This contrast trend would help in the discrimination of tissues in the intraoperative context and may serve as a useful adjunct for surgeons.
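The box-counting technique named above is straightforward to sketch: cover a binary map with boxes of decreasing size s, count the occupied boxes N(s), and take the slope of log N(s) versus log(1/s). The synthetic test image below is an assumption for illustration, not data from the study.

```python
import numpy as np

def box_count_dimension(img: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s)."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes containing structure
        counts.append(occupied)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should come out near dimension 2.
img = np.zeros((128, 128), bool)
img[32:96, 32:96] = True
print(box_count_dimension(img))
```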
Current developments in electrochemical storage systems for satellites
NASA Technical Reports Server (NTRS)
Gutmann, G.
1986-01-01
The need for batteries with greater power capacity and service life for satellite power systems is examined. The Ni/Cd and Ni/H batteries now being used must be upgraded to meet advanced space requirements. Improvements in power capacity, service life, and cycle count for various satellites in LEO and GEO orbits are discussed. The Ni/Cd and Ni/H cell reactions are explained, and the solubility and volume changes for various charged and uncharged masses are described. A chart of the energy content and cycle count for various cell systems is presented, and the factors that cause aging and failure in the Ni/Cd and Ni/H cell systems are discussed. The advantages of the Ni/H battery are given, and the need for further-developed electrochemical storage systems arising from the increasing mass of satellites is explained. The requirements for space batteries and the work currently being done by NASA and West Germany on advanced batteries are discussed.
Sasagawa, Yohei; Danno, Hiroki; Takada, Hitomi; Ebisawa, Masashi; Tanaka, Kaori; Hayashi, Tetsutaro; Kurisaki, Akira; Nikaido, Itoshi
2018-03-09
High-throughput single-cell RNA-seq methods assign only limited unique molecular identifier (UMI) counts as gene expression values to single cells from shallow sequence reads and detect only limited gene counts. We thus developed a high-throughput single-cell RNA-seq method, Quartz-Seq2, to overcome these issues. Our improvements in the reaction steps make it possible to effectively convert initial reads to UMI counts, at a rate of 30-50%, and to detect more genes. To demonstrate the power of Quartz-Seq2, we analyzed approximately 10,000 transcriptomes from in vitro embryonic stem cells and an in vivo stromal vascular fraction, with a limited number of reads.
White Blood Cells, Neutrophils, and Reactive Oxygen Metabolites among Asymptomatic Subjects.
Kotani, Kazuhiko; Sakane, Naoki
2012-06-01
Chronic inflammation and oxidative stress are associated with health and disease status. The objective of the present study was to investigate the association among white blood cell (WBC) counts, neutrophil counts as a WBC subpopulation, and diacron reactive oxygen metabolite (d-ROMs) levels in an asymptomatic population. The clinical data, including general cardiovascular risk variables and high-sensitivity C-reactive protein (hs-CRP), were collected from 100 female subjects (mean age, 62 years) in outpatient clinics. The correlation of the d-ROMs with hs-CRP, WBC, and neutrophil counts was examined. The mean/median levels were WBC counts 5.9 × 10⁹/L, neutrophil counts 3.6 × 10⁹/L, hs-CRP 0.06 mg/dL, and d-ROMs 359 CARR U. A simple correlation analysis showed a significant positive correlation of the d-ROMs with the WBC counts, neutrophil counts, and hs-CRP levels. The correlation between d-ROMs and neutrophil counts (β = 0.22, P < 0.05), as well as that between d-ROMs and hs-CRP (β = 0.28, P < 0.01), remained significant and independent in a multiple linear regression analysis adjusted for other variables. The multiple linear regression analysis showed only a tendency toward a positive correlation between WBC counts and the d-ROMs. Neutrophils may thus be slightly more involved than the overall WBC population in the oxidative stress status, as assessed by d-ROMs. Further studies are needed to clarify the biologic mechanism(s) of the observed relationship.
Windsperger, Karin; Lehner, Rainer
2013-02-01
The aim of this study was to determine whether the fibrinogen/C-reactive protein (CRP) ratio could be used in obstetrics as a predictor of disseminated intravascular coagulation. One hundred eleven patients with hemolysis, elevated liver enzymes, and low platelet count syndrome at the Department of Obstetrics and Fetomaternal Medicine (General Hospital, Vienna, Austria) were selected and divided into 2 groups (overt disseminated intravascular coagulation, no overt disseminated intravascular coagulation). The classical parameters and the fibrinogen/CRP ratio were compared. The analysis was carried out using the IBM SPSS statistical package (SPSS, Inc, Cary, NC). The fibrinogen/CRP ratio showed significant differences. Receiver-operating characteristic analysis showed significantly better discriminative power for the ratio (area under the curve, 0.74) than for fibrinogen (area under the curve, 0.59). The odds ratio for the fibrinogen/CRP ratio was 7.04. Finally, significant correlations between the ratio and the neonatal outcome were found. We suggest the implementation of the fibrinogen/CRP ratio in patients with hemolysis, elevated liver enzymes, and low platelet count syndrome as a diagnostic and prognostic factor for the occurrence of disseminated intravascular coagulation. Copyright © 2013 Mosby, Inc. All rights reserved.
Gene coexpression measures in large heterogeneous samples using count statistics.
Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan
2014-11-18
With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
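An illustrative statistic in the spirit of the paper, not the authors' exact count statistics: slide a window over two expression profiles and count the windows in which the local rank orderings agree exactly. The window length and simulated profiles are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def local_rank_match_count(x: np.ndarray, y: np.ndarray, k: int = 3) -> int:
    """Count length-k windows where x and y share the same rank pattern."""
    matches = 0
    for i in range(len(x) - k + 1):
        rx = rankdata(x[i:i + k])
        ry = rankdata(y[i:i + k])
        matches += int(np.array_equal(rx, ry))
    return matches

rng = np.random.default_rng(2)
x = rng.normal(size=30)
y = x + rng.normal(scale=0.3, size=30)   # locally coupled profile
z = rng.normal(size=30)                  # unrelated profile
print(local_rank_match_count(x, y), local_rank_match_count(x, z))
```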
An Exploratory Analysis of Waterfront Force Protection Measures Using Simulation
2002-03-01
APPENDIX B. DESIGN POINT DATA. Tables 16, 17, and 18 present data for Design Points One, Two, and Three, including breach counts, leaker counts, and mean numbers of available patrol boats (the tabular values are not legible in this excerpt).
Sample to answer visualization pipeline for low-cost point-of-care blood cell counting
NASA Astrophysics Data System (ADS)
Smith, Suzanne; Naidoo, Thegaran; Davies, Emlyn; Fourie, Louis; Nxumalo, Zandile; Swart, Hein; Marais, Philip; Land, Kevin; Roux, Pieter
2015-03-01
We present a visualization pipeline from sample to answer for point-of-care blood cell counting applications. Effective and low-cost point-of-care medical diagnostic tests provide developing countries and rural communities with accessible healthcare solutions [1], and can be particularly beneficial for blood cell count tests, which are often the starting point in the process of diagnosing a patient [2]. The initial focus of this work is on total white and red blood cell counts, using a microfluidic cartridge [3] for sample processing. Analysis of the processed samples has been implemented by means of two main optical visualization systems developed in-house: 1) a fluidic operation analysis system using high speed video data to determine volumes, mixing efficiency and flow rates, and 2) a microscopy analysis system to investigate homogeneity and concentration of blood cells. Fluidic parameters were derived from the optical flow [4] as well as color-based segmentation of the different fluids using a hue-saturation-value (HSV) color space. Cell count estimates were obtained using automated microscopy analysis and were compared to a widely accepted manual method for cell counting using a hemocytometer [5]. The results using the first iteration microfluidic device [3] showed that the most simple - and thus low-cost - approach for microfluidic component implementation was not adequate as compared to techniques based on manual cell counting principles. An improved microfluidic design has been developed to incorporate enhanced mixing and metering components, which together with this work provides the foundation on which to successfully implement automated, rapid and low-cost blood cell counting tests.
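A minimal sketch of the HSV color-based fluid segmentation mentioned for the fluidic analysis system; the hue thresholds and the synthetic two-fluid frame are illustrative assumptions, and matplotlib's color conversion stands in for whatever in-house tooling was used.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def fluid_fraction(frame_rgb: np.ndarray, hue_lo: float, hue_hi: float,
                   sat_min: float = 0.3) -> float:
    """Fraction of pixels whose hue falls in [hue_lo, hue_hi] (hue in 0..1)."""
    hsv = rgb_to_hsv(frame_rgb.astype(float) / 255.0)
    mask = ((hsv[..., 0] >= hue_lo) & (hsv[..., 0] <= hue_hi)
            & (hsv[..., 1] >= sat_min))
    return mask.mean()

# Synthetic frame: left half "red" reagent, right half "blue" diluent.
frame = np.zeros((100, 200, 3), np.uint8)
frame[:, :100] = (200, 30, 30)
frame[:, 100:] = (30, 30, 200)
print(fluid_fraction(frame, 0.55, 0.75))   # ~0.5 of the frame is blue
```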
Vajawat, Mayuri; Deepika, P. C.; Kumar, Vijay; Rajeshwari, P.
2015-01-01
Aim: To compare the efficacy of powered toothbrushes with manual toothbrushes in improving gingival health and reducing salivary red complex counts among autistic individuals. Materials and Methods: Forty autistic individuals were selected. The test group received powered toothbrushes, and the control group received manual toothbrushes. Plaque index and gingival index were recorded. Unstimulated saliva was collected for analysis of red complex organisms using polymerase chain reaction. Results: A statistically significant reduction in plaque scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.002 for controls). This reduction was more significant in the test group (P = 0.024). A statistically significant reduction in gingival scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.001 for controls). This reduction was more significant in the test group (P = 0.042). No statistically significant reduction in the detection rate of red complex organisms was seen at 4 weeks in either group. Conclusion: Powered toothbrushes result in a significant overall improvement in gingival health when constant reinforcement of oral hygiene instructions is given. PMID:26681855
Graph theory applied to noise and vibration control in statistical energy analysis models.
Guasch, Oriol; Cortés, Lluís
2009-06-01
A fundamental aspect of noise and vibration control in statistical energy analysis (SEA) models consists in first identifying and then reducing the energy flow paths between subsystems. In this work, it is proposed to make use of some results from graph theory to address both issues. On the one hand, linear and path algebras applied to adjacency matrices of SEA graphs are used to determine the existence of any order paths between subsystems, counting and labeling them, finding extremal paths, or determining the power flow contributions from groups of paths. On the other hand, a strategy is presented that makes use of graph cut algorithms to reduce the energy flow from a source subsystem to a receiver one, modifying as few internal and coupling loss factors as possible.
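The path-algebra idea above has a compact matrix form: for an SEA graph with adjacency matrix A, entry (i, j) of A**k counts the distinct k-step energy flow paths from subsystem i to subsystem j. The 4-subsystem graph below is made up for illustration.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],     # subsystem 1 feeds 2 and 3
              [0, 0, 0, 1],     # 2 feeds 4
              [0, 1, 0, 1],     # 3 feeds 2 and 4
              [0, 0, 0, 0]])    # 4 is the receiver

source, receiver = 0, 3
for k in range(1, 4):
    n_paths = np.linalg.matrix_power(A, k)[source, receiver]
    print(f"{k}-step paths from subsystem 1 to 4: {n_paths}")
# 1-step: 0; 2-step: 2 (1-2-4 and 1-3-4); 3-step: 1 (1-3-2-4)
```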
1983-08-01
LOOK DOWN F-4J/AWG-10 • ADDED PULSE DOPPLER • GOOD HEAD-ON PERFORMANCE • POOR TAIL PERFORMANCE F-14/AWG-9 • ADDED TWS • HIGHER POWER • INCREASED...returned to NAS Lemoore for I-level repair; retested good. F/A-18 YUMA DEPLOYMENT • MOST RECENT OF MANY NAVY DEPLOYMENTS...THERMAL ANALYSIS AND DESIGN Following the design process to minimize the parts count, and a selection/screening process to obtain good quality
Amano, Hikaru; Sakamoto, Hideaki; Shiga, Norikatsu; Suzuki, Kaori
2016-06-01
A screening method for measuring (90)Sr in edible plant samples by focusing on (90)Y in equilibrium with (90)Sr is reported. (90)Y was extracted from samples with acid, co-precipitated with iron hydroxide, and precipitated with oxalic acid. The dissolved oxalate precipitate was loaded on an extraction chromatography resin, and the (90)Y-enriched eluate was analyzed by Cherenkov counting with a TDCR liquid scintillation counter. (90)Sr ((90)Y) concentration was determined in plant samples collected near the damaged Fukushima Daiichi Nuclear Power Plants with this method. Copyright © 2016 Elsevier Ltd. All rights reserved.
Taylor, Sean; Landman, Michael J; Ling, Nicholas
2009-09-01
Enumeration of invertebrate hemocytes is a potentially powerful tool for the determination of physiological effects of extrinsic stressors, such as hypoxia, disease, and toxicant exposure. A detailed flow cytometric method of broad application was developed for the objective characterization and enumeration of the hemocytes of New Zealand freshwater crayfish Paranephrops planifrons for the purpose of physiological health assessment. Hemocyte populations were isolated by flow cytometric sorting based on differential light scatter properties followed by morphological characterization via light microscopy and software image analysis. Cells were identified as hyaline, semigranular, and granular hemocytes based on established invertebrate hemocyte classification. A characteristic decrease in nuclear size, an increase in granularity between the hyaline and granular cells, and the eccentric location of nuclei in granular cells were also observed. The granulocyte subpopulations were observed to possess varying degrees of granularity. The developed methodology was used to perform total and differential hemocyte counts from three lake populations and between wild and captive crayfish specimens. Differences in total and differential hemocyte counts were not observed among the wild populations. However, specimens held in captivity for 14 d exhibited a significant 63% reduction in total hemocyte count, whereas the relative hemocyte proportions remained the same. These results demonstrate the utility of this method for the investigation of subacute stressor effects in selected decapod crustaceans.
Neelon, Brian; O'Malley, A James; Smith, Valerie A
2016-11-30
This article is the second installment of a two-part tutorial on the analysis of zero-modified count and semicontinuous data. Part 1, which appears as a companion piece in this issue of Statistics in Medicine, provides a general background and overview of the topic, with particular emphasis on applications to health services research. Here, we present three case studies highlighting various approaches for the analysis of zero-modified data. The first case study describes methods for analyzing zero-inflated longitudinal count data. Case study 2 considers the use of hurdle models for the analysis of spatiotemporal count data. The third case study discusses an application of marginalized two-part models to the analysis of semicontinuous health expenditure data. Copyright © 2016 John Wiley & Sons, Ltd.
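A minimal zero-inflated Poisson sketch in the spirit of the tutorial's case studies, using statsmodels with simulated data and an intercept-only inflation model; it is not the authors' analysis and the simulation parameters are assumptions.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
lam = np.exp(0.5 + 0.8 * x)                    # Poisson mean
is_structural_zero = rng.random(n) < 0.3       # 30% excess zeros
y = np.where(is_structural_zero, 0, rng.poisson(lam))

X = sm.add_constant(x)
fit = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                          inflation='logit').fit(disp=False)
print(fit.summary())
```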
Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry
NASA Technical Reports Server (NTRS)
Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul
2003-01-01
Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reducing instrument size, mass, and power, and reducing laser complexity as compared to analog or threshold detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.
Long, Imogen; Malone, Stephanie A; Tolan, Anne; Burgoyne, Kelly; Heron-Delaney, Michelle; Witteveen, Kate; Hulme, Charles
2016-12-01
Following on from ideas developed by Gerstmann, a body of work has suggested that impairments in finger gnosis may be causally related to children's difficulties in learning arithmetic. We report a study with a large sample of typically developing children (N=197) in which we assessed finger gnosis and arithmetic along with a range of other relevant cognitive predictors of arithmetic skills (vocabulary, counting, and symbolic and nonsymbolic magnitude judgments). Contrary to some earlier claims, we found no meaningful association between finger gnosis and arithmetic skills. Counting and symbolic magnitude comparison were, however, powerful predictors of arithmetic skills, replicating a number of earlier findings. Our findings seriously question theories that posit either a simple association or a causal connection between finger gnosis and the development of arithmetic skills. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Waveguide integrated low noise NbTiN nanowire single-photon detectors with milli-Hz dark count rate
Schuck, Carsten; Pernice, Wolfram H. P.; Tang, Hong X.
2013-01-01
Superconducting nanowire single-photon detectors are an ideal match for integrated quantum photonic circuits due to their high detection efficiency for telecom wavelength photons. Quantum optical technology also requires single-photon detection with low dark count rate and high timing accuracy. Here we present very low noise superconducting nanowire single-photon detectors based on NbTiN thin films patterned directly on top of Si3N4 waveguides. We systematically investigate a large variety of detector designs and characterize their detection noise performance. Milli-Hz dark count rates are demonstrated over the entire operating range of the nanowire detectors, which also feature low timing jitter. The ultra-low dark count rate, in combination with the high detection efficiency inherent to our travelling wave detector geometry, gives rise to a measured noise equivalent power at the 10⁻²⁰ W/Hz^(1/2) level. PMID:23714696
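A back-of-envelope check of the quoted noise equivalent power, using the expression commonly used for single-photon detectors, NEP = (hν/η)·√(2·DCR); the efficiency and dark count rate below are assumed values, not the paper's measurements.

```python
h = 6.626e-34          # Planck constant, J s
c = 2.998e8            # speed of light, m/s
wavelength = 1550e-9   # telecom photon
eta = 0.7              # detection efficiency (assumed)
dcr = 1e-3             # milli-Hz dark count rate (assumed)

nep = (h * c / wavelength / eta) * (2 * dcr) ** 0.5
print(f"NEP ~ {nep:.1e} W/Hz^0.5")   # ~8e-21, i.e. the 1e-20 level quoted
```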
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akiba, M., E-mail: akiba@nict.go.jp; Tsujino, K.
This paper offers a theoretical explanation of the temperature and temporal dependencies of transient dark count rates (DCRs) measured for a linear-mode silicon avalanche photodiode (APD) and the dependencies of afterpulsing that were measured in Geiger-mode Si and InGaAs/InP APDs. The temporal dependencies exhibit power-law behavior, at least to some extent. For the transient DCR, the value of the DCR for a given time period increases with decreases in temperature, while the power-law behavior remains unchanged. The transient DCR is attributed to electron emissions from traps in the multiplication layer of the APD with a high electric field, and its temporal dependence is explained by a continuous change in the electron emission rate as a function of the electric field strength. The electron emission rate is calculated using a quantum model for phonon-assisted tunnel emission. We applied the theory to the temporal dependence of afterpulsing that was measured for Si and InGaAs/InP APDs. The power-law temporal dependence is attributed to the power-law function of the electron emission rate from the traps as a function of their position across the p–n junction of the APD. Deviations from the power-law temporal dependence can be derived from the upper and lower limits of the electric field strength.
Needleman, Ian G; Hirsch, Nicholas P; Leemans, Michele; Moles, David R; Wilson, Michael; Ready, Derren R; Ismail, Salim; Ciric, Lena; Shaw, Michael J; Smith, Martin; Garner, Anne; Wilson, Sally
2011-03-01
To investigate the effect of a powered toothbrush on colonization of dental plaque by ventilator-associated pneumonia (VAP)-associated organisms and dental plaque removal. Parallel-arm, single-centre, examiner- and analyst-masked randomized controlled trial. Forty-six adults were recruited within 48 h of admission. Test intervention: powered toothbrush, control intervention: sponge toothette, both used four times per day for 2 min. Groups received 20 ml, 0.2% chlorhexidine mouthwash at each time point. The results showed a low prevalence of respiratory pathogens throughout with no statistically significant differences between groups. A highly statistically significantly greater reduction in dental plaque was produced by the powered toothbrush compared with the control treatment; mean plaque index at day 5, powered toothbrush 0.75 [95% confidence interval (CI) 0.53, 1.00], sponge toothette 1.35 (95% CI 0.95, 1.74), p=0.006. Total bacterial viable count was also highly statistically significantly lower in the test group at day 5; Log(10) mean total bacterial counts: powered toothbrush 5.12 (95% CI 4.60, 5.63), sponge toothette 6.61 (95% CI 5.93, 7.28), p=0.002. Powered toothbrushes are highly effective for plaque removal in intubated patients in a critical unit and should be tested for their potential to reduce VAP incidence and health complications. © 2011 John Wiley & Sons A/S.
Automated cell counts on CSF samples: A multicenter performance evaluation of the GloCyte system.
Hod, E A; Brugnara, C; Pilichowska, M; Sandhaus, L M; Luu, H S; Forest, S K; Netterwald, J C; Reynafarje, G M; Kratz, A
2018-02-01
Automated cell counters have replaced manual enumeration of cells in blood and most body fluids. However, due to the unreliability of automated methods at very low cell counts, most laboratories continue to perform labor-intensive manual counts on many or all cerebrospinal fluid (CSF) samples. This multicenter clinical trial investigated if the GloCyte System (Advanced Instruments, Norwood, MA), a recently FDA-approved automated cell counter, which concentrates and enumerates red blood cells (RBCs) and total nucleated cells (TNCs), is sufficiently accurate and precise at very low cell counts to replace all manual CSF counts. The GloCyte System concentrates CSF and stains RBCs with fluorochrome-labeled antibodies and TNCs with nucleic acid dyes. RBCs and TNCs are then counted by digital image analysis. Residual adult and pediatric CSF samples obtained for clinical analysis at five different medical centers were used for the study. Cell counts were performed by the manual hemocytometer method and with the GloCyte System following the same protocol at all sites. The limits of the blank, detection, and quantitation, as well as precision and accuracy of the GloCyte, were determined. The GloCyte detected as few as 1 TNC/μL and 1 RBC/μL, and reliably counted as low as 3 TNCs/μL and 2 RBCs/μL. The total coefficient of variation was less than 20%. Comparison with cell counts obtained with a hemocytometer showed good correlation (>97%) between the GloCyte and the hemocytometer, including at very low cell counts. The GloCyte instrument is a precise, accurate, and stable system to obtain red cell and nucleated cell counts in CSF samples. It allows for the automated enumeration of even very low cell numbers, which is crucial for CSF analysis. These results suggest that GloCyte is an acceptable alternative to the manual method for all CSF samples, including those with normal cell counts. © 2017 John Wiley & Sons Ltd.
Non-Markovian full counting statistics in quantum dot molecules
Xue, Hai-Bin; Jiao, Hu-Jun; Liang, Jiu-Qing; Liu, Wu-Ming
2015-01-01
Full counting statistics of electron transport is a powerful diagnostic tool for probing the nature of quantum transport beyond what is obtainable from the average current or conductance measurement alone. In particular, the non-Markovian dynamics of a quantum dot molecule plays an important role in nonequilibrium electron tunneling processes, making it necessary to understand the non-Markovian full counting statistics in such systems. Here we study the non-Markovian full counting statistics in two typical quantum dot molecules, namely, serially coupled and side-coupled double quantum dots with high quantum coherence in a certain parameter regime. We demonstrate that the non-Markovian effect manifests itself through the quantum coherence of the quantum dot molecule system, and has a significant impact on the full counting statistics in highly quantum-coherent quantum dot molecule systems, depending on the coupling of the quantum dot molecule with the source and drain electrodes. These results indicate that the non-Markovian effect should be taken into account in the full counting statistics of highly quantum-coherent quantum dot molecule systems, and that doing so can provide a better understanding of electron transport through quantum dot molecules. PMID:25752245
Baruah, Bhaskarjyoti; Kumar, Tarun; Das, Prasenjit; Thakur, Bhaskar; Sreenivas, Vishnubatla; Ahuja, Vineet; Gupta, Siddhartha Datta; Makharia, Govind K
2017-09-01
Eosinophilic esophagitis (EoE) is being recognized increasingly all over the globe; Indian data are, however, sparse. We screened patients with symptoms of gastroesophageal reflux disease (GERD) for the presence of EoE. Consecutive patients with symptoms suggestive of GERD underwent gastroduodenoscopy and esophageal biopsies, obtained from both the upper esophagus (5 cm below the upper esophageal sphincter) and the lower esophagus (5 cm above the gastroesophageal junction), as well as from any other endoscopically visible abnormal mucosa. Demographic and clinical characteristics, endoscopic findings, peripheral blood eosinophil counts, and history of use of proton-pump inhibitors (PPIs) were analyzed. Stool examination was done to rule out parasitic infestation. EoE was diagnosed if the mucosal eosinophil infiltrate was >20 per high-power field; in such cases, Warthin-Starry staining was performed to rule out the presence of Helicobacter pylori. Of 190 consecutive patients with symptoms of GERD screened, esophageal biopsies were available in 185 cases. Of them, 6 had EoE, suggesting a prevalence of 3.2% among patients with GERD. On univariate analysis, history of allergy, non-response to PPIs, and absolute eosinophil counts were significant predictors of EoE; on multivariable analysis, history of allergy and non-response to PPIs remained significant predictors. The presence of EoE did not correlate with the severity of reflux symptoms. In this hospital-based study from the northern part of India, the prevalence of EoE in patients with GERD was 3.2%. EoE should be considered as a diagnostic possibility, especially in those with a history of allergy, non-response to PPIs, and an absolute eosinophil count of ≥250/cu mm.
Ogungbenro, Kayode; Aarons, Leon
2011-08-01
In recent years, interest in the application of experimental design theory to population pharmacokinetic (PK) and pharmacodynamic (PD) experiments has increased. The aim is to improve the efficiency and precision with which parameters are estimated during data analysis and, in some cases, to increase the power and reduce the sample size required for hypothesis testing. The population Fisher information matrix (PFIM) has been described for uniresponse and multiresponse population PK experiments for design evaluation and optimisation. Despite these developments and the availability of tools for optimal design of population PK and PD experiments, much of the effort has focused on repeated continuous variable measurements, with less work on repeated discrete-type measurements. Discrete data arise mainly in PD, e.g. ordinal, nominal, dichotomous or count measurements. This paper implements expressions for the PFIM for repeated ordinal, dichotomous and count measurements based on analysis by a mixed-effects modelling technique. Three simulation studies were used to investigate the performance of the expressions. Example 1 is based on repeated dichotomous measurements, Example 2 on repeated count measurements and Example 3 on repeated ordinal measurements. Data simulated in MATLAB were analysed using NONMEM (Laplace method) and the glmmML package in R (Laplace and adaptive Gauss-Hermite quadrature methods). The results obtained for Examples 1 and 2 showed good agreement between the relative standard errors obtained using the PFIM and simulations. The results obtained for Example 3 showed the importance of sampling at the most informative time points. Implementation of these expressions will provide the opportunity for efficient design of population PD experiments involving discrete-type data through design evaluation and optimisation.
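A simplified sketch of the Fisher information calculation for repeated count data: a fixed-effects Poisson model with log(rate) = b0 + b1·t evaluated at candidate sampling times. A true PFIM also accounts for random effects, which are omitted here; the parameter values and time vectors are assumptions.

```python
import numpy as np

def poisson_fim(beta: np.ndarray, times: np.ndarray) -> np.ndarray:
    """FIM = sum_t lambda(t) * x(t) x(t)^T for Poisson observations."""
    fim = np.zeros((2, 2))
    for t in times:
        x = np.array([1.0, t])
        lam = np.exp(beta @ x)         # Poisson rate at time t
        fim += lam * np.outer(x, x)
    return fim

beta = np.array([1.0, -0.3])
for times in (np.array([0.5, 1.0, 2.0]), np.array([0.5, 4.0, 8.0])):
    se = np.sqrt(np.diag(np.linalg.inv(poisson_fim(beta, times))))
    print(times, "-> SE(b0), SE(b1) =", se.round(3))
# Spread-out sampling times are more informative for the slope parameter.
```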
Optimisation of nasal swab analysis by liquid scintillation counting.
Dai, Xiongxin; Liblong, Aaron; Kramer-Tremblay, Sheila; Priest, Nicholas; Li, Chunsheng
2012-06-01
When responding to an emergency radiological incident, rapid methods are needed to provide physicians and radiation protection personnel with an early estimate of the possible internal dose resulting from the inhalation of radionuclides. This information is needed so that appropriate medical treatment and radiological protection control procedures can be implemented. Nasal swab analysis, which employs swabs swiped inside a nostril followed by liquid scintillation counting of the alpha and beta activity on the swab, can provide valuable information to quickly identify contamination of the affected population. In this study, various parameters (such as alpha/beta discrimination, swab materials, counting time, and volume of scintillation cocktail) were evaluated in order to optimise the effectiveness of the nasal swab analysis method. An improved nasal swab procedure was developed by replacing cotton swabs with polyurethane-tipped swabs. Liquid scintillation counting was performed using a Hidex 300SL counter with alpha/beta pulse shape discrimination capability. Results show that the new method is more reliable than existing methods using cotton swabs and effectively meets the analysis requirements for screening personnel in an emergency situation. This swab analysis procedure is also applicable to wipe tests of surface contamination, minimising the source self-absorption effect on liquid scintillation counting.
32. EXTERIOR VIEW LOOKING INTO THE SEVENTH TAILRACE (COUNTING FROM THE DOWNSTREAM END TO THE UPSTREAM END - SOUTHEAST TO NORTHWEST). THIS AREA IS THE PORTION OF THE PULP MILL THAT WAS NEVER REBUILT AFTER A DEVASTATING FIRE IN 1925 AND SUBSEQUENT END TO PULP PRODUCTION AT THIS SITE. NOTE THE DRIVE SHAFT AND OTHER REMNANTS FROM THE PULP MILLING OPERATION. - Potomac Power Plant, On West Virginia Shore of Potomac River, about 1 mile upriver from confluence with Shenandoah River, Harpers Ferry, Jefferson County, WV
Tsuruta, James K; Dayton, Paul A; Gallippi, Caterina M; O'Rand, Michael G; Streicker, Michael A; Gessner, Ryan C; Gregory, Thomas S; Silva, Erick J R; Hamil, Katherine G; Moser, Glenda J; Sokal, David C
2012-01-30
Studies published in the 1970s by Mostafa S. Fahim and colleagues showed that a short treatment with ultrasound caused the depletion of germ cells and infertility. The goal of the current study was to determine if a commercially available therapeutic ultrasound generator and transducer could be used as the basis for a male contraceptive. Sprague-Dawley rats were anesthetized and their testes were treated with 1 MHz or 3 MHz ultrasound while varying the power, duration, and temperature of treatment. We found that 3 MHz ultrasound delivered at 2.2 W/cm² for fifteen minutes was necessary to deplete spermatocytes and spermatids from the testis, and that this treatment significantly reduced epididymal sperm reserves. The 3 MHz ultrasound treatment reduced the total epididymal sperm count 10-fold below the wet-heat control and decreased motile sperm counts 1,000-fold below wet-heat alone. The current treatment regimen provided nominally more energy to the treatment chamber than Fahim's originally reported conditions of 1 MHz ultrasound delivered at 1 W/cm² for ten minutes. However, the true spatial average intensity, effective radiating area, and power output of the transducers used by Fahim were not reported, making a direct comparison impossible. We found that germ cell depletion was most uniform and effective when we rotated the therapeutic transducer to mitigate non-uniformity of the beam field. The lowest sperm count was achieved when the coupling medium (3% saline) was held at 37 °C and two consecutive 15-minute treatments of 3 MHz ultrasound at 2.2 W/cm² were separated by 2 days. The non-invasive nature of ultrasound and its efficacy in reducing sperm count make therapeutic ultrasound a promising candidate for a male contraceptive. However, further studies must be conducted to confirm its efficacy in providing a contraceptive effect, to test the result of repeated use, to verify that the contraceptive effect is reversible, and to demonstrate that there are no detrimental, long-term effects from using ultrasound as a method of male contraception.
Understanding Poisson regression.
Hayat, Matthew J; Higgins, Melinda
2014-04-01
Nurse investigators often collect study data in the form of counts. Traditional methods of data analysis have historically approached analysis of count data either as if the count data were continuous and normally distributed or with dichotomization of the counts into the categories of occurred or did not occur. These outdated methods for analyzing count data have been replaced with more appropriate statistical methods that make use of the Poisson probability distribution, which is useful for analyzing count data. The purpose of this article is to provide an overview of the Poisson distribution and its use in Poisson regression. Assumption violations for the standard Poisson regression model are addressed with alternative approaches, including addition of an overdispersion parameter or negative binomial regression. An illustrative example is presented with an application from the ENSPIRE study, and regression modeling of comorbidity data is included for illustrative purposes. Copyright 2014, SLACK Incorporated.
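A minimal sketch mirroring the article's recommendations: fit a Poisson regression, check for overdispersion via the Pearson dispersion statistic, and fall back to negative binomial regression if needed. The simulated data and the dispersion cutoff are assumptions; the ENSPIRE variables are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.6 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # overdispersed counts

X = sm.add_constant(x)
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
dispersion = poisson_fit.pearson_chi2 / poisson_fit.df_resid
print(f"Pearson dispersion = {dispersion:.2f} (>1 suggests overdispersion)")

if dispersion > 1.5:   # illustrative cutoff
    nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(nb_fit.params)
```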
Libiger, Ondrej; Schork, Nicholas J.
2015-01-01
It is now feasible to examine the composition and diversity of microbial communities (i.e., “microbiomes”) that populate different human organs and orifices using DNA sequencing and related technologies. To explore the potential links between changes in microbial communities and various diseases in the human body, it is essential to test associations involving different species within and across microbiomes, environmental settings and disease states. Although a number of statistical techniques exist for carrying out relevant analyses, it is unclear which of these techniques exhibit the greatest statistical power to detect associations given the complexity of most microbiome datasets. We compared the statistical power of principal component regression, partial least squares regression, regularized regression, distance-based regression, Hill's diversity measures, and a modified test implemented in the popular and widely used microbiome analysis methodology “Metastats” across a wide range of simulated scenarios involving changes in feature abundance between two sets of metagenomic samples. For this purpose, simulation studies were used to change the abundance of microbial species in a real dataset from a published study examining human hands. Each technique was applied to the same data, and its ability to detect the simulated change in abundance was assessed. We hypothesized that a small subset of methods would outperform the rest in terms of the statistical power. Indeed, we found that the Metastats technique modified to accommodate multivariate analysis and partial least squares regression yielded high power under the models and data sets we studied. The statistical power of diversity measure-based tests, distance-based regression and regularized regression was significantly lower. Our results provide insight into powerful analysis strategies that utilize information on species counts from large microbiome data sets exhibiting skewed frequency distributions obtained on a small to moderate number of samples. PMID:26734061
High-sensitivity fast neutron detector KNK-2-7M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koshelev, A. S., E-mail: alexsander.coshelev@yandex.ru; Dovbysh, L. Ye.; Ovchinnikov, M. A.
2015-12-15
The construction of the fast neutron detector KNK-2-7M is briefly described. The results of a study of the detector in pulse-counting mode are given for fissions of ²³⁷Np nuclei in the radiator of the neutron-sensitive section, and in current mode with separation of the sectional currents of the functional sections. The possibilities of determining the effective number of ²³⁷Np nuclei in the radiator of the neutron-sensitive section are considered. The diagnostic capabilities of the detector in counting mode are illustrated by an analysis of reference data on the neutron-field characteristics in the working hall of the BR-K1 reactor. The diagnostic capabilities of the detector in current mode are illustrated by the results of measuring the ²³⁷Np fission intensity in BR-K1 reactor power start-ups implemented in the mode of fission-pulse generation on delayed neutrons, with the detector arranged inside the reactor core cavity under conditions of a widely varying reactor radiation field.
NASA Astrophysics Data System (ADS)
Blake, Samantha L.; Walker, S. Hunter; Muddiman, David C.; Hinks, David; Beck, Keith R.
2011-12-01
Color Index Disperse Yellow 42 (DY42), a high-volume disperse dye for polyester, was used to compare the capabilities of the LTQ-Orbitrap XL and the LTQ-FT-ICR with respect to mass measurement accuracy (MMA), spectral accuracy, and sulfur counting. The results of this research will be used in the construction of a dye database for forensic purposes; the additional spectral information will increase the confidence in the identification of unknown dyes found in fibers at crime scenes. Initial LTQ-Orbitrap XL data showed MMAs greater than 3 ppm and poor spectral accuracy. Modification of several Orbitrap installation parameters (e.g., deflector voltage) resulted in a significant improvement of the data. The LTQ-FT-ICR and LTQ-Orbitrap XL (after installation parameters were modified) exhibited MMA ≤ 3 ppm and good spectral accuracy (χ² values for the isotopic distribution ≤ 2), and were correctly able to ascertain the number of sulfur atoms in the compound at all resolving powers investigated for AGC targets of 5.00 × 10⁵ and 1.00 × 10⁶.
Geiger mode avalanche photodiodes for microarray systems
NASA Astrophysics Data System (ADS)
Phelan, Don; Jackson, Carl; Redfern, R. Michael; Morrison, Alan P.; Mathewson, Alan
2002-06-01
New Geiger-mode avalanche photodiodes (GM-APDs) have been designed and characterized specifically for use in microarray systems. Critical parameters such as excess reverse bias voltage, hold-off time, and optimum operating temperature have been experimentally determined for these photon-counting devices. The photon detection probability, dark count rate, and afterpulsing probability have been measured under different operating conditions. An active-quench circuit (AQC) is presented for operating these GM-APDs. This circuit is relatively simple and robust, and has such benefits as reducing average power dissipation and afterpulsing. Arrays of these GM-APDs have already been designed and, together with AQCs, open up the possibility of a solid-state microarray detector that enables parallel analysis on a single chip. Another advantage of these GM-APDs over current technology is their low-voltage CMOS compatibility, which could allow for the fabrication of an AQC on the same device. Small-area detectors have already been employed in the time-resolved detection of fluorescence from labeled proteins. It is envisaged that operating these new GM-APDs with this active-quench circuit will have numerous applications for the detection of fluorescence in microarray systems.
On the constituent counting rule for hard exclusive processes involving multi-quark states
NASA Astrophysics Data System (ADS)
Guo, Feng-Kun; Meißner, Ulf-G.; Wang, Wei
2017-05-01
At high energy, the cross section at finite scattering angle of a hard exclusive process falls off as a power of the Mandelstam variable s. If all of the involved quark-gluon constituents undergo hard momentum transfers, the fall-off scaling is determined by the underlying valence structures of the initial and final hadrons, known as the constituent counting rule. In spite of the complication due to helicity conservation, it has been argued that, when applied to exclusive processes with exotic multiquark states, the counting rule is a powerful way to determine the valence degrees of freedom inside hadron exotics. In this work, we demonstrate that for hadrons with hidden flavors, the naive application of the constituent counting rule to exclusive processes with exotic multiquark states is problematic, since it is not mandatory for all components to participate in the hard scattering. We illustrate the problems from a viewpoint based on effective field theory. We clarify the misleading results that may be obtained from the constituent counting rule in exclusive processes with exotic candidates such as the X(3872). Supported in part by DFG and NSFC through funds provided to the Sino-German CRC 110 “Symmetries and the Emergence of Structure in QCD” (NSFC Grant No. 11261130311), Thousand Talents Plan for Young Professionals, Chinese Academy of Sciences (CAS) President’s International Fellowship Initiative (PIFI) (2015VMA076), National Natural Science Foundation of China (11575110, 11655002), Natural Science Foundation of Shanghai (15DZ2272100, 15ZR1423100), Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF111CJ1), and by Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education.
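For reference, the constituent counting rule in its standard (Brodsky-Farrar) form reads as follows, where n is the total number of elementary constituents in the initial and final states:

```latex
% Constituent counting rule for an exclusive process a + b -> c + d
% at fixed center-of-mass angle:
\[
  \frac{d\sigma}{dt}\Big|_{ab \to cd} \sim \frac{f(\theta_{\mathrm{cm}})}{s^{\,n-2}},
  \qquad n = n_a + n_b + n_c + n_d .
\]
```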
NASA Astrophysics Data System (ADS)
Zhong, Jia; Trevisi, Letizia; Urch, Bruce; Lin, Xinyi; Speck, Mary; Coull, Brent A.; Liss, Gary; Thompson, Aaron; Wu, Shaowei; Wilson, Ander; Koutrakis, Petros; Silverman, Frances; Gold, Diane R.; Baccarelli, Andrea A.
2017-04-01
Ambient fine particle (PM2.5) pollution triggers acute cardiovascular events. Individual-level preventions have been proposed to complement regulation in reducing the global burden of PM2.5-induced cardiovascular disease. We determine whether B vitamin supplementation mitigates PM2.5 effects on cardiac autonomic dysfunction and inflammation in a single-blind placebo-controlled crossover pilot trial. Ten healthy adults each underwent two-hour controlled exposures to sham under placebo, to PM2.5 (250 μg/m³) under placebo, and to PM2.5 (250 μg/m³) under B-vitamin supplementation (2.5 mg/d folic acid, 50 mg/d vitamin B6, and 1 mg/d vitamin B12). At pre-, post-, and 24 h post-exposure, we measured resting heart rate (HR) and heart rate variability (HRV) with electrocardiogram, and white blood cell (WBC) counts with a hematology analyzer. Compared to sham, PM2.5 exposure increased HR (3.8 bpm, 95% CI: 0.3, 7.4; P = 0.04), total WBC count (11.5%, 95% CI: 0.3%, 24.0%; P = 0.04), and lymphocyte count (12.9%, 95% CI: 4.4%, 22.1%; P = 0.005), and reduced low-frequency power (57.5%, 95% CI: 2.5%, 81.5%; P = 0.04). B-vitamin supplementation attenuated the PM2.5 effect on HR by 150% (P = 0.003), low-frequency power by 90% (P = 0.01), total WBC count by 139% (P = 0.006), and lymphocyte count by 106% (P = 0.02). In healthy adults, two-hour PM2.5 exposure substantially increases HR, reduces HRV, and increases WBC counts. These effects are reduced by B vitamin supplementation.
Evaluating language environment analysis system performance for Chinese: a pilot study in Shanghai.
Gilkerson, Jill; Zhang, Yiwen; Xu, Dongxin; Richards, Jeffrey A; Xu, Xiaojuan; Jiang, Fan; Harnsberger, James; Topping, Keith
2015-04-01
The purpose of this study was to evaluate performance of the Language Environment Analysis (LENA) automated language-analysis system for the Chinese Shanghai dialect and Mandarin (SDM) languages. Volunteer parents of 22 children aged 3-23 months were recruited in Shanghai. Families provided daylong in-home audio recordings using LENA. A native speaker listened to 15 min of randomly selected audio samples per family to label speaker regions and provide Chinese character and SDM word counts for adult speakers. LENA segment labeling and counts were compared with rater-based values. LENA demonstrated good sensitivity in identifying adult and child; this sensitivity was comparable to that of American English validation samples. Precision was strong for adults but less so for children. LENA adult word count correlated strongly with both Chinese characters and SDM word counts. LENA conversational turn counts correlated similarly with rater-based counts after the exclusion of three unusual samples. Performance related to some degree to child age. LENA adult word count and conversational turn provided reasonably accurate estimates for SDM over the age range tested. Theoretical and practical considerations regarding LENA performance in non-English languages are discussed. Despite the pilot nature and other limitations of the study, results are promising for broader cross-linguistic applications.
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
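As a minimal illustration of the workflow the tutorial describes (fitting count data with Poisson-family models and checking for overdispersion), here is a sketch using simulated data and statsmodels; it is illustrative only, not code from the tutorial:

```python
# Minimal sketch: fit Poisson and negative binomial models to simulated
# overdispersed count data and compare diagnostics (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 1, n)                      # a covariate, e.g. dose
mu = np.exp(0.5 + 1.2 * x)                    # log-linear mean event rate
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # overdispersed

X = sm.add_constant(x)
poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

# Overdispersion diagnostic: Pearson chi-square / df >> 1 flags a Poisson misfit
print("Poisson Pearson chi2/df:", poisson_fit.pearson_chi2 / poisson_fit.df_resid)
print("NB AIC:", nb_fit.aic, " Poisson AIC:", poisson_fit.aic)
```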
Transformerless dc-Isolated Converter
NASA Technical Reports Server (NTRS)
Rippel, Wally E.
1987-01-01
Efficient voltage converter employs capacitive instead of transformer coupling to provide dc isolation. Offers buck/boost operation, minimal filtering, and low parts count, with possible applications in photovoltaic power inverters, power supplies, and battery chargers. In a photovoltaic inverter circuit with the transformerless converter, Q2, Q3, Q4, and Q5 form a line-commutated inverter. Switching losses and stresses are negligible because switching is performed when the current is zero.
NASA Technical Reports Server (NTRS)
Fritts, D. C.; Janches, D.; Hocking, W. K.; Mitchell, N. J.; Taylor, M. J.
2011-01-01
Measurement capabilities of five meteor radars are assessed and compared to determine how well radars having different transmitted power and antenna configurations perform in defining mean winds, tidal amplitudes, and gravity wave (GW) momentum fluxes. The five radars include two new-generation meteor radars on Tierra del Fuego, Argentina (53.8 deg S) and on King George Island in the Antarctic (62.1 deg S) and conventional meteor radars at Socorro, New Mexico (34.1 deg N, 106.9 deg W), Bear Lake Observatory, Utah (approx 41.9 deg N, 111.4 deg W), and Yellowknife, Canada (62.5 deg N, 114.3 deg W). Our assessment employs observed meteor distributions for June of 2009, 2010, or 2011 for each radar and a set of seven test motion fields including various superpositions of mean winds, constant diurnal tides, constant and variable semidiurnal tides, and superposed GWs having various amplitudes, scales, periods, directions of propagation, momentum fluxes, and intermittencies. Radars having higher power and/or antenna patterns yielding higher meteor counts at small zenith angles perform well in defining monthly and daily mean winds, tidal amplitudes, and GW momentum fluxes, though with expected larger uncertainties in the daily estimates. Conventional radars having lower power and a single transmitting antenna are able to describe monthly mean winds and tidal amplitudes reasonably well, especially at altitudes having the highest meteor counts. They also provide qualitative estimates of GW momentum fluxes at the altitudes having the highest meteor counts; however, these estimates are subject to uncertainties of approx 20 to 50% and uncertainties rapidly become excessive at higher and lower altitudes. Estimates of all quantities degrade somewhat for more complex motion fields.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, A.P.; Bradbury, S.; Arnsberg, B.D.
2008-11-25
Redd counts are routinely used to document the spawning distribution of fall Chinook salmon (Oncorhynchus tshawytscha) in the Snake River basin upriver of Lower Granite Dam. The first reported redd counts were from aerial searches conducted intermittently between 1959 and 1978 (Irving and Bjornn 1981; Witty 1988; Groves and Chandler 1996) (Appendix 1). In 1986, the Washington Department of Fish and Wildlife began an annual monitoring program that, in addition to the Snake River, included aerial searches of the Grande Ronde River the first year (Seidel and Bugert 1987), and the Imnaha River in subsequent years (Seidel et al. 1988; Bugert et al. 1989-1991; Mendel et al. 1992). The U.S. Fish and Wildlife Service and Idaho Power Company began contributing to this effort in 1991 by increasing the number of aerial searches conducted each year and adding underwater searches in areas of the Snake River that were too deep to be searched from the air (Connor et al. 1993; Garcia et al. 1994a, 1994b, 1996-2007; Groves 1993; Groves and Chandler 1996). The Nez Perce Tribe added aerial searches in the Clearwater River basin beginning in 1988 (Arnsberg et al. 1992), and the Salmon River beginning in 1992. Currently, searches are conducted cooperatively by the Nez Perce Tribe, Idaho Power Company, and U.S. Fish and Wildlife Service. Our objective for this report was to consolidate the findings from the annual redd searches conducted upstream of Lower Granite Dam into a single document containing detailed information about the searches from the most recent spawning season and summary information from previous years. The work conducted in 2007 was funded by the Bonneville Power Administration and Idaho Power Company.
Sallmon, Hannes; Weber, Sven C; Dirks, Juliane; Schiffer, Tamara; Klippstein, Tamara; Stein, Anja; Felderhoff-Müser, Ursula; Metze, Boris; Hansmann, Georg; Bührer, Christoph; Cremer, Malte; Koehne, Petra
2018-01-01
The role of platelets for mediating closure of the ductus arteriosus in human preterm infants is controversial. In particular, the effect of low platelet counts on pharmacological treatment failure is still unclear. In this retrospective study of 471 preterm infants [<1,500 g birth weight (BW)], who were treated for a patent ductus arteriosus (PDA) with indomethacin or ibuprofen, we investigated whether platelet counts before or during pharmacological treatment had an impact on the successful closure of a hemodynamically significant PDA. The effects of other factors, such as sepsis, preeclampsia, gestational age, BW, and gender, were also evaluated. Platelet counts before initiation of pharmacological PDA treatment did not differ between infants with later treatment success or failure. However, we found significant associations between low platelet counts during pharmacological PDA therapy and treatment failure (p < 0.05). Receiver operating characteristic (ROC) curve analysis showed that platelet counts after the first, and before and after the second cyclooxygenase inhibitor (COXI) cycle were significantly associated with treatment failure (area under the curve of >0.6). However, ROC curve analysis did not reveal a specific platelet cutoff value that could predict PDA treatment failure. Multivariate logistic regression analysis showed that lower platelet counts, a lower BW, and preeclampsia were independently associated with COXI treatment failure. We provide further evidence for an association between low platelet counts during pharmacological therapy for symptomatic PDA and treatment failure, while platelet counts before initiation of therapy did not affect treatment outcome.
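The ROC-based cutoff search described above can be sketched as follows; the data, variable names, and effect sizes are invented stand-ins, and the Youden index is used as one common way to pick a threshold (the study itself found no single reliable cutoff):

```python
# Hypothetical sketch of a ROC-based cutoff search: given platelet counts
# and treatment-failure labels, find the threshold maximizing Youden's J
# (sensitivity + specificity - 1). Data below are simulated, not the study's.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
failure = rng.integers(0, 2, 200)                  # 1 = treatment failure
platelets = np.where(failure == 1,
                     rng.normal(55, 15, 200),      # lower counts in failures
                     rng.normal(75, 15, 200))

# Lower platelet values predict failure, so use the negated value as score
fpr, tpr, thresholds = roc_curve(failure, -platelets)
j = tpr - fpr
best = np.argmax(j)
print("AUC:", roc_auc_score(failure, -platelets))
print("Youden-optimal cutoff:", -thresholds[best])
```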
In-medium Chiral Perturbation Theory beyond the Mean-Field Approximation
NASA Astrophysics Data System (ADS)
Meißner, Ulf-G.; Oller, José A.; Wirzba, Andreas
2002-04-01
An explicit expression for the generating functional of two-flavor low-energy QCD with external sources in the presence of nonvanishing nucleon densities was derived recently (J. A. Oller, Phys. Rev. C65 (2002) 025204). Within this approach we derive power counting rules for the calculation of in-medium pion properties. We develop the so-called standard rules for residual nucleon energies of the order of the pion mass and a modified scheme (nonstandard counting) for vanishing residual nucleon energies. We also establish the different scales for the range of applicability of this perturbative expansion, which are √6 πf_π ≃ 0.7 GeV for standard and 6π²f_π²/(2m_N) ≃ 0.27 GeV for nonstandard counting, respectively. We have performed a systematic analysis of n-point in-medium Green functions up to and including next-to-leading order when the standard rules apply. These include the in-medium contributions to quark condensates, pion propagators, pion masses, and couplings of the axial-vector, vector, and pseudoscalar currents to pions. In particular, we find a mass shift for negatively charged pions in heavy nuclei, ΔM_π⁻ = (18 ± 5) MeV, that agrees with recent determinations from deeply bound pionic states in 207Pb. We have also established the absence of in-medium renormalization of the π⁰ → γγ decay amplitude up to the same order. The study of ππ scattering requires the use of the nonstandard counting, and the calculation is done at leading order. Even at that order we establish new contributions not considered so far. We also point toward further possible improvements of this scheme and touch upon its relation to more conventional many-body approaches.
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimated microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
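The combination step presumably follows the usual root-sum-of-squares rule for independent relative uncertainty components (a generic GUM-style form, assumed rather than quoted from the paper):

```latex
% Combined relative standard uncertainty from k independent components
% (generic rule; the specific components are the factors listed above):
\[
  u_{c,\mathrm{rel}} \;=\; \sqrt{\sum_{i=1}^{k} u_{i,\mathrm{rel}}^{2}}\,,
\]
% e.g. components of 20%, 20% and 15% combine to
% \sqrt{0.20^2 + 0.20^2 + 0.15^2} \approx 32\%, below the 35% quoted above.
```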
NASA Technical Reports Server (NTRS)
Timothy, J. G.; Bybee, R. L.
1978-01-01
The paper describes a new type of continuous channel multiplier (CEM) fabricated from a low-resistance glass to produce a high-conductivity channel section and thereby obtain a high count-rate capability. The flat-cone cathode configuration of the CEM is specifically designed for the detection of astigmatic exit images from grazing-incidence spectrometers at the optimum angle of illumination for high detection efficiencies at XUV wavelengths. Typical operating voltages are in the range of 2500-2900 V with stable counting plateau slopes in the range 3-6% per 100-V increment. The modal gain at 2800 V was typically in the range (5-8) x 10^7. The modal gain falls off at count rates in excess of about 20,000 per sec. The detection efficiency remains essentially constant to count rates in excess of 2 million per sec. Higher detection efficiencies (better than 20%) are obtained by coating the CEM with MgF2. In life tests of coated CEMs, no measurable change in detection efficiency was measured up to a total accumulated signal of 2 x 10^11 counts.
NASA Astrophysics Data System (ADS)
Uttley, P.; Gendreau, K.; Markwardt, C.; Strohmayer, T. E.; Bult, P.; Arzoumanian, Z.; Pottschmidt, K.; Ray, P. S.; Remillard, R.; Pasham, D.; Steiner, J.; Neilsen, J.; Homan, J.; Miller, J. M.; Iwakiri, W.; Fabian, A. C.
2018-03-01
NICER observed the new X-ray transient MAXI J1820+070 (ATel #11399, #11400, #11403, #11404, #11406, #11418, #11420, #11421) on multiple occasions from 2018 March 12 to 14. During this time the source brightened rapidly, from a total NICER mean count rate of 880 count/s on March 12 to 2800 count/s by March 14 17:00 UTC, corresponding to a change in 2-10 keV modelled flux (see below) from 1.9E-9 to 5E-9 erg cm^-2 s^-1. The broadband X-ray spectrum is absorbed by a low column density (fitting the model given below, we obtain 1.5E21 cm^-2), in keeping with the low Galactic column in the direction of the source (ATel #11418; Dickey & Lockman, 1990, ARAA, 28, 215; Kalberla et al. 2005, A&A, 440, 775) and consists of a hard power-law component with weak reflection features (a broad iron line and a narrow 6.4 keV line core) and an additional soft X-ray component.
Logistic regression for dichotomized counts.
Preisser, John S; Das, Kalyan; Benecha, Habtamu; Stamm, John W
2016-12-01
Sometimes there is interest in a dichotomized outcome indicating whether a count variable is positive or zero. Under this scenario, the application of ordinary logistic regression may result in efficiency loss, which is quantifiable under an assumed model for the counts. In such situations, a shared-parameter hurdle model is investigated for more efficient estimation of regression parameters relating to overall effects of covariates on the dichotomous outcome, while handling count data with many zeroes. One model part provides a logistic regression containing marginal log odds ratio effects of primary interest, while an ancillary model part describes the mean count of a Poisson or negative binomial process in terms of nuisance regression parameters. Asymptotic efficiency of the logistic model parameter estimators of the two-part models is evaluated with respect to ordinary logistic regression. Simulations are used to assess the properties of the models with respect to power and Type I error, the latter investigated under both misspecified and correctly specified models. The methods are applied to data from a randomized clinical trial of three toothpaste formulations to prevent incident dental caries in a large population of Scottish schoolchildren. © The Author(s) 2014.
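A toy sketch of the two-part structure described above: simulate zero-heavy counts, then fit (a) an ordinary logistic regression on the dichotomized outcome and (b) a Poisson regression on the positive counts. This illustrates the hurdle decomposition only; it is not the authors' shared-parameter estimator:

```python
# Toy hurdle-style decomposition: a Bernoulli gate decides whether any
# events occur, a count model describes how many occur when they do.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
X = sm.add_constant(x)

p_any = 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))          # P(count > 0)
gate = rng.binomial(1, p_any)
positives = rng.poisson(np.exp(0.3 + 0.4 * x)) + 1   # crude positive part
counts = gate * positives

# (a) ordinary logistic regression on the dichotomized outcome
logit_fit = sm.Logit((counts > 0).astype(int), X).fit(disp=0)

# (b) count part: Poisson regression restricted to positive counts
pos = counts > 0
pois_fit = sm.GLM(counts[pos], X[pos], family=sm.families.Poisson()).fit()

print("logistic coefs:", logit_fit.params)
print("count-part coefs:", pois_fit.params)
```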
Triple-Label β Liquid Scintillation Counting
Bukowski, Thomas R.; Moffett, Tyler C.; Revkin, James H.; Ploger, James D.; Bassingthwaighte, James B.
2010-01-01
The detection of radioactive compounds by liquid scintillation has revolutionized modern biology, yet few investigators make full use of the power of this technique. Even though multiple isotope counting is considerably more difficult than single isotope counting, many experimental designs would benefit from using more than one isotope. The development of accurate isotope counting techniques enabling the simultaneous use of three β-emitting tracers has facilitated studies in our laboratory using the multiple tracer indicator dilution technique for assessing rates of transmembrane transport and cellular metabolism. The details of sample preparation, and of stabilizing the liquid scintillation spectra of the tracers, are critical to obtaining good accuracy. Reproducibility is enhanced by obtaining detailed efficiency/quench curves for each particular set of tracers and solvent media. The numerical methods for multiple-isotope quantitation depend on avoiding error propagation (inherent to successive subtraction techniques) by using matrix inversion. Experimental data obtained from triple-label β counting illustrate reproducibility and good accuracy even when the relative amounts of different tracers in samples of protein/electrolyte solutions, plasma, and blood are changed. PMID:1514684
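The matrix-inversion step for multiple-isotope quantitation mentioned above can be sketched as a small linear solve; the efficiency values below are invented for illustration:

```python
# Sketch: measured counts in three energy windows modeled as an efficiency
# matrix times per-isotope disintegration rates, solved directly rather
# than by successive subtraction (which propagates errors).
import numpy as np

# E[i, j] = counting efficiency of isotope j in energy window i (invented)
E = np.array([
    [0.60, 0.10, 0.01],   # window A: dominated by isotope 1
    [0.25, 0.55, 0.08],   # window B: dominated by isotope 2
    [0.02, 0.20, 0.70],   # window C: dominated by isotope 3
])
counts = np.array([5200.0, 8100.0, 6400.0])   # background-corrected CPM

dpm = np.linalg.solve(E, counts)              # disintegrations per minute
print("estimated DPM per isotope:", dpm)
```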
Neutron-induced reactions in the hohlraum to study reaction in flight neutrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boswell, M. S.; Elliott, S. R.; Tybo, J.
2013-04-19
We are currently developing the physics necessary to measure the Reaction In Flight (RIF) neutron flux from a NIF capsule. A measurement of the RIF neutron flux from a NIF capsule could be used to deduce the stopping power in the cold fuel of the capsule. A foil irradiated at the Omega laser at LLE was counted at the LANL low-background counting facility at WIPP. The estimated production rate of ^195Au was just below our experimental sensitivity. We have made several improvements to our counting facility in recent months. These improvements are designed to increase our sensitivity, and include installing two new low-background detectors and taking steps to reduce noise in the signals.
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
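The exponential/Poisson relationship demonstrated in the paper is easy to reproduce numerically; a simulation sketch with arbitrary parameters:

```python
# A constant-rate Markov process has exponential inter-event times, and the
# event counts in fixed windows are then Poisson distributed.
import numpy as np

rng = np.random.default_rng(3)
rate, T, n_windows = 4.0, 1.0, 100_000        # events/s, window length, windows

# Draw exponential inter-arrival times, accumulate, histogram counts per window
arrivals = np.cumsum(rng.exponential(1 / rate, size=int(rate * T * n_windows * 2)))
arrivals = arrivals[arrivals < T * n_windows]
counts = np.histogram(arrivals, bins=n_windows, range=(0, T * n_windows))[0]

print("sample mean:", counts.mean(), " sample variance:", counts.var())
# For Poisson statistics, mean == variance (== rate * T = 4.0 here);
# variance > mean (overdispersion) would instead suggest e.g. negative binomial.
```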
Probiotics reduce mutans streptococci counts in humans: a systematic review and meta-analysis.
Laleman, Isabelle; Detailleur, Valentine; Slot, Dagmar Else; Slomka, Vera; Quirynen, Marc; Teughels, Wim
2014-07-01
Systematically review the available literature regarding the caries-preventive effect of probiotics. An electronic search was conducted in three databases (PubMed MEDLINE, ISI Web of Science and Cochrane Library) to identify all suitable studies. The outcomes had to be presented as the effect of probiotics on the incidence of caries or on the levels of mutans streptococci and/or Lactobacillus species. Human studies, written in English, with at least 15 participants, comparing a probiotic product with a placebo/no probiotic were included. Where possible, a meta-analysis was performed to obtain quantitative data. Since only two articles presented useful data on caries incidence, we focused on the surrogate endpoints: mutans streptococci and/or Lactobacillus counts. The meta-analysis showed that when the probiotic and control groups were compared after treatment, significantly more patients in the probiotic group had low mutans streptococci counts (<10^5 CFU/ml) and significantly fewer patients had high counts (>10^6 CFU/ml). Regarding the Lactobacillus counts, comparing the probiotic and control groups at the end of probiotic use, no significant differences could be observed, in either low (<10^4 CFU/ml) or high (>10^6 CFU/ml) Lactobacillus counts. Within the limitations of the available data, it may be concluded that probiotics decrease mutans streptococci counts. This suggests that probiotics could have a positive effect in the prevention of caries. There is insufficient evidence that probiotics can prevent caries, but they can reduce mutans streptococci counts.
Design and characterization of the PREC (Prototype Readout Electronics for Counting particles)
NASA Astrophysics Data System (ADS)
Assis, P.; Brogueira, P.; Ferreira, M.; Luz, R.; Mendes, L.
2016-08-01
The design, tests, and performance of a novel low-noise acquisition system, the PREC (Prototype Readout Electronics for Counting particles), are presented in this article. PREC is a system developed using discrete electronics for particle counting applications using RPC (Resistive Plate Chamber) detectors. PREC can, however, be used with other kinds of detectors that produce fast pulses, e.g. silicon photomultipliers. The PREC system consists of several Front-End boards that transmit data to a purely digital Motherboard. The amplification and discrimination of the signal are performed in the Front-End boards, making them the critical component of the system. In this paper, the Front-End was tested extensively by measuring the gain, noise level, crosstalk, trigger efficiency, propagation time, and power consumption. The gain decreases with the working temperature and increases with the power supply voltage. The Front-End board shows a low noise level (<= 1.6 mV at the 3σ level) and no crosstalk is detected above this level. The s-curve of the trigger efficiency shows a 3 mV transition between the region where most signals are triggered and the region where almost none are. The signal transit time between the Front-End input and the digital Motherboard is estimated to be 5.82 ns. The maximum power consumption is 3.372 W for the Motherboard, and 3.576 W and 1.443 W for each Front-End's analogue circuitry and digital part, respectively.
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been shown to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so-far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as on the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and the capabilities of the 1pPDF method.
Impact of donor- and collection-related variables on product quality in ex utero cord blood banking.
Askari, Sabeen; Miller, John; Chrysler, Gayl; McCullough, Jeffrey
2005-02-01
Optimizing product quality is a current focus in cord blood banking. This study evaluates the role of selected donor- and collection-related variables. Retrospective review was performed of cord blood units (CBUs) collected ex utero between February 1, 2000, and February 28, 2002. Preprocessing volume and total nucleated cell (TNC) counts and postprocessing CD34 cell counts were used as product quality indicators. Of 2084 CBUs, volume determinations and TNC counts were performed on 1628 and CD34+ counts on 1124 CBUs. Mean volume and TNC and CD34+ counts were 85.2 mL, 118.9 x 10(7), and 5.2 x 10(6), respectively. In univariate analysis, placental weight of greater than 500 g and meconium in amniotic fluid correlated with better volume and TNC and CD34+ counts. Greater than 40 weeks' gestation predicted enhanced volume and TNC count. Cesarean section, two- versus one-person collection, and not greater than 5 minutes between placental delivery and collection produced superior volume. Increased TNC count was also seen in Caucasian women, primigravidae, female newborns, and collection duration of more than 5 minutes. A time between delivery of newborn and placenta of not greater than 10 minutes predicted better volume and CD34+ count. By regression analysis, collection within not greater than 5 minutes of placental delivery produced superior volume and TNC count. Donor selection and collection technique modifications may improve product quality. TNC count appears to be more affected by different variables than CD34+ count.
Ramanathan, Ravishankar; Muñoz, Victor
2015-06-25
Single molecule fluorescence spectroscopy holds the promise of providing direct measurements of protein folding free energy landscapes and conformational motions. However, fulfilling this promise has been prevented by technical limitations, most notably, the difficulty in analyzing the small packets of photons per millisecond that are typically recorded from individual biomolecules. Such limitation impairs the ability to accurately determine conformational distributions and resolve sub-millisecond processes. Here we develop an analytical procedure for extracting the conformational distribution and dynamics of fast-folding proteins directly from time-stamped photon arrival trajectories produced by single molecule FRET experiments. Our procedure combines the maximum likelihood analysis originally developed by Gopich and Szabo with a statistical mechanical model that describes protein folding as diffusion on a one-dimensional free energy surface. Using stochastic kinetic simulations, we thoroughly tested the performance of the method in identifying diverse fast-folding scenarios, ranging from two-state to one-state downhill folding, as a function of relevant experimental variables such as photon count rate, amount of input data, and background noise. The tests demonstrate that the analysis can accurately retrieve the original one-dimensional free energy surface and microsecond folding dynamics in spite of the sub-megahertz photon count rates and significant background noise levels of current single molecule fluorescence experiments. Therefore, our approach provides a powerful tool for the quantitative analysis of single molecule FRET experiments of fast protein folding that is also potentially extensible to the analysis of any other biomolecular process governed by sub-millisecond conformational dynamics.
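A stripped-down sketch of the photon-by-photon likelihood idea (after Gopich and Szabo) for a two-state system follows; it neglects background, assumes photon colors do not perturb the state dynamics, and all rates and efficiencies are invented:

```python
# Toy photon-by-photon likelihood for a two-state folder: the likelihood of
# a time-stamped photon trajectory is a product of color-weighting matrices
# and state propagators exp(R * dt).
import numpy as np
from scipy.linalg import expm

def log_likelihood(colors, dts, k12, k21, e1, e2):
    """colors: 0 = donor, 1 = acceptor photons; dts: inter-photon times (s)."""
    R = np.array([[-k12, k21],
                  [k12, -k21]])               # state-exchange rate matrix
    eff = np.array([e1, e2])                  # FRET efficiency in each state
    p_eq = np.array([k21, k12]) / (k12 + k21) # equilibrium state populations
    vec = p_eq.copy()
    logL = 0.0
    for c, dt in zip(colors, dts):
        vec = expm(R * dt) @ vec              # propagate state probabilities
        vec = vec * (eff if c == 1 else 1 - eff)  # weight by photon color
        norm = vec.sum()                      # rescale to avoid underflow
        logL += np.log(norm)
        vec /= norm
    return logL

# Toy trajectory: 5 photons with colors and preceding inter-photon times
print(log_likelihood([1, 0, 1, 1, 0], [1e-4, 5e-5, 2e-4, 1e-4, 3e-4],
                     k12=5e3, k21=3e3, e1=0.8, e2=0.3))
```

Maximizing this log-likelihood over the rates and efficiencies, photon by photon, is what lets sub-millisecond dynamics be recovered from sparse photon streams.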
voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.
Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K
2014-02-03
New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
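A rough sketch of the voom idea, not the limma implementation: estimate the mean-variance trend of log-CPM values with lowess and convert predicted standard deviations into per-observation precision weights:

```python
# voom-style precision weights (sketch): lowess trend of sqrt-sd vs mean
# log-CPM across genes, then weight = 1 / predicted variance.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
counts = rng.negative_binomial(5, 0.1, size=(2000, 6))   # genes x samples
lib = counts.sum(axis=0)

logcpm = np.log2((counts + 0.5) / (lib + 1.0) * 1e6)     # log counts-per-million
mean_lc = logcpm.mean(axis=1)
sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))            # sqrt-sd per gene

trend = lowess(sqrt_sd, mean_lc, frac=0.5, return_sorted=False)
order = np.argsort(mean_lc)
pred_sd = np.interp(logcpm, mean_lc[order], trend[order]) ** 2  # back to sd
weights = 1.0 / pred_sd ** 2                             # precision weights

print("weight range:", weights.min(), weights.max())
# These weights would then enter a weighted linear model fit per gene.
```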
Development of a multikilowatt ion thruster power processor
NASA Technical Reports Server (NTRS)
Schoenfeld, A. D.; Goldin, D. S.; Biess, J. J.
1972-01-01
A feasibility study was made of the application of silicon-controlled rectifier (SCR) series resonant inverter power conditioning technology to electric propulsion power processing operating from a 200 to 400 Vdc solar array bus. A power system block diagram was generated to meet the electrical requirements of a 20 cm hollow cathode, mercury bombardment ion engine. The SCR series resonant inverter was developed as the primary means of power switching and conversion, and the analog signal-to-discrete-time-interval converter control system was applied to achieve good regulation. A complete breadboard was designed, fabricated, and tested with a resistive load bank, and critical power processor areas relating to efficiency, weight, and parts count were identified.
Lindsey, D.A.; Langer, W.H.; Van Gosen, B. S.
2007-01-01
Clast populations in piedmont fluvial systems are products of complex histories that complicate provenance interpretation. Although pebble counts of lithology are widely used, the information provided by a pebble count has been filtered by a potentially large number of processes and circumstances. Counts of pebble lithology and roundness together offer more power than lithology alone for the interpretation of provenance. In this study we analyze pebble counts of lithology and roundness in two contrasting fluvial systems of Pleistocene age to see how provenance varies with drainage size. The two systems are 1) a group of small high-gradient incised streams that formed alluvial fans and terraces and 2) a piedmont river that formed terraces in response to climate-driven cycles of aggradation and incision. We first analyze the data from these systems within their geographic and geologic context. After this is done, we employ contingency table analysis to complete the interpretation of pebble provenance. Small tributary streams that drain rugged mountains on both sides of the Santa Cruz River, southeast Arizona, deposited gravel in fan and terrace deposits of Pleistocene age. Volcanic, plutonic and, to a lesser extent, sedimentary rocks are the predominant pebble lithologies. Large contrasts in gravel lithology are evident among adjacent fans. Subangular to subrounded pebbles predominate. Contingency table analysis shows that hard volcanic rocks tend to remain angular and, even though transport distances have been short, soft tuff and sedimentary rocks tend to become rounded. The Wind River, a major piedmont stream in Wyoming, drains rugged mountains surrounding the northwest part of the Wind River basin. Under the influence of climate change and glaciation during the Pleistocene, the river deposited an extensive series of terrace gravels. In contrast to Santa Cruz tributary gravel, most of the Wind River gravel is relatively homogenous in lithology and is rounded to well-rounded. Detailed analysis reveals a multitude of sources in the headwaters and the basin itself, but lithologies from these sources are combined downstream. Well-rounded volcanic and recycled quartzite clasts were derived from the headwaters. Precambrian igneous and metamorphic clasts were brought down tributary valleys to the Wind River by glaciers, and sandstone was added where the river enters the Wind River structural basin.
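The contingency-table step described above can be sketched with scipy; the lithology-by-roundness table below is invented for illustration, not the paper's data:

```python
# Test whether pebble roundness is independent of lithology using a
# chi-square contingency test (table values invented).
import numpy as np
from scipy.stats import chi2_contingency

# rows: lithology; columns: angular, subangular/subrounded, rounded
table = np.array([
    [42, 60, 12],   # hard volcanic
    [10, 35, 48],   # soft tuff
    [ 8, 30, 55],   # sedimentary
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.2g}")

# Standardized residuals show which cells drive the association,
# e.g. hard volcanic clasts staying angular:
resid = (table - expected) / np.sqrt(expected)
print(resid.round(1))
```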
Traffic effects on bird counts on North American Breeding Bird Survey routes
Griffith, Emily H.; Sauer, John R.; Royle, J. Andrew
2010-01-01
The North American Breeding Bird Survey (BBS) is an annual roadside survey used to estimate population change in >420 species of birds that breed in North America. Roadside sampling has been criticized, in part because traffic noise can interfere with bird counts. Since 1997, data have been collected on the numbers of vehicles that pass during counts at each stop. We assessed the effect of traffic by modeling total vehicles as a covariate of counts in hierarchical Poisson regression models used to estimate population change. We selected species for analysis that represent birds detected at low and high abundance and birds with songs of low and high frequencies. Increases in vehicle counts were associated with decreases in bird counts in most of the species examined. The size and direction of these effects remained relatively constant between two alternative models that we analyzed. Although this analysis indicated only a small effect of incorporating traffic effects when modeling roadside counts of birds, we suggest that continued evaluation of changes in traffic at BBS stops should be a component of future BBS analyses.
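A minimal sketch of the covariate idea (a plain Poisson regression rather than the hierarchical model used in the paper), with simulated stop-level data:

```python
# Poisson regression of stop-level bird counts on year, with the number of
# passing vehicles as an additional covariate (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 800
year = rng.integers(0, 20, n)                 # years since survey start
vehicles = rng.poisson(5, n)                  # vehicles passing during count
log_mu = 1.5 + 0.01 * year - 0.05 * vehicles  # traffic suppresses counts
birds = rng.poisson(np.exp(log_mu))

X = sm.add_constant(np.column_stack([year, vehicles]))
fit = sm.GLM(birds, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])
# A negative, significant vehicle coefficient reproduces the qualitative
# finding above: more traffic, fewer birds counted.
```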
Validation of the RT3 triaxial accelerometer for the assessment of physical activity.
Rowlands, Ann V; Thomas, Philip W M; Eston, Roger G; Topping, Rodney
2004-03-01
The aims of this study were to assess and compare the validity of the RT3 accelerometer for the assessment of physical activity in boys and men, to compare RT3 and Tritrac accelerometer counts, and to determine count cut-off values for moderate (≥3 and <6 METs) and vigorous (≥6 METs) activity. Nineteen boys (age: 9.5 +/- 0.8 yr) and 15 men (age: 20.7 +/- 1.4 yr) walked and ran on a treadmill, kicked a ball to and fro, played hopscotch, and sat quietly. An RT3 was worn on the right hip; boys also wore a Tritrac on the left hip. Oxygen consumption was expressed as a ratio of body mass raised to the power of 0.75 (sVO2). RT3 counts correlated significantly with sVO2 in boys (r = 0.87, P < 0.01) and men (r = 0.85, P < 0.01). However, during treadmill activities, RT3 counts were significantly higher for boys (P < 0.05). RT3 counts corresponding to "moderate" and "vigorous" activity were similar for boys and men for all activities (moderate = 970.2 for boys and 984.0 for men; vigorous = 2333.0 for boys and 2340.8 for men) but approximately 400 counts lower in men when only treadmill activities were considered. Tritrac counts correlated significantly with sVO2 in boys (r = 0.87, P < 0.01), but were significantly lower than RT3 counts across most activities (P < 0.05). The RT3 accelerometer is a good measure of physical activity for boys and men. However, moderate and vigorous intensity count thresholds differ for boys and men when the predominant activities are walking and running. RT3 counts are significantly higher than Tritrac counts for a number of activities. These findings have implications when comparing activity counts between studies using the different instruments.
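A tiny utility sketch applying the all-activity cut-offs reported above to label RT3 counts by intensity (illustrative only; the abstract notes that treadmill-only thresholds differ):

```python
# Map RT3 counts to intensity categories using the cut-offs quoted above.
def rt3_intensity(counts_per_min: float, group: str = "boys") -> str:
    cutoffs = {"boys": (970.2, 2333.0), "men": (984.0, 2340.8)}
    moderate, vigorous = cutoffs[group]
    if counts_per_min >= vigorous:
        return "vigorous"        # >= 6 METs equivalent
    if counts_per_min >= moderate:
        return "moderate"        # 3 to <6 METs equivalent
    return "light/sedentary"

print(rt3_intensity(1500, "boys"))   # -> moderate
print(rt3_intensity(2500, "men"))    # -> vigorous
```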
Activity patterns and monitoring numbers of Horned Puffins and Parakeet Auklets
Hatch, Shyla A.
2002-01-01
Nearshore counts of birds on the water and time-lapse photography were used to monitor seasonal activity patterns and interannual variation in numbers of Horned Puffins (Fratercula corniculata) and Parakeet Auklets (Aethia psittacula) at the Semidi Islands, Alaska. The best period for over-water counts was mid egg-laying through hatching in auklets and late prelaying through early hatching in puffins. Daily counts (07.00 h-09.30 h) varied widely, with peak numbers and days with few or no birds present occurring throughout the census period. Variation among annual means in four years amounted to 26% and 72% of total count variation in puffins and auklets, respectively. Time-lapse photography of nesting habitat in early incubation revealed a morning (08.00 h-12.00 h) peak in the number of puffins loitering on study plots. Birds recorded in time-lapse images never comprised more than a third of the estimated breeding population on a plot. Components of variance in the time-lapse study were 29% within hours, 9% among hours (08.00 h-12.00 h), and 62% among days (8-29 June). Variability of over-water and land-based counts is reduced by standardizing the time of day when counts are made, but weather conditions had little influence on either type of count. High interannual variation of population indices implies low power to detect numerical trends in crevice-nesting auklets and puffins.
Bernstein, P S; Minior, V K; Divon, M Y
1997-11-01
The presence of elevated nucleated red blood cell counts in neonatal blood has been associated with fetal hypoxia. We sought to determine whether small-for-gestational-age fetuses with abnormal umbilical artery Doppler velocity waveforms have elevated nucleated red blood cell counts. Hospital charts of neonates with the discharge diagnosis of small for gestational age (birth weight < 10th percentile) who were delivered between October 1988 and June 1995 were reviewed for antepartum testing, delivery conditions, and neonatal outcome. We studied fetuses who had an umbilical artery systolic/diastolic ratio within 3 days of delivery and a complete blood cell count on the first day of life. Multiple gestations, anomalous fetuses, and infants of diabetic mothers were excluded. Statistical analysis included the Student t test, chi-square analysis, analysis of variance, and simple and stepwise regression. Fifty-two infants met the inclusion criteria. Those with absent or reversed end-diastolic velocity (n = 19) had significantly greater nucleated red blood cell counts than did those with end-diastolic velocity present (n = 33) (nucleated red blood cells/100 nucleated cells +/- SD: 135.5 +/- 138 vs 17.4 +/- 23.7, p < 0.0001). These infants exhibited significantly longer time intervals for clearance of nucleated red blood cells from their circulation (p < 0.0001). They also had lower birth weights (p < 0.05), lower initial platelet count (p = 0.0006), lower arterial cord blood pH (p < 0.05), higher cord blood base deficit (p < 0.05), and an increased likelihood of cesarean section for "fetal distress" (p < 0.05). Multivariate analysis demonstrated that absent or reversed end-diastolic velocity (p < 0.0001) and low birth weight (p < 0.0001) contributed to the elevation of the nucleated red blood cell count, whereas gestational age at delivery was not a significant contributor. We observed significantly greater nucleated red blood cell counts and lower platelet counts in small-for-gestational-age fetuses with abnormal umbilical artery Doppler studies. This may suggest that antenatal thrombotic events lead to an increased placental impedance. Fetal response to this chronic condition may result in an increased nucleated red blood cell count.
An Extremely Low Power Quantum Optical Communication Link for Autonomous Robotic Explorers
NASA Technical Reports Server (NTRS)
Lekki, John; Nguyen, Quang-Viet; Bizon, Tom; Nguyen, Binh; Kojima, Jun
2007-01-01
One concept for planetary exploration involves using many small robotic landers that can cover more ground than a single conventional lander. In addressing this vision, NASA has been challenged in the National Nanotechnology Initiative to research the development of miniature robots built from nano-sized components. These robots face very significant challenges, such as mobility and communication, given their small size and limited power generation capability. The research presented here has been focused on developing a communications system that has the potential to provide ultra-low power communications for robots such as these. In this paper an optical communications technique that is based on transmitting recognizable sets of photons is presented. Previously, pairs of photons that have an entangled quantum state have been shown to be recognizable in ambient light. The main drawback to utilizing entangled photons is that they can only be generated through a very energy-inefficient nonlinear process. In this paper a new technique that generates sets of photons from pulsed sources is described and an experimental system demonstrating this technique is presented. This technique of generating photon sets from pulsed sources has the distinct advantage that it is much more flexible and energy efficient, and it is well suited to take advantage of the very high energy efficiencies that are possible when using nanoscale sources. For these reasons the communication system presented in this paper is well suited for use in very small, low power landers and rovers. In this paper a very low power optical communications system for miniature robots as small as 1 cu cm is addressed. The communication system is a variant of photon counting communications. Instead of counting individual photons, the system only counts the arrival of time-coincident sets of photons. Using sets of photons significantly decreases the bit error rate because they are highly identifiable in the presence of ambient light. An experiment demonstrating reliable communication over a distance of 70 meters using less than a billionth of a watt of radiated power is presented. The components used in this system were chosen so that they could in the future be integrated into a cubic centimeter device.
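A back-of-envelope sketch of why requiring coincident photon sets suppresses ambient background: with independent Poisson background of rate lam per detector and coincidence window tau, a standard accidental-coincidence estimate is R_k ≈ k·lam^k·tau^(k-1). The rates and window below are assumptions, not figures from the paper:

```python
# Accidental k-fold coincidence rate for independent Poisson background.
def accidental_rate(lam: float, tau: float, k: int) -> float:
    return k * lam**k * tau**(k - 1)

lam = 1e5     # background count rate per detector (counts/s), assumed
tau = 1e-9    # coincidence window (s), assumed
for k in (1, 2, 3):
    print(f"{k}-fold accidental rate: {accidental_rate(lam, tau, k):.3e} /s")
# Each extra required photon multiplies the false rate by roughly lam*tau
# (1e-4 here), which is how photon sets stay identifiable in ambient light.
```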
Takahashi, Kazuhiro; Kurokawa, Tomohiro; Oshiro, Yukio; Fukunaga, Kiyoshi; Sakashita, Shingo; Ohkohchi, Nobuhiro
2016-05-01
Peripheral platelet counts decrease after partial hepatectomy; however, the implications of this phenomenon are unclear. We assessed if the observed decrease in platelet counts was associated with postoperative liver function and morbidity (complications grade ≤ II according to the Clavien-Dindo classification). We enrolled 216 consecutive patients who underwent partial hepatectomy for primary liver cancers, metastatic liver cancers, benign tumors, and donor hepatectomy. We classified patients as either low or high platelet percentage (postoperative platelet count/preoperative platelet count) using the optimal cutoff value calculated by a receiver operating characteristic (ROC) curve analysis, and analyzed risk factors for delayed liver functional recovery and morbidity after hepatectomy. Delayed liver function recovery and morbidity were significantly correlated with the lowest value of platelet percentage based on ROC analysis. Using a cutoff value of 60% acquired by ROC analysis, univariate and multivariate analysis determined that postoperative lowest platelet percentage ≤ 60% was identified as an independent risk factor of delayed liver function recovery (odds ratio (OR) 6.85; P < 0.01) and morbidity (OR, 4.90; P < 0.01). Furthermore, patients with the lowest platelet percentage ≤ 60% had decreased postoperative prothrombin time ratio and serum albumin level and increased serum bilirubin level when compared with patients with platelet percentage ≥ 61%. A greater than 40% decrease in platelet count after partial hepatectomy was an independent risk factor for delayed liver function recovery and postoperative morbidity. In conclusion, the decrease in platelet counts is an early marker to predict the liver function recovery and complications after hepatectomy.
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-26
The source-count distribution of gamma-ray sources as a function of their flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so far undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1(+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1(+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to be at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
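For reference, the broken power law quoted above has the standard parametrization (the notation S_b for the break flux is an assumption; the numbers are those quoted in the abstract):

```latex
% Broken power-law source-count distribution with break flux S_b:
\[
  \frac{dN}{dS} \;\propto\;
  \begin{cases}
    \left( S / S_b \right)^{-n_1}, & S \ge S_b \\[4pt]
    \left( S / S_b \right)^{-n_2}, & S < S_b
  \end{cases}
  \qquad
  S_b \simeq 2.1\times 10^{-8}\,\mathrm{cm^{-2}\,s^{-1}},\;
  n_1 \simeq 3.1,\; n_2 \simeq 1.97 .
\]
```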
González Parrado, Zulima; Valencia Barrera, Rosa M; Fuertes Rodríguez, Carmen R; Vega Maray, Ana M; Pérez Romero, Rafael; Fraile, Roberto; Fernández González, Delia
2009-01-01
This paper reports on the behaviour of Alnus glutinosa (alder) pollen grains in the atmosphere of Ponferrada (León, NW Spain) from 1995 to 2006. The study, which sought to determine the effects of various weather-related parameters on Alnus pollen counts, was performed using a volumetric method. The main pollination period for this taxon is January-February. Alder pollen is one of the eight major airborne pollen allergens found in the study area. An analysis was made of the correlation between pollen counts and major weather-related parameters over each period. In general, the strongest positive correlation was with temperature, particularly maximum temperature. During each period, peak pollen counts occurred when the maximum temperature fell within the range 9-14 °C. Finally, multivariate analysis showed that the parameter exerting the greatest influence was temperature, a finding confirmed by Spearman correlation tests. Principal components analysis suggested that periods with high pollen counts were characterised by high maximum temperature, low rainfall and an absolute humidity of around 6 g m^-3. Use of this type of analysis in conjunction with other methods is essential for obtaining an accurate record of pollen-count variations over a given period.
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
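A single-voxel conjugate toy version of the emission-count posterior idea follows (the paper treats full tomographic data and marginalizes voxel activities under a prior): if a voxel emits n ~ Poisson(lam) events and each is detected independently with sensitivity eps, the posterior of n given y detected counts is y plus a Poisson(lam·(1-eps)) variable, so the MMSE estimate of n is y + lam·(1-eps):

```python
# Monte Carlo check of the single-voxel emission-count posterior mean.
import numpy as np

rng = np.random.default_rng(6)
lam, eps = 100.0, 0.3                 # mean emissions, detection sensitivity

n = rng.poisson(lam, 200_000)         # true emissions per trial
y = rng.binomial(n, eps)              # detected counts per trial
y_obs = 25

post_mean_mc = n[y == y_obs].mean()            # empirical E[n | y = 25]
post_mean_analytic = y_obs + lam * (1 - eps)   # y + lam*(1-eps) = 95
print(post_mean_mc, post_mean_analytic)

# Dividing the emission estimate by sensitivity and acquisition time then
# yields the voxel activity estimate, as described in the abstract.
```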
Triadic Closure in Configuration Models with Unbounded Degree Fluctuations
NASA Astrophysics Data System (ADS)
van der Hofstad, Remco; van Leeuwaarden, Johan S. H.; Stegehuis, Clara
2018-01-01
The configuration model generates random graphs with any given degree distribution, and thus serves as a null model for scale-free networks with power-law degrees and unbounded degree fluctuations. For this setting, we study the local clustering c(k), i.e., the probability that two neighbors of a degree-k node are neighbors themselves. We show that c(k) progressively falls off with k and the graph size n, and eventually, for k = Ω(√n), settles on a power law c(k) ~ n^(5-2τ) k^(-2(3-τ)) with τ ∈ (2,3) the power-law exponent of the degree distribution. This fall-off has been observed in the majority of real-world networks and signals the presence of modular or hierarchical structure. Our results agree with recent results for the hidden-variable model and also give the expected number of triangles in the configuration model when counting triangles only once despite the presence of multi-edges. We show that only triangles consisting of triplets with uniquely specified degrees contribute to the triangle count.
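The c(k) fall-off can be observed numerically in a configuration model with power-law degrees; a networkx sketch with illustrative parameters (the simple-graph projection mirrors the paper's caveat about counting triangles only once):

```python
# Measure average local clustering c(k) by degree in a configuration model
# with a power-law degree sequence (tau ~ 2.5); parameters illustrative.
import numpy as np
import networkx as nx
from collections import defaultdict

rng = np.random.default_rng(7)
n, tau = 20_000, 2.5
# Pareto-type power-law degrees, made even and at least 2
degrees = np.clip((rng.pareto(tau - 1, n) + 1).astype(int) * 2, 2, None)
if degrees.sum() % 2:                 # configuration model needs even total
    degrees[0] += 1

G = nx.configuration_model(degrees.tolist(), seed=42)
G = nx.Graph(G)                        # collapse multi-edges
G.remove_edges_from(nx.selfloop_edges(G))

cc = nx.clustering(G)
by_k = defaultdict(list)
for node, c in cc.items():
    by_k[G.degree(node)].append(c)
for k in sorted(by_k)[:10]:
    print(k, np.mean(by_k[k]))         # c(k) should fall off with k
```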
NASA Astrophysics Data System (ADS)
Granja, Carlos; Polansky, Stepan; Vykydal, Zdenek; Pospisil, Stanislav; Owens, Alan; Kozacek, Zdenek; Mellab, Karim; Simcak, Marek
2016-06-01
The Space Application of Timepix based Radiation Monitor (SATRAM) is a spacecraft platform radiation monitor on board the Proba-V satellite, launched into an 820 km altitude low Earth orbit in 2013. It is a technology demonstration payload based on the Timepix chip equipped with a 300 μm silicon sensor, with a per-pixel signal threshold of 8 keV, sensitive to low-energy X-rays and all charged particles including minimum ionizing particles. For X-rays the energy working range is 10-30 keV. Event count rates can be up to 10^6 cnt/(cm^2 s) for detailed event-by-event analysis or over 10^11 cnt/(cm^2 s) for particle-counting-only measurements. The single-quantum sensitivity (zero dark-current noise level), combined with per-pixel spectrometry and micro-scale pattern recognition analysis of single particle tracks, enables the composition (particle type) and spectral characterization (energy loss) of mixed radiation fields to be determined. Timepix's pixel granularity and particle tracking capability also provide directional sensitivity for energetic charged particles. The payload operates over a wide dynamic range in terms of absorbed dose (starting from single-particle doses at the pGy level), particle count rate (spanning the 10^6 to over 10^11 cnt/(cm^2 s) range quoted above), and particle energy loss (threshold at 150 eV/μm). The flight model in orbit was successfully commissioned in 2013 and has been sampling the space radiation field in the satellite environment along its orbit at a rate of several frames per minute of varying exposure time. This article describes the design and operation of SATRAM together with an overview of the response and resolving power for mixed radiation fields, including a summary of the principal data products (dose rate, equivalent dose rate, particle-type count rate). A preliminary evaluation of the response of the embedded Timepix detector to space radiation in the satellite environment is presented, together with first results in the form of a detailed visualization of the mixed radiation field at the position of the payload and the resulting spatially and temporally correlated radiation maps of cumulative dose rate along the satellite orbit.
Can Detectability Analysis Improve the Utility of Point Counts for Temperate Forest Raptors?
Temperate forest breeding raptors are poorly represented in typical point count surveys because these birds are cryptic and typically breed at low densities. In recent years, many new methods for estimating detectability during point counts have been developed, including distanc...
Controlling for varying effort in count surveys --an analysis of Christmas Bird Count Data
Link, W.A.; Sauer, J.R.
1999-01-01
The Christmas Bird Count (CBC) is a valuable source of information about midwinter populations of birds in the continental U.S. and Canada. Analysis of CBC data is complicated by substantial variation among sites and years in the effort expended in counting; this feature of the CBC is common to many other wildlife surveys. Specification of a method for adjusting counts for effort is a matter of some controversy. Here, we present models for longitudinal count surveys with varying effort; these describe the effect of effort as proportional to exp(B·effort^p), where B and p are parameters. For any fixed p, our models are loglinear in the transformed explanatory variable (effort)^p and other covariables. Hence we fit a collection of loglinear models corresponding to a range of values of p, and select the best effort adjustment from among these on the basis of fit statistics. We apply this procedure to data for six bird species in five regions, for the period 1959-1988.
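The model-selection recipe described above (loglinear fits over a grid of p, chosen by fit statistics) can be sketched as follows with simulated data; using the deviance as the fit statistic here is an assumption:

```python
# Fit Poisson loglinear models with (effort)**p as covariate over a grid
# of p values and keep the best-fitting transformation.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 600
effort = rng.uniform(1, 40, n)                 # e.g. party-hours per circle
true_p, true_B = 0.5, 0.4
counts = rng.poisson(np.exp(1.0 + true_B * effort**true_p))

best = None
for p in np.arange(0.1, 1.51, 0.1):
    X = sm.add_constant(effort**p)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    if best is None or fit.deviance < best[1]:
        best = (p, fit.deviance, fit.params[1])

p_hat, dev, B_hat = best
print(f"selected p={p_hat:.1f}, deviance={dev:.1f}, B estimate={B_hat:.3f}")
```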
Grimaldi, E; Del Vecchio, L; Scopacasa, F; Lo Pardo, C; Capone, F; Pariante, S; Scalia, G; De Caterina, M
2009-04-01
The Abbott Cell-Dyn Sapphire is a new-generation haematology analyser. The system uses optical/fluorescence flow cytometry in combination with electronic impedance to produce a full blood count. Optical and impedance are the default methods for platelet counting, while automated CD61-immunoplatelet analysis can be run as a selectable test. The aim of this study was to determine the platelet count performance of the three counting methods available on the instrument and to compare the results with those provided by a Becton Dickinson FACSCalibur flow cytometer used as the reference method. A lipid interference experiment was also performed. Linearity, carryover and precision were good, and satisfactory agreement with the reference method was found for the impedance, optical and CD61-immunoplatelet analyses, although the latter provided the closest results in comparison with flow cytometry. In the lipid interference experiment, a moderate inaccuracy of optical and immunoplatelet counts was observed starting from a very high lipid value.
NASA Technical Reports Server (NTRS)
Susko, M.
1979-01-01
The purpose of this experimental research was to compare Marshall Space Flight Center's electrets with Thiokol's fixed flow air samplers during the Space Shuttle Solid Rocket Booster Demonstration Model-3 static test firing on October 19, 1978. The measurement of rocket exhaust effluents by Thiokol's samplers and MSFC's electrets indicated that the firing of the Solid Rocket Booster had no significant effect on the quality of the air sampled. The highest measurement by Thiokol's samplers was obtained at Plant 3 (site 11), approximately 8 km at a 113 degree heading from the static test stand. At sites 11, 12, and 5, Thiokol's fixed flow air samplers measured 0.0048, 0.00016, and 0.00012 mg/m3 of Cl. Alongside the fixed flow measurements, the electret counts from X-ray spectroscopy were 685, 894, and 719 counts. After background corrections, the counts were 334, 543, and 368, for an average of 415 counts. An additional electret, E20, which was the only measurement device at a site approximately 20 km northeast of the test site where no power was available, obtained 901 counts. After background correction, the count was 550. Again, these data indicate that no significant rocket exhaust effluents were measured at the test site.
Jiménez-Banzo, Ana; Ragàs, Xavier; Kapusta, Peter; Nonell, Santi
2008-09-01
Two recent advances in optoelectronics, namely novel near-IR sensitive photomultipliers and inexpensive yet powerful diode-pumped solid-state lasers working at kHz repetition rates, enable the time-resolved detection of singlet oxygen (O2(a¹Δg)) phosphorescence in photon counting mode, thereby boosting the time resolution, sensitivity, and dynamic range of this well-established detection technique. The principles underlying this novel approach and selected examples of applications are provided in this perspective, which illustrate the advantages over the conventional analog detection mode.
Microradiography with Semiconductor Pixel Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri
High-resolution radiography (with X-rays, neutrons, heavy charged particles, ...), often also exploited in tomographic mode to provide 3D images, is a powerful imaging technique for the instant and nondestructive visualization of the fine internal structure of objects. Novel types of semiconductor single-particle counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rates and a virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.
Photon counting statistics analysis of biophotons from hands.
Jung, Hyun-Hee; Woo, Won-Myung; Yang, Joon-Mo; Choi, Chunho; Lee, Jonghan; Yoon, Gilwon; Yang, Jong S; Soh, Kwang-Sup
2003-05-01
The photon counting statistics of biophotons emitted from hands are studied with a view to testing their agreement with the Poisson distribution. The moments of the observed probability up to seventh order have been evaluated. The moments of biophoton emission from hands are in good agreement with the theoretical values of the Poisson distribution, while those of the dark counts of the photomultiplier tube show large deviations from them. The present results are consistent with the conventional delta-value analysis of the second moment of probability.
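A small sketch of one way to carry out such a check, using the fact that for a Poisson distribution the k-th factorial moment equals lambda^k; the abstract's exact moment convention is not restated here:

```python
import numpy as np

def factorial_moment_ratios(counts, kmax=7):
    """Compare empirical factorial moments with their Poisson values:
    for Poisson counts E[n(n-1)...(n-k+1)] = lambda**k, so ratios near 1
    indicate Poissonian statistics (illustrative sketch)."""
    n = np.asarray(counts, dtype=float)
    lam = n.mean()
    ratios = {}
    for k in range(1, kmax + 1):
        falling = np.prod([n - j for j in range(k)], axis=0)  # n(n-1)...(n-k+1)
        ratios[k] = falling.mean() / lam ** k
    return ratios
```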
Dubelaar, G B; Gerritzen, P L; Beeker, A E; Jonker, R R; Tangen, K
1999-12-01
The high costs of microscopical determination and counting of phytoplankton often limit sampling frequencies to below an acceptable level for the monitoring of dynamic ecosystems. Although it has limited discrimination power, flow cytometry allows the analysis of large numbers of samples to a level that is sufficient for many basic monitoring jobs. For this purpose, flow cytometers should not be restricted to research laboratories. We report here on the development of an in situ flow cytometer for autonomous operation inside a small moored buoy or on other platforms. The operational specifications served a wide range of applications in the aquatic field. Specific conditions had to be met with respect to the operation platform and autonomy. The result was a small, battery-operated flow cytometer requiring no external sheath fluid supply. Because it was designed to operate in a buoy, we call it CytoBuoy. Sampling, analysis, and radio transmission of the data proceed automatically at user-defined intervals. A powerful feature is the acquisition and radio transmission of the full detector pulse shape of each particle. This provides valuable morphological information for particles larger than the 5-microm laser focus. CytoBuoy allows on-line in situ particle analysis, estimation of phytoplankton biomass, and discrimination between different phytoplankton groups. This will increase the applicability of flow cytometry in the field of environmental monitoring. Copyright 1999 Wiley-Liss, Inc.
Modification of Poisson Distribution in Radioactive Particle Counting.
ERIC Educational Resources Information Center
Drotter, Michael T.
This paper focuses on radioactive particle counting statistics in laboratory and field applications, and is intended to aid the Health Physics technician's understanding of the effect of indeterminate errors on radioactive particle counting. It indicates that although the statistical analysis of radioactive disintegration is best described by a Poisson…
NASA Technical Reports Server (NTRS)
Ryu, J. Y.; Wada, M.
1985-01-01
In order to examine the stability of neutron monitor observations, the monthly average counting rates of each neutron monitor are correlated with those of the Kiel neutron monitor. The regression coefficients thus obtained are compared with the coupling coefficients of the isotropic intensity variation. The results of the comparisons for five-year periods during 1963 to 1982, and for the whole period, are given. A variation spectrum following a single power law with an exponent of -0.75 up to 50 GV is not unsatisfactory. More than half of the stations show correlations with coefficients greater than 0.9. Some stations have shifted the level of their mean counting rates by changing instrumental characteristics; such shifts can be adjusted for.
Photon Counting System for High-Sensitivity Detection of Bioluminescence at Optical Fiber End.
Iinuma, Masataka; Kadoya, Yutaka; Kuroda, Akio
2016-01-01
The technique of photon counting is widely used in various fields and is also applicable to the high-sensitivity detection of luminescence. Thanks to the recent development of single-photon detectors with avalanche photodiodes (APDs), photon counting systems with optical fibers have become powerful tools for the detection of bioluminescence at an optical fiber end, because they allow full use of the compactness, simple operation, and high quantum efficiency of APD detectors. This optical fiber-based system also offers the possibility of improving the sensitivity of local detection of adenosine triphosphate (ATP) through high-sensitivity detection of the bioluminescence. In this chapter, we introduce the basic concept of the optical fiber-based system and explain how to construct and use it.
Radioactivities of Long Duration Exposure Facility (LDEF) materials: Baggage and bonanzas
NASA Technical Reports Server (NTRS)
Smith, Alan R.; Hurley, Donna L.
1991-01-01
Radioactivities in materials onboard the returned Long Duration Exposure Facility (LDEF) satellite were studied by a variety of techniques. Among the most powerful is low-background Ge semiconductor detector gamma-ray spectrometry. The observed radioactivities are of two origins: radionuclides produced by nuclear reactions with the radiation field in orbit, and radionuclides present initially as contaminants in materials used for construction of the spacecraft and experimental assemblies. In the first category are experiment-related monitor foils and tomato seeds, and such spacecraft materials as Al, stainless steel, and Ti. In the second category are Al, Be, Ti, V, and some special glasses. Measured peak-area count rates from both categories range from a high value of about 1 count per minute down to less than 0.001 count per minute. Successful measurement of count rates toward the low end of this range can be achieved only through low-background techniques, such as those used to obtain the results presented here.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
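A minimal Python sketch of the scheme as described, with hypothetical calibration limits and a half-count convergence test; not the paper's FORTRAN:

```python
import numpy as np

def eu_to_counts(coeffs, eu, count_limits, eu_limits, max_iter=20):
    """Invert a counts-to-EU calibration polynomial (coefficients lowest
    order first) by Newton-Raphson, seeded by reverse linear interpolation
    between the EU limits; limits and names are hypothetical."""
    poly = np.polynomial.Polynomial(coeffs)
    dpoly = poly.deriv()
    (c_lo, c_hi), (eu_lo, eu_hi) = count_limits, eu_limits
    # Reverse linear interpolation gives the initial count estimate.
    c = c_lo + (eu - eu_lo) * (c_hi - c_lo) / (eu_hi - eu_lo)
    for _ in range(max_iter):
        step = (poly(c) - eu) / dpoly(c)  # Newton-Raphson update
        c -= step
        if abs(step) < 0.5:               # within half a telemetry count
            break
    return int(round(c))
```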
Schistosoma haematobium Infection That Mimics Bladder Cancer in a 66-Year-Old Ethnic Egyptian Man.
Zepeda, Celenne Morfin; Coffey, Kristen H
2015-01-01
A 66-year-old ethnic Egyptian man presented with hematuria. The patient had a history of multiple episodes of gross hematuria over the past 5 years. Because the hematuria usually resolved on its own, he did not seek medical attention during that time. Bladder cancer was suspected. The patient had a history of coronary artery disease, hypertension, nephrolithiasis, congestive heart failure, lifelong smoking, and ischemic cardiomyopathy. He has been taking the anticoagulants clopidogrel (Plavix) and warfarin (Coumadin). The patient is originally from Egypt and has been living in the United States for the past 10 years. A complete blood count showed a hemoglobin of 13.0 g per dL (reference range, 14.0 to 18.0 g per dL), hematocrit 40% (40% to 54%), red blood cell count (RBC) 4.65 × 10^9 per L (4.60 to 6.00), and platelet count 179 × 10^9 per L (150 to 450). The urinalysis results showed 3+ protein, 4+ blood, and urine RBC of greater than 100 per high-power field (hpf). The urinalysis results did not indicate the presence of parasitic ova or adult parasites. Based on these results, the physician ordered cystoscopic testing, suspecting bladder cancer. Analysis of the bladder tissue showed inflammation (Image 1) and several ova that were consistent with developing Schistosoma (Image 2). Many of the ova were calcified and surrounded by severely inflamed tissue (Image 3). Copyright© by the American Society for Clinical Pathology (ASCP).
Medland, Sarah E; Loesch, Danuta Z; Mdzewski, Bogdan; Zhu, Gu; Montgomery, Grant W; Martin, Nicholas G
2007-01-01
The finger ridge count (a measure of pattern size) is one of the most heritable complex traits studied in humans and has been considered a model human polygenic trait in quantitative genetic analysis. Here, we report the results of the first genome-wide linkage scan for finger ridge count in a sample of 2,114 offspring from 922 nuclear families. Both univariate linkage to the absolute ridge count (a sum of all the ridge counts on all ten fingers), and multivariate linkage analyses of the counts on individual fingers, were conducted. The multivariate analyses yielded significant linkage to 5q14.1 (Logarithm of odds [LOD] = 3.34, pointwise-empirical p-value = 0.00025) that was predominantly driven by linkage to the ring, index, and middle fingers. The strongest univariate linkage was to 1q42.2 (LOD = 2.04, point-wise p-value = 0.002, genome-wide p-value = 0.29). In summary, the combination of univariate and multivariate results was more informative than simple univariate analyses alone. Patterns of quantitative trait loci factor loadings consistent with developmental fields were observed, and the simple pleiotropic model underlying the absolute ridge count was not sufficient to characterize the interrelationships between the ridge counts of individual fingers. PMID:17907812
The ultraviolet behavior of quantum gravity
NASA Astrophysics Data System (ADS)
Anselmi, Damiano; Piva, Marco
2018-05-01
A theory of quantum gravity has been recently proposed by means of a novel quantization prescription, which is able to turn the poles of the free propagators that are due to the higher derivatives into fakeons. The classical Lagrangian contains the cosmological term, the Hilbert term, √(-g) R_{μν} R^{μν} and √(-g) R². In this paper, we compute the one-loop renormalization of the theory and the absorptive part of the graviton self-energy. The results illustrate the mechanism that makes renormalizability compatible with unitarity. The fakeons disentangle the real part of the self-energy from the imaginary part. The former obeys a renormalizable power counting, while the latter obeys the nonrenormalizable power counting of the low-energy expansion and is consistent with unitarity in the limit of vanishing cosmological constant. The value of the absorptive part is related to the central charge c of the matter fields coupled to gravity.
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses, created by imposing increasingly large artificial dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
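A toy illustration of the backward-extrapolation idea on an event list; the non-paralyzing dead-time filter and the linear extrapolation are simplifying assumptions, not the authors' exact loss model:

```python
import numpy as np

def backward_extrapolated_cps(timestamps, taus):
    """Impose increasingly large artificial (non-paralyzing) dead times on
    the event list, measure the surviving count rate for each, and
    extrapolate the rate back to zero dead time."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    duration = t[-1] - t[0]
    rates = []
    for tau in taus:
        last, kept = -np.inf, 0
        for ti in t:
            if ti - last >= tau:   # event accepted outside the dead window
                kept += 1
                last = ti
        rates.append(kept / duration)
    slope, intercept = np.polyfit(taus, rates, 1)  # simplistic linear model
    return intercept                               # estimated CPS at tau = 0
```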
Hong, Hyo-Lim; Kim, Sung-Han; Huh, Jin Won; Sung, Heungsup; Lee, Sang-Oh; Kim, Mi-Na; Jeong, Jin-Yong; Lim, Chae-Man; Kim, Yang Soo; Woo, Jun Hee; Koh, Younsuck
2014-01-01
Background: The usefulness of bronchoalveolar lavage (BAL) fluid cellular analysis in pneumonia has not been adequately evaluated. This study investigated the ability of cellular analysis of BAL fluid to differentiate bacterial pneumonia from viral pneumonia in adult patients admitted to the intensive care unit. Methods: BAL fluid cellular analysis was evaluated in 47 adult patients who underwent bronchoscopic BAL following less than 24 hours of antimicrobial agent exposure. The abilities of BAL fluid total white blood cell (WBC) counts and differential cell counts to differentiate between bacterial and viral pneumonia were evaluated using receiver operating characteristic (ROC) curve analysis. Results: Bacterial pneumonia (n = 24) and viral pneumonia (n = 23) were frequently associated with neutrophilic pleocytosis in BAL fluid. The BAL fluid median total WBC count (2,815/µL vs. 300/µL, P<0.001) and percentage of neutrophils (80.5% vs. 54.0%, P = 0.02) were significantly higher in the bacterial pneumonia group than in the viral pneumonia group. In ROC curve analysis, BAL fluid total WBC count showed the best discrimination, with an area under the curve of 0.855 (95% CI, 0.750-0.960). A BAL fluid total WBC count ≥510/µL had a sensitivity of 83.3%, specificity of 78.3%, positive likelihood ratio (PLR) of 3.83, and negative likelihood ratio (NLR) of 0.21. When analyzed in combination with serum procalcitonin or C-reactive protein, sensitivity was 95.8%, specificity was 95.7%, PLR was 8.63, and NLR was 0.07. BAL fluid total WBC count ≥510/µL was an independent predictor of bacterial pneumonia, with an adjusted odds ratio of 13.5 in multiple logistic regression analysis. Conclusions: Cellular analysis of BAL fluid can aid early differential diagnosis of bacterial pneumonia from viral pneumonia in critically ill patients. PMID:24824328
The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum
NASA Astrophysics Data System (ADS)
Frith, W. J.; Outram, P. J.; Shanks, T.
2005-06-01
We present new evidence for a large deficiency in the local galaxy distribution situated in the ˜4000 deg² APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K bands indicate a deficiency in the local galaxy distribution of ˜30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the correlation function observed in the 2dFGRS or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l<30) ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l<100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z ≈ 0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a `fair sample' of the Universe. Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.
Rusin, Spencer; Covey, Shannon; Perjar, Irina; Hollyfield, Johnny; Speck, Olga; Woodward, Kimberly; Woosley, John T.; Dellon, Evan S.
2017-01-01
Summary: Many studies of eosinophilic esophagitis (EoE) utilize expert pathology review, but it is unknown whether less experienced pathologists can reliably assess EoE histology. We aimed to determine whether trainee pathologists can accurately quantify esophageal eosinophil counts and identify associated histologic features of EoE, as compared to expert pathologists. We used a set of 40 digitized slides from patients with varying degrees of esophageal eosinophilia. Each of six trainee pathologists underwent a teaching session and used our validated protocol to determine eosinophil counts and associated EoE findings. The same slides had previously been evaluated by expert pathologists, and these results comprised the gold standard. Eosinophil counts were correlated, and agreement was calculated for the diagnostic threshold of 15 eosinophils per high-power field (eos/hpf) as well as for associated EoE findings. Peak eosinophil counts were highly correlated between the trainees and the gold standard (Rho ranged from 0.87–0.92; p<0.001 for all). Peak counts were also highly correlated between trainees (0.75–0.91; p<0.001), and results were similar for mean counts. Agreement was excellent for determining if a count exceeded the diagnostic threshold (kappa ranged from 0.83 to 0.89; p<0.001). Agreement was very good for eosinophil degranulation (kappa 0.54 to 0.83; p<0.01) and spongiosis (kappa 0.44–0.87; p<0.01), but was lower for eosinophil microabscesses (kappa 0.37–0.64; p<0.01). In conclusion, using a teaching session, digitized slide set, and validated protocol, the agreement between pathology trainees and expert pathologists for determining eosinophil counts was excellent. Agreement was very good for eosinophil degranulation and spongiosis, but less so for microabscesses. PMID:28041975
Rusin, Spencer; Covey, Shannon; Perjar, Irina; Hollyfield, Johnny; Speck, Olga; Woodward, Kimberly; Woosley, John T; Dellon, Evan S
2017-04-01
Many studies of eosinophilic esophagitis (EoE) use expert pathology review, but it is unknown whether less experienced pathologists can reliably assess EoE histology. We aimed to determine whether trainee pathologists can accurately quantify esophageal eosinophil counts and identify associated histologic features of EoE, as compared with expert pathologists. We used a set of 40 digitized slides from patients with varying degrees of esophageal eosinophilia. Each of 6 trainee pathologists underwent a teaching session and used our validated protocol to determine eosinophil counts and associated EoE findings. The same slides had previously been evaluated by expert pathologists, and these results comprised the criterion standard. Eosinophil counts were correlated, and agreement was calculated for the diagnostic threshold of 15 eosinophils per high-power field as well as for associated EoE findings. Peak eosinophil counts were highly correlated between the trainees and the criterion standard (ρ ranged from 0.87 to 0.92; P<.001 for all). Peak counts were also highly correlated between trainees (0.75-0.91; P<.001), and results were similar for mean counts. Agreement was excellent for determining if a count exceeded the diagnostic threshold (κ ranged from 0.83 to 0.89; P<.001). Agreement was very good for eosinophil degranulation (κ = 0.54-0.83; P<.01) and spongiosis (κ = 0.44-0.87; P<.01) but was lower for eosinophil microabscesses (κ = 0.37-0.64; P<.01). In conclusion, using a teaching session, digitized slide set, and validated protocol, the agreement between pathology trainees and expert pathologists for determining eosinophil counts was excellent. Agreement was very good for eosinophil degranulation and spongiosis but less so for microabscesses. Copyright © 2016 Elsevier Inc. All rights reserved.
2017-01-01
Summary: The present study was done to optimize power ultrasound processing for maximizing diastase activity and minimizing hydroxymethylfurfural (HMF) content in honey using response surface methodology. An experimental design with treatment time (1-15 min), amplitude (20-100%) and volume (40-80 mL) as independent variables under controlled temperature conditions was studied, and it was concluded that a treatment time of 8 min, amplitude of 60% and volume of 60 mL give optimal diastase activity and HMF content, i.e. 32.07 Schade units and 30.14 mg/kg, respectively. Further thermal profile analyses were done with initial heating temperatures of 65, 75, 85 and 95 ºC until the temperature of the honey reached 65 ºC, followed by a holding time of 25 min at 65 ºC, and the results were compared with the thermal profile of honey treated with optimized power ultrasound. Quality characteristics like moisture, pH, diastase activity, HMF content, colour parameters and total colour difference were least affected by the optimized power ultrasound treatment. Microbiological analysis also showed lower counts of aerobic mesophilic bacteria in ultrasonically treated honey than in thermally processed honey samples, and complete destruction of coliforms, yeasts and moulds. Thus, it was concluded that power ultrasound under the suggested operating conditions is an alternative nonthermal processing technique for honey. PMID:29540991
Guaifenesin and increased sperm motility: a preliminary case report.
Means, Gary; Berry-Cabán, Cristóbal S; Hammermeuller, Kurt
2010-12-20
A review of the literature and an extensive Medline search revealed that this is the first case report of the use of guaifenesin to increase sperm motility. A 32-year-old male presented for an infertility evaluation. He reported an inability to conceive with his wife after 18 months of unprotected intercourse. A semen analysis was performed that included spermatozoa count, liquefaction, morphology, motility, viscosity and volume. Initial results of the semen analysis demonstrated low sperm count and motility. The provider offered treatment with guaifenesin 600 mg extended release tablets twice daily. Two months after guaifenesin therapy, the semen analysis was repeated and demonstrated marked improvement in both total sperm count and motility. Evidence for the effectiveness of guaifenesin is almost entirely anecdotal. Given the mechanism of action of guaifenesin, it is not clear from this case why the patient demonstrated such a large improvement in both sperm count and motility. Additional studies of the effects of guaifenesin on male fertility could yield information on the medication's effect on men with normal or decreased total sperm counts.
Guaifenesin and increased sperm motility: a preliminary case report
Means, Gary; Berry-Cabán, Cristóbal S; Hammermeuller, Kurt
2011-01-01
Background: A review of the literature and an extensive Medline search revealed that this is the first case report of the use of guaifenesin to increase sperm motility. Case: A 32-year-old male presented for an infertility evaluation. He reported an inability to conceive with his wife after 18 months of unprotected intercourse. A semen analysis was performed that included spermatozoa count, liquefaction, morphology, motility, viscosity and volume. Initial results of the semen analysis demonstrated low sperm count and motility. The provider offered treatment with guaifenesin 600 mg extended release tablets twice daily. Two months after guaifenesin therapy, the semen analysis was repeated and demonstrated marked improvement in both total sperm count and motility. Conclusion: Evidence for the effectiveness of guaifenesin is almost entirely anecdotal. Given the mechanism of action of guaifenesin, it is not clear from this case why the patient demonstrated such a large improvement in both sperm count and motility. Additional studies of the effects of guaifenesin on male fertility could yield information on the medication's effect on men with normal or decreased total sperm counts. PMID:21403786
Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O
2016-06-01
Parameter variation over time in pharmacometric analyses can be characterized as within-subject parameter variability (WSV) in pharmacometric models. WSV has previously been modeled successfully using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to further explore the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of the simulations confirmed the gain of introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification when parameters changed systematically over time but this was not recognized in the structural model. The approaches proposed in this study offer strategies to characterize WSV and are not restricted to count data.
Cosmological measurements with general relativistic galaxy correlations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Montanari, Francesco; Durrer, Ruth
We investigate the cosmological dependence and the constraining power of large-scale galaxy correlations, including all redshift distortions, wide-angle, lensing and gravitational potential effects on linear scales. We analyze the cosmological information present in the lensing convergence and in the gravitational potential terms describing the so-called "relativistic effects", and we find that, while smaller than the information contained in intrinsic galaxy clustering, it is not negligible. We investigate how neglecting these terms biases cosmological measurements performed by future spectroscopic and photometric large-scale surveys such as SKA and Euclid. We perform a Fisher analysis using the CLASS code, modified to include scale-dependent galaxy bias and redshift-dependent magnification and evolution bias. Our results show that neglecting relativistic terms, especially lensing convergence, introduces an error in the forecasted precision in measuring cosmological parameters of the order of a few tens of percent, in particular when measuring the matter content of the Universe and primordial non-Gaussianity parameters. The analysis suggests a possible substantial systematic error in cosmological parameter constraints. Therefore, we argue that radial correlations and integrated relativistic terms need to be taken into account when forecasting the constraining power of future large-scale number counts of galaxy surveys.
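For orientation, a generic Fisher-forecast sketch (model-agnostic; not the authors' modified CLASS pipeline), where `observables` is an assumed callable mapping parameters to a data vector:

```python
import numpy as np

def fisher_matrix(observables, theta0, data_cov, eps=1e-4):
    """Central-difference derivatives of the observable vector with respect
    to each parameter, contracted with the inverse data covariance."""
    theta0 = np.asarray(theta0, dtype=float)
    cinv = np.linalg.inv(data_cov)
    derivs = []
    for i in range(theta0.size):
        step = np.zeros_like(theta0)
        step[i] = eps * max(abs(theta0[i]), 1.0)
        d = (observables(theta0 + step) - observables(theta0 - step)) / (2 * step[i])
        derivs.append(d)
    D = np.vstack(derivs)          # n_params x n_observables
    return D @ cinv @ D.T          # forecast information matrix
```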
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places higher demands on quantification methods using mass spectrometry technology. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis making full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count coupled with the spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with a better dynamic range.
Littoral Combat Ship Manpower, an Overview of Officer Characteristics and Placement
2013-03-01
...maritime force: 1.) Networks should be the central organizing principle of the fleet, and its sensing and fighting power should be distributed across... "assured access" force; and 4.) Numbers of hulls count (quantity had its own quality) and consequently the fleet's combat power should be...
Bogren, Anna; Teles, Ricardo P; Torresyap, Gay; Haffajee, Anne D; Socransky, Sigmund S; Jönsson, Kerstin; Wennström, Jan L
2008-02-01
To test the hypothesis of a superior clinical and microbiological effect of the combined use of a powered toothbrush + triclosan-containing dentifrice compared with a manual toothbrush + regular fluoride-containing dentifrice in periodontal maintenance patients. A total of 128 periodontitis subjects involved in recall programmes were randomized to use either a powered toothbrush with triclosan dentifrice (test) or a manual toothbrush and standard dentifrice (control). Supportive periodontal treatment was provided at baseline and every 6 months. Plaque, bleeding on probing (BoP), probing pocket depth (PPD) and relative attachment level (RAL) were scored at baseline, 1, 2 and 3 years. Subgingival plaque samples were taken and analysed for their content of 40 bacterial species at each examination interval. All analyses were performed according to the intention-to-treat principle. Both groups showed significant reductions in BoP, PPD and in the mean total counts of the 40 bacterial species between baseline and 3 years, while plaque scores and RAL remained almost unchanged. No significant differences between the two prevention programmes were found for any of the clinical outcome variables or in the mean counts of the various bacterial species. The study failed to demonstrate superior clinical and microbiological effects of a powered toothbrush + triclosan dentifrice compared with a manual toothbrush + standard fluoride dentifrice in periodontitis-susceptible patients on regular maintenance therapy.
Modeling the frequency-dependent detective quantum efficiency of photon-counting x-ray detectors.
Stierstorfer, Karl
2018-01-01
To find a simple model for the frequency-dependent detective quantum efficiency (DQE) of photon-counting detectors in the low-flux limit. Formulas for the spatial cross-talk, the noise power spectrum and the DQE of a photon-counting detector working at a given threshold are derived. The parameters are probabilities for types of events, such as single counts in the central pixel, double counts in the central pixel and a neighboring pixel, or a single count in a neighboring pixel only. These probabilities can be derived in a simple model making extensive use of Monte Carlo techniques: the Monte Carlo x-ray propagation program MOCASSIM is used to simulate the energy deposition from the x-rays in the detector material, and a simple charge cloud model using Gaussian clouds of fixed width is used for the propagation of the electric charge generated by the primary interactions. Both stages are combined in a Monte Carlo simulation randomizing the location of impact, which finally produces the required probabilities. The parameters of the charge cloud model are fitted to the spectral response to a polychromatic spectrum measured with our prototype detector. Based on the Monte Carlo model, the DQE of photon-counting detectors as a function of spatial frequency is calculated for various pixel sizes, photon energies, and thresholds. The frequency-dependent DQE of a photon-counting detector in the low-flux limit can be described with an equation containing only a small set of probabilities as input. Estimates for the probabilities can be derived from a simple model of the detector physics. © 2017 American Association of Physicists in Medicine.
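A toy one-dimensional version of such a simulation, with assumed pixel size, cloud width and threshold, just to show how event-type probabilities can be estimated:

```python
import numpy as np
from scipy.stats import norm

def event_type_probabilities(pixel=200.0, sigma=25.0, frac_thresh=0.3,
                             n=200_000, seed=1):
    """Deposit a 1-D Gaussian charge cloud at a random position inside a
    pixel and classify each event by which pixels collect a fraction of
    the cloud above threshold; all parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, pixel, n)                # impact position in pixel
    center = norm.cdf((pixel - x) / sigma) - norm.cdf(-x / sigma)
    left = norm.cdf(-x / sigma)                   # charge lost to neighbors
    right = 1.0 - norm.cdf((pixel - x) / sigma)
    neighbor = np.maximum(left, right)
    single = (center >= frac_thresh) & (neighbor < frac_thresh)
    double = (center >= frac_thresh) & (neighbor >= frac_thresh)
    neighbor_only = (center < frac_thresh) & (neighbor >= frac_thresh)
    return single.mean(), double.mean(), neighbor_only.mean()
```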
Multi-Parameter Linear Least-Squares Fitting to Poisson Data One Count at a Time
NASA Technical Reports Server (NTRS)
Wheaton, W.; Dunklee, A.; Jacobson, A.; Ling, J.; Mahoney, W.; Radocinski, R.
1993-01-01
A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multi-component linear model, with underlying physical count rates or fluxes which are to be estimated from the data.
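A generic sketch of such a decomposition by maximum likelihood, assuming a known response matrix; this is illustrative, not the authors' algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def poisson_ml_fluxes(counts, response):
    """Estimate non-negative component rates f for Poisson counts with
    expectation mu = response @ f by minimizing the negative
    log-likelihood (constant terms dropped)."""
    counts = np.asarray(counts, dtype=float)

    def nll(f):
        mu = response @ f
        return np.sum(mu - counts * np.log(mu + 1e-300))

    f0 = np.full(response.shape[1], counts.sum() / response.sum())
    res = minimize(nll, f0, bounds=[(0.0, None)] * response.shape[1])
    return res.x
```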
Bonomi, Alberto G; Westerterp, Klaas R
2016-01-01
Background: Physical activity is recommended to promote healthy aging. Defining the importance of activities such as walking in achieving higher levels of physical activity might provide indications for interventions. Objective: To describe the importance of walking in achieving higher levels of physical activity in older adults. Methods: The study included 42 healthy subjects aged between 51 and 84 years (mean body mass index 25.6 kg/m2 [SD 2.6]). Physical activity, walking, and nonwalking activity were monitored with an accelerometer for 2 weeks. Physical activity was quantified by accelerometer-derived activity counts. An algorithm based on template matching and signal power was developed to classify activity counts into nonwalking counts, short walk counts, and long walk counts. Additionally, in a subgroup of 31 subjects energy expenditure was measured using doubly labeled water to derive physical activity level (PAL). Results: Subjects had a mean PAL of 1.84 (SD 0.19, range 1.43-2.36). About 20% of the activity time (21% [SD 8]) was spent walking, which accounted for about 40% of the total counts (43% [SD 11]). Short bouts composed 83% (SD 9) of walking time, providing 81% (SD 11) of walking counts. A stepwise regression model to predict PAL included nonwalking counts and short walk counts, explaining 58% of the variance of PAL (standard error of the estimate=0.12). Walking activities produced more counts per minute than nonwalking activities (P<.001). Long walks produced more counts per minute than short walks (P=.001). Nonwalking counts were independent of walking counts (r=−.05, P=.38). Conclusions: Walking activities are a major contributor to physical activity in older adults. Walking activities occur at higher intensities than nonwalking activities, which might prevent individuals from engaging in more walking activity. Finally, subjects who engage in more walking activities do not tend to compensate by limiting nonwalking activities. Trial Registration: ClinicalTrials.gov NCT01609764; https://clinicaltrials.gov/ct2/show/NCT01609764 (Archived by WebCite at http://www.webcitation.org/6grls0wAp) PMID:27268471
Simplified power processing for ion-thruster subsystems
NASA Technical Reports Server (NTRS)
Wessel, F. J.; Hancock, D. J.
1983-01-01
A design for a greatly simplified power-processing unit (SPPU) for the 8-cm diameter mercury-ion-thruster subsystem is discussed. This SPPU design provides a tenfold reduction in parts count, a decrease in system mass and cost, and an increase in system reliability compared to the existing power-processing unit (PPU) used in the Hughes/NASA Lewis Research Center Ion Auxiliary Propulsion Subsystem. The simplifications achieved in this design will greatly increase the attractiveness of ion propulsion in near-term and future spacecraft propulsion applications. A description of a typical ion-thruster subsystem, an overview of the thruster/power-processor interface requirements, and a discussion of simplified thruster power processing are given.
Inactivation of Viruses by Coherent Excitations with a Low Power Visible Femtosecond Laser
2007-06-05
Using a visible femtosecond laser having a wavelength of 425 nm and a pulse width of 100 fs, we show that M13 phages were inactivated when the laser power density was greater than or equal to 50 MW/cm2. The inactivation of M13 phages was determined by plaque counts and was found to depend on the pulse width...
How many fish in a tank? Constructing an automated fish counting system by using PTV analysis
NASA Astrophysics Data System (ADS)
Abe, S.; Takagi, T.; Takehara, K.; Kimura, N.; Hiraishi, T.; Komeyama, K.; Torisawa, S.; Asaumi, S.
2017-02-01
Because escape from a net cage and mortality are constant problems in fish farming, health control and management of facilities are important in aquaculture. In particular, the development of an accurate fish counting system has been strongly desired for the Pacific Bluefin tuna farming industry owing to the high market value of these fish. The current fish counting method, which involves human counting, results in poor accuracy; moreover, the method is cumbersome because the aquaculture net cage is so large that fish can only be counted when they move to another net cage. Therefore, we have developed an automated fish counting system by applying particle tracking velocimetry (PTV) analysis to a shoal of swimming fish inside a net cage. In essence, we treated the swimming fish as tracer particles and estimated the number of fish by analyzing the corresponding motion vectors. The proposed fish counting system comprises two main components: image processing and motion analysis, where the image-processing component abstracts the foreground and the motion analysis component traces the individual's motion. In this study, we developed a Region Extraction and Centroid Computation (RECC) method and a Kalman filter and Chi-square (KC) test for the two main components. To evaluate the efficiency of our method, we constructed a closed system, placed an underwater video camera with a spherical curved lens at the bottom of the tank, and recorded a 360° view of a swimming school of Japanese rice fish (Oryzias latipes). Our study showed that almost all fish could be abstracted by the RECC method and the motion vectors could be calculated by the KC test. The recognition rate was approximately 90% when more than 180 individuals were observed within the frame of the video camera. These results suggest that the presented method has potential application as a fish counting system for industrial aquaculture.
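A minimal sketch of a region-extraction-and-centroid step in the spirit of RECC, with assumed thresholds and standard OpenCV primitives; the paper's method is more elaborate:

```python
import cv2

def extract_centroids(gray_frame, background, thresh=30, min_area=20.0):
    """Background subtraction, thresholding, and blob centroids; the
    preprocessing and threshold values are assumptions."""
    diff = cv2.absdiff(gray_frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] >= min_area:    # drop tiny noise blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

Centroids extracted per frame would then be linked across frames by the tracking stage (the KC test in the paper) to form motion vectors.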
Excreta Sampling as an Alternative to In Vivo Measurements at the Hanford Site.
Carbaugh, Eugene H; Antonio, Cheryl L; Lynch, Timothy P
2015-08-01
The capabilities of indirect radiobioassay by urine and fecal sample analysis were compared with the direct radiobioassay methods of whole body counting and lung counting for the most common radionuclides and inhalation exposure scenarios encountered by Hanford workers. Radionuclides addressed by in vivo measurement included 137Cs, 60Co, 154Eu, and 241Am as an indicator for plutonium mixtures. The same radionuclides were addressed using gamma energy analysis of urine samples, augmented by radiochemistry and alpha spectrometry methods for plutonium in urine and fecal samples. It was concluded that in vivo whole body counting and lung counting capability should be maintained at the Hanford Site for the foreseeable future; however, urine and fecal sample analysis could provide adequate, though degraded, monitoring capability for workers as a short-term alternative, should in vivo capability be lost due to planned or unplanned circumstances.
How to normalize metatranscriptomic count data for differential expression analysis.
Klingenberg, Heiner; Meinicke, Peter
2017-01-01
Differential expression analysis on the basis of RNA-Seq count data has become a standard tool in transcriptomics. Several studies have shown that prior normalization of the data is crucial for a reliable detection of transcriptional differences. Until now it has not been clear whether and how the transcriptomic approach can be used for differential expression analysis in metatranscriptomics. We propose a model for differential expression in metatranscriptomics that explicitly accounts for variations in the taxonomic composition of transcripts across different samples. As a main consequence the correct normalization of metatranscriptomic count data under this model requires the taxonomic separation of the data into organism-specific bins. Then the taxon-specific scaling of organism profiles yields a valid normalization and allows us to recombine the scaled profiles into a metatranscriptomic count matrix. This matrix can then be analyzed with statistical tools for transcriptomic count data. For taxon-specific scaling and recombination of scaled counts we provide a simple R script. When applying transcriptomic tools for differential expression analysis directly to metatranscriptomic data with an organism-independent (global) scaling of counts the resulting differences may be difficult to interpret. The differences may correspond to changing functional profiles of the contributing organisms but may also result from a variation of taxonomic abundances. Taxon-specific scaling eliminates this variation and therefore the resulting differences actually reflect a different behavior of organisms under changing conditions. In simulation studies we show that the divergence between results from global and taxon-specific scaling can be drastic. In particular, the variation of organism abundances can imply a considerable increase of significant differences with global scaling. Also, on real metatranscriptomic data, the predictions from taxon-specific and global scaling can differ widely. Our studies indicate that in real data applications performed with global scaling it might be impossible to distinguish between differential expression in terms of transcriptomic changes and differential composition in terms of changing taxonomic proportions. As in transcriptomics, a proper normalization of count data is also essential for differential expression analysis in metatranscriptomics. Our model implies a taxon-specific scaling of counts for normalization of the data. The application of taxon-specific scaling consequently removes taxonomic composition variations from functional profiles and therefore provides a clear interpretation of the observed functional differences.
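To make the idea concrete, a Python sketch of taxon-specific scaling; the authors provide an R script, and the per-taxon library-size rule below is a simple stand-in for any within-bin scaling:

```python
import pandas as pd

def taxon_specific_scaling(counts, taxon_of_gene):
    """Scale each organism bin separately before recombining, instead of
    one global scaling. counts: genes x samples DataFrame; taxon_of_gene:
    Series mapping each gene to its organism bin."""
    scaled = counts.astype(float).copy()
    for taxon, genes in counts.groupby(taxon_of_gene).groups.items():
        sub = counts.loc[genes]
        lib = sub.sum(axis=0)                         # per-sample depth of this taxon
        factors = (lib / lib.mean()).replace(0, 1.0)  # guard empty samples
        scaled.loc[genes] = sub.div(factors, axis=1)
    return scaled                                     # recombined count matrix
```

The recombined matrix can then be passed to standard transcriptomic differential-expression tools, as the abstract describes.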
Schwarz, Daniel A.; Arman, Krikor G.; Kakwan, Mehreen S.; Jamali, Ameen M.; Elmeligy, Ayman A.; Buchman, Steven R.
2015-01-01
Background: The authors' goal was to ascertain regenerate bone-healing metrics using quantitative histomorphometry at a single consolidation period. Methods: Rats underwent either mandibular distraction osteogenesis (n=7) or partially reduced fractures (n=7); their contralateral mandibles were used as controls (n=11). External fixators were secured and unilateral osteotomies performed, followed by either mandibular distraction osteogenesis (4 days' latency, then 0.3 mm every 12 hours for 8 days; 5.1 mm) or partially reduced fractures (fixed immediately postoperatively; 2.1 mm); both groups underwent 4 weeks of consolidation. After tissue processing, the bone volume/tissue volume ratio, osteoid volume/tissue volume ratio, and osteocyte count per high-power field were analyzed by means of quantitative histomorphometry. Results: Contralateral mandibles had statistically greater bone volume/tissue volume ratio and osteocyte count per high-power field compared with both mandibular distraction osteogenesis and partially reduced fractures by almost 50 percent, whereas osteoid volume/tissue volume ratio was statistically greater in both mandibular distraction osteogenesis specimens and partially reduced fractures compared with contralateral mandibles. No statistical difference in bone volume/tissue volume ratio, osteoid volume/tissue volume ratio, or osteocyte count per high-power field was found between mandibular distraction osteogenesis specimens and partially reduced fractures. Conclusions: The authors' findings demonstrate significantly decreased bone quantity and maturity in mandibular distraction osteogenesis specimens and partially reduced fractures compared with contralateral mandibles using the clinically analogous protocols. If these results are extrapolated clinically, treatment strategies may require modification to ensure reliable, predictable, and improved outcomes. PMID:20463629
Power monitoring in space nuclear reactors using silicon carbide radiation detectors
NASA Technical Reports Server (NTRS)
Ruddy, Frank H.; Patel, Jagdish U.; Williams, John G.
2005-01-01
Space reactor power monitors based on silicon carbide (SiC) semiconductor neutron detectors are proposed. Detection of fast leakage neutrons using SiC detectors in ex-core locations could be used to determine reactor power. Neutron fluxes, gamma-ray dose rates and ambient temperatures have been calculated as a function of distance from the reactor core, and the feasibility of power monitoring with SiC detectors has been evaluated at several ex-core locations. Arrays of SiC diodes can be configured to provide the count rates required to monitor reactor power from startup to full power. Due to their resistance to temperature and to the effects of neutron and gamma-ray exposure, SiC detectors can be expected to provide power monitoring information for the full mission of a space reactor.
Characterization of Sphinx1 ASIC X-ray detector using photon counting and charge integration
NASA Astrophysics Data System (ADS)
Habib, A.; Arques, M.; Moro, J.-L.; Accensi, M.; Stanchina, S.; Dupont, B.; Rohr, P.; Sicard, G.; Tchagaspanian, M.; Verger, L.
2018-01-01
Sphinx1 is a novel pixel architecture adapted for X-ray imaging; it detects radiation by photon counting and charge integration. In photon counting mode, each photon is compensated by one or more counter-charges, typically consisting of 100 electrons (e-) each. The number of counter-charges required gives a measure of the incoming photon energy, thus allowing spectrometric detection. Pixels can also detect radiation by integrating the charges deposited by all incoming photons during one image frame and converting this analog value into a digital response with a 100-electron least significant bit (LSB), based on the counter-charge concept. A proof-of-concept test chip measuring 5 mm × 5 mm, with 200 μm × 200 μm pixels, has been produced and characterized. This paper provides details on the architecture and the counter-charge design; it also describes the two modes of operation: photon counting and charge integration. The first performance measurements for this test chip are presented. Noise was found to be ~80 e- rms in photon counting mode, with a power consumption of only 0.9 μW/pixel for the static analog part and 0.3 μW/pixel for the static digital part.
Use of burrow entrances to indicate densities of Townsend's ground squirrels
Van Horne, Beatrice; Schooley, Robert L.; Knick, Steven T.; Olson, G.S.; Burnham, K.P.
1997-01-01
Counts of burrow entrances have been positively correlated with densities of semi-fossorial rodents and used as an index of densities. We evaluated their effectiveness in indexing densities of Townsend's ground squirrels (Spermophilus townsendii) in the Snake River Birds of Prey National Conservation Area (SRBOPNCA), Idaho, by comparing burrow entrance densities to densities of ground squirrels estimated from livetrapping in 2 consecutive years over which squirrel populations declined by >75%. We did not detect a consistent relation between burrow entrance counts and ground squirrel density estimates within or among habitat types. Scatter plots indicated that burrow entrances had little predictive power at intermediate densities. Burrow entrance counts did not reflect the magnitude of a between-year density decline. Repeated counts of entrances late in the squirrels' active season varied in a manner that would be difficult to use for calibration of transects sampled only once during this period. Annual persistence of burrow entrances varied between habitats. Trained observers were inconsistent in assigning active-inactive status to entrances. We recommend that burrow entrance counts not be used as measures or indices of ground squirrel densities in shrubsteppe habitats, and that the method be verified thoroughly before being used in other habitats.
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher-order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data, as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
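For flavor, a grouped observed-versus-expected check in the same spirit (not the paper's exact statistic), sketched with lifelines and placeholder column names:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def grouped_cox_gof(df, duration_col="time", event_col="event", n_groups=10):
    """Bin subjects by risk score, then sum squared standardized
    differences between observed and model-expected event counts; the
    result is compared to a chi-squared reference."""
    cph = CoxPHFitter().fit(df, duration_col=duration_col, event_col=event_col)
    lp = cph.predict_log_partial_hazard(df).squeeze().to_numpy()
    base = cph.baseline_cumulative_hazard_.iloc[:, 0]      # Breslow estimate
    h0 = np.interp(df[duration_col].to_numpy(),
                   base.index.to_numpy(), base.to_numpy())
    expected = h0 * np.exp(lp)                             # E_i under the model
    observed = df[event_col].to_numpy(dtype=float)
    groups = pd.qcut(pd.Series(lp), n_groups, labels=False, duplicates="drop")
    stat = 0.0
    for g in np.unique(groups):
        mask = (groups == g).to_numpy()
        stat += (observed[mask].sum() - expected[mask].sum()) ** 2 \
                / expected[mask].sum()
    return stat
```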
NASA Technical Reports Server (NTRS)
Hill, Gerald M.; Evans, Richard K.
2009-01-01
A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design which utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.
Frank R. Thompson; Monica J. Schwalbach
1995-01-01
We report results of a point count survey of breeding birds on Hoosier National Forest in Indiana. We determined sample size requirements to detect differences in means and the effects of count duration and plot size on individual detection rates. Sample size requirements ranged from 100 to >1000 points with Type I and II error rates of <0.1 and 0.2. Sample...
Gordia, Alex Pinheiro; Quadros, Teresa Maria Bianchini de; Silva, Luciana Rodrigues; Mota, Jorge
2016-09-01
The use of step counts and TV viewing time to discriminate youngsters with hyperglycaemia is still a matter of debate. We aimed to establish cut-off values for step count and TV viewing time in children and adolescents using glycaemia as the reference criterion. A cross-sectional study was conducted on 1044 schoolchildren aged 6-18 years from Northeastern Brazil. Daily step counts were assessed with a pedometer over 1 week and TV viewing time by self-report. The area under the curve (AUC) ranged from 0.52-0.61 for step count and from 0.49-0.65 for TV viewing time. The daily step count with the highest discriminatory power for hyperglycaemia was 13 884 (sensitivity = 77.8; specificity = 51.8) for male children, and 12 371 (sensitivity = 55.6; specificity = 55.5) and 11 292 (sensitivity = 57.7; specificity = 48.6) for female children and adolescents, respectively. The cut-off for TV viewing time with the highest discriminatory capacity for hyperglycaemia was 3 hours/day (sensitivity = 57.7-77.8; specificity = 48.6-53.2). This study represents a first step toward the development of criteria based on cardiometabolic risk factors for step count and TV viewing time in youngsters. However, the present cut-off values have limited practical application because of their poor accuracy and low sensitivity and specificity.
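One standard way to derive such cut-offs from ROC analysis is Youden's J; a short sketch, noting that the paper's exact criterion is not restated here:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff(y_true, score):
    """Choose the threshold maximizing J = sensitivity + specificity - 1;
    returns the cut-off and its sensitivity and specificity."""
    fpr, tpr, thresholds = roc_curve(y_true, score)
    k = int(np.argmax(tpr - fpr))
    return thresholds[k], tpr[k], 1.0 - fpr[k]
```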
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peronio, P.; Acconcia, G.; Rech, I.
Time-Correlated Single Photon Counting (TCSPC) has long been recognized as the most sensitive method for fluorescence lifetime measurements, but it often requires long data acquisition times. This drawback is related to the limited counting capability of the TCSPC technique, due to pile-up and counting loss effects. In recent years, multi-module TCSPC systems have been introduced to overcome this issue. Splitting the light onto several detectors connected to independent TCSPC modules proportionally increases the counting capability. Of course, multi-module operation also increases the system cost and can cause space and power supply problems. In this paper, we propose an alternative approach based on a new detector and processing electronics designed to reduce the overall system dead time, thus enabling efficient photon collection at high excitation rates. We present a fast active quenching circuit for single-photon avalanche diodes which features a minimum dead time of 12.4 ns. We also introduce a new Time-to-Amplitude Converter (TAC) able to attain extra-short dead time thanks to the combination of a scalable array of monolithically integrated TACs and a sequential router. The fast TAC (F-TAC) makes it possible to operate the system towards the upper limit of detector count rate capability (~80 Mcps) with reduced pile-up losses, addressing one of the historic criticisms of TCSPC. Preliminary measurements on the F-TAC are presented and discussed.
Data reduction for cough studies using distribution of audio frequency content
2012-01-01
Background: Recent studies suggest that objectively quantifying coughing in audio recordings offers a novel means to understand coughing and assess treatments. Currently, manual cough counting is the most accurate method for quantifying coughing. However, the demands of manually counting cough records are substantial, demonstrating a need to reduce record lengths prior to counting whilst preserving the coughs within them. This study tested the performance of an algorithm developed for this purpose. Methods: 20 subjects were recruited (5 healthy smokers and non-smokers, 5 chronic cough, 5 chronic obstructive pulmonary disease and 5 asthma), fitted with an ambulatory recording system and recorded for 24 hours. The recordings produced were divided into 15 min segments and counted. Periods of inactive audio in each segment were removed using the median frequency and power of the audio signal, and the resulting files were re-counted. Results: The median resultant segment length was 13.9 s (IQR 56.4 s) and the median 24 h recording length 62.4 min (IQR 100.4). A median of 0.0 coughs/h (IQR 0.0-0.2) were erroneously removed, and the variability in the resultant cough counts was comparable to that between manual cough counts. The largest error was seen in asthmatic patients, but even there only 1.0% of coughs/h were missed. Conclusions: These data show that a system which measures signal activity using the median audio frequency can substantially reduce record lengths without significantly compromising the coughs contained within them. PMID:23231789
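The activity gate described here can be sketched as a frame-by-frame filter on short-time power and median spectral frequency. A minimal Python illustration; the thresholds, frame size, and sampling rate are invented for demonstration, not the study's values:

```python
import numpy as np

def active_frames(x, fs, frame_s=0.1, power_db_floor=-50.0, medfreq_hz=300.0):
    """Return a boolean mask of 'active' frames of signal x (sampled at fs).

    A frame is kept if its power exceeds a floor AND its median spectral
    frequency lies above a cut-off (cough energy sits higher than
    low-frequency rumble). Thresholds here are illustrative only.
    """
    n = int(frame_s * fs)
    n_frames = len(x) // n
    keep = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        frame = x[i * n:(i + 1) * n]
        power_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)
        spec = np.abs(np.fft.rfft(frame)) ** 2
        freqs = np.fft.rfftfreq(n, 1 / fs)
        cum = np.cumsum(spec)
        median_f = freqs[np.searchsorted(cum, cum[-1] / 2)]
        keep[i] = (power_db > power_db_floor) and (median_f > medfreq_hz)
    return keep

# Hypothetical usage: x stands in for a mono recording at 16 kHz
fs = 16000
x = np.random.randn(fs * 60) * 0.01
mask = active_frames(x, fs)
print(f"kept {mask.mean():.1%} of frames")
```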
Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data
Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei
2015-01-01
Typical data in a microbiome study consist of operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on the structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and efficiency of parameter estimation. We also evaluate the abilities of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well-controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. In addition, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of more than 400 independent subjects. PMID:26148172
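As a toy version of such a comparison, one can fit a plain Poisson and a zero-inflated Poisson by maximum likelihood and compare AICs. A sketch with scipy on simulated counts; the paper's models are negative binomial variants, and Poisson is used here only for brevity:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

rng = np.random.default_rng(1)
# Hypothetical OTU counts: structural zeros mixed with Poisson counts
y = np.where(rng.random(500) < 0.4, 0, rng.poisson(3.0, 500))

def nll_poisson(params):
    lam = np.exp(params[0])
    return -poisson.logpmf(y, lam).sum()

def nll_zip(params):
    # lam = count-component mean, pi = structural-zero probability
    lam, pi = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    pmf_zero = pi + (1 - pi) * poisson.pmf(0, lam)
    ll = np.where(y == 0,
                  np.log(pmf_zero),
                  np.log(1 - pi) + poisson.logpmf(y, lam))
    return -ll.sum()

fit_p = minimize(nll_poisson, [0.0])
fit_z = minimize(nll_zip, [0.0, 0.0])
aic_p = 2 * 1 + 2 * fit_p.fun
aic_z = 2 * 2 + 2 * fit_z.fun
print(f"Poisson AIC {aic_p:.1f}  ZIP AIC {aic_z:.1f}")  # ZIP should win here
```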
Nishitani, Naoko; Sakakibara, Hisataka
2014-01-01
Relationships between work-related psychological and physical stress responses and counts of white blood cells (WBCs), neutrophils, and lymphocytes were investigated in 101 daytime workers. WBC and neutrophil counts were positively associated with smoking and inversely correlated with high-density lipoprotein (HDL)-cholesterol levels. Additionally, the general fatigue score, as measured by the Profile of Mood States, was positively correlated with WBC and neutrophil counts, whereas the lymphocyte count was not significantly associated with the fatigue score. Multiple regression analysis showed that WBC count was significantly related to general fatigue, age, and HDL-cholesterol levels, and that neutrophil count was significantly related to HDL-cholesterol levels and fatigue score. Among the various psychological stress response variables, general fatigue may be a key determinant of low-grade inflammation, as represented by increases in WBC and neutrophil counts.
Iwasaki, Takeshi; Matsushita, Michiko; Nonaka, Daisuke; Kato, Masako; Nagata, Keiko; Murakami, Ichiro; Hayashi, Kazuhiko
2015-08-01
Merkel cell carcinomas (MCCs) associated with Merkel cell polyomavirus (MCPyV) have a better prognosis than those without MCPyV. The relationship between mitotic index (MI) and MCC outcome has remained elusive because of the difficulty of differentiating mitotic cells from apoptotic ones. We evaluated the role of phosphohistone-H3 (PHH3) (Ser10), a new mitotic count biomarker, in MCPyV-positive and -negative MCC patients, and assessed its prognostic value in comparison with the Ki-67 labeling index and the MI obtained with hematoxylin and eosin (HE) staining. We compared the prognostic value of the PHH3 mitotic index with that of the HE-based MI in 19 MCPyV-positive and 9 MCPyV-negative MCC patients. PHH3-positive immunoreactivity was mostly observed in mitotic figures. Multivariate analysis showed that MCPyV status (HR, 0.004; 95% CI 0.0003-0.058) and American Joint Committee on Cancer (AJCC) stage (HR, 5.02; 95% CI 1.23-20.51) were significant independent prognostic factors for overall survival (OS). The PHH3-positive cell count per 10 high-power fields was a marginally significant independent prognostic factor for OS (HR, 4.96; 95% CI 0.93-26.55). PHH3-positive MI and MCPyV status in MCC patients are useful in prognostication, although MCPyV infection is a more powerful prognostic factor in MCCs than the AJCC stage or the proliferation and mitotic indices.
ERIC Educational Resources Information Center
Goethals, Susan
1997-01-01
Describes a study that included classroom lessons on hydroelectric power, the history and construction of a nearby lake, data recording, the use of field guides, and methods of counting natural populations. The study culminated in a field trip to the lake. (JRH)
Astronomical community: The power of being counted
NASA Astrophysics Data System (ADS)
Tuttle, Sarah
2017-06-01
Using a sample of more than 200,000 publications over a 65-year period, it is found that astronomy papers led by women receive 10% fewer citations than those led by men, consistent with studies in other related disciplines.
NASA Technical Reports Server (NTRS)
Rinard, G. A.; Steffen, D. A.; Sturm, R. E.
1979-01-01
A circuit with high common-mode rejection can filter and amplify analog electrocardiogram (ECG) signals of varying amplitude, shape, and polarity. In addition, the low-power circuit develops standardized pulses that can be counted and averaged by a heart/breath rate processor.
Concept report: Microprocessor control of electrical power system
NASA Technical Reports Server (NTRS)
Perry, E.
1977-01-01
An electrical power system which uses a microprocessor for systems control and monitoring is described. The microprocessor-controlled system permits real-time modification of system parameters for optimizing a system configuration, especially in the event of an anomaly. By reducing the component count, the assembly and testing of the unit are simplified and reliability is increased. A reusable modular power conversion system capable of satisfying a large percentage of space application requirements is examined, along with the programmable power processor. The PC global controller, which handles systems control and external communication, is analyzed, and a software description is given. A systems application summary is also included.
Park, S D; Kim, J S; Han, S H; Ha, Y K; Song, K S; Jee, K Y
2009-09-01
In this paper a relatively simple and low-cost procedure for the routine analysis of 129I in low- and intermediate-level radioactive wastes (LILWs), namely cement- and paraffin-solidified evaporator bottoms and spent resin produced from nuclear power plants (NPPs) with pressurized water reactors (PWRs), is presented. The 129I is separated from other nuclides in LILWs using anion exchange adsorption and solvent extraction, by controlling the oxidation and reduction state, and is then precipitated as silver iodide for counting of the beta activity with a low-background gas proportional counter (GPC). The counting efficiency of the GPC varied from 4% to 8% and was inversely proportional to the weight of AgI owing to self-absorption of the beta activity. Compared to higher pH, the chemical recovery of iodide as AgI was lower at pH 4. The chemical recovery of iodide for the cement powder showed a decreasing trend with increasing cement powder weight, but the paraffin samples were not affected. In this experiment, the overall chemical recovery yields for the cement- and paraffin-solidified LILW samples, and their average weights, were 67 ± 3% and 5.43 ± 0.53 g, and 70 ± 7% and 10.40 ± 1.60 g, respectively. The minimum detectable activity (MDA) of 129I for the cement- and paraffin-solidified LILW samples was calculated as 0.070 and 0.036 Bq/g, respectively. Among the analyzed cement-solidified LILW samples, the 129I activity concentration of four samples was slightly higher than the MDA, in the range 0.076-0.114 Bq/g. Of the analyzed paraffin-solidified LILW samples, five contained 129I activity concentrations slightly above the MDA, in the range 0.036-0.107 Bq/g.
A technology review of time-of-flight photon counting for advanced remote sensing
NASA Astrophysics Data System (ADS)
Lamb, Robert A.
2010-04-01
Time-correlated single photon counting (TCSPC) has made tremendous progress during the past ten years, enabling improved performance in precision time-of-flight (TOF) rangefinding and lidar. In this review, the development and performance of several ranging systems that use TCSPC for accurate ranging and range profiling over distances up to 17 km is presented. A range resolution of a few millimetres is routinely achieved over distances of several kilometres. These systems include single-wavelength devices operating in the visible; multi-wavelength systems covering the visible and near infra-red; the use of electronic gating to reduce in-band solar background; and, most recently, operation at high repetition rates without range aliasing, typically 10 MHz over several kilometres. These systems operate at very low optical power (<100 μW). The technique therefore has potential for eye-safe lidar monitoring of the environment, and obvious military, security and surveillance sensing applications. The review highlights the theoretical principles of photon counting and the progress made in developing absolute ranging techniques that enable high repetition rate data acquisition without range aliasing. Technology trends in TCSPC rangefinding are merging with those of quantum cryptography, and their future application to revolutionary quantum imaging offers diverse and exciting research into secure covert sensing, ultra-low-power active imaging and quantum rangefinding.
Casimir meets Poisson: improved quark/gluon discrimination with counting observables
Frye, Christopher; Larkoski, Andrew J.; Thaler, Jesse; ...
2017-09-19
Charged track multiplicity is among the most powerful observables for discriminating quark- from gluon-initiated jets. Despite its utility, it is not infrared and collinear (IRC) safe, so perturbative calculations are limited to studying the energy evolution of multiplicity moments. While IRC-safe observables, like jet mass, are perturbatively calculable, their distributions often exhibit Casimir scaling, such that their quark/gluon discrimination power is limited by the ratio of quark to gluon color factors. In this paper, we introduce new IRC-safe counting observables whose discrimination performance exceeds that of jet mass and approaches that of track multiplicity. The key observation is that track multiplicity is approximately Poisson distributed, with more suppressed tails than the Sudakov peak structure from jet mass. By using an iterated version of the soft drop jet grooming algorithm, we can define a "soft drop multiplicity" which is Poisson distributed at leading-logarithmic accuracy. In addition, we calculate the next-to-leading-logarithmic corrections to this Poisson structure. If we allow the soft drop groomer to proceed to the end of the jet branching history, we can define a collinear-unsafe (but still infrared-safe) counting observable. Exploiting the universality of the collinear limit, we define generalized fragmentation functions to study the perturbative energy evolution of collinear-unsafe multiplicity.
Power limits for microbial life.
LaRowe, Douglas E; Amend, Jan P
2015-01-01
To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here we apply a bioenergetic model to a well-characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm^-3) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ∼10^-12 and 10^-16 W cm^-3. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm^3 can be well captured using a maintenance power of 190 zW cell^-1, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell^-1, with most values under ∼300 zW cell^-1. Furthermore, we estimate the absolute minimum power requirement for a single cell to remain viable to be on the order of 1 zW cell^-1.
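The biomass bound described above is essentially a division of a volumetric power supply by a per-cell maintenance power. A small worked example, with illustrative values chosen from the ranges quoted above:

```python
# Cell density supportable by a volumetric power supply, given a
# per-cell maintenance power. Values are illustrative only.
ZW = 1e-21  # one zeptowatt, in watts

power_supply_w_per_cm3 = 1e-14          # within the ~1e-16..1e-12 W/cm^3 range
maintenance_w_per_cell = 190 * ZW       # 190 zW/cell, as quoted above

cells_per_cm3 = power_supply_w_per_cm3 / maintenance_w_per_cell
print(f"supportable density ~ {cells_per_cm3:.2e} cells/cm^3")
# ~5e4 cells/cm^3 for these inputs
```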
Curcic, Marijana; Buha, Aleksandra; Stankovic, Sanja; Milovanovic, Vesna; Bulat, Zorica; Đukić-Ćosić, Danijela; Antonijević, Evica; Vučinić, Slavica; Matović, Vesna; Antonijevic, Biljana
2017-02-01
The objective of this study was to assess the toxicity of a Cd and BDE-209 mixture on haematological parameters in subacutely exposed rats and to determine the presence and type of interactions between these two chemicals using multiple factorial regression analysis. Furthermore, for the assessment of interaction type, an isobologram-based methodology was applied and compared with multiple factorial regression analysis. Chemicals were given by oral gavage to male Wistar rats weighing 200-240 g for 28 days. Animals were divided into 16 groups (8/group): a control vehiculum group; three groups treated with 2.5, 7.5 or 15 mg Cd/kg/day, doses chosen on the basis of literature data that reflect relatively high Cd environmental exposure; three groups treated with 1000, 2000 or 4000 mg BDE-209/kg bw/day, doses proven to induce toxic effects in rats; and nine groups treated with mixtures of Cd and BDE-209 at the doses stated above. Blood samples were taken at the end of the experiment, and red blood cell (RBC), white blood cell (WBC) and platelet (PLT) counts were determined. For interaction assessment, multiple factorial regression analysis and a fitted isobologram approach were used. In this study, we focused on multiple factorial regression analysis as a method for interaction assessment; we also investigated the interactions between Cd and BDE-209 using the derived model for the description of the obtained fitted isobologram curves. The current study indicated that co-exposure to Cd and BDE-209 can result in a significant decrease in RBC count, an increase in WBC count and a decrease in PLT count, when compared with controls. Multiple factorial regression analysis used for the assessment of interaction type between Cd and BDE-209 indicated synergism for the effect on RBC count and no interactions, i.e. additivity, for the effects on WBC and PLT counts. On the other hand, the isobologram-based approach showed slight antagonism for the effects on RBC and WBC counts, while no interactions were proved for the joint effect on PLT count. These results confirm that the assessment of interactions between chemicals in a mixture greatly depends on the concept or method used for the evaluation.
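In practice, assessing interaction in a factorial design of this kind often reduces to testing the cross-product term in a regression. A sketch with statsmodels on hypothetical dose-response data; all column names, coefficients, and noise levels are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 128
df = pd.DataFrame({
    "cd":  rng.choice([0, 2.5, 7.5, 15], n),        # mg Cd/kg/day
    "bde": rng.choice([0, 1000, 2000, 4000], n),    # mg BDE-209/kg bw/day
})
# Hypothetical RBC response with a synergistic (negative) interaction
df["rbc"] = (8.0 - 0.05 * df.cd - 0.0002 * df.bde
             - 0.00005 * df.cd * df.bde + rng.normal(0, 0.3, n))

# 'cd:bde' is the interaction term; its sign and significance indicate
# synergism or antagonism relative to simple additivity
model = smf.ols("rbc ~ cd + bde + cd:bde", data=df).fit()
print(model.summary().tables[1])
```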
Preisser, John S; Long, D Leann; Stamm, John W
2017-01-01
Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts.
ACHCAR, J. A.; MARTINEZ, E. Z.; RUFFINO-NETTO, A.; PAULINO, C. D.; SOARES, P.
2008-01-01
We considered a Bayesian analysis of the prevalence of tuberculosis cases in New York City from 1970 to 2000. This counting dataset presented two change-points during the period. We modelled the data using non-homogeneous Poisson processes in the presence of the two change-points. A Bayesian analysis of the data was carried out using Markov chain Monte Carlo methods; simulated Gibbs samples for the parameters of interest were obtained using the WinBUGS software. PMID:18346287
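For intuition, a single change-point version of such a model can be handled by direct enumeration with conjugate Gamma priors rather than Gibbs sampling. A minimal sketch on simulated yearly counts; the paper's model has two change-points, and all rates and priors below are invented:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
# Simulated yearly counts with one change-point after year 15
y = np.concatenate([rng.poisson(20, 15), rng.poisson(45, 16)])
T = len(y)

def log_marginal(seg, a=1.0, b=0.1):
    """Log marginal likelihood of a Poisson segment with a Gamma(a, b)
    prior on its rate. The constant sum(log y!) term is dropped since it
    does not depend on the change-point location."""
    s, n = seg.sum(), len(seg)
    return (a * np.log(b) - gammaln(a)
            + gammaln(a + s) - (a + s) * np.log(b + n))

# Posterior over the change-point location (uniform prior over splits)
logp = np.array([log_marginal(y[:t]) + log_marginal(y[t:])
                 for t in range(1, T)])
post = np.exp(logp - logp.max())
post /= post.sum()
print("MAP change-point after year:", np.argmax(post) + 1)
```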
An INAR(1) Negative Multinomial Regression Model for Longitudinal Count Data.
ERIC Educational Resources Information Center
Bockenholt, Ulf
1999-01-01
Discusses a regression model for the analysis of longitudinal count data in a panel study by adapting an integer-valued first-order autoregressive (INAR(1)) Poisson process to represent time-dependent correlation between counts. Derives a new negative multinomial distribution by combining the INAR(1) representation with a random-effects approach…
DOT National Transportation Integrated Search
1981-10-01
Two statistical procedures have been developed to estimate hourly or daily aircraft counts. These counts can then be transformed into estimates of instantaneous air counts. The first procedure estimates the stable (deterministic) mean level of hourly...
Minior, V K; Bernstein, P S; Divon, M Y
2000-01-01
To determine the utility of the neonatal nucleated red blood cell (NRBC) count as an independent predictor of short-term perinatal outcome in growth-restricted fetuses, hospital charts of neonates with a discharge diagnosis indicating a birth weight below the 10th percentile were reviewed for perinatal outcome. We studied all eligible neonates who had a complete blood count on the first day of life. After exclusion of multiple gestations, anomalous fetuses and diabetic pregnancies, 73 neonates comprised the study group. Statistical analysis included ANOVA and simple and stepwise regression. Elevated NRBC counts were significantly associated with cesarean section for non-reassuring fetal status, neonatal intensive care unit admission and duration of stay, respiratory distress and intubation, thrombocytopenia, hyperbilirubinemia, intraventricular hemorrhage, and neonatal death. Stepwise regression analysis including gestational age at birth, birth weight and NRBC count demonstrated that, in growth-restricted fetuses, the NRBC count was the strongest predictor of neonatal intraventricular hemorrhage, respiratory distress, and death. An elevated NRBC count independently predicts adverse perinatal outcome in growth-restricted fetuses.
Note: Fully integrated time-to-amplitude converter in Si-Ge technology.
Crotti, M; Rech, I; Ghioni, M
2010-10-01
Over the past years, growing interest has arisen in the measurement technique of time-correlated single photon counting (TCSPC), since it allows the analysis of extremely fast and weak light waveforms with picosecond resolution. Consequently, many applications exploiting TCSPC have been developed in several fields such as medicine and chemistry. Moreover, the development of multianode PMTs and of single-photon avalanche diode arrays has led to the realization of acquisition systems with several parallel channels, extending the TCSPC technique to even more applications. Since TCSPC basically consists of measuring the arrival time of a photon, the most important part of an acquisition chain is the time measurement block, which must have high resolution and low differential nonlinearity; in order to realize multidimensional systems, it also has to be integrated to reduce both cost and area. In this paper we present a fully integrated time-to-amplitude converter, built in 0.35 μm Si-Ge technology, characterized by good time resolution (60 ps), low differential nonlinearity (better than 3% peak to peak), high counting rate (16 MHz), low and constant power dissipation (40 mW), and small area occupation (1.38 × 1.28 mm²).
Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
Rainflow algorithms are among the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow algorithm, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison between the performances of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT to assess the lifetime of a STATCOM operating for power factor correction of the load. From the 5-minute load profile data available, the lifetime is estimated to be 3.4 years.
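The four-point rule at the heart of these algorithms is compact enough to sketch. Below is a generic Python version (a plain four-point implementation, not the authors' modified graphical method); the extracted cycle ranges would then feed a damage model such as Miner's rule:

```python
def rainflow_cycles(series):
    """Count rainflow cycles in a load/temperature history.

    Generic four-point rule: walk the turning points with a stack; when an
    inner range is enclosed by its neighbours, pop it as a full cycle.
    Remaining residuals count as half cycles. Returns (range, count) pairs.
    """
    # Reduce to turning points (local extrema)
    tp = [series[0]]
    for a, b, c in zip(series, series[1:], series[2:]):
        if (b - a) * (c - b) < 0:
            tp.append(b)
    tp.append(series[-1])

    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 4:
            r_prev = abs(stack[-3] - stack[-4])
            r_mid = abs(stack[-2] - stack[-3])
            r_next = abs(stack[-1] - stack[-2])
            if r_mid <= r_prev and r_mid <= r_next:
                cycles.append((r_mid, 1.0))   # full cycle of this range
                del stack[-3:-1]              # drop the enclosed pair
            else:
                break
    for a, b in zip(stack, stack[1:]):
        cycles.append((abs(b - a), 0.5))      # residual half cycles
    return cycles

# Hypothetical IGBT junction-temperature history (degC)
history = [30, 80, 40, 90, 35, 70, 30]
print(rainflow_cycles(history))
```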
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Jürgen; Gregor, Ingo; Erdmann, Rainer; Enderlein, Jörg
2007-03-01
Time-correlated single photon counting is a powerful method for sensitive time-resolved fluorescence measurements down to the single molecule level. The method is based on the precisely timed registration of single photons of a fluorescence signal. Historically, its primary goal was the determination of fluorescence lifetimes upon optical excitation by a short light pulse. This goal is still important today and therefore has a strong influence on instrument design. However, modifications and extensions of the early designs allow for the recovery of much more information from the detected photons and enable entirely new applications. Here, we present a new instrument that captures single photon events on multiple synchronized channels with picosecond resolution and over virtually unlimited time spans. This is achieved by means of crystal-locked time digitizers with high resolution and very short dead time. Subsequent event processing in programmable logic permits classical histogramming as well as time tagging of individual photons and their streaming to the host computer. Through the latter, any algorithms and methods for the analysis of fluorescence dynamics can be implemented either in real time or offline. Instrument test results from single molecule applications will be presented.
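In time-tagged mode, classical TCSPC histogramming reduces to binning photon tags modulo the excitation period. A schematic numpy version with an invented sync rate, photon count, and mono-exponential decay:

```python
import numpy as np

rng = np.random.default_rng(4)
sync_rate = 80e6                   # hypothetical excitation sync rate (Hz)
period = 1.0 / sync_rate

# Hypothetical absolute time tags: photons decay exponentially
# (tau = 2 ns) after randomly chosen sync pulses
n = 100_000
tags = rng.integers(0, 10**7, n) * period + rng.exponential(2e-9, n)

# Classical TCSPC histogram: arrival time modulo the sync period
start_stop = np.mod(tags, period)
hist, edges = np.histogram(start_stop, bins=np.arange(0.0, period, 4e-12))

# For a mono-exponential decay, the mean delay past the peak estimates tau
peak_t = edges[hist.argmax()]
tail = start_stop[start_stop > peak_t]
print(f"tau estimate ~ {tail.mean() - peak_t:.2e} s")
```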
Imaging of blood cells based on snapshot Hyper-Spectral Imaging systems
NASA Astrophysics Data System (ADS)
Robison, Christopher J.; Kolanko, Christopher; Bourlai, Thirimachos; Dawson, Jeremy M.
2015-05-01
Snapshot Hyper-Spectral imaging systems are capable of capturing several spectral bands simultaneously, offering coregistered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial Snapshot Hyper-Spectral imaging system, the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera attached to a microscope at varying objective powers and illumination intensities. Hyperspectral data cubes consisting of 25 bands of 443x313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral data cube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cell features are most prominent in the 428-442 nm band for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
Radiation analysis devices, radiation analysis methods, and articles of manufacture
Roybal, Lyle Gene
2010-06-08
Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and to combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. The amount of radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in that section alone. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels within a range of interest.
Opto-fluidics based microscopy and flow cytometry on a cell phone for blood analysis.
Zhu, Hongying; Ozcan, Aydogan
2015-01-01
Blood analysis is one of the most important clinical tests for medical diagnosis. Flow cytometry and optical microscopy are widely used techniques to perform blood analysis and therefore cost-effective translation of these technologies to resource limited settings is critical for various global health as well as telemedicine applications. In this chapter, we review our recent progress on the integration of imaging flow cytometry and fluorescent microscopy on a cell phone using compact, light-weight and cost-effective opto-fluidic attachments integrated onto the camera module of a smartphone. In our cell-phone based opto-fluidic imaging cytometry design, fluorescently labeled cells are delivered into the imaging area using a disposable micro-fluidic chip that is positioned above the existing camera unit of the cell phone. Battery powered light-emitting diodes (LEDs) are butt-coupled to the sides of this micro-fluidic chip without any lenses, which effectively acts as a multimode slab waveguide, where the excitation light is guided to excite the fluorescent targets within the micro-fluidic chip. Since the excitation light propagates perpendicular to the detection path, an inexpensive plastic absorption filter is able to reject most of the scattered light and create a decent dark-field background for fluorescent imaging. With this excitation geometry, the cell-phone camera can record fluorescent movies of the particles/cells as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the solution under test. With a similar opto-fluidic design, we have recently demonstrated imaging and automated counting of stationary blood cells (e.g., labeled white blood cells or unlabeled red blood cells) loaded within a disposable cell counting chamber. We tested the performance of this cell-phone based imaging cytometry and blood analysis platform by measuring the density of red and white blood cells as well as hemoglobin concentration in human blood samples, which showed a good match to our measurement results obtained using a commercially available hematology analyzer. Such a cell-phone enabled opto-fluidics microscopy, flow cytometry, and blood analysis platform could be especially useful for various telemedicine applications in remote and resource-limited settings.
Photon spectroscopy by picosecond differential Geiger-mode Si photomultiplier
NASA Astrophysics Data System (ADS)
Yamamoto, Masanobu; Hernandez, Keegan; Robinson, J. Paul
2018-02-01
The pixel-array silicon photomultiplier (SiPM) is known as an excellent photon sensor, with a picosecond avalanche process capable of million-fold amplification of photoelectrons. In addition, its high quantum efficiency (QE), small size, low bias voltage, and light durability are attractive features for biological applications. The primary disadvantage is the limited dynamic range due to the 50 ns recharge process, and a high dark count is an additional hurdle. We have developed a wide-dynamic-range Si photon detection system applying ultra-fast differentiation signal processing, temperature control by a thermoelectric device, and a giga-photon counter with nine decimal digits of dynamic range. The tested performance spans six orders of magnitude, with 600 ps pulse width and sub-fW sensitivity. Combined with 405 nm laser illumination and a motorized monochromator, a Laser Induced Fluorescence Photon Spectrometry (LIPS) system has been developed with a scan range of 200-900 nm at a maximum of 500 nm/sec and 1 nm FWHM. Based on the Planck equation E = hν, this photon counting spectrum provides a fundamental advance in spectral analysis by digital processing. Advantages include its ultimate sensitivity and theoretical linearity, as well as quantitative and logarithmic analysis without the use of arbitrary units. Laser excitation is also useful for evaluating photobleaching or oxidation in materials under higher-energy illumination. The typical detection limit of traditional photocurrent measurement is about 1 pW, which corresponds to millions of photons; with our system it is possible to evaluate the photon spectrum and determine the background noise and autofluorescence (AFL) of the optics in any cytometry or imaging system component. In addition, the photon-stream digital signal opens up a new approach to picosecond time-domain analysis. Photon spectroscopy is a powerful method for the analysis of fluorescence and optical properties in biology.
NASA Astrophysics Data System (ADS)
Nishizawa, Yukiyasu; Sugita, Takeshi; Sanada, Yukihisa; Torii, Tatsuo
2015-04-01
Since 2011, MEXT (the Ministry of Education, Culture, Sports, Science and Technology, Japan) has been conducting aerial monitoring to investigate the distribution of radioactive cesium dispersed into the atmosphere after the accident at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) of the Tokyo Electric Power Company. Distribution maps of the air dose-rate at 1 m above the ground and of the radioactive cesium deposition concentration on the ground are prepared using spectra obtained by aerial monitoring. The radioactive cesium deposition is derived from its dose rate, which is calculated by excluding the dose rate of background radiation due to natural radionuclides from the air dose-rate at 1 m above the ground. The first step of the current method of calculating the dose rate due to natural radionuclides is to calculate, for areas where no radioactive cesium is detected, the ratio of the total count rate to the count rate at energies of 1,400 keV or higher (the BG-Index). The next step is to calculate the air dose rate due to natural radionuclides by multiplying the BG-Index by the integrated count rate of 1,400 keV or higher for the area where radioactive cesium is distributed. In high dose-rate areas, however, the count rate of the 1,365 keV peak of Cs-134, though small, is included in the integrated count rate of 1,400 keV or higher, which can cause an overestimation of the air dose rate of natural radionuclides. We developed a method for accurately evaluating distribution maps of the natural air dose-rate by excluding the effect of radioactive cesium even in contaminated areas, and obtained an accurate air dose-rate map attributed to the radioactive cesium deposited on the ground. Furthermore, the natural dose-rate distribution throughout Japan has been obtained by this method.
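The BG-Index bookkeeping amounts to a ratio learned in cesium-free areas and applied to the high-energy count rate elsewhere. A schematic example with invented count rates; converting count rates to dose rates requires detector calibration factors not shown here:

```python
# BG-Index style background subtraction (schematic; numbers invented).
# In cesium-free areas: BG_index = total count rate / count rate above 1400 keV.
bg_total_cps, bg_high_cps = 1200.0, 40.0
bg_index = bg_total_cps / bg_high_cps            # ~30 here

# In a contaminated area, counts above 1400 keV are (mostly) natural,
# so the natural contribution to the total is BG_index * high-energy rate.
area_total_cps, area_high_cps = 5200.0, 42.0
natural_cps = bg_index * area_high_cps
cesium_cps = area_total_cps - natural_cps
print(f"natural ~ {natural_cps:.0f} cps, cesium ~ {cesium_cps:.0f} cps")
```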
LPT. Low power test (TAN640 and 641) floor plan. Cells ...
LPT. Low power test (TAN-640 and -641) floor plan. Cells 101 and 102, control rooms, shielded counting room, generator room, list of room numbers and names. Door details. Ralph M. Parsons 1229-12 ANP/GE-7-640-A-1. November 1956. Approved by INEEL Classification Office for public release. INEEL index code no. 038-0640-00-693-107274 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation
2013-06-01
As supercomputer systems grow to exascale levels of performance, the smallest elements of a single processor can greatly affect the entire computer system (e.g., its power consumption). As systems approach exascale, the core count will exceed 1024 and the number of transistors used in...
Direct Power Injection of Microcontrollers in PCB Environments (Postprint)
2012-09-01
...and model development. The Atmel AT89LP2052 8-bit microcontroller has been programmed to complete a binary count from 2^0 to 2^8. A 20-pin SOIC has been mounted onto the custom board. LabVIEW has been used to control the power level and timing of the RF source (MXG), and data acquisition using the...
Lu, Li-Fen; Wang, Chao-Ping; Tsai, I-Ting; Hung, Wei-Chin; Yu, Teng-Hung; Wu, Cheng-Ching; Hsu, Chia-Chang; Lu, Yung-Chuan; Chung, Fu-Mei; Jean, Mei-Chu Yen
2016-01-01
Although shift work has been suspected to be a risk factor for cardiovascular disease, little research has been done to determine the underlying inflammatory mechanisms. This study investigated the association between shift work and circulating total and differential leukocyte counts among Chinese steel workers. The subjects were 1,654 line workers in a steel plant who responded to a cross-sectional survey with a questionnaire on basic attributes, lifestyle, and sleep. All workers in the plant received a periodic health checkup, in which total and differential leukocyte counts were examined. Shift workers had higher rates of alcohol use, smoking, poor sleep, low physical exercise, and obesity than daytime workers. In further analysis, we found that peripheral total WBC, monocyte, neutrophil, and lymphocyte counts were also greater in shift workers than in daytime workers, and when subjects were divided into quartiles according to these counts, increased leukocyte count was associated with shift work. In stepwise linear regression analysis, smoking, obesity, and shift work were independently associated with total WBC, monocyte, neutrophil, and lymphocyte counts. This study indicates that peripheral total and differential leukocyte counts are significantly higher in shift workers, suggesting that shift work may be a risk factor for cardiovascular disease. Applicable intervention strategies are needed for the prevention of cardiovascular disease in shift workers.
Rowlands, G J; Musoke, A J; Morzaria, S P; Nagda, S M; Ballingall, K T; McKeever, D J
2000-04-01
A statistically derived disease reaction index based on parasitological, clinical and haematological measurements observed in 309 five- to eight-month-old Boran cattle following laboratory challenge with Theileria parva is described. Principal component analysis was applied to 13 measures, including first appearance of schizonts, first appearance of piroplasms and first occurrence of pyrexia, together with the duration and severity of these symptoms, and white blood cell count. The first principal component, which was based on approximately equal contributions of the 13 variables, provided the definition of the disease reaction index, defined on a scale of 0-10. As well as providing a more objective measure of the severity of the reaction, the continuous nature of the index score enables more powerful statistical analysis of the data than was previously possible with the clinically derived categories of non-, mild, moderate and severe reactions.
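An index of this kind can be built by projecting the standardized measurements onto the first principal component and rescaling it to 0-10. A sketch with scikit-learn on hypothetical data, with 13 random columns standing in for the measures listed above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(309, 13))           # hypothetical 13 clinical measures
X[:, 1:] += 0.8 * X[:, [0]]              # induce a shared 'severity' signal

z = StandardScaler().fit_transform(X)
pc1 = PCA(n_components=1).fit_transform(z).ravel()

# Orient so that larger = more severe, then rescale to a 0-10 index
if np.corrcoef(pc1, z[:, 0])[0, 1] < 0:
    pc1 = -pc1
index = 10 * (pc1 - pc1.min()) / (pc1.max() - pc1.min())
print(index[:5].round(2))
```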
Genetic analysis of circulating tumor cells in pancreatic cancer patients: A pilot study.
Görner, Karin; Bachmann, Jeannine; Holzhauer, Claudia; Kirchner, Roland; Raba, Katharina; Fischer, Johannes C; Martignoni, Marc E; Schiemann, Matthias; Alunni-Fabbroni, Marianna
2015-07-01
Pancreatic cancer is one of the most aggressive malignant tumors, mainly because of its aggressive metastatic spread. In recent years, circulating tumor cells (CTCs) have become associated with tumor metastasis, yet little is known about their expression profiles. The aim of this study was to develop a complete workflow for isolating circulating tumor cells from patients with pancreatic cancer and characterizing them genetically. We show that the proposed workflow offers technical sensitivity and specificity high enough to detect and isolate single tumor cells, and that it makes it feasible to genetically characterize single CTCs. Our work discloses a complete workflow to detect, count and genetically analyze individual CTCs isolated from blood samples. This method has a central impact on the early detection of metastasis development. The combination of cell quantification and genetic analysis provides clinicians with a powerful tool not available so far.
Singular Spectrum Analysis for Astronomical Time Series: Constructing a Parsimonious Hypothesis Test
NASA Astrophysics Data System (ADS)
Greco, G.; Kondrashov, D.; Kobayashi, S.; Ghil, M.; Branchesi, M.; Guidorzi, C.; Stratta, G.; Ciszak, M.; Marino, F.; Ortolan, A.
We present a data-adaptive spectral method, Monte Carlo Singular Spectrum Analysis (MC-SSA), and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with 1/f^β power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena possibly hidden in high-energy transients.
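The null model used by such a test can be simulated in a few lines: an AR(1) latent signal whose instantaneous rate generates Poisson counts. A minimal sketch with invented parameters; the MC-SSA significance machinery itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(6)

def ar1_poisson(n, phi=0.7, mean_rate=50.0, sigma=0.2):
    """Simulate an AR(1) log-rate corrupted by Poisson counting noise.

    The latent source varies as an AR(1) process; observed counts are
    Poisson draws around the instantaneous rate. Parameters illustrative.
    """
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    rate = mean_rate * np.exp(x)        # keeps rates positive
    return rng.poisson(rate)

counts = ar1_poisson(4096)
# The periodogram shows the 1/f-like AR(1) continuum at low frequencies
# flattening into the white Poisson noise floor at high frequencies.
psd = np.abs(np.fft.rfft(counts - counts.mean())) ** 2 / len(counts)
lo, hi = psd[1:50].mean(), psd[-500:].mean()
print(f"low-f / high-f PSD ratio ~ {lo / hi:.1f}")
```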
NASA Technical Reports Server (NTRS)
Swimm, Randall; Garrett, Henry B.; Jun, Insoo; Evans, Robin W.
2004-01-01
In this study we examine ten-minute omni-directional averages of energetic electron data measured by the Galileo spacecraft Energetic Particle Detector (EPD). Count rates from electron channels B1, DC2, and DC3 are evaluated using a power law model to yield estimates of the differential electron fluxes from 1 MeV to 11 MeV at distances between 8 and 51 Jupiter radii. Whereas the orbit of the Galileo spacecraft remained close to the rotational equatorial plane of Jupiter, the approximately 11 degree tilt of the magnetic axis of Jupiter relative to its rotational axis allowed the EPD instrument to sample high energy electrons at limited distances normal to the magnetic equatorial plane. We present a Fourier analysis of the semi-diurnal variation of electron fluxes with longitude.
Should the Standard Count Be Excluded from Neutron Probe Calibration?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Z. Fred
About six decades after its introduction, the neutron probe remains one of the most accurate methods for indirect measurement of soil moisture content. Traditionally, the calibration of a neutron probe involves the ratio of the neutron count in the soil to a standard count, which is the neutron count in a fixed environment such as the probe shield or a specially designed calibration tank. The drawback of this count-ratio-based calibration is that the error in the standard count is carried through to all the measurements. An alternative calibration is to use the neutron counts only, not the ratio, with proper correction for radioactive decay and counting time. To evaluate both approaches, the shield counts of a neutron probe used for three decades were analyzed. The results show that the surrounding conditions have a substantial effect on the standard count. The error in the standard count also impacts the calculation of water storage and could indicate false consistency among replicates. The analysis of the shield counts indicates a negligible aging effect of the instrument over a period of 26 years. It is concluded that, with the standard count excluded, count-based calibration is appropriate and sometimes even better than ratio-based calibration. Count-based calibration is especially useful for historical data when the standard count was questionable or absent.
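The count-based alternative amounts to normalizing raw counts for counting time and source decay instead of dividing by a shield count. A sketch, assuming an Am-241/Be source; the reference counting time and all raw numbers are illustrative:

```python
AM_BE_HALF_LIFE_Y = 432.2          # Am-241 half-life (years), a typical source

def corrected_count(raw_count, count_time_s, years_since_ref,
                    ref_time_s=16.0, half_life_y=AM_BE_HALF_LIFE_Y):
    """Normalize a raw neutron count to a reference counting time and
    correct for radioactive decay of the source since a reference date.
    The 16 s reference time is an arbitrary illustrative choice."""
    time_norm = raw_count * (ref_time_s / count_time_s)
    decay = 0.5 ** (years_since_ref / half_life_y)
    return time_norm / decay

# Hypothetical usage: a 32 s count of 25,000, taken 20 years after calibration
print(f"{corrected_count(25_000, 32.0, 20.0):.0f} counts (decay-corrected)")
```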
Risović, Dubravko; Pavlović, Zivko
2013-01-01
Processing of gray-scale images in order to determine the corresponding fractal dimension is very important owing to the widespread use of imaging technologies and the application of fractal analysis in many areas of science, technology, and medicine. To this end, many methods for the estimation of fractal dimension from gray-scale images have been developed and are routinely used. Unfortunately, different methods (dimension estimators) often yield significantly different results, in a manner that makes interpretation difficult. Here we report results of a comparative assessment of the performance of several of the most frequently used algorithms for estimation of fractal dimension. For that purpose, we used scanning electron microscope images of aluminum oxide surfaces with different fractal dimensions. The performance of the algorithms was evaluated using the statistical Z-score approach. The differences between the performances of six methods are discussed and further compared with results obtained by electrochemical impedance spectroscopy (EIS) on the same samples. The analysis shows that the performance of the investigated algorithms varies considerably and that systematically erroneous fractal dimensions can be obtained with certain methods. The differential cube counting, triangulation, and box counting algorithms showed satisfactory performance over the whole investigated range of fractal dimensions. The difference statistic proved less reliable, generating 4% unsatisfactory results. The performances of the power spectrum, partitioning and EIS methods were unsatisfactory in 29%, 38%, and 75% of estimations, respectively. The results of this study should provide useful guidelines to researchers attempting fractal analysis of images obtained by scanning microscopy or atomic force microscopy.
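Box counting is the simplest member of this family. A minimal version for a binary image is sketched below; gray-scale variants such as differential cube counting extend the same idea to intensity surfaces:

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal (box-counting) dimension of a binary image.

    Counts occupied boxes at power-of-two box sizes and fits the slope of
    log(count) vs log(1/size). Minimal sketch; real estimators need care
    with thresholds, padding, and choice of fit range.
    """
    n = min(mask.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        h, w = mask.shape[0] // s, mask.shape[1] // s
        blocks = mask[:h * s, :w * s].reshape(h, s, w, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check on a filled square: the dimension should be close to 2
img = np.ones((512, 512), dtype=bool)
print(round(box_counting_dimension(img), 2))
```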
Automated food microbiology: potential for the hydrophobic grid-membrane filter.
Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L
1978-01-01
Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054
A novel rotometer based on a RISC microcontroller.
Heredia-López, F J; Bata-García, J L; Alvarez-Cervera, F J; Góngora-Alfaro, J L
2002-08-01
A new, low-cost rotometer, based on a reduced instruction set computer (RISC) microcontroller, is presented. Like earlier devices, it counts the number and direction of full turns for predetermined time periods during the evaluation of turning behavior induced by drug administration in rats. The present stand-alone system includes a nonvolatile memory for long-term data storage and a serial port for data transmission. It also contains a display for monitoring the experiments and has battery backup to avoid interruptions owing to power failures. A high correlation was found (r > .988, p < 2 × 10^-14) between the counts of the rotometer and those of two trained observers. The system reflects quantitative differences in turning behavior owing to pharmacological manipulations. It provides the most common counting parameters and is inexpensive, flexible, highly reliable, and completely portable (weight including batteries, 159 g).
Counting statistics of tunneling current
NASA Astrophysics Data System (ADS)
Levitov, L. S.; Reznikov, M.
2004-09-01
The form of electron counting statistics of the tunneling current noise in a generic many-body interacting electron system is obtained and universal relations between its different moments are derived. A generalized fluctuation-dissipation theorem providing a relation between current and noise at arbitrary bias-to-temperature ratio eV/k_BT is established in the tunneling Hamiltonian approximation. The third correlator of current fluctuations S_3 (the skewness of the charge counting distribution) has a universal Schottky-type relation with the current and quasiparticle charge that holds in a wide bias voltage range, both at large and small eV/k_BT. The insensitivity of S_3 to the Nyquist-Schottky crossover represents an advantage compared to the Schottky formula for the noise power. We discuss the possibility of using the correlator S_3 for detecting quasiparticle charge at high temperatures.
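For a tunnel junction these statements take a compact standard form (summarized here for orientation, with q* the quasiparticle charge; not quoted from the paper itself): the noise power interpolates between the Nyquist and Schottky limits, while the third cumulant does not,

```latex
% Noise power across the Nyquist-Schottky crossover:
S_2 = q^* I \coth\!\left(\frac{q^* V}{2 k_B T}\right)
% Third cumulant of the current fluctuations: temperature independent,
% hence usable for quasiparticle charge detection at high T:
S_3 = (q^*)^2 I
```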
De Backer, A; Martinez, G T; MacArthur, K E; Jones, L; Béché, A; Nellist, P D; Van Aert, S
2015-04-01
Quantitative annular dark-field scanning transmission electron microscopy (ADF STEM) has become a powerful technique for characterising nano-particles on an atomic scale. Because of their limited size and beam sensitivity, the atomic structure of such particles can be extremely challenging to determine, so keeping the incoming electron dose to a minimum is important. However, this may reduce the reliability of quantitative ADF STEM, as we demonstrate here for nano-particle atom-counting. Based on experimental ADF STEM images of a real industrial catalyst, we discuss the limits for counting the number of atoms in a projected atomic column with single-atom sensitivity. We diagnose these limits by combining a thorough statistical method with detailed image simulations.
Borné, Yan; Smith, J Gustav; Nilsson, Peter M; Melander, Olle; Hedblad, Bo; Engström, Gunnar
2016-01-01
High concentrations of leukocytes in blood have been associated with diabetes mellitus. This prospective study aimed to explore whether total and differential leukocyte counts are associated with incidence of diabetes. A missense variant R262W in the SH2B3 (SH2B adaptor protein 3) gene, coding for a protein that negatively regulates hematopoietic cell proliferation, was also studied in relation to incidence of diabetes. Leukocyte count and its subtypes (neutrophils, lymphocytes and mixed cells) were analyzed in 26,667 men and women, 45-73 years old, from the population-based Malmö Diet and Cancer study. Information about the R262W polymorphism (rs3184504) in SH2B3 was genotyped in 24,489 subjects. Incidence of diabetes was studied during a mean follow-up of 14 years. Cox proportional hazards regression was used to examine incidence of diabetes by total and differential leukocyte counts. Mendelian randomization analysis using R262W as an instrumental variable was performed with two-stage least squares regression. A total of 2,946 subjects developed diabetes during the follow-up period. After taking several possible confounders into account, concentrations of total leukocytes, neutrophils and lymphocytes were all significantly associated with incidence of diabetes. The adjusted hazard ratios (95% confidence interval; quartile 4 vs quartile 1) were 1.37 (1.22-1.53) for total leukocytes, 1.33 (1.19-1.49) for neutrophils and 1.29 (1.15-1.44) for lymphocytes. The R262W polymorphism was strongly associated with leukocytes (0.11×10^9 cells/l per T allele, p = 1.14×10^-12), lymphocytes (p = 4.3×10^-16), neutrophils (p = 8.0×10^-6) and mixed cells (p = 3.0×10^-6). However, there was no significant association between R262W and fasting glucose, HbA1c or incidence of diabetes. Concentrations of total leukocytes, neutrophils and lymphocytes are associated with incidence of diabetes. However, the lack of association with the R262W polymorphism suggests that the associations may not be causal, although limitations in statistical power and balancing pleiotropic effects cannot be excluded.
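The two-stage least squares step can be sketched directly: regress the exposure on the genotype, then the outcome on the fitted exposure. A toy simulation in which confounding, not causation, drives the leukocyte-outcome association; all effect sizes are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 20_000
# Hypothetical data: g = instrument (allele count), u = unobserved confounder
g = rng.binomial(2, 0.4, n)
u = rng.normal(size=n)
leuko = 6.0 + 0.11 * g + 0.5 * u + rng.normal(size=n)  # exposure
outcome = 0.4 * u + rng.normal(size=n)                  # no causal leuko effect

# Stage 1: regress exposure on the instrument.
# Stage 2: regress outcome on the fitted exposure; the slope estimates
# the causal effect, free of the confounding through u.
stage1 = sm.OLS(leuko, sm.add_constant(g)).fit()
leuko_hat = stage1.fittedvalues
stage2 = sm.OLS(outcome, sm.add_constant(leuko_hat)).fit()
print(f"2SLS causal estimate: {stage2.params[1]:+.3f}")  # ~0, as simulated
```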
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. This information is widely used to evaluate the effectiveness of cancer therapy and to diagnose hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to computer-aided systems (CAS) built on rigorous image-processing solutions. In CAS counting work, segmentation is the crucial step for ensuring the accuracy of the counted cells, and an optimal segmentation strategy that works under various blood-smear image acquisition conditions remains a great challenge. In this paper, different segmentation methods based on color space analysis are compared to find the best counting outcome. Initially, color space correction is applied to the original blood-smear image to standardize the image color intensity level. Next, white blood cells are segmented using a combination of several color-space subtractions (RGB, CMYK and HSV) and Otsu thresholding. Noise and unwanted regions remaining after segmentation are eliminated by applying a combination of morphological and connected component labelling (CCL) filters. Finally, the circle Hough transform (CHT) is applied to the segmented image to estimate the number of WBCs, including those within clumped regions. From the experiments, it is found that G-S yields the best performance.
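A condensed sketch of this kind of pipeline using OpenCV; the channel choice, kernel sizes, area threshold, Hough parameters and file name are illustrative assumptions, not the authors' settings:

```python
# Otsu thresholding -> morphology -> connected-component filtering -> CHT.
import cv2
import numpy as np

img = cv2.imread("blood_smear.png")                 # hypothetical input
g = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]   # saturation channel
_, mask = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes speckle noise; closing fills small holes.
k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)

# Connected-component labelling drops regions below a size threshold.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] < 200:
        mask[labels == i] = 0

# The circle Hough transform estimates the cell count, including cells
# that overlap inside clumped regions.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=50, param2=18, minRadius=8, maxRadius=30)
print("WBC count:", 0 if circles is None else circles.shape[1])
```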
CORNAS: coverage-dependent RNA-Seq analysis of gene expression data without biological replicates.
Low, Joel Z B; Khang, Tsung Fei; Tammi, Martti T
2017-12-28
In current statistical methods for calling differentially expressed genes in RNA-Seq experiments, the assumption is that an adjusted observed gene count represents an unknown true gene count. This adjustment usually consists of a normalization step to account for heterogeneous sample library sizes, and the resulting normalized gene counts are then used as input for parametric or non-parametric differential gene expression tests. However, a distribution of true gene counts, each with a different probability, can result in the same observed gene count. Importantly, sequencing coverage information is currently not explicitly incorporated into any of the statistical models used for RNA-Seq analysis. We developed a fast Bayesian method which uses the sequencing coverage information determined from the concentration of an RNA sample to estimate the posterior distribution of a true gene count. Our method has better or comparable performance compared to NOISeq and GFOLD, according to results from simulations and experiments with real unreplicated data. We incorporated a previously unused sequencing coverage parameter into a procedure for differential gene expression analysis with RNA-Seq data. Our results suggest that our method can be used to overcome analytical bottlenecks in experiments with a limited number of replicates and low sequencing coverage. The method is implemented in CORNAS (Coverage-dependent RNA-Seq), and is available at https://github.com/joel-lzb/CORNAS .
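The coverage idea can be illustrated with a minimal posterior computation. Purely for illustration (this is not the CORNAS model itself), assume an observed count k arises by binomial thinning of an unknown true count n at coverage c, with a flat prior on n:

```python
# Posterior over the true count n given observed count k and coverage c.
import numpy as np
from scipy.stats import binom

k, c = 50, 0.3                        # observed count, assumed coverage
n_grid = np.arange(k, 2000)
post = binom.pmf(k, n_grid, c)        # likelihood x flat prior
post /= post.sum()

mean_n = (n_grid * post).sum()
lo, hi = n_grid[np.searchsorted(post.cumsum(), [0.025, 0.975])]
print(f"posterior mean true count {mean_n:.0f}, 95% interval [{lo}, {hi}]")
```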
NASA Technical Reports Server (NTRS)
Stoker, P. H.
1985-01-01
Recordings of relativistic solar flare protons observed at Sanae, Antarctica, show that the percentage increase in the counting rate of the neutron-moderated detector (4NMD) is larger than the percentage increase in the counting rate of the 3NM64 neutron monitor. These relative increases are described by solar proton differential spectra j_s(P) = A·P^β. The power β is determined for each event, and the hardness and temporal variation of β are found for the ground level events (GLE) of 7 May 1978 and 22 November 1977.
A review of costing methodologies in critical care studies.
Pines, Jesse M; Fager, Samuel S; Milzman, David P
2002-09-01
Clinical decision making in critical care has traditionally been based on clinical outcome measures such as mortality and morbidity. Over the past few decades, however, increasing competition in the health care marketplace has made it necessary to consider costs when making clinical and managerial decisions in critical care. Sophisticated costing methodologies have been developed to aid this decision-making process. We performed a narrative review of published costing studies in critical care during the past 6 years. A total of 282 articles were found, of which 68 met our search criteria. They involved a mean of 508 patients (range, 20-13,907). A total of 92.6% of the studies (63 of 68) used traditional cost analysis, whereas the remaining 7.4% (5 of 68) used cost-effectiveness analysis. None (0 of 68) used cost-benefit analysis or cost-utility analysis. A total of 36.7% (25 of 68) used hospital charges as a surrogate for actual costs. Of the 43 articles that actually counted costs, 37.2% (16 of 43) counted physician costs, 27.9% (12 of 43) counted facility costs, 34.9% (15 of 43) counted nursing costs, 9.3% (4 of 43) counted societal costs, and 90.7% (39 of 43) counted laboratory, equipment, and pharmacy costs. Our conclusion is that despite considerable progress in costing methodologies, critical care studies have not adequately implemented these techniques. Given the importance of financial implications in medicine, it would be prudent for critical care studies to use these more advanced techniques. Copyright 2002, Elsevier Science (USA). All rights reserved.
Bayesian Network Meta-Analysis for Unordered Categorical Outcomes with Incomplete Data
ERIC Educational Resources Information Center
Schmid, Christopher H.; Trikalinos, Thomas A.; Olkin, Ingram
2014-01-01
We develop a Bayesian multinomial network meta-analysis model for unordered (nominal) categorical outcomes that allows for partially observed data in which exact event counts may not be known for each category. This model properly accounts for correlations of counts in mutually exclusive categories and enables proper comparison and ranking of…
Four Forms of the Fourier Transform - for Freshmen, using Matlab
NASA Astrophysics Data System (ADS)
Simons, F. J.; Maloof, A. C.
2016-12-01
In 2015, a Fall "Freshman Seminar" at Princeton University (http://geoweb.princeton.edu/people/simons/FRS-SESC.html) taught students to combine field observations of the natural world with quantitative modeling and interpretation, to answer questions like: "How have Earth and human histories been recorded in the geology of Princeton, the Catskills, France and Spain?" (where we took the students on a data-gathering field trip during Fall Break), and "What experiments and analysis can a first-year (possibly non-future-major) do to query such archives of the past?" In the classroom, through problem sets, and around campus, students gained practical experience collecting geological and geophysical data in a geographic context, and analyzing these data using statistical techniques such as regression, time-series and image analysis, with the programming language Matlab. In this presentation I will detail how we instilled basic Matlab skills for quantitative geoscience data analysis through a 6-week progression of topics and exercises. In the 6 weeks after the Fall Break trip, we strengthened these competencies to make our students fully proficient for further learning, as evidenced by their end-of-term independent research work. The particular case study is focused on introducing power-spectral analysis to freshmen, in a way that even the least quantitative among them could functionally understand. Not counting (0) "inspection", the four ways by which we have successfully instilled the concept of power-spectral analysis in a hands-on fashion are (1) "correlation", (2) "inversion", (3) "stacking", and formal (4) "Fourier transformation". These four provide the main "mappings". Along the way, of course, we also make sure that the students understand that "power-spectral density estimation" is not the same as "Fourier transformation", and that not every Fourier transform has to be "Fast". Hence, concepts from analysis-of-variance techniques, regression, and hypothesis testing arise in this context, and will be discussed.
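The distinction drawn here between Fourier transformation and power-spectral density estimation can be made concrete in a few lines. A minimal sketch in Python (the seminar itself used Matlab); the signal and parameter choices are illustrative:

```python
# The raw |FFT|^2 periodogram is a noisy, inconsistent PSD estimate; an
# averaged (Welch) estimate trades resolution for variance reduction.
import numpy as np
from scipy.signal import periodogram, welch

fs = 100.0
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.random.default_rng(2).normal(size=t.size)

f_raw, p_raw = periodogram(x, fs)      # single-FFT estimate, high variance
f_w, p_w = welch(x, fs, nperseg=512)   # averaged segments, lower variance
print("raw peak at %.2f Hz, Welch peak at %.2f Hz"
      % (f_raw[p_raw.argmax()], f_w[p_w.argmax()]))
```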
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
Rhythm in Number: Exploring the Affective, Social and Mathematical Dimensions of Using "TouchCounts"
ERIC Educational Resources Information Center
Sinclair, Nathalie; Chorney, Sean; Rodney, Sheree
2016-01-01
In this paper, we investigate the mathematical, social and affective nature of children's engagement with "TouchCounts," a multitouch application for counting and doing arithmetic. In order to study these dimensions of engagement in a way that recognizes their fundamental intertwinement, we use rhythm as a primary unit of analysis.…
The risks of splash injury when using power tools during orthopaedic surgery: a prospective study.
Alani, Asef; Modi, Cheaten; Almedghio, Sami; Mackie, Ian
2008-10-01
Transmission of blood-borne infection can occur across muco-cutaneous membranes. During trauma and orthopaedic surgery, the use of power tools increases the spraying of bodily fluid, resulting in an increased risk of infectious splash injury to the face. This prospective study involved 25 patients. The visors worn by the operating team were examined postoperatively to identify any visible blood, fat and body tissue splashes. Eleven patients underwent knee arthroplasty. Splash counts to the surgeon's mouth/lip, nose/cheek and eye regions were 217, 105, and 62 respectively; they were 258, 147, and 82 for the assistant. Fourteen patients had hip replacement; splash counts to the surgeon's mouth/lip, nose/cheek and eye regions were 214, 90, and 53 respectively, and 137, 39 and 27 for the assistant. To conclude, the face is vulnerable to material and fluid strikes during joint arthroplasty surgery. The visor is a reliable barrier to blood, fat and body tissue splashes and minimises the risk of exposure to blood-borne viruses. Therefore, a visor should be worn during all joint arthroplasty procedures and any procedure that involves the use of power tools.
Shigematsu, Toru; Ueno, Shigeaki; Tsuchida, Yasuharu; Hayashi, Mayumi; Okonogi, Hiroko; Masaki, Haruhiko; Fujii, Tomoyuki
2007-12-01
Bacterial counts under liquid cultivation using 96-well microplates were performed. The counts under liquid and under solid cultivation were equivalent in foods, although the counts under liquid cultivation exceeded those under solid cultivation in seawater, suggesting that some bacteria in seawater were viable but did not form detectable colonies. Phylogenetic analysis of bacteria obtained under liquid cultivation was also performed.
Considerations for monitoring raptor population trends based on counts of migrants
Titus, K.; Fuller, M.R.; Ruos, J.L.; Meyburg, B-U.; Chancellor, R.D.
1989-01-01
Various problems were identified with standardized hawk count data as annually collected at six sites. Some of the hawk lookouts increased their hours of observation from 1979-1985, thereby confounding the total counts. Data-recording problems and missing data hamper the coding of the data and their use with modern analytical techniques. Coefficients of variation among years in counts averaged about 40%. The advantages and disadvantages of various analytical techniques are discussed, including regression, non-parametric rank correlation trend analysis, and moving averages.
Microbiology of beef carcasses before and after slaughterline automation.
Whelehan, O. P.; Hudson, W. R.; Roberts, T. A.
1986-01-01
The bacterial status of beef carcasses at a commercial abattoir was monitored before and after slaughterline automation. Bacterial counts did not differ significantly overall (P greater than 0.05) between the original manual line and the automated line for either morning or afternoon slaughter. On the manual line, counts in the morning were lower than those from carcasses slaughtered in the afternoon, but on the automated line there was no difference between morning and afternoon counts. Due to a highly significant line × sample site interaction for both morning and afternoon counts, overall differences among sample sites were not found by analysis of variance. However, principal components analysis revealed a significant shift in bacterial contamination among some sites due to slaughterline changes. The incidence of Enterobacteriaceae increased marginally following automation. PMID:3701039
Transverse myelitis caused by hepatitis E: previously undescribed in adults
Sarkar, Pamela; Morgan, Catherine; Ijaz, Samreen
2015-01-01
We report the case of a 62-year-old Caucasian woman who was admitted with urinary retention and lower limb paraesthesia following a week's prodromal illness of headache and malaise. Liver function tests showed a picture of acute hepatocellular dysfunction. She developed reduced lower limb power, brisk reflexes, extensor plantars, a sensory level at T8 and reduced anal sphincter tone, establishing a clinical diagnosis of transverse myelitis. A spinal MRI showed no evidence of cauda equina or spinal cord compression. Cerebrospinal fluid (CSF) analysis showed raised protein and raised white cell count. Hepatitis E IgM and IgG were positive and hepatitis E virus was found in her CSF. She was treated with methylprednisolone and is slowly recovering with physiotherapy. PMID:26150621
NASA Astrophysics Data System (ADS)
Bashkov, O. V.; Bryansky, A. A.; Panin, S. V.; Zaikov, V. I.
2016-11-01
Strength properties of glass fiber reinforced polymers (GFRP) fabricated by vacuum and vacuum autoclave molding techniques were analyzed. The porosity of the GFRP parts manufactured by the various molding techniques was measured with the help of optical microscopy. On the basis of experimental data obtained with an acoustic emission hardware/software setup, a technique has been developed for running diagnostics and forecasting the bearing capacity of polymeric composite materials from the results of three-point bending tests. The operating principle of the technique is the evaluation of changes in the power-function index of the dependence of total acoustic emission counts on loading stress.
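As a sketch of that operating principle: fitting the power-function index of cumulative acoustic-emission counts versus stress can be done by least squares on log-log axes. The data and the exponent below are invented for illustration:

```python
# Fit N(sigma) ~ A * sigma**b; a change in b under load is then taken as
# a damage indicator in this style of AE diagnostics.
import numpy as np

rng = np.random.default_rng(3)
stress = np.linspace(20, 200, 30)                      # MPa, illustrative
counts = 0.01 * stress**2.5 * rng.lognormal(0, 0.1, stress.size)

b, log_a = np.polyfit(np.log(stress), np.log(counts), 1)
print(f"power-function index b = {b:.2f} (A = {np.exp(log_a):.3g})")
```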
Statistical aspects of point count sampling
Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.
1995-01-01
The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.
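The confounding of detectability with true abundance is easy to demonstrate numerically. A toy simulation, with all numbers invented, in which a stable population produces an apparent decline purely because detection probability drifts:

```python
# Incomplete counts: observed counts are binomial draws from the true
# abundance with a detection probability that changes over time.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(10)
true_n = np.full(10, 100)                # abundance is actually constant
p_detect = np.linspace(0.6, 0.4, 10)     # detectability declines
counts = rng.binomial(true_n, p_detect)  # incomplete point counts

slope = np.polyfit(years, counts, 1)[0]
print(f"apparent trend: {slope:.1f} birds/yr despite a stable population")
```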
Compton suppression gamma-counting: The effect of count rate
Millard, H.T.
1984-01-01
Past research has shown that anti-coincidence shielded Ge(Li) spectrometers enhance the signal-to-background ratios for gamma photopeaks that are situated on high Compton backgrounds. Ordinarily, an anti- or non-coincidence spectrum (A) and a coincidence spectrum (C) are collected simultaneously with these systems. To be useful in neutron activation analysis (NAA), the fractions of the photopeak counts routed to the two spectra must be constant from sample to sample, or variations must be corrected quantitatively. Most Compton suppression counting has been done at low count rates, but in NAA applications, count rates may be much higher. To operate over the wider dynamic range, the effect of count rate on the ratio of the photopeak counts in the two spectra (A/C) was studied. It was found that as the count rate increases, A/C decreases for gammas not coincident with other gammas from the same decay. For gammas coincident with other gammas, A/C increases to a maximum and then decreases. These results suggest that calibration curves are required to correct photopeak areas so that quantitative data can be obtained at higher count rates. © 1984.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Dermatoglyphic analysis of La Liébana (Cantabria, Spain). 2. Finger ridge counts.
Martín, J; Gómez, P
1993-06-01
The results of univariate and multivariate analyses of the quantitative finger dermatoglyphic traits (i.e., ridge counts) of a sample of 109 males and 88 females from La Liébana (Cantabria, Spain) are reported. Univariate results follow the trends usually found in previous studies, e.g., ranking of finger ridge counts, bilateral asymmetry, or shape of the frequency distributions. However, sexual dimorphism in finger ridge counts is nearly nonexistent. This lack of dimorphism could be related to certain characteristics of the distribution of finger dermatoglyphic patterns previously reported by the same authors. The multivariate description has been carried out by means of principal component analysis (with varimax rotation to obtain the final solution) of the correlation matrices computed from the 10 maximal finger ridge counts. Although the results do not necessarily prove the concept of developmental fields ("field theory" and later modifications), some precepts of the theory are present: field polarization and field overlapping.
Kohelet, D; Arbel, E; Ballin, A; Goldberg, M
2000-01-01
Neutrophil counts were studied in 62 preterm infants receiving mechanical ventilation for neonatal respiratory distress syndrome (NRDS). Exploratory analysis indicated that the severity of NRDS, as demonstrated by fractional inspired oxygen (FiO2), mean airway pressure (MAP), arterial-alveolar PO2 ratio (a/APO2) and oxygenation index (OI), was correlated with the percentage change of neutrophil counts during the first 5 days of life. Further analysis demonstrated that infants with NRDS who subsequently developed chronic lung disease (CLD) (n = 21) had statistically significant differences in the variation of neutrophil counts when compared with the remainder (n = 41) without CLD (-35.0% ± 4.3 vs. -16.9% ± 5.8, p < 0.02). It is concluded that significant variations in neutrophil counts during the first 5 days of life may be found in infants with NRDS who subsequently develop CLD and that these changes may have predictive value regarding the development of CLD.
Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike
2016-03-01
Radium (226Ra) contamination derived from military, industrial, and pharmaceutical products can be found at a number of historical sites across the world, posing a risk to human health. The analysis of spectral data derived using gamma-ray spectrometry can offer a powerful tool to rapidly estimate and map the activity, depth, and lateral distribution of 226Ra contamination covering an extensive area. Subsequently, reliable risk assessments can be developed for individual sites in a fraction of the timeframe compared to traditional labour-intensive sampling techniques, for example soil coring. However, local heterogeneity of the natural background, statistical counting uncertainty, and non-linear source response are confounding problems associated with gamma-ray spectral analysis. This is particularly challenging when attempting to deal with enhanced concentrations of a naturally occurring radionuclide such as 226Ra. As a result, conventional surveys tend to attribute the highest activities to the largest total signal received by a detector (gross counts): an assumption that tends to neglect higher activities at depth. To overcome these limitations, a methodology was developed making use of Monte Carlo simulations, principal component analysis and machine learning based algorithms to derive depth and activity estimates for 226Ra contamination. The approach was applied to spectra taken using two gamma-ray detectors (lanthanum bromide and sodium iodide), with the aim of identifying an optimised combination of detector and spectral processing routine. It was confirmed that, through a combination of neural networks and lanthanum bromide, the most accurate depth and activity estimates could be found. The advantage of the method was demonstrated by mapping depth and activity estimates at a case study site in Scotland, where the method identified significantly higher activity (<3 Bq g⁻¹) occurring at depth (>0.4 m) that conventional gross counting algorithms failed to identify. It was concluded that the method could easily be employed to identify areas of high activity potentially occurring at depth, prior to intrusive investigation using conventional sampling techniques. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
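A compressed sketch of the spectra-to-depth chain described above (simulate, reduce dimensionality, regress); the one-line spectral model stands in for the authors' Monte Carlo photon transport, and the network size is an invented choice:

```python
# Simulated spectra -> PCA features -> neural-network depth regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
E = np.linspace(0, 3, 256)                  # MeV energy axis

def toy_spectrum(depth):
    # Deeper sources attenuate the 186 keV 226Ra line and boost the
    # scatter continuum - a crude stand-in for full photon transport.
    peak = np.exp(-((E - 0.186) ** 2) / 2e-4) * np.exp(-3 * depth)
    continuum = np.exp(-E) * (1 + depth)
    return peak + continuum + rng.normal(0, 0.01, E.size)

depths = rng.uniform(0, 0.8, 500)           # metres
X = np.array([toy_spectrum(d) for d in depths])

model = make_pipeline(PCA(n_components=10),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                   random_state=0))
model.fit(X, depths)
print("predicted depth for a 0.5 m source:",
      model.predict(toy_spectrum(0.5).reshape(1, -1))[0])
```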
Tucker, George; Loh, Po-Ru; Berger, Bonnie
2013-10-04
Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
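A compressed sketch of the sampling loop described above; the saturating counts-to-probability map and the stand-in scoring function are assumptions for illustration, not the paper's calibrated scheme:

```python
# Map spectral counts to interaction probabilities, draw an ensemble of
# binary "alternative experimental outcomes", score each with a binary-
# input PPI method, then aggregate over the ensemble.
import numpy as np

rng = np.random.default_rng(6)
counts = np.array([[0, 5, 1],
                   [5, 0, 12],
                   [1, 12, 0]])              # toy symmetric AP-MS counts
prob = 1.0 - np.exp(-counts / 3.0)           # saturating counts -> prob.

def sample_outcome(p):
    # Draw one symmetric binary outcome matrix (zero diagonal).
    draw = np.triu(rng.random(p.shape) < p, k=1)
    return draw | draw.T

def binary_ppi_score(adj):
    # Stand-in for any existing binary-input PPI scoring method.
    return adj.astype(float)

ensemble = np.mean([binary_ppi_score(sample_outcome(prob))
                    for _ in range(1000)], axis=0)
print("aggregated interaction scores:\n", ensemble)
```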
NASA Astrophysics Data System (ADS)
Susrama, I. G.; Purnama, K. E.; Purnomo, M. H.
2016-01-01
Oligospermia is a male fertility issue defined as a low sperm concentration in the ejaculate. The normal sperm concentration is 20-120 million/ml, while oligospermia patients have a sperm concentration of less than 20 million/ml. Sperm tests are done in the fertility laboratory to determine oligospermia by checking fresh sperm according to the 2010 WHO standards [9]. The sperm are viewed under a microscope using a Neubauer improved counting chamber, and the number of sperm is counted manually. To automate this count, this research developed a system to analyse and count sperm concentration, called Automated Analysis of Sperm Concentration Counters (A2SC2), using Otsu threshold segmentation and morphology. The data used were fresh sperm samples from 10 people, analyzed directly in the laboratory. The test results using the A2SC2 method gave an accuracy of 91%. Thus, in this study, A2SC2 can be used to calculate the number and concentration of sperm automatically.
A compact 7-cell Si-drift detector module for high-count rate X-ray spectroscopy.
Hansen, K; Reckleben, C; Diehl, I; Klär, H
2008-05-01
A new Si-drift detector module for fast X-ray spectroscopy experiments was developed and realized. The Peltier-cooled module comprises a sensor with 7 × 7 mm² active area, an integrated circuit for amplification, shaping and detection, storage, and derandomized readout of signal pulses in parallel, and amplifiers for line driving. The compactness and hexagonal shape of the module, with a wrench size of 16 mm, allow very short distances to the specimen and multi-module arrangements. The power dissipation is 186 mW. At a shaper peaking time of 190 ns and an integration time of 450 ns, an electronic rms noise of ~11 electrons was achieved. When operated at 7 °C, FWHM line widths around 260 and 460 eV (Cu-Kα) were obtained at low rates and at sum-count rates of 1.7 MHz, respectively. The peak shift is below 1% for a broad range of count rates. At 1.7-MHz sum-count rate the throughput loss amounts to 30%.
Williams, Richard M.; Aalseth, C. E.; Brandenberger, J. M.; ...
2017-02-17
This paper describes the generation of 39Ar via reactor irradiation of potassium carbonate, followed by quantitative analysis (length-compensated proportional counting), to yield two calibration standards that are respectively 50 and 3 times atmospheric background levels. Measurements were performed in Pacific Northwest National Laboratory's shallow underground counting laboratory to study the effect of gas density on beta transport; these results are compared with simulation. The total expanded uncertainty of the specific activity for the ~50× atmospheric 39Ar in P10 standard is 3.6% (k=2).
Analysis of radioactive strontium-90 in food by Čerenkov liquid scintillation counting.
Pan, Jingjing; Emanuele, Kathryn; Maher, Eileen; Lin, Zhichao; Healey, Stephanie; Regan, Patrick
2017-08-01
A simple liquid scintillation counting method using DGA/TRU resins for removal of matrix/radiometric interferences, Čerenkov counting for measuring 90Y, and EDXRF for quantifying Y recovery was validated for analyzing 90Sr in various foods. Analysis of samples containing energetic β emitters required using TRU resin to avoid false detection and positive bias. An additional 34% increase in Y recovery was obtained by stirring the resin while eluting Y with H2C2O4. The method showed acceptable accuracy (±10%), precision (10%), and detectability (~0.09 Bq kg⁻¹). Published by Elsevier Ltd.
The renormalization group method in statistical hydrodynamics
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.
1994-09-01
This paper gives a first-principles formulation of a renormalization group (RG) method appropriate to the study of turbulence in incompressible fluids governed by the Navier-Stokes equations. The present method is a momentum-shell RG of Kadanoff-Wilson type based upon the Martin-Siggia-Rose (MSR) field-theory formulation of stochastic dynamics. A simple set of diagrammatic rules is developed which are exact within perturbation theory (unlike the well-known Ma-Mazenko prescriptions). It is also shown that the claim of Yakhot and Orszag (1986), that higher-order terms are irrelevant in the ε-expansion RG for randomly forced Navier-Stokes (RFNS) equations with power-law force spectrum F̂(k) = D_0 k^{-d+(4-ε)}, is false. In fact, as a consequence of Galilei covariance, there are an infinite number of higher-order nonlinear terms marginal by power counting in the RG analysis of the power-law RFNS, even when ε ≪ 4. The difficulty does not occur in the Forster-Nelson-Stephen (FNS) RG analysis of thermal fluctuations in an equilibrium NS fluid, which justifies a linear regression law for d ≳ 2. On the other hand, the problem occurs also at the nontrivial fixed point in the FNS Model A, or its Burgers analog, when d < 2. The marginal terms can still be present at the strong-coupling fixed point in true NS turbulence. If so, infinitely many fixed points may exist in turbulence and be associated with a somewhat surprising phenomenon: nonuniversality of the inertial-range scaling laws depending upon the dissipation-range dynamics.
Eosinophils as a marker for invasion in vulvar squamous neoplastic lesions.
Spiegel, Gregory W
2002-04-01
A study of eosinophils directed against vulvar neoplastic squamous epithelium was undertaken to determine whether there were thresholds per high-power field (hpf) or per 10 hpf that were a marker for invasion. The presence of stromal and intraepithelial eosinophils in 33 cases of vulvar grade 3 squamous intraepithelial neoplasia (carcinoma in situ) (VIN 3) was compared with that in 38 cases of vulvar invasive carcinoma with any degree of invasion (ISC). In both incisional biopsy and excisional specimens, the presence of >3 eosinophils per high-power field (eos/hpf) and the presence of ≥5 eosinophils per 10 high-power fields (eos/10 hpf) were both significantly associated with invasion, and the presence of ≥20 eos/hpf and/or >50 eos/10 hpf was limited to cases with invasion. The presence of eosinophils within the neoplastic squamous epithelium was also limited to cases with invasion. The author proposes: 1) eosinophil counts in vulvar incisional biopsy specimens of >3/hpf and/or ≥5/10 hpf warrant a note of caution that invasion may be present even when none is identified by conventional criteria; 2) eosinophil counts of >3/hpf and/or ≥5/10 hpf in excisional specimens should raise the suspicion of invasion in cases in which only VIN 3 is identified in the initial sections, and warrant additional sections and/or levels to search for invasion; 3) the above eosinophil counts provide supportive evidence for invasion in cases with equivocal invasion by conventional criteria; and 4) the presence of ≥20 eos/hpf and/or >50 eos/10 hpf, and the presence of intraepithelial eosinophils in conjunction with >3 eos/hpf and ≥5 eos/10 hpf, is virtually diagnostic of invasion.
Anomaly Detection in Power Quality at Data Centers
NASA Technical Reports Server (NTRS)
Grichine, Art; Solano, Wanda M.
2015-01-01
The goal during my internship at the National Center for Critical Information Processing and Storage (NCCIPS) is to implement an anomaly detection method through the StruxureWare SCADA Power Monitoring system. The anomaly detection mechanism provides the capability to detect and anticipate equipment degradation by monitoring power quality prior to equipment failure. First, a study is conducted that examines the existing techniques of power quality management. Based on these findings and the capabilities of the existing SCADA resources, recommendations are presented for implementing effective anomaly detection. Since voltage, current, and total harmonic distortion demonstrate Gaussian distributions, effective set-points are computed using this model while maintaining a low false-positive count.
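A minimal sketch of the Gaussian set-point idea, assuming a 3-sigma band width and invented historical voltage data; StruxureWare itself is configured through its own interface, so this only illustrates the statistics:

```python
# Learn set-points from historical data; flag readings outside mean +/- 3s.
import numpy as np

history = np.random.default_rng(7).normal(480.0, 2.0, 10_000)  # volts
mu, sigma = history.mean(), history.std()
lo, hi = mu - 3 * sigma, mu + 3 * sigma   # ~0.27% false-positive rate

def is_anomaly(reading: float) -> bool:
    """True when the reading violates the learned set-points."""
    return not (lo <= reading <= hi)

print(is_anomaly(481.0), is_anomaly(495.0))   # False True
```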
Imputed DC Link (IDCL) cell based power converters and control thereof
Divan, Deepakraj M.; Prasai, Anish; Hernendez, Jorge; Moghe, Rohit; Iyer, Amrit; Kandula, Rajendra Prasad
2016-04-26
Power flow controllers based on Imputed DC Link (IDCL) cells are provided. The IDCL cell is a self-contained power electronic building block (PEBB). The IDCL cell may be stacked in series and parallel to achieve power flow control at higher voltage and current levels. Each IDCL cell may comprise a gate drive, a voltage sharing module, and a thermal management component in order to facilitate easy integration of the cell into a variety of applications. By providing direct AC conversion, the IDCL cell based AC/AC converters reduce device count, eliminate the use of electrolytic capacitors that have life and reliability issues, and improve system efficiency compared with similarly rated back-to-back inverter system.
Cruz, E; Vieira, J; Gonçalves, R; Alves, H; Almeida, S; Rodrigues, P; Lacerda, R; Porto, G
2004-07-01
Variability in T-lymphocyte numbers is partially explained by genetic regulation. From studies in animal models, it is known that the Major Histocompatibility Complex (MHC) is involved in this regulation. In humans, this has not yet been shown. The objective of the present study was to test the hypothesis that genes in the MHC region influence the regulation of T-lymphocyte numbers. Two approaches were used. The first was an association study between T-cell counts (CD4(+) and CD8(+)) or total lymphocyte counts and HLA class I alleles (A and B) or mutations (C282Y and H63D) in HFE, the hemochromatosis gene, in an unrelated population (n = 264). The second was a sibpair correlation analysis of the same T-cell counts in relation to HLA-HFE haplotypes in subjects belonging to 48 hemochromatosis families (n = 456 sibpairs). In the normal population, results showed a strong, statistically significant association of HLA-A*01 with high numbers of CD8(+) T cells and a weaker association of HLA-A*24 with low numbers of CD8(+) T cells. Sibpair correlations revealed the most significant correlation of CD8(+) T-cell numbers for sibpairs with HLA-HFE-identical haplotypes. This was not observed for CD4(+) T cells. These results show that the MHC region is involved in the genetic regulation of CD8(+) T-cell numbers in humans. Identification of the genes responsible for this control may have important biological and clinical implications.
Kilimann, K V; Kitsubun, P; Delgado, A; Gänzle, M G; Chapleau, N; Le Bail, A; Hartmann, C
2006-07-05
The present contribution is dedicated to the experimental and theoretical assessment of microbiological process heterogeneities in the high-pressure (HP) inactivation of Lactococcus lactis ssp. cremoris MG 1363. The inactivation kinetics are determined as a function of pressure, process time, temperature, and the absence or presence of co-solutes in the buffer system, namely 4 M sodium chloride and 1.5 M sucrose. The kinetic analysis is carried out in a 0.1-L autoclave in order to minimise thermal and convective effects. From these data, a deterministic inactivation model is formulated with the logistic equation. Its independent variables represent the counts of viable cells (viable but injured) and of stress-resistant cells (viable and not injured). This model is then coupled to a thermo-fluid-dynamical simulation method, the high-pressure computational fluid dynamics (HP-CFD) technique, which yields the spatiotemporal temperature and flow fields occurring during HP application inside any considered autoclave. Besides the thermo-fluid-dynamic quantities, the coupled model also predicts the spatiotemporal distribution of both viable (VC) and stress-resistant cell counts (SRC). In order to assess the process non-uniformity of the microbial inactivation in a 3.3-L autoclave experimentally, microbial samples are placed at two distinct locations and exposed to various process conditions. It can be shown with both experimental and theoretical models that thermal heterogeneities induce process non-uniformities of more than one order of magnitude in the counts of viable cells at the end of the treatment. (c) 2006 Wiley Periodicals, Inc.
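As a rough illustration of the two-population structure (a deliberate simplification, not the authors' fitted logistic model): tracking viable cells alongside a small stress-resistant fraction reproduces the tailing that makes spatial process non-uniformity so consequential. All rate constants below are invented:

```python
# Two-pool first-order inactivation: VC decays fast, SRC decays slowly,
# so survivor curves tail once the resistant subpopulation dominates.
import numpy as np
from scipy.integrate import solve_ivp

k_vc, k_src, frac = 0.8, 0.05, 1e-4    # 1/min rates, resistant fraction

def rhs(t, y):
    vc, src = y
    return [-k_vc * vc, -k_src * src]

y0 = [1e8 * (1 - frac), 1e8 * frac]    # initial counts, CFU/ml
sol = solve_ivp(rhs, (0, 30), y0, t_eval=np.linspace(0, 30, 7))
print(np.log10(sol.y.sum(axis=0)))     # log10 survivors over time
```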
Tumor angiogenesis in advanced stage ovarian carcinoma.
Hollingsworth, H C; Kohn, E C; Steinberg, S M; Rothenberg, M L; Merino, M J
1995-07-01
Tumor angiogenesis has been found to have prognostic significance in many tumor types for predicting an increased risk of metastasis. We assessed tumor vascularity in 43 cases of advanced stage (International Federation of Gynecologists and Obstetricians stages III and IV) ovarian cancer by using the highly specific endothelial cell marker CD34. Microvessel counts and stage were associated with disease-free survival and with overall survival by Kaplan-Meier analysis. The plots show that higher stage, higher average vessel count at 200x (200x avg) and 400x (400x avg) magnification and highest vessel count at 400x (400x high) magnification confer a worse prognosis for disease-free survival. Average vessel count of less than 16 (400x avg, P2 = 0.01) and less than 45 (200x avg, P2 = 0.026) suggested a better survival. Similarly, a high vessel count of less than 20 (400x high, P2 = 0.019) conferred a better survival as well. The plots suggest that higher stage, higher average vessel count at 200x and 400x, and highest vessel count at 200x and 400x show a trend to worse overall survival as well. With the Cox proportional hazards model, stage was the best predictor of overall survival, however, the average microvessel count at 400x was found to be the best predictor of disease-free survival. These results suggest that analysis of neovascularization in advanced stage ovarian cancer may be a useful prognostic factor.
NASA Astrophysics Data System (ADS)
Davidge, H.; Serjeant, S.; Pearson, C.; Matsuhara, H.; Wada, T.; Dryer, B.; Barrufet, L.
2017-12-01
We present the first detailed analysis of three extragalactic fields (IRAC Dark Field, ELAIS-N1, ADF-S) observed by the infrared satellite AKARI, using an optimized data analysis toolkit developed specifically for the processing of extragalactic point sources. The InfraRed Camera (IRC) on AKARI complements the Spitzer Space Telescope via its comprehensive coverage between 8-24 μm, filling the gap between the Spitzer/IRAC and MIPS instruments. Source counts in the AKARI bands at 3.2, 4.1, 7, 11, 15 and 18 μm are presented. At near-infrared wavelengths, our source counts are consistent with counts made in other AKARI fields and in general with Spitzer/IRAC (except at 3.2 μm, where our counts lie above). In the mid-infrared (11-18 μm), we find our counts are consistent with both previous surveys by AKARI and the Spitzer peak-up imaging survey with the InfraRed Spectrograph (IRS). Using our counts to constrain contemporary evolutionary models, we find that although the models and counts are in agreement at mid-infrared wavelengths, there are inconsistencies at wavelengths shortward of 7 μm, suggesting either a problem with stellar subtraction or the need for refinement of the stellar population models. We have also investigated the AKARI/IRC filters, and find an active galactic nucleus selection criterion out to z < 2 on the basis of AKARI 4.1, 11, 15 and 18 μm colours.
Predictive factors for long-term engraftment of autologous blood stem cells.
Duggan, P R; Guo, D; Luider, J; Auer, I; Klassen, J; Chaudhry, A; Morris, D; Glück, S; Brown, C B; Russell, J A; Stewart, D A
2000-12-01
Data from 170 consecutive patients aged 19-66 years (median age 46 years) who underwent unmanipulated autologous blood stem cell transplant (ASCT) were analyzed to determine if total CD34+ cells/kg infused, CD34+ subsets (CD34+41+, CD34+90+, CD34+33-, CD34+38-, CD34+38-DR-), peripheral blood CD34+ cell (PBCD34+) count on first apheresis day, or various clinical factors were associated with low blood counts 6 months post ASCT. Thirty-four patients were excluded from analysis either because of death (n = 17) or re-induction chemotherapy prior to 6 months post ASCT (n = 13), or because of lack of follow-up data (n = 4). Of the remaining 136 patients, 46% had low WBC ( < 4 x 10(9)/l), 41% low platelets (<150 x 10(9)/l), and 34% low hemoglobin ( < 120 g/l) at a median of 6 months following ASCT. By Spearman's rank correlation, both the total CD34+ cell dose/kg and the PBCD34+ count correlated with 6 month blood counts better than any subset of CD34+ cells or any clinical factor. The PBCD34+ count was overall a stronger predictor of 6 month blood counts than was the total CD34+ cells/kg infused. Both factors retained their significance in multivariate analysis, controlling for clinical factors. In conclusion, subsets of CD34+ cells and clinical factors are inferior to the total CD34+ cell dose/kg and PBCD34+ count in predicting 6 month blood counts following ASCT.
Gonsalves, Nirmala; Yang, Guang-Yu; Doerfler, Bethany; Ritz, Sally; Ditto, Anne M; Hirano, Ikuo
2012-06-01
Adults with eosinophilic esophagitis (EoE) typically present with dysphagia and food impaction. A 6-food elimination diet (SFED) is effective in children with EoE. We assessed the effects of the SFED followed by food reintroduction on the histologic response, symptoms, and quality of life in adults with EoE. At the start of the study, 50 adults with EoE underwent esophagogastroduodenoscopies (EGDs), biopsies, and skin-prick tests for food and aeroallergens. After 6 weeks of SFED, patients underwent repeat EGD and biopsies. Histologic responders, defined by ≤ 5 eosinophils/high-power field (eos/hpf) (n = 32), underwent systematic reintroduction of foods followed by EGD and biopsies (n = 20). Symptom and quality of life scores were determined before and after SFED. Common symptoms of EoE included dysphagia (96%), food impaction (74%), and heartburn (94%). The mean peak eosinophil counts in the proximal esophagus were 34 eos/hpf and 8 eos/hpf, before and after the SFED, and 44 eos/hpf and 13 eos/hpf in the distal esophagus, respectively (P < .0001). After the SFED, 64% of patients had peak counts ≤ 5 eos/hpf and 70% had peak counts of ≤ 10 eos/hpf. Symptom scores decreased in 94% (P < .0001). After food reintroduction, esophageal eosinophil counts returned to pretreatment values (P < .0001). Based on reintroduction, the foods most frequently associated with EoE were wheat (60% of cases) and milk (50% of cases). Skin-prick testing predicted only 13% of foods associated with EoE. An elimination diet significantly improves symptoms and reduces endoscopic and histopathologic features of EoE in adults. Food reintroduction re-initiated features of EoE in patients, indicating a role for food allergens in its pathogenesis. Foods that activated EoE were identified by systematic reintroduction analysis but not by skin-prick tests. Copyright © 2012 AGA Institute. Published by Elsevier Inc. All rights reserved.
Lucijanic, Marko; Livun, Ana; Stoos-Veic, Tajana; Pejsa, Vlatko; Jaksic, Ozren; Cicic, David; Lucijanic, Jelena; Romic, Zeljko; Orehovec, Biserka; Aralica, Gorana; Miletic, Marko; Kusec, Rajko
2018-05-01
To investigate the clinical and prognostic significance of the absolute basophil count (ABC) in patients with primary myelofibrosis (PMF), we retrospectively investigated 58 patients with PMF treated in our institution in the period from 2006 to 2017. ABC was obtained in addition to other hematological and clinical parameters. Patients were separated into high and low ABC groups using receiver operating characteristic curve analysis. ABC was higher in PMF patients than in healthy controls (P < 0.001). Patients with high ABC had higher white blood cells (P < 0.001), higher red cell distribution width (P = 0.035), higher lactate dehydrogenase (P < 0.001), and more frequently had circulatory blasts (P < 0.001), constitutional symptoms (P = 0.030) and massive splenomegaly (P = 0.014). ABC was also positively correlated with absolute monocyte count (AMC) (P < 0.001) and other components of the differential blood count. There was no difference in ABC regarding driver mutations or degree of bone marrow fibrosis. Univariately, high ABC was significantly associated with inferior overall survival (hazard ratio (HR) 4.79, P < 0.001). This effect remained statistically significant (HR 4.27, P = 0.009) in a multivariate Cox regression model adjusted for age, gender, Dynamic International Prognostic Scoring System (HR 2.6, P = 0.001) and AMC (HR 8.45, P = 0.002). High ABC reflects higher disease activity and a stronger proliferative potential of the disease. ABC and AMC independently predict survival and therefore seem to reflect different underlying pathophysiologic processes; hence, both have potential for improving current prognostic scores. Basophils represent a part of the malignant clone in PMF and are associated with unfavorable disease features and a poor prognosis that is independent of the currently established prognostic scoring system and of monocytosis.
Risk factors associated with low CD4+ lymphocyte count among HIV-positive pregnant women in Nigeria.
Abimiku, Alash'le; Villalba-Diebold, Pacha; Dadik, Jelpe; Okolo, Felicia; Mang, Edwina; Charurat, Man
2009-09-01
To determine the risk factors for CD4+ lymphocyte counts of 200 cells/mm³ or lower in HIV-positive pregnant women in Nigeria, a cross-sectional analysis of data from a prospective cohort of 515 HIV-positive women attending a prenatal clinic was performed. Risk of a low CD4+ count was estimated using logistic regression analysis. CD4+ lymphocyte counts of 200 cells/mm³ or lower (280 ± 182 cells/mm³) were recorded in 187 (36.3%) of the 515 HIV-positive pregnant women included in the study. Low CD4+ count was associated with older age (adjusted odds ratio [aOR], 10.71; 95% confidence interval [CI], 1.20-95.53), lack of condom use (aOR, 5.16; 95% CI, 1.12-23.8), history of genital ulcers (aOR, 1.78; 95% CI, 1.12-2.82), and history of vaginal discharge (aOR, 1.62; 95% CI, 1.06-2.48). Over 35% of the HIV-positive pregnant women had low CD4+ counts, indicating the need for treatment. The findings underscore the need to integrate prevention of mother-to-child transmission with HIV treatment and care, particularly services for sexually transmitted infections.
Influences of environmental and anthropogenic features on greater sage-grouse populations, 1997-2007
Johnson, Douglas H.; Holloran, Matthew J.; Connelly, John W.; Hanser, Steven E.; Amundson, Courtney L.; Knick, Steven T.; Knick, Steven T.; Connelly, John W.
2011-01-01
The Greater Sage-Grouse (Centrocercus urophasianus), endemic to western North America, is of great conservation interest. Its populations are tracked by spring counts of males at lek sites. We explored the relations between trends of Greater Sage-Grouse lek counts from 1997 to 2007 and a variety of natural and anthropogenic features. We found that trends were correlated with several habitat features, but not always similarly throughout the range. Lek trends were positively associated with the proportion of sagebrush (Artemisia spp.) cover within 5 km and 18 km. Lek trends had negative associations with the coverage of agriculture and exotic plant species. Trends also tended to be lower for leks where a greater proportion of the surrounding landscape had been burned. Few leks were located within 5 km of developed land, and trends were lower for those leks with more developed land within 5 km or 18 km. Lek trends were reduced where communication towers were nearby, whereas no effect of power lines was detected. Active oil or natural gas wells and highways, but not secondary roads, were associated with lower trends. Effects of some anthropogenic features may have already been manifested before our study period and thus not have been detected in this analysis. Results of this range-wide analysis complement those from more intensive studies on smaller areas. Our findings are important for identifying features that could threaten Greater Sage-Grouse populations.
Content analysis of 150 years of British periodicals.
Lansdall-Welfare, Thomas; Sudhahar, Saatviga; Thompson, James; Lewis, Justin; Cristianini, Nello
2017-01-24
Previous studies have shown that it is possible to detect macroscopic patterns of cultural change over periods of centuries by analyzing large textual time series, specifically digitized books. This method promises to empower scholars with a quantitative and data-driven tool to study culture and society, but its power has been limited by the use of data from books and simple analytics based essentially on word counts. This study addresses these problems by assembling a vast corpus of regional newspapers from the United Kingdom, incorporating very fine-grained geographical and temporal information that is not available for books. The corpus spans 150 years and is formed by millions of articles, representing 14% of all British regional outlets of the period. Simple content analysis of this corpus allowed us to detect specific events, like wars, epidemics, coronations, or conclaves, with high accuracy, whereas the use of more refined techniques from artificial intelligence enabled us to move beyond counting words by detecting references to named entities. These techniques allowed us to observe both a systematic underrepresentation and a steady increase of women in the news during the 20th century and the change of geographic focus for various concepts. We also estimate the dates when electricity overtook steam and trains overtook horses as a means of transportation, both around the year 1900, along with observing other cultural transitions. We believe that these data-driven approaches can complement the traditional method of close reading in detecting trends of continuity and change in historical corpora.
THE CHANDRA SURVEY OF THE COSMOS FIELD. II. SOURCE DETECTION AND PHOTOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puccetti, S.; Vignali, C.; Cappelluti, N.
2009-12-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that covers the central contiguous ~0.92 deg² of the COSMOS field. C-COSMOS is the result of a complex tiling, with every position being observed in up to six overlapping pointings (four overlapping pointings in most of the central ~0.45 deg² area with the best exposure, and two overlapping pointings in most of the surrounding area, covering an additional ~0.47 deg²). Therefore, the full exploitation of the C-COSMOS data requires a dedicated and accurate analysis focused on three main issues: (1) maximizing the sensitivity when the point-spread function (PSF) changes strongly among different observations of the same source (from ~1 arcsec up to ~10 arcsec half-power radius); (2) resolving close pairs; and (3) obtaining the best source localization and count rate. We present here our treatment of four key analysis items: source detection, localization, photometry, and survey sensitivity. Our final procedure consists of two steps: (1) a wavelet detection algorithm to find source candidates and (2) a maximum likelihood PSF fitting algorithm to evaluate the source count rates and the probability that each source candidate is a fluctuation of the background. We discuss the main characteristics of this procedure, which was the result of detailed comparisons between different detection algorithms and photometry tools, calibrated with extensive and dedicated simulations.
Reviving common standards in point-count surveys for broad inference across studies
Matsuoka, Steven M.; Mahon, C. Lisa; Handel, Colleen M.; Solymos, Peter; Bayne, Erin M.; Fontaine, Patricia C.; Ralph, C.J.
2014-01-01
We revisit the common standards recommended by Ralph et al. (1993, 1995a) for conducting point-count surveys to assess the relative abundance of landbirds breeding in North America. The standards originated from discussions among ornithologists in 1991 and were developed so that point-count survey data could be broadly compared and jointly analyzed by national data centers with the goals of monitoring populations and managing habitat. Twenty years later, we revisit these standards because (1) they have not been universally followed and (2) new methods allow estimation of absolute abundance from point counts, but these methods generally require data beyond the original standards to account for imperfect detection. Lack of standardization and the complications it introduces for analysis become apparent from aggregated data. For example, only 3% of 196,000 point counts conducted during the period 1992-2011 across Alaska and Canada followed the standards recommended for the count period and count radius. Ten-minute, unlimited-count-radius surveys increased the number of birds detected by >300% over 3-minute, 50-m-radius surveys. This effect size, which could be eliminated by standardized sampling, was ≥10 times the published effect sizes of observers, time of day, and date of the surveys. We suggest that the recommendations by Ralph et al. (1995a) continue to form the common standards when conducting point counts. This protocol is inexpensive and easy to follow but still allows the surveys to be adjusted for detection probabilities. Investigators might optionally collect additional information so that they can analyze their data with more flexible forms of removal and time-of-detection models, distance sampling, multiple-observer methods, repeated counts, or combinations of these methods. Maintaining the common standards as a base protocol, even as these study-specific modifications are added, will maximize the value of point-count data, allowing compilation and analysis by regional and national data centers.
NASA Astrophysics Data System (ADS)
Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.
2009-12-01
We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian-smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway-passage of the gray-scale curve through a wide moving average, separating the record into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated, previously hand-counted marine varves from Saanich Inlet before we adapted the tools to the more complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition: a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous ones. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting, so the results are highly objective and rely on reproducible mathematical criteria. Because PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
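To make the zero-crossing method concrete, the sketch below counts half-passages of a gray-scale curve through a wide moving average; this is a minimal Python illustration, not the PEAK tool itself, and the window length and synthetic test signal are invented.

    import numpy as np

    def count_layers_zero_crossing(gray, window=51):
        # Wide moving average serves as the reference level.
        baseline = np.convolve(gray, np.ones(window) / window, mode="same")
        sign = np.sign(gray - baseline)
        sign[sign == 0] = 1                            # treat exact ties as bright
        return int(np.count_nonzero(np.diff(sign)))    # one count per half-passage

    # Synthetic check: 20 annual couplets plus noise should give about
    # 40 half-passages, i.e. ~20 bright and ~20 dark seasonal layers.
    t = np.linspace(0, 20 * 2 * np.pi, 4000)
    curve = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    print(count_layers_zero_crossing(curve))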
Kahl, Oliver; Ferrari, Simone; Kovalyuk, Vadim; Goltsman, Gregory N.; Korneev, Alexander; Pernice, Wolfram H. P.
2015-01-01
Superconducting nanowire single-photon detectors (SNSPDs) provide high efficiency for detecting individual photons while keeping dark counts and timing jitter minimal. Besides superior detection performance over a broad optical bandwidth, compatibility with an integrated optical platform is a crucial requirement for applications in emerging quantum photonic technologies. Here we present SNSPDs embedded in nanophotonic integrated circuits which achieve internal quantum efficiencies close to unity at 1550 nm wavelength. This allows for the SNSPDs to be operated at bias currents far below the critical current where unwanted dark count events reach milli-Hz levels while on-chip detection efficiencies above 70% are maintained. The measured dark count rates correspond to noise-equivalent powers in the 10^-19 W Hz^-1/2 range and the timing jitter is as low as 35 ps. Our detectors are fully scalable and interface directly with waveguide-based optical platforms. PMID:26061283
DOE Office of Scientific and Technical Information (OSTI.GOV)
Church, J; Slaughter, D; Norman, E
Error rates in a cargo screening system such as the Nuclear Car Wash [1-7] depend on the standard deviation of the background radiation count rate. Because the Nuclear Car Wash is an active interrogation technique, the radiation signal for fissile material must be detected above a background count rate consisting of cosmic, ambient, and neutron-activated radiations. It was suggested previously [1,6] that this variation could be substantial, and the corresponding negative repercussions for the sensitivity of the system were shown. Therefore, to assure the most accurate estimation of the variation, experiments have been performed to quantify components of the actual variance in the background count rate, including variations in generator power, irradiation time, and container contents. The background variance is determined by these experiments to be a factor of 2 smaller than values assumed in previous analyses, resulting in substantially improved projections of system performance for the Nuclear Car Wash.
NASA Astrophysics Data System (ADS)
Lundqvist, Mats; Danielsson, Mats; Cederstroem, Bjoern; Chmill, Valery; Chuntonov, Alexander; Aslund, Magnus
2003-06-01
Sectra MicroDose is the first single-photon-counting mammography detector. An edge-on crystalline silicon detector is connected to application-specific integrated circuits that individually process each photon. The detector is scanned across the breast, and the rejection of scattered radiation exceeds 97% without the use of a Bucky grid. Processing each x-ray individually enables an optimization of the information transfer from the x-rays to the image in a way not previously possible. Combined with the near absence of noise from scattered radiation and from electronics, we foresee a possibility to reduce the radiation dose and/or increase the image quality. We will discuss fundamental features of the new direct photon counting technique in terms of dose efficiency and present preliminary prototype measurements of physical parameters such as the noise power spectrum (NPS), MTF, and DQE.
Disease-Concordant Twins Empower Genetic Association Studies.
Tan, Qihua; Li, Weilong; Vandin, Fabio
2017-01-01
Genome-wide association studies with moderate sample sizes are underpowered, especially when testing SNP alleles with low allele counts, a situation that may lead to a high frequency of false-positive results and lack of replication in independent studies. Related individuals, such as twin pairs concordant for a disease, should confer increased power in genetic association analysis because of their genetic relatedness. We conducted a computer simulation study to explore the power advantage of the disease-concordant twin design, which uses singletons from disease-concordant twin pairs as cases and ordinary healthy samples as controls. We examined the power gain of the twin-based design for various scenarios (i.e., cases from monozygotic and dizygotic twin pairs concordant for a disease) and compared the power with the ordinary case-control design with cases collected from the unrelated patient population. Simulation was done by assigning various allele frequencies and allelic relative risks under different modes of genetic inheritance. In general, for achieving a power estimate of 80%, the sample sizes needed for dizygotic and monozygotic twin cases were one half and one fourth of the sample size of an ordinary case-control design, with variations depending on the genetic mode. Importantly, the enriched power for dizygotic twins also applies to disease-concordant sibling pairs, which largely extends the application of the concordant twin design. Overall, our simulation revealed the high value of disease-concordant twins in genetic association studies and encourages the use of genetically related individuals for efficiently identifying both common and rare genetic variants underlying human complex diseases without increasing laboratory cost. © 2016 John Wiley & Sons Ltd/University College London.
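A Monte Carlo power calculation of the kind described is short to write down. The sketch below assumes the twin design enters simply through an enriched risk-allele frequency among cases; the frequencies, sample sizes, and significance threshold are illustrative values, not those of the study.

    import numpy as np
    from scipy.stats import chi2_contingency

    def simulated_power(p_case, p_ctrl, n_case, n_ctrl, alpha=5e-8, n_sim=2000):
        rng = np.random.default_rng(0)
        hits = 0
        for _ in range(n_sim):
            a_case = rng.binomial(2 * n_case, p_case)    # risk alleles in cases
            a_ctrl = rng.binomial(2 * n_ctrl, p_ctrl)    # risk alleles in controls
            table = [[a_case, 2 * n_case - a_case],
                     [a_ctrl, 2 * n_ctrl - a_ctrl]]
            hits += chi2_contingency(table)[1] < alpha   # [1] is the p-value
        return hits / n_sim

    # Cases drawn from concordant pairs show a larger allele-frequency shift
    # than ordinary cases, so fewer of them are needed to reach 80% power.
    print(simulated_power(0.28, 0.20, 2000, 2000))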
1981-07-01
[Only fragments of this record survived extraction: acknowledgements and a table list from a report on X-ray energy dispersive spectrometer chemical analyses of clay minerals (illite and montmorillonite experiments) in Nuculana acuta fecal pellet residues.]
Count on It: Congruent Manipulative Displays
ERIC Educational Resources Information Center
Morin, Joe; Samelson, Vicki M.
2015-01-01
Representations that create informative visual displays are powerful tools for communicating mathematical concepts. The National Council of Teachers of Mathematics encourages the use of manipulatives (NCTM 2000). Manipulative materials are often used to present initial representations of basic numerical principles to young children, and it is…
Locality and Unitarity of Scattering Amplitudes from Singularities and Gauge Invariance
NASA Astrophysics Data System (ADS)
Arkani-Hamed, Nima; Rodina, Laurentiu; Trnka, Jaroslav
2018-06-01
We conjecture that the leading two-derivative tree-level amplitudes for gluons and gravitons can be derived from gauge invariance together with mild assumptions on their singularity structure. Assuming locality (that the singularities are associated with the poles of cubic graphs), we prove that gauge invariance in just n-1 particles together with minimal power counting uniquely fixes the amplitude. Unitarity in the form of factorization then follows from locality and gauge invariance. We also give evidence for a stronger conjecture: assuming only that singularities occur when the sum of a subset of external momenta goes on shell, we show in nontrivial examples that gauge invariance and power counting demand a graph structure for singularities. Thus, both locality and unitarity emerge from singularities and gauge invariance. Similar statements hold for theories of Goldstone bosons like the nonlinear sigma model and Dirac-Born-Infeld by replacing the condition of gauge invariance with an appropriate degree of vanishing in soft limits.
Effects of EMF emissions from undersea electric cables on coral reef fish.
Kilfoyle, Audie K; Jermain, Robert F; Dhanak, Manhar R; Huston, Joseph P; Spieler, Richard E
2018-01-01
The objective of this study was to determine if electromagnetic field (EMF) emissions from undersea power cables impacted local marine life, with an emphasis on coral reef fish. The work was done at the South Florida Ocean Measurement Facility of the Naval Surface Warfare Center in Broward County, Florida, which has a range of active undersea detection and data transmission cables. EMF emissions from a selected cable were created during non-destructive visual fish surveys on SCUBA. During surveys, the transmission of either alternating current (AC), direct current (DC), or none (OFF) was randomly initiated by the facility at a specified time. Visual surveys were conducted using standardized transect and point-count methods to acquire reef fish abundances and species richness prior to and immediately after a change in transmission frequency. The divers were also tasked to note the reaction of the reef fish to the immediate change in EMF during a power transition. In general, analysis of the data did not find statistically significant differences among power states for any variable. However, this may reflect a Type II error: there are strong indications of potentially higher reef fish abundance at the sites when the power was off, and further study is warranted. Bioelectromagnetics. 39:35-52, 2018. © 2017 Wiley Periodicals, Inc.
Automated Pedestrian Detection, Count and Analysis System
DOT National Transportation Integrated Search
2015-04-15
Pedestrian and bicycle count data is necessary for transportation planning, implementing safety countermeasures, and traffic management. This data is critical when evaluating the pedestrian level of service of safety (LOSS) and pedestrian safety perf...
Using behavioral skills training and video rehearsal to teach blackjack skills.
Speelman, Ryan C; Whiting, Seth W; Dixon, Mark R
2015-09-01
A behavioral skills training procedure consisting of video instructions, video rehearsal, and video testing was used to teach 4 recreational gamblers a specific blackjack skill, card counting. A multiple baseline design was used to evaluate intervention effects on card-counting accuracy and chips won or lost across participants. Before training, no participant counted cards accurately. Each participant completed all phases of the training protocol, counting cards fluently with 100% accuracy during changing speed criterion training exercises. Generalization probes were conducted while participants played blackjack in a mock casino following each training phase. Afterwards, all 4 participants were able to count cards while they played blackjack. In conjunction with count accuracy, total winnings were tracked to determine the monetary advantages associated with counting cards. After losing money during baseline, 3 of 4 participants won a substantial amount of money playing blackjack after the intervention. © Society for the Experimental Analysis of Behavior.
HgCdTe APD-based linear-mode photon counting components and ladar receivers
NASA Astrophysics Data System (ADS)
Jack, Michael; Wehner, Justin; Edwards, John; Chapman, George; Hall, Donald N. B.; Jacobson, Shane M.
2011-05-01
Linear mode photon counting (LMPC) provides significant advantages in comparison with Geiger mode (GM) photon counting, including the absence of after-pulsing, nanosecond pulse-to-pulse temporal resolution, and robust operation in the presence of high-density obscurants or variable-reflectivity objects. For this reason Raytheon has developed and previously reported on unique linear mode photon counting components and modules based on combining advanced APDs and advanced high-gain circuits. By using HgCdTe APDs we enable Poisson-number-preserving photon counting. Key metrics of photon counting technology are dark count rate and detection probability. In this paper we report on a performance breakthrough resulting from improvements in design, process, and readout operation enabling a >10× reduction in dark count rate, to ~10,000 cps, and a >10⁴× reduction in surface dark current, enabling long 10 ms integration times. Our analysis of key dark current contributors suggests that a substantial further reduction in DCR, to ~1 count/s or less, can be achieved by optimizing wavelength, operating voltage, and temperature.
Choudhry, Priya
2016-01-01
Counting cells and colonies is an integral part of high-throughput screens and quantitative cellular assays. Due to its subjective and time-intensive nature, manual counting has hindered the adoption of cellular assays such as tumor spheroid formation in high-throughput screens. The objective of this study was to develop an automated method for quick and reliable counting of cells and colonies from digital images. For this purpose, I developed an ImageJ macro Cell Colony Edge and a CellProfiler Pipeline Cell Colony Counting, and compared them to other open-source digital methods and manual counts. The ImageJ macro Cell Colony Edge is valuable in counting cells and colonies, and measuring their area, volume, morphology, and intensity. In this study, I demonstrate that Cell Colony Edge is superior to other open-source methods, in speed, accuracy and applicability to diverse cellular assays. It can fulfill the need to automate colony/cell counting in high-throughput screens, colony forming assays, and cellular assays. PMID:26848849
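As a starting point for readers, here is a minimal open-source counting sketch in the same spirit, using scikit-image rather than the Cell Colony Edge macro or the CellProfiler pipeline themselves; the Otsu threshold and the min_area cutoff are assumptions.

    import numpy as np
    from skimage import filters, measure, morphology

    def count_colonies(img, min_area=50):
        # Assumes bright colonies on a darker background.
        mask = img > filters.threshold_otsu(img)
        mask = morphology.remove_small_objects(mask, min_area)
        labels = measure.label(mask)
        regions = measure.regionprops(labels)
        return len(regions), [r.area for r in regions]

    # Synthetic demo image: two bright disks on a dark background.
    yy, xx = np.mgrid[0:200, 0:200]
    img = ((xx - 60) ** 2 + (yy - 60) ** 2 < 15 ** 2).astype(float)
    img += ((xx - 140) ** 2 + (yy - 140) ** 2 < 20 ** 2)
    n, areas = count_colonies(img)
    print(n, areas)    # 2 colonies and their pixel areas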
Scabbio, Camilla; Zoccarato, Orazio; Malaspina, Simona; Lucignani, Giovanni; Del Sole, Angelo; Lecchi, Michela
2017-10-17
To evaluate the impact of non-specific normal databases on the percent summed rest score (SR%) and stress score (SS%) from simulated low-dose SPECT studies obtained by shortening the acquisition time per projection. Forty normal-weight and 40 overweight/obese patients underwent myocardial studies with a conventional gamma-camera (BrightView, Philips) using three different acquisition times per projection: 30, 15, and 8 s (100%-counts, 50%-counts, and 25%-counts scans, respectively), reconstructed using the iterative algorithm with resolution recovery (IRR) Astonish™ (Philips). Three sets of normal databases were used: (1) a full-counts IRR database; (2) a half-counts IRR database; and (3) a full-counts traditional reconstruction algorithm (TRAD) database. The impact of these databases and the acquired count statistics on the SR% and SS% was assessed by ANOVA and the Tukey test (P < 0.05). Significantly higher SR% and SS% values (> 40%) were found for the full-counts TRAD database with respect to the IRR databases. For overweight/obese patients, significantly higher SS% values for 25%-counts scans (+19%) were confirmed compared to those of 50%-counts scans, independently of whether the half-counts or the full-counts IRR database was used. Astonish™ requires the adoption of its own specific normal databases in order to prevent a very large overestimation of both stress and rest perfusion scores. Conversely, the count statistics of the normal databases do not appear to influence the quantification scores.
Boskabadi, Hassan; Zakerihamidi, Maryam; Sadeghian, Mohammad Hadi; Avan, Amir; Ghayour-Mobarhan, Majid; Ferns, Gordon A
2017-11-01
The nucleated red blood cell (NRBC) count in the umbilical cord blood of newborns has been suggested as a sign of birth asphyxia. The present study was conducted to explore the value of the NRBC count in the prognosis of asphyxiated neonates. Sixty-three neonates with asphyxia were followed up for two years. Maternal and neonatal information was recorded, followed by clinical and laboratory evaluation. The NRBC level was determined per 100 white blood cells (WBC). After discharge, follow-up of asphyxiated infants was performed using the Denver II test at 6, 12, 18 and 24 months. Neonates were divided into two groups, with favorable and unfavorable outcomes, based on developmental delay or death. We observed that an NRBC count of more than 11 per 100 WBC had a sensitivity of 85% and a specificity of 90% in predicting complications of asphyxia, while an absolute NRBC count of more than 1554 had a sensitivity of 85% and a specificity of 87%. The combination of NRBC count and HIE (hypoxic ischemic encephalopathy) grade had high predictive power for determining the prognosis of asphyxia in neonates. We demonstrate that the NRBC/100 WBC and absolute NRBC counts can be used as prognostic markers for neonatal asphyxia, which, in combination with the severity of asphyxia, could indicate high infant mortality and complications of asphyxia. Further studies in a larger, multicenter trial setting are warranted to investigate the value of NRBC and HIE in asphyxiated term infants.
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
Adjoint-Based Implicit Uncertainty Analysis for Figures of Merit in a Laser Inertial Fusion Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, J E; Fratoni, M; Kramer, K J
A primary purpose of computational models is to inform design decisions and, in order to make those decisions reliably, the confidence in the results of such models must be estimated. Monte Carlo neutron transport models are common tools for reactor designers. These types of models contain several sources of uncertainty that propagate onto the model predictions. Two uncertainties worthy of note are (1) experimental and evaluation uncertainties of nuclear data that inform all neutron transport models and (2) statistical counting precision, which all results of Monte Carlo codes contain. Adjoint-based implicit uncertainty analyses allow for the consideration of any number of uncertain input quantities and their effects upon the confidence of figures of merit with only a handful of forward and adjoint transport calculations. When considering a rich set of uncertain inputs, adjoint-based methods remain hundreds of times more computationally efficient than direct Monte Carlo methods. The LIFE (Laser Inertial Fusion Energy) engine is a concept being developed at Lawrence Livermore National Laboratory. Various options exist for the LIFE blanket, depending on the mission of the design. The depleted uranium hybrid LIFE blanket design strives to close the fission fuel cycle without enrichment or reprocessing, while simultaneously achieving high discharge burnups with reduced proliferation concerns. Neutron transport results that are central to the operation of the design are tritium production for fusion fuel, fission of fissile isotopes for energy multiplication, and production of fissile isotopes for sustained power. In previous work, explicit cross-section uncertainty analyses were performed for reaction rates related to the figures of merit for the depleted uranium hybrid LIFE blanket. Counting precision was also quantified for both the figures of merit themselves and the cross-section uncertainty estimates to gauge the validity of the analysis. All cross-section uncertainties were small (0.1-0.8%), bounded counting uncertainties, and were precise with regard to counting precision. Adjoint/importance distributions were generated for the same reaction rates. The current work leverages those adjoint distributions to transition from explicit sensitivities, in which the neutron flux is constrained, to implicit sensitivities, in which the neutron flux responds to input perturbations. This treatment vastly expands the set of data that contribute to uncertainties to produce larger, more physically accurate uncertainty estimates.
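At the heart of such an adjoint-based analysis is the first-order "sandwich rule", var(R)/R² = SᵀCS, with S the vector of relative sensitivities obtained from forward/adjoint runs and C the relative covariance matrix of the nuclear-data parameters. A minimal numerical sketch with invented values:

    import numpy as np

    def sandwich_uncertainty(S, C):
        # Relative 1-sigma uncertainty of response R from sensitivities S
        # and relative covariance matrix C of the input parameters.
        S, C = np.asarray(S), np.asarray(C)
        return float(np.sqrt(S @ C @ S))

    S = np.array([0.8, -0.3, 0.1])            # hypothetical sensitivities
    C = np.diag([0.02, 0.05, 0.01]) ** 2      # hypothetical 2%, 5%, 1% std devs
    print(f"relative uncertainty: {sandwich_uncertainty(S, C):.3%}")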
NASA Astrophysics Data System (ADS)
Böbel, A.; Knapek, C. A.; Räth, C.
2018-05-01
Experiments on the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect number fraction and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. Minkowski tensor analysis turns out to be a powerful tool for investigations of crystallization processes. It is capable of revealing nonlinear local topological properties while still providing easily interpretable results founded on a solid mathematical framework.
Why you cannot transform your way out of trouble for small counts.
Warton, David I
2018-03-01
While data transformation is a common strategy to satisfy linear modeling assumptions, a theoretical result is used to show that transformation cannot reasonably be expected to stabilize variances for small counts. Under broad assumptions, as counts get smaller, it is shown that the variance becomes proportional to the mean under monotonic transformations g(·) that satisfy g(0)=0, excepting a few pathological cases. A suggested rule-of-thumb is that if many predicted counts are less than one then data transformation cannot reasonably be expected to stabilize variances, even for a well-chosen transformation. This result has clear implications for the analysis of counts as often implemented in the applied sciences, but particularly for multivariate analysis in ecology. Multivariate discrete data are often collected in ecology, typically with a large proportion of zeros, and it is currently widespread to use methods of analysis that do not account for differences in variance across observations nor across responses. Simulations demonstrate that failure to account for the mean-variance relationship can have particularly severe consequences in this context, and also in the univariate context if the sampling design is unbalanced. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
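The small-count result is easy to check numerically. The sketch below simulates Poisson counts over a range of means and shows that, after a log(y+1) transform, the variance remains roughly proportional to the mean when the mean is small; the means and sample size are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    for m in [0.1, 0.5, 1.0, 2.0, 5.0, 20.0, 100.0]:
        y = rng.poisson(m, size=200_000)
        v = np.log1p(y).var()
        # For small m the ratio stays near a constant (of order log(2)^2),
        # i.e. variance proportional to mean; for large m it shrinks,
        # showing the transform only stabilizes variance for large counts.
        print(f"mean={m:6.1f}  var(log1p y)={v:.3f}  ratio={v / m:.3f}")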
Regression analysis of mixed recurrent-event and panel-count data
Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L.
2014-01-01
In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1–42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation. PMID:24648408
NASA Astrophysics Data System (ADS)
Kasprak, Alan; Magilligan, Francis J.; Nislow, Keith H.; Renshaw, Carl E.; Snyder, Noah P.; Dade, W. Brian
2013-03-01
Timber harvest often results in accelerated soil erosion and subsequent elevated fine (< 2 mm) sediment delivery to channels causing deleterious effects to numerous aquatic species, particularly salmonid fishes. Here we determine, through sediment physical analyses (pebble counts, embeddedness surveys, and interstitial shelter space counts) and geochemical analyses (⁷Be and ²¹⁰Pbex activities), the amount and timing of delivery of fine sediment currently found on streambeds of the Narraguagus River watershed in coastal Maine. The role of recent timber harvest, documented via aerial photo spatial analysis, on fine sediment delivery is contrasted with the ability of the glacially influenced topography and surficial geology to deliver fine sediment to streams and to influence channel substrate. Results show that of the land use and geomorphic variables examined, only ²¹⁰Pbex activities were significantly correlated with the amount of upstream harvest (r² = 0.49). Concurrently, we find that unit stream power (particularly the slope component) explains much of the variability in channel substrate and that slope and stream power are largely influenced by the legacy of Pleistocene glaciation on channel form. Results suggest a conceptual model whereby fine sediment delivery as a result of late twentieth century timber harvest is likely dampened because of the low gradient landscape of coastal Maine. While geochemical tracers indicate recent fine sediment delivery in harvested areas, channels are likely capable of quickly winnowing these fines from the channel bed. These results further suggest that under contemporary land use conditions, the geomorphic and geologic setting represents a first-order control on channel substrate and habitat suitability for salmonid fishes, including federally endangered Atlantic salmon (Salmo salar), in coastal drainages of northeastern Maine.
Constraining the Galactic structure parameters with the XSTPS-GAC and SDSS photometric surveys
NASA Astrophysics Data System (ADS)
Chen, B.-Q.; Liu, X.-W.; Yuan, H.-B.; Robin, A. C.; Huang, Y.; Xiang, M.-S.; Wang, C.; Ren, J.-J.; Tian, Z.-J.; Zhang, H.-W.
2017-01-01
Photometric data from the Xuyi Schmidt Telescope Photometric Survey of the Galactic Anticentre (XSTPS-GAC) and the Sloan Digital Sky Survey (SDSS) are used to derive the global structure parameters of the smooth components of the Milky Way. The data, which cover nearly 11 000 deg² of sky and the full range of Galactic latitude, allow us to construct a globally representative Galactic model. The number density distribution of Galactic halo stars is fitted with an oblate spheroid that decays as a power law. The best fit yields an axis ratio κ = 0.65 and a power-law index p = 2.79. The r-band differential star counts of three dwarf samples are then fitted with a Galactic model. The best-fitting model yielded by a Markov chain Monte Carlo analysis has thin- and thick-disc scale heights and lengths of H1 = 322 pc and L1 = 2343 pc, H2 = 794 pc and L2 = 3638 pc, a local thick-to-thin disc density ratio of f2 = 11 per cent, and a local density ratio of the oblate halo to the thin disc of fh = 0.16 per cent. The measured star count distribution, which is in good agreement with the above model for most of the sky area, shows a number of statistically significant large-scale overdensities, including some of the previously known substructures, such as the Virgo overdensity and the so-called `north near structure', and a new feature between 150° < l < 240° and -5° < b < -1°, at an estimated distance between 1.0 and 1.5 kpc. The Galactic north-south asymmetry in the anticentre is even stronger than previously thought.
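For illustration, the fitted density law can be written out directly. The sketch below assumes the standard double-exponential discs plus the oblate power-law halo with the best-fitting parameters quoted above; the normalization is arbitrary, lengths are in pc, and a solar radius of 8 kpc is an assumption of the example.

    import numpy as np

    def star_density(R, z, R0=8000.0):
        H1, L1 = 322.0, 2343.0      # thin-disc scale height and length
        H2, L2 = 794.0, 3638.0      # thick-disc scale height and length
        f2, fh = 0.11, 0.0016       # local thick-disc and halo ratios
        kappa, p = 0.65, 2.79       # halo axis ratio and power-law index
        thin = np.exp(-(R - R0) / L1 - np.abs(z) / H1)
        thick = f2 * np.exp(-(R - R0) / L2 - np.abs(z) / H2)
        r_halo = np.sqrt(R**2 + (z / kappa) ** 2)
        halo = fh * (r_halo / R0) ** (-p)
        return thin + thick + halo  # relative to the local thin-disc density

    print(star_density(8000.0, 500.0))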
The vela pulsar: results from the first year of FERMI lat observations
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2010-03-18
Here, we report on analysis of timing and spectroscopy of the Vela pulsar using 11 months of observations with the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope. The intrinsic brightness of Vela at GeV energies combined with the angular resolution and sensitivity of the LAT allows us to make the most detailed study to date of the energy-dependent light curves and phase-resolved spectra, using a LAT-derived timing model. The light curve consists of two peaks (P1 and P2) connected by bridge emission containing a third peak (P3). We have confirmed the strong decrease of the P1/P2 ratio with increasing energy seen with EGRET and previous Fermi LAT data, and observe that P1 disappears above 20 GeV. The increase with energy of the mean phase of the P3 component can be followed with much greater detail, showing that P3 and P2 are present up to the highest energies of pulsation. We find significant pulsed emission at phases outside the main profile, indicating that magnetospheric emission exists over 80% of the pulsar period. With increased high-energy counts the phase-averaged spectrum is seen to depart from a power law with simple exponential cutoff, and is better fit with a more gradual cutoff. The spectra in fixed-count phase bins are well fit with power laws with exponential cutoffs, revealing a strong and complex phase dependence of the cutoff energy, especially in the peaks. Finally, by combining these results with predictions of the outer magnetosphere models that map emission characteristics to phase, it will be possible to probe the particle acceleration and the structure of the pulsar magnetosphere with unprecedented detail.
Genetic analysis of 19 X chromosome STR loci for forensic purposes in four Chinese ethnic groups
Yang, Xingyi; Zhang, Xiaofang; Zhu, Junyong; Chen, Linli; Liu, Changhui; Feng, Xingling; Chen, Ling; Wang, Huijun; Liu, Chao
2017-01-01
A new 19-locus X-chromosome short tandem repeat (X-STR) multiplex PCR system has recently been developed, though its applicability in forensic studies has not been thoroughly assessed. In this study, 932 unrelated individuals from four Chinese ethnic groups (Han, Tibetan, Uighur and Hui) were successfully genotyped using this new multiplex PCR system. Our results showed significant linkage disequilibrium between markers DXS10103 and DXS10101 in all four ethnic groups; between markers DXS10159 and DXS10162, DXS6809 and DXS6789, and HPRTB and DXS10101 in the Tibetan population; and between markers DXS10074 and DXS10075 in the Uighur population. The combined powers of discrimination in males and females were calculated from haplotype frequencies based on allele distributions rather than haplotype counts in the relevant population and were high in all four ethnic groups. The cumulative powers of discrimination of the tested X-STR loci were 1.000000000000000 and 0.999999999997940 in females and males, respectively. All 19 X-STR loci are highly polymorphic. The highest Reynolds genetic distances were observed for the Tibetan-Uighur pairwise comparisons. This study represents an extensive report on X-STR marker variation in minor Chinese populations and a comprehensive analysis of the diversity of these 19 X-STR markers in four Chinese ethnic groups. PMID:28211539
Association between oral health behavior and periodontal disease among Korean adults
Han, Kyungdo; Park, Jun-Beom
2017-01-01
This study was performed to assess the association between oral health behavior and periodontal disease using nationally representative data. This study involved a cross-sectional analysis and multivariable logistic regression analysis models using the data from the Korean National Health and Nutrition Examination Survey. A community periodontal index greater than or equal to code 3 was used to define periodontal disease. Adjusted odds ratios and their 95% confidence intervals of periodontitis for the toothbrushing after lunch group and the toothbrushing before bedtime group were 0.842 (0.758, 0.936) and 0.814 (0.728, 0.911), respectively, after adjustments for age, sex, body mass index, drinking, exercise, education, income, white blood cell count, and metabolic syndrome. Adjusted odds ratios and their 95% confidence intervals of periodontitis for the floss group and the powered toothbrush group after adjustment were 0.678 (0.588, 0.781) and 0.771 (0.610, 0.974), respectively. The association between oral health behavior and periodontitis was proven by multiple logistic regression analyses after adjusting for confounding factors among Korean adults. Brushing after lunch and before bedtime as well as the use of floss and a powered toothbrush may be considered independent risk indicators of periodontal disease among Korean adults. PMID:28207558
Dusick, Allison; Young, Karen M; Muir, Peter
2014-12-01
Canine osteoarthritis is a common disorder seen in veterinary clinical practice and causes considerable morbidity in dogs as they age. Synovial fluid analysis is an important tool for diagnosis and treatment of canine joint disease and obtaining a total nucleated cell count (TNCC) is particularly important. However, the low sample volumes obtained during arthrocentesis are often insufficient for performing an automated TNCC, thereby limiting diagnostic interpretation. The aim of the present study was to investigate whether estimation of TNCC in canine synovial fluid could be achieved by performing manual cell counts on direct smears of fluid. Fifty-eight synovial fluid samples, taken by arthrocentesis from 48 dogs, were included in the study. Direct smears of synovial fluid were prepared, and hyaluronidase added before cell counts were obtained using a commercial laser-based instrument. A protocol was established to count nucleated cells in a specific region of the smear, using a serpentine counting pattern; the mean number of nucleated cells per 400× field was then calculated. There was a positive correlation between the automated TNCC and mean manual cell count, with more variability at higher TNCC. Regression analysis was performed to estimate TNCC from manual counts. By this method, 78% of the samples were correctly predicted to fall into one of three categories (within the reference interval, mildly to moderately increased, or markedly increased) relative to the automated TNCC. Intra-observer and inter-observer agreement was good to excellent. The results of the study suggest that interpretation of canine synovial fluid samples of low volume can be aided by methodical manual counting of cells on direct smears. Copyright © 2014 Elsevier Ltd. All rights reserved.
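The calibration step amounts to a simple regression of automated TNCC on the mean manual count per 400× field, followed by a mapping into the three categories; the paired values and category cutoffs below are invented placeholders, not the study's data.

    import numpy as np

    manual = np.array([1.2, 2.5, 4.0, 7.8, 12.0, 30.0])          # cells per 400x field
    automated = np.array([400, 900, 1500, 3100, 5200, 13000.0])  # TNCC, cells/uL

    slope, intercept = np.polyfit(manual, automated, 1)          # linear calibration

    def estimate_tncc(mean_manual):
        return slope * mean_manual + intercept

    def category(tncc, upper_ref=3000.0, marked=10000.0):        # assumed cutoffs
        if tncc <= upper_ref:
            return "within reference interval"
        return "mild-moderate increase" if tncc < marked else "marked increase"

    est = estimate_tncc(5.0)
    print(round(est), category(est))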
George L. Farnsworth; James D. Nichols; John R. Sauer; Steven G. Fancy; Kenneth H. Pollock; Susan A. Shriner; Theodore R. Simons
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point...
ERIC Educational Resources Information Center
Powell, Sarah R.; Nurnberger-Haag, Julie
2015-01-01
Research Findings: Teachers and parents often use trade books to introduce or reinforce mathematics concepts. To date, an analysis of the early numeracy content of trade books has not been conducted. Consequently, this study evaluated the properties of numbers and counting within trade books. We coded 160 trade books targeted at establishing early…
Using Count Data and Ordered Models in National Forest Recreation Demand Analysis
NASA Astrophysics Data System (ADS)
Simões, Paula; Barata, Eduardo; Cruz, Luis
2013-11-01
This research addresses the need to improve our knowledge of the demand for national forests for recreation and offers an in-depth data analysis supported by the complementary use of count data and ordered models. From a policy-making perspective, while count data models enable the estimation of monetary welfare measures, ordered models allow for the wider use of the database and provide a more flexible analysis of data. The main purpose of this article is to analyse the individual forest recreation demand and to derive a measure of its current use value. To allow a more complete analysis of the forest recreation demand structure, the econometric approach supplements the use of count data models with ordered category models, using data obtained by means of an on-site survey in the Bussaco National Forest (Portugal). Overall, both models reveal that travel cost and substitute prices are important explanatory variables, visits are a normal good and demographic variables seem to have no influence on demand. In particular, estimated price and income elasticities of demand are quite low. Accordingly, it is possible to argue that travel cost (price) in isolation may be expected to have a low impact on visitation levels.
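A count-data travel cost model of this kind is compact to estimate. The sketch below fits a Poisson GLM on invented survey rows and recovers the usual welfare measure, consumer surplus per trip = -1/β_cost; it ignores the on-site sampling corrections (truncation, endogenous stratification) a full analysis would add.

    import numpy as np
    import statsmodels.api as sm

    trips = np.array([1, 2, 2, 4, 6, 1, 3, 8, 2, 5])                # annual visits
    cost = np.array([40, 35, 30, 18, 12, 50, 25, 8, 33, 15.0])      # travel cost
    income = np.array([12, 20, 18, 25, 30, 14, 22, 35, 16, 28.0])   # in thousands

    X = sm.add_constant(np.column_stack([cost, income]))
    fit = sm.GLM(trips, X, family=sm.families.Poisson()).fit()
    beta_cost = fit.params[1]
    print(fit.params)
    print("consumer surplus per trip:", -1.0 / beta_cost)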
Cryogenic method for measuring nuclides and fission gases
Perdue, P.T.; Haywood, F.F.
1980-05-02
A cryogenic method is provided for determining airborne gases and particulates from which gamma rays are emitted. A special dewar counting vessel is filled with the contents of the sampling flask which is immersed in liquid nitrogen. A vertically placed sodium-iodide or germanium-lithium gamma-ray detector is used. The device and method are of particular use in measuring and identifying the radioactive noble gases including emissions from coal-fired power plants, as well as fission gases released or escaping from nuclear power plants.
Air & Space Power Journal. Volume 27, Number 2, March-April 2013
2013-04-01
Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Aongus; Collins, Robert J.; Krichel, Nils J.
2009-11-10
We describe a scanning time-of-flight system which uses the time-correlated single-photon counting technique to produce three-dimensional depth images of distant, noncooperative surfaces when these targets are illuminated by a kHz to MHz repetition rate pulsed laser source. The data for the scene are acquired using a scanning optical system and an individual single-photon detector. Depth images have been successfully acquired with centimeter xyz resolution, in daylight conditions, for low-signature targets in field trials at distances of up to 325 m using an output illumination with an average optical power of less than 50 μW.
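The ranging step reduces to folding photon timestamps by the laser repetition period, histogramming, and converting the peak delay to a distance. A minimal sketch with illustrative parameters follows; note that a 1 MHz source limits the unambiguous range window to c·T/2 = 150 m, so distant targets alias unless the repetition rate or pulse pattern is varied.

    import numpy as np

    C = 299_792_458.0    # speed of light, m/s

    def range_from_timestamps(t_arrival, rep_period, bin_width=100e-12):
        folded = np.mod(t_arrival, rep_period)        # fold by the laser period
        bins = np.arange(0.0, rep_period + bin_width, bin_width)
        hist, edges = np.histogram(folded, bins=bins)
        t_peak = edges[np.argmax(hist)] + bin_width / 2
        return C * t_peak / 2                          # one-way range, metres

    # Synthetic demo: a target 75 m away seen with a 100 kHz source.
    rng = np.random.default_rng(0)
    delay = 2 * 75.0 / C
    t = rng.integers(0, 10_000, 5000) * 1e-5 + delay + rng.normal(0, 50e-12, 5000)
    print(range_from_timestamps(t, 1e-5))              # ~75 m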
NASA Astrophysics Data System (ADS)
Areces, Carlos; Hoffmann, Guillaume; Denis, Alexandre
We present a modal language that includes explicit operators to count the number of elements that a model might include in the extension of a formula, and we discuss how this logic has been previously investigated under different guises. We show that the language is related to graded modalities and to hybrid logics. We illustrate a possible application of the language to the treatment of plural objects and queries in natural language. We investigate the expressive power of this logic via bisimulations, discuss the complexity of its satisfiability problem, define a new reasoning task that retrieves the cardinality bound of the extension of a given input formula, and provide an algorithm to solve it.
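A toy evaluator makes the counting operator concrete. The encoding below is invented for illustration: formulas are atoms, ('not', f), ('and', f, g), or ('count>=', n, f), the last holding at a world with at least n successors satisfying f.

    def sat(model, world, formula):
        succ, val = model    # succ: world -> successors; val: world -> atoms
        kind = formula[0] if isinstance(formula, tuple) else "atom"
        if kind == "atom":
            return formula in val[world]
        if kind == "not":
            return not sat(model, world, formula[1])
        if kind == "and":
            return sat(model, world, formula[1]) and sat(model, world, formula[2])
        if kind == "count>=":
            n, f = formula[1], formula[2]
            return sum(sat(model, v, f) for v in succ[world]) >= n
        raise ValueError(f"unknown formula: {formula!r}")

    succ = {0: {1, 2, 3}, 1: set(), 2: set(), 3: set()}
    val = {0: set(), 1: {"p"}, 2: {"p"}, 3: set()}
    print(sat((succ, val), 0, ("count>=", 2, "p")))    # True: two p-successors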
Sperm count. Do we need a new reference value?
Cardona Maya, Walter
2010-03-01
To evaluate the sperm count in fertile men, the general population, and infertile men in different regions of the world. Sperm counts were recorded according to fertility status: proven fertility, men recruited from an andrology/infertility clinic, or healthy men. The average sperm count across the studies is lower in infertile men than in fertile men (p < 0.001) and than in the general population (p < 0.001). Based on this analysis, the normal sperm count is about 65 million per mL. Using this reference value, only 25% of the studies in infertile men are above it, compared with 75% of the studies in fertile men (>65 × 10⁶ sperm/mL).
Effects of student pairing and public review on physical activity during school recess.
Zerger, Heather M; Miller, Bryon G; Valbuena, Diego; Miltenberger, Raymond G
2017-07-01
The purpose of this study was to evaluate the effects of student pairing and feedback during recess on children's step counts. During baseline, participants wore a sealed pedometer during recess. During intervention, we paired participants with higher step counts with participants with lower step counts. We encouraged teams to compete for the highest step count each day and provided feedback on their performance during each recess session. Results showed a large mean increase in step count from baseline to intervention. These results suggest that children's steps during recess can be increased with a simple and cost-effective intervention. © 2017 Society for the Experimental Analysis of Behavior.
Waters, L; Fisher, M; Anderson, J; Wood, C; Delpech, V; Hill, T; Walsh, J; Orkin, C; Bansi, L; Gompels, M; Phillips, A; Johnson, M; Gilson, R; Easterbrook, P; Leen, C; Porter, K; Gazzard, B; Sabin, C
2011-05-01
We investigated whether adverse responses to highly active antiretroviral therapy (HAART) associated with late HIV presentation are secondary to low CD4 cell count per se or other confounding factors. A longitudinal analysis of the UK Collaborative HIV Cohort (CHIC) Study of individuals starting HAART in 1998-2007 was carried out, comparing late presenters (presenting/starting HAART at a CD4 count <200 cells/μL) with late starters (presenting at a CD4 count >350 cells/μL; starting HAART at a CD4 count <200 cells/μL), using 'ideal starters' (presenting at a CD4 count >350 cells/μL; starting HAART at a CD4 count of 200-350 cells/μL) as a comparator. Virological, immunological and clinical (new AIDS event/death) outcomes at 48 and 96 weeks were analysed, with the analysis being limited to those remaining on HAART for >3 months. A total of 4978 of 9095 individuals starting first-line HAART with HIV-1 RNA >500 copies/mL were included in the analysis: 2741 late presenters, 947 late starters and 1290 ideal starters. Late presenters were more commonly female, heterosexual and Black African. Most started nonnucleoside reverse transcriptase inhibitors (NNRTIs); 48-week virological suppression was similar in late presenters and starters (and marginally lower than in ideal starters); by week 96 differences were reduced and nonsignificant. The median CD4 cell count increase in late presenters was significantly lower than that in late starters (weeks 48 and 96). During year 1, new clinical events were more frequent for late presenters [odds ratio (OR) 2.04; 95% confidence interval (CI) 1.19-3.51; P=0.01]; by year 2, event rates were similar in all groups. Amongst patients who initiate, and remain on, HAART, late presentation is associated with lower rates of virological suppression, blunted CD4 cell count increases and more clinical events compared with late starters in year 1, but similar clinical and immunological outcomes by year 2 to those of both late and ideal starters. Differences between late presenters and late starters suggest that factors other than CD4 cell count alone may be driving adverse treatment outcomes in late-presenting individuals.
Functional regression method for whole genome eQTL epistasis analysis with sequencing data.
Xu, Kelin; Jin, Li; Xiong, Momiao
2017-05-18
Epistasis plays an essential role in understanding regulation mechanisms and is an essential component of the genetic architecture of gene expression. However, interaction analysis of gene expression remains fundamentally unexplored due to great computational challenges and data availability. Due to variation in splicing, transcription start sites, polyadenylation sites, post-transcriptional RNA editing across the entire gene, and transcription rates of the cells, RNA-seq measurements generate large expression variability and collectively create the observed position-level read count curves. A single number for measuring gene expression, as widely used in microarray-based gene expression analysis, is highly unlikely to sufficiently account for large expression variation across the gene. Simultaneously analyzing epistatic architecture using RNA-seq and whole genome sequencing (WGS) data poses enormous challenges. We develop a nonlinear functional regression model (FRGM) with functional responses, where the position-level read counts within a gene are taken as a function of genomic position, and functional predictors, where genotype profiles are viewed as a function of genomic position, for epistasis analysis with RNA-seq data. Instead of testing the interaction of all possible pairwise SNPs, the FRGM takes a gene as the basic unit for epistasis analysis: it tests for the interaction of all possible pairs of genes and uses all the accessible information to collectively test interactions between all possible pairs of SNPs within two genomic regions. By large-scale simulations, we demonstrate that the proposed FRGM for epistasis analysis can achieve the correct type 1 error and has higher power to detect interactions between genes than existing methods. The proposed methods are applied to the RNA-seq and WGS data from the 1000 Genomes Project. The numbers of pairs of significantly interacting genes after Bonferroni correction identified using FRGM, RPKM and DESeq were 162,361, 260 and 51, respectively, from the 350 European samples. The proposed FRGM for epistasis analysis of RNA-seq can capture isoform- and position-level information and should have broad application. Both simulations and real data analysis highlight the potential for the FRGM to be a good choice for epistatic analysis with sequencing data.
Deep 3 GHz number counts from a P(D) fluctuation analysis
NASA Astrophysics Data System (ADS)
Vernstrom, T.; Scott, Douglas; Wall, J. V.; Condon, J. J.; Cotton, W. D.; Fomalont, E. B.; Kellermann, K. I.; Miller, N.; Perley, R. A.
2014-05-01
Radio source counts constrain galaxy populations and evolution, as well as the global star formation history. However, there is considerable disagreement among the published 1.4-GHz source counts below 100 μJy. Here, we present a statistical method for estimating the μJy and even sub-μJy source count using new deep wide-band 3-GHz data in the Lockman Hole from the Karl G. Jansky Very Large Array. We analysed the confusion amplitude distribution P(D), which provides a fresh approach in the form of a more robust model, with a comprehensive error analysis. We tested this method on a large-scale simulation, incorporating clustering and finite source sizes. We discuss in detail our statistical methods for fitting using Markov chain Monte Carlo, handling correlations, and systematic errors from the use of wide-band radio interferometric data. We demonstrated that the source count can be constrained down to 50 nJy, a factor of 20 below the rms confusion. We found the differential source count near 10 μJy to have a slope of -1.7, decreasing to about -1.4 at fainter flux densities. At 3 GHz, the rms confusion in an 8-arcsec full width at half-maximum beam is ~1.2 μJy beam⁻¹, and a radio background temperature ~14 mK. Our counts are broadly consistent with published evolutionary models. With these results, we were also able to constrain the peak of the Euclidean normalized differential source count of any possible new radio populations that would contribute to the cosmic radio background down to 50 nJy.
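The forward model behind a P(D) fit can be sketched in a few lines: draw sources from power-law counts dN/dS ∝ S^(-α), scatter them on a map, convolve with the beam, and histogram the pixel values. All parameter values below are illustrative, not the fitted ones.

    import numpy as np

    def simulate_pofd(alpha=1.7, s_min=50e-9, s_max=1e-4, n_pix=512,
                      beam_fwhm_pix=4.0, n_sources=50_000, seed=0):
        rng = np.random.default_rng(seed)
        # Inverse-CDF draw from a truncated power law dN/dS ~ S^-alpha.
        a1 = 1.0 - alpha
        u = rng.uniform(size=n_sources)
        s = (s_min**a1 + u * (s_max**a1 - s_min**a1)) ** (1.0 / a1)
        sky = np.zeros((n_pix, n_pix))
        xy = rng.integers(0, n_pix, size=(n_sources, 2))
        np.add.at(sky, (xy[:, 0], xy[:, 1]), s)        # sum fluxes per pixel
        # Convolve with a Gaussian beam via the FFT.
        sigma = beam_fwhm_pix / 2.355
        k = np.fft.fftfreq(n_pix)
        kx, ky = np.meshgrid(k, k)
        beam_ft = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))
        d_map = np.fft.ifft2(np.fft.fft2(sky) * beam_ft).real
        return np.histogram(d_map.ravel(), bins=200)   # the P(D) histogram

    hist, edges = simulate_pofd()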
Neutrophil-lymphocyte ratio in patients with pesticide poisoning.
Dundar, Zerrin Defne; Ergin, Mehmet; Koylu, Ramazan; Ozer, Rasit; Cander, Basar; Gunaydin, Yahya Kemal
2014-09-01
Pesticides are highly toxic to human beings, and pesticide poisoning is associated with high morbidity and mortality. The identification of powerful prognostic markers is important for the management of patients with pesticide poisoning in emergency settings. Our aim was to investigate the prognostic value of the neutrophil-lymphocyte ratio and other hematological parameters measured in patients with pesticide poisoning within the first 24 h after admission to the emergency department (ED). All patients (≥15 years old) admitted to the ED from July 2008 through February 2013 due to pesticide poisoning were enrolled in the study. The written and electronic medical charts of patients were reviewed. The neutrophil-lymphocyte ratio and platelet-lymphocyte ratio were calculated for each patient using absolute neutrophil, lymphocyte, and platelet counts. Mechanical ventilation requirement and mortality were used as the primary endpoints. A total of 189 patients were included in the study. Mechanically ventilated patients had significantly higher leukocyte and neutrophil counts and neutrophil-lymphocyte and platelet-lymphocyte ratios (p < 0.001, p < 0.001, p < 0.001, p = 0.003, respectively), and significantly lower lymphocyte counts, than nonventilated patients (p = 0.011). Compared with nonsurvivors, survivors had significantly higher leukocyte and neutrophil counts and neutrophil-lymphocyte ratios (p < 0.001, p < 0.001, p = 0.002, respectively), whereas lymphocyte counts did not differ significantly between the groups (p = 0.463). Leukocyte counts, neutrophil counts, and neutrophil-lymphocyte ratios measured within the first 24 h after admission to the ED are useful and easy-to-use parameters for estimating prognosis in the follow-up of patients with pesticide poisoning.
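The ratios in question are simple functions of the complete blood count, and group comparisons of this kind are typically nonparametric. A short sketch, using made-up CBC values rather than the study's patient data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative absolute counts (10^3 cells/uL); not data from the study.
neut  = np.array([8.2, 6.1, 11.4, 4.9, 9.8, 7.5])
lymph = np.array([1.1, 2.3, 0.9, 2.8, 1.0, 1.6])
plate = np.array([240, 310, 190, 280, 210, 260])
ventilated = np.array([True, False, True, False, True, False])

nlr = neut / lymph    # neutrophil-lymphocyte ratio
plr = plate / lymph   # platelet-lymphocyte ratio

# Compare NLR between mechanically ventilated and non-ventilated patients.
stat, p = mannwhitneyu(nlr[ventilated], nlr[~ventilated], alternative="two-sided")
print(f"median NLR ventilated = {np.median(nlr[ventilated]):.1f}, "
      f"non-ventilated = {np.median(nlr[~ventilated]):.1f}, p = {p:.3f}")
```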
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts relative to what is expected under standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon for the phenotypes to contain an enormous number of zeroes due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of four distributions with appropriate dispersion-handling mechanisms: Poisson, zero-inflated Poisson (ZIP), negative binomial, and zero-inflated negative binomial (ZINB). However, little is known about their implications in genetic association studies, and there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we investigate the performance of several state-of-the-art approaches for handling zero-inflated count data, along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the type I error rates of the competing methods become poorly controlled when the model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
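The EM scheme described here has a simple skeleton: the E-step computes, for each observed zero, the posterior probability that it is a structural zero, and the M-step refits a weighted Poisson regression (the paper adds an adaptive LASSO penalty at that step; the sketch below omits the penalty and uses a constant zero-inflation probability). Data and tuning values are simulated for illustration.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

# Simulated zero-inflated Poisson data (illustrative, not the paper's design).
rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=(n, 3))
mu_true = np.exp(0.5 + X @ np.array([0.8, 0.0, -0.5]))
structural = rng.random(n) < 0.3              # structural-zero indicator
y = np.where(structural, 0, rng.poisson(mu_true))

# EM for a ZIP model: pi = P(structural zero), Poisson mean exp(b0 + X b).
pi = 0.5
reg = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, y)
for _ in range(100):
    mu = reg.predict(X)                       # fitted Poisson means
    # E-step: posterior probability that each observed zero is structural.
    w = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-mu)), 0.0)
    # M-step: update pi and refit a weighted (unpenalized) Poisson GLM.
    pi = w.mean()
    reg = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, y, sample_weight=1 - w)

print(f"estimated pi = {pi:.3f}, coefficients = {np.round(reg.coef_, 2)}")
```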
Multi-scale temporal patterns in fish presence in a high-velocity tidal channel
Viehman, Haley A.; Zydlewski, Gayle Barbin; Hewitt, Judi
2017-05-11
The natural variation of fish presence in high-velocity tidal channels is not well understood. A better understanding of fish use of these areas would aid in predicting fish interactions with marine hydrokinetic (MHK) devices, the effects of which are uncertain but of high concern. To characterize the patterns in fish presence at a tidal energy site in Cobscook Bay, Maine, we examined two years of hydroacoustic data continuously collected at the proposed depth of an MHK turbine with a bottom-mounted, side-looking echosounder. The maximum number of fish counted per hour ranged from hundreds in the early spring to over 1,000 in the fall. Counts varied greatly with tidal and diel cycles in a seasonally changing relationship, likely linked to the seasonally changing fish community of the bay. In the winter and spring, higher hourly counts were generally confined to ebb tides and low slack tides near sunrise and sunset. In summer and fall of each year, the highest fish counts shifted to night and occurred during ebb, low slack, and flood tides. Fish counts were not linked to current speed, and did not decrease as current speed increased, contrary to observations at other tidal power sites. As fish counts may be proportional to the encounter rate of fish with an MHK turbine at the same depth, highly variable counts indicate that the risk to fish is similarly variable. The links between fish presence and environmental cycles at this site will likely be present at other locations with similar environmental forcing, making these observations useful in predicting potential fish interactions at tidal energy sites worldwide. PMID:28493894
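One conventional way to quantify tidal and diel cycles in hourly count data (not necessarily the authors' analysis) is a Poisson regression on harmonic terms at the M2 tidal period (~12.42 h) and the 24-h diel period. A minimal sketch with simulated counts:

```python
import numpy as np
import statsmodels.api as sm

# Simulated hourly fish counts over 60 days, with tidal and diel modulation.
t = np.arange(24 * 60, dtype=float)
rate = np.exp(1.0 + 0.6 * np.sin(2 * np.pi * t / 12.42)
                  + 0.9 * np.cos(2 * np.pi * t / 24.0))
counts = np.random.default_rng(3).poisson(rate)

def harmonics(t, period):
    """Sine/cosine pair at the given period; together they encode amplitude and phase."""
    w = 2 * np.pi * t / period
    return np.column_stack([np.sin(w), np.cos(w)])

X = sm.add_constant(np.hstack([harmonics(t, 12.42), harmonics(t, 24.0)]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary().tables[1])   # coefficients of the tidal and diel terms
```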
The Prompt Gamma Neutron Activation Analysis Facility at ICN—Pitesti
NASA Astrophysics Data System (ADS)
Bǎrbos, D.; Pǎunoiu, C.; Mladin, M.; Cosma, C.
2008-08-01
PGNAA is a widely applicable technique for determining the presence and amount of many elements simultaneously in samples ranging in size from micrograms to many grams. PGNAA is characterized by its capability for nondestructive multi-elemental analysis and its ability to analyse elements that cannot be determined by INAA; it therefore extends the performance of the INAA method. A facility has been developed at the Institute for Nuclear Research—Piteşti so that the unique features of prompt gamma-ray neutron activation analysis can be used to measure trace and major elements in samples. The facility is installed at the radial neutron beam tube of the ACPR-TRIGA reactor. While the PGNAA facility is in use, the ACPR reactor is operated in steady-state mode at a maximum power of 250 kW. The facility consists of a radial beam port, an external sample position with shielding, and a prompt gamma-ray counting system. The thermal neutron flux (below the cadmium cut-off energy) at the sample position, measured with a thin gold foil, is φ_Cd ≈ 1 × 10^6 n cm^-2 s^-1, with a cadmium ratio of 80. The gamma-ray detection system consists of an HPGe detector of 16% relative efficiency (detector model GC1518) with 1.85 keV resolution. The HPGe detector is mounted with its axis at 90° to the incident neutron beam, at a distance of about 200 mm from the sample position. To establish the performance capabilities of the facility, irradiations of pure-element and compound standards were performed to identify the gamma-ray energies of each element and their count rates.
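The gold-foil flux measurement rests on the standard activation relation A = φσN(1 − e^(−λt)). The sketch below inverts it for φ; the nuclear constants are standard (Au-197 thermal activation cross-section ~98.7 b, Au-198 half-life 2.695 d), but the foil mass, irradiation time, and measured activity are invented examples chosen to land near the facility's quoted flux, and the bare/Cd-covered foil correction is omitted.

```python
import numpy as np

NA = 6.022e23
sigma = 98.7e-24            # cm^2, thermal activation cross-section of Au-197
half_life = 2.695 * 86400   # s, Au-198
lam = np.log(2) / half_life

m_foil = 10e-3              # g of gold (illustrative)
N = m_foil / 196.97 * NA    # number of Au-197 atoms in the foil
t_irr = 3600.0              # s irradiation (illustrative)
A_meas = 32.0               # Bq at end of irradiation (illustrative)

# Invert A = phi * sigma * N * (1 - exp(-lambda * t_irr)) for the flux.
phi = A_meas / (sigma * N * (1 - np.exp(-lam * t_irr)))
print(f"thermal flux ~ {phi:.2e} n cm^-2 s^-1")   # ~1e6 with these inputs
```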
NASA Astrophysics Data System (ADS)
Tanguay, Jesse; Benard, Francois; Celler, Anna; Ruth, Thomas; Schaffer, Paul
2017-03-01
Attaching alpha-emitting radionuclides to cancer-targeting agents increases the anti-tumor effects of targeted cancer therapies. The success of alpha therapy for treating bone metastases has increased interest in using targeted alpha therapy (TAT) to treat a broad spectrum of metastatic cancers. Estimating radiation doses to targeted tumors, including small (<250 μm) clusters of cancer cells, and to non-targeted tissues is critical in the pre-clinical development of TATs. However, accurate quantification of heterogeneous distributions of alpha-emitters in small metastases is not possible with existing pre-clinical in-vivo imaging systems. Ex-vivo digital autoradiography using a scintillator in combination with an image intensifier and a charge-coupled device (CCD) has gained interest for pre-clinical ex-vivo alpha-particle imaging. We present a simulation-based analysis of the fundamental spatial resolution limits of digital autoradiography systems. Spatial resolution was quantified in terms of the modulation transfer function (MTF) and Wagner's equivalent aperture. We modeled systems operating in either particle-counting (PC) or energy-integrating (EI) mode using a cascaded systems approach that accounts for: 1) the stopping power of alpha particles; 2) the distance alpha particles travel within the scintillator; 3) optical blur; and 4) binning into detector elements. We applied our analysis to imaging of astatine-211 using an LYSO scintillator with thickness ranging from 10 μm to 20 μm. Our analysis demonstrates that when these systems are operated in particle-counting mode with a centroid-calculation algorithm, effective apertures of 35 μm can be achieved, which suggests that digital autoradiography may enable quantifying the uptake of alpha emitters in tumors consisting of a few cancer cells. Future work will investigate the image noise and energy-resolution properties of digital autoradiography systems.
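Wagner's equivalent aperture summarizes an MTF as the side length of the square aperture with the same noise-equivalent passband, a_eq = [∬ MTF²(u,v) du dv]^(−1/2). A minimal numeric sketch for an isotropic Gaussian blur, used here as a stand-in for the paper's full cascaded model; the PSF width is illustrative.

```python
import numpy as np

sigma_um = 15.0                                   # Gaussian PSF width, microns (illustrative)

u = np.linspace(-0.2, 0.2, 4001)                  # spatial frequency, cycles/micron
du = u[1] - u[0]
mtf_1d = np.exp(-2 * (np.pi * sigma_um * u) ** 2) # Fourier transform of a Gaussian PSF

# The MTF is separable, so the 2D integral of MTF^2 factorizes into 1D integrals.
integral_2d = (np.sum(mtf_1d ** 2) * du) ** 2
a_eq = integral_2d ** -0.5                        # Wagner's equivalent aperture
print(f"equivalent aperture ~ {a_eq:.1f} microns")
```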
Data Independent Acquisition analysis in ProHits 4.0.
Liu, Guomin; Knight, James D R; Zhang, Jian Ping; Tsou, Chih-Chiang; Wang, Jian; Lambert, Jean-Philippe; Larsen, Brett; Tyers, Mike; Raught, Brian; Bandeira, Nuno; Nesvizhskii, Alexey I; Choi, Hyungwon; Gingras, Anne-Claude
2016-10-21
Affinity purification coupled with mass spectrometry (AP-MS) is a powerful technique for the identification and quantification of physical interactions. AP-MS requires careful experimental design, appropriate control selection and quantitative workflows to successfully identify bona fide interactors amongst a large background of contaminants. We previously introduced ProHits, a Laboratory Information Management System for interaction proteomics, which tracks all samples in a mass spectrometry facility, initiates database searches and provides visualization tools for spectral counting-based AP-MS approaches. More recently, we implemented Significance Analysis of INTeractome (SAINT) within ProHits to provide scoring of interactions based on spectral counts. Here, we provide an update to ProHits to support Data Independent Acquisition (DIA) with identification software (DIA-Umpire and MSPLIT-DIA), quantification tools (through DIA-Umpire, or externally via targeted extraction), and assessment of quantitative enrichment (through mapDIA) and scoring of interactions (through SAINT-intensity). With additional improvements, notably support of the iProphet pipeline, facilitated deposition into ProteomeXchange repositories and enhanced export and viewing functions, ProHits 4.0 offers a comprehensive suite of tools to facilitate affinity proteomics studies. It remains challenging to score, annotate and analyze proteomics data in a transparent manner. ProHits was previously introduced as a LIMS to enable storing, tracking and analysis of standard AP-MS data. In this revised version, we expand ProHits to include integration with a number of identification and quantification tools based on Data-Independent Acquisition (DIA). ProHits 4.0 also facilitates data deposition into public repositories, and the transfer of data to new visualization tools.
Automating digital leaf measurement: the tooth, the whole tooth, and nothing but the tooth.
Corney, David P A; Tang, H Lilian; Clark, Jonathan Y; Hu, Yin; Jin, Jing
2012-01-01
Many species of plants produce leaves with distinct teeth around their margins. The presence and nature of these teeth can often help botanists to identify species. Moreover, it has long been known that more species native to colder regions have toothed leaves than species native to warmer regions, and it has therefore been suggested that fossilized remains of leaves can be used as a proxy for ancient climate reconstruction; similar studies on living plants can improve our understanding of these relationships. The required analysis of leaves typically involves considerable manual effort, which in practice limits the number of leaves analyzed, potentially reducing the power of the results. In this work, we describe a novel algorithm to automate the marginal tooth analysis of leaves found in digital images. We demonstrate our methods on a large set of images of whole herbarium specimens collected from Tilia trees (also known as lime, linden or basswood). We chose the genus Tilia because its constituent species have toothed leaves of varied size and shape. In a previous study we extracted c. 1600 leaves automatically from a set of c. 1100 images. Our new algorithm locates teeth on the margins of such leaves and extracts features such as each tooth's area, perimeter and internal angles, as well as counting them. We evaluated an implementation of our algorithm against a manually analyzed subset of the images and found that it achieves an accuracy of 85% for counting teeth and 75% for estimating tooth area. We also demonstrate that the automatically extracted features are sufficient to identify different species of Tilia using a simple linear discriminant analysis, and that the features relating to teeth are the most useful.
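One plausible way to count marginal teeth in a binary leaf silhouette is through convexity defects of the leaf contour. This is an illustration of the general approach, not the authors' published algorithm; the input file name and depth threshold are hypothetical.

```python
import cv2
import numpy as np

# Binary leaf silhouette (white leaf on black); hypothetical input file.
mask = cv2.imread("leaf_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
leaf = max(contours, key=cv2.contourArea)          # largest contour = the leaf

# Convexity defects: indentations of the contour relative to its convex hull.
hull_idx = cv2.convexHull(leaf, returnPoints=False)
defects = cv2.convexityDefects(leaf, hull_idx)     # rows: (start, end, farthest, depth*256)

# Each sufficiently deep defect is taken as a sinus between two teeth.
min_depth_px = 3.0                                 # hypothetical threshold
teeth = 0 if defects is None else int(np.sum(defects[:, 0, 3] / 256.0 > min_depth_px))
print(f"estimated tooth count: {teeth}")
```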
Low Noise Research Fan Stage Design
NASA Technical Reports Server (NTRS)
Hobbs, David E.; Neubert, Robert J.; Malmborg, Eric W.; Philbrick, Daniel H.; Spear, David A.
1995-01-01
This report describes the design of a Low Noise ADP Research Fan stage. The fan is a variable-pitch design which is designed at the cruise pitch condition; relative to the cruise setting, the blade is closed at takeoff and opened for reverse-thrust operation. The fan stage is a split-flow design with fan exit guide vanes and core stators. This fan stage design was combined with a nacelle and engine core duct to form a powered fan/nacelle subscale model, intended for aerodynamic performance, acoustic, and structural testing in a wind tunnel. The model has a 22-inch outer fan diameter and a hub-to-tip ratio of 0.426, which permits the use of existing NASA fan and cowl force balance designs and rig drive system. The design parameters were selected to permit valid acoustic and aerodynamic comparisons with the PW 17-inch rig previously tested under NASA contract. The fan stage design is described in detail, and the results of the design axisymmetric analysis at the aerodynamic design condition are included. The structural analysis of the fan rotor and attachment is described, including the material selections and stress analysis. The blade and attachment are predicted to have adequate low-cycle-fatigue life and an acceptable operating range without resonant stress or flutter. The stage was acoustically designed with airfoil counts in the fan exit guide vane and core stator chosen to minimize noise; a fan-FEGV tone analysis developed separately under NASA contract was used to determine these airfoil counts. The fan stage design was matched to a nacelle design, developed under a separate NASA contract, to form a fan/nacelle model for wind tunnel testing. The nacelle was designed with an axisymmetric inlet, cowl and nozzle for convenience in testing and fabrication, and aerodynamic analysis of the nacelle confirmed the required performance at various aircraft operating conditions.
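The usual starting point for choosing blade and vane counts acoustically is the Tyler-Sofrin interaction rule: rotor-stator interaction at the n-th harmonic of blade-passing frequency excites only circumferential mode orders m = nB + kV (B blades, V vanes, k any integer), and low-|m| modes propagate most readily. The sketch below enumerates those modes; the counts are illustrative, not the ADP fan's actual airfoil counts, and this rule is only the first step of a tone analysis such as the one cited.

```python
# Enumerate Tyler-Sofrin interaction mode orders m = n*B + k*V for the first
# two blade-passing harmonics. Blade/vane counts here are illustrative only.
B, V = 18, 45

for n in (1, 2):
    modes = sorted({n * B + k * V for k in range(-4, 5)}, key=abs)
    # Designers try to keep the lowest |m| large so the mode is cut off.
    print(f"BPF harmonic n={n}: lowest-order modes m = {modes[:4]}")
```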
NASA Astrophysics Data System (ADS)
Berman, D. C.; Crown, D. A.; Joseph, E. C. S.
2012-03-01
Compilation of crater counts using CTX images and analysis of crater size-frequency distributions (SFDs), coupled with categorization of crater morphologies, provide important insights into the formation and modification of lobate debris aprons.
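Crater SFDs are commonly summarized by a power-law slope above a completeness diameter; the maximum-likelihood estimator below (the standard Clauset et al. form for a continuous power law) is one way to fit it. The diameters are simulated and all values are illustrative.

```python
import numpy as np

# Simulate crater diameters from dN/dD ~ D**-alpha above a completeness
# diameter d_min, then recover the slope by maximum likelihood.
rng = np.random.default_rng(4)
d_min, alpha_true, n = 0.5, 3.0, 400                  # km; illustrative values
D = d_min * (1 - rng.random(n)) ** (-1 / (alpha_true - 1))  # inverse-CDF draw

alpha_hat = 1 + n / np.sum(np.log(D / d_min))         # MLE for the exponent
err = (alpha_hat - 1) / np.sqrt(n)                    # its standard error
print(f"alpha = {alpha_hat:.2f} +/- {err:.2f}")
```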
Friman, Patrick C
2004-01-01
Branch and Vollmer (2004) argue that use of the word behavior as a count noun is ungrammatical and, worse, mischaracterizes and ultimately degrades the concept of the operant. In this paper I argue that use of behavior as a count noun is a reflection of its grammatical status as a hybrid of count and mass noun. I show that such usage is widespread across colloquial, referential, and scientific documents including the writings of major figures in behavior analysis (most notably B. F. Skinner), books describing its applications, and its major journals. Finally, I argue against the assertion that such usage degrades the concept of the operant, at least in any meaningful way, and argue instead that employing eccentric definitions for ordinary words and using arcane terms to describe everyday human behavior risks diminishing the influence of behavior analysis on human affairs.