Censoring: a new approach for detection limits in total-reflection X-ray fluorescence
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.; Braziewicz, J.
2004-08-01
It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict the quantification of very low concentrations of trace elements in samples, can be accounted for using the statistical concept of censoring. We demonstrate that incomplete TXRF measurements containing so-called "nondetects", i.e. non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left randomly censored data, which can be further analyzed using the Kaplan-Meier (KM) method to correct for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for nondetects, the mean and median of the detection-limit-censored concentrations can be estimated in a non-parametric way. Monte Carlo simulations show that the Kaplan-Meier approach yields highly accurate estimates of the mean and median concentrations, agreeing to within a few percent with the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean and median are in fact limited only by the number of studied samples and not by the correction procedure for nondetects itself. On the other hand, when the concentration of a given element is not measured in all samples, simple approaches to estimating a mean concentration from the data yield erroneous, systematically biased results. The discussed random left-censoring approach was applied to analyze TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations lying substantially below the mean level of the detection limits.
Consequently, this approach provides a new means of lowering the effective detection limits of the TXRF method, which is of prime interest for the investigation of metallic impurities on silicon wafers.
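The Kaplan-Meier correction for nondetects described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: left-censored concentrations are flipped into right-censored "survival times" so the standard product-limit estimator applies, ties are handled naively, and the mean is the restricted mean when the largest observation is censored.

```python
import numpy as np

def km_left_censored_mean(values, detected, flip_at=None):
    """Kaplan-Meier mean for left-censored data.

    values:   observed concentration (the detection limit when not detected)
    detected: True if the value was measured, False for a nondetect
    Left-censoring is converted to right-censoring by flipping the axis.
    """
    values = np.asarray(values, float)
    detected = np.asarray(detected, bool)
    if flip_at is None:
        flip_at = values.max() + 1.0          # any constant above the data
    y = flip_at - values                      # nondetects become right-censored
    order = np.argsort(y)
    y, d = y[order], detected[order]
    n = len(y)
    at_risk = n - np.arange(n)                # number still at risk at each step
    surv = np.cumprod(np.where(d, (at_risk - 1) / at_risk, 1.0))
    # mean of the flipped variable = area under the survival curve
    times = np.concatenate([[0.0], y])
    s_prev = np.concatenate([[1.0], surv[:-1]])
    mean_flipped = np.sum(np.diff(times) * s_prev)
    return flip_at - mean_flipped
```

With no nondetects the estimate reduces to the ordinary sample mean, which is a useful sanity check.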
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than the simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution, and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
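A minimal sketch of the maximum likelihood approach the paper evaluates, assuming a single detection limit and a lognormal parent distribution (the function name and starting values are illustrative, not from the paper): detects contribute density terms and nondetects contribute a CDF term at the detection limit.

```python
import numpy as np
from scipy import optimize, stats

def censored_lognormal_mle(detects, n_censored, dl):
    """MLE of the lognormal mean with left-censoring at one detection limit.

    detects:    measured values (all at or above dl)
    n_censored: count of observations reported only as "< dl"
    """
    logx = np.log(np.asarray(detects, float))

    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(logx, mu, sigma).sum()          # detects
        ll += n_censored * stats.norm.logcdf(np.log(dl), mu, sigma)  # nondetects
        return -ll

    res = optimize.minimize(nll, x0=[logx.mean(), np.log(logx.std() + 0.1)])
    mu, sigma = res.x[0], np.exp(res.x[1])
    return np.exp(mu + sigma**2 / 2)   # mean of the fitted lognormal
```

Comparing this estimate with simple substitution, e.g. `np.where(x < dl, dl / 2, x).mean()`, on simulated data illustrates the bias the paper warns about.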
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Part 136, Appendix B ("Definition and Procedure for the Determination of the Method Detection Limit, Revision 1.11") defines the method detection limit as MDL = t(n-1, 1-α=0.99) × s, where t(n-1, 1-α=0.99) is the Student's t value appropriate for a one-sided 99% confidence level with n-1 degrees of freedom and s is the standard deviation of the replicate analyses; the procedure also includes steps to ensure that the estimate of the method detection limit is a good one.
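The MDL definition quoted above reduces to a one-line computation from replicate measurements of a low-level spiked sample; a sketch assuming the standard seven-replicate design:

```python
import numpy as np
from scipy import stats

def method_detection_limit(replicates):
    """EPA-style MDL: one-sided 99% Student's t times the replicate std dev.

    replicates: measured concentrations of a low-level spiked sample
    (seven replicates in the standard procedure).
    """
    r = np.asarray(replicates, float)
    t99 = stats.t.ppf(0.99, df=len(r) - 1)    # one-sided, alpha = 0.01
    return t99 * r.std(ddof=1)
```

For seven replicates the multiplier is t(6, 0.99) ≈ 3.14.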
Epidemiologic Evaluation of Measurement Data in the Presence of Detection Limits
Lubin, Jay H.; Colt, Joanne S.; Camann, David; Davis, Scott; Cerhan, James R.; Severson, Richard K.; Bernstein, Leslie; Hartge, Patricia
2004-01-01
Quantitative measurements of environmental factors greatly improve the quality of epidemiologic studies but can pose challenges because of the presence of upper or lower detection limits or interfering compounds, which do not allow for precise measured values. We consider the regression of an environmental measurement (dependent variable) on several covariates (independent variables). Various strategies are commonly employed to impute values for interval-measured data, including assignment of one-half the detection limit to nondetected values or of “fill-in” values randomly selected from an appropriate distribution. On the basis of a limited simulation study, we found that the former approach can be biased unless the percentage of measurements below detection limits is small (5–10%). The fill-in approach generally produces unbiased parameter estimates but may produce biased variance estimates and thereby distort inference when 30% or more of the data are below detection limits. Truncated data methods (e.g., Tobit regression) and multiple imputation offer two unbiased approaches for analyzing measurement data with detection limits. If interest resides solely in regression parameters, then Tobit regression can be used. If individualized values for measurements below detection limits are needed for additional analysis, such as relative risk regression or graphical display, then multiple imputation produces unbiased estimates and nominal confidence intervals unless the proportion of missing data is extreme. We illustrate various approaches using measurements of pesticide residues in carpet dust in control subjects from a case–control study of non-Hodgkin lymphoma. PMID:15579415
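Tobit regression, one of the two unbiased approaches recommended above, can be sketched as a censored-normal maximum likelihood fit. This is a minimal illustration, not the authors' code; the optimizer settings and starting values are arbitrary choices.

```python
import numpy as np
from scipy import optimize, stats

def tobit_left(y, X, dl):
    """Tobit (censored-normal) regression with left-censoring at dl.

    y:  responses; values at or below dl are treated as censored at dl
    X:  design matrix (include a column of ones for the intercept)
    Returns the estimated coefficient vector.
    """
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    cens = y <= dl

    def nll(theta):
        beta, log_s = theta[:-1], theta[-1]
        s = np.exp(log_s)
        mu = X @ beta
        ll = stats.norm.logpdf(y[~cens], mu[~cens], s).sum()   # observed
        ll += stats.norm.logcdf(dl, mu[cens], s).sum()         # censored
        return -ll

    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)  # naive start
    res = optimize.minimize(nll, np.append(beta0, 0.0), method="Nelder-Mead",
                            options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x[:-1]
```

Unlike half-detection-limit substitution, the likelihood uses the censoring information directly, so coefficient estimates remain consistent even with a substantial censored fraction.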
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
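A threshold-and-count noise power estimator of the kind examined above can be sketched for the simplest case: exponentially distributed power samples (Gaussian noise through a square-law detector) and a single fixed threshold. The HRMS design used several such devices in parallel to widen the dynamic range; this single-threshold version is an illustrative assumption.

```python
import numpy as np

def threshold_count_power(samples, threshold):
    """Single-pass noise power estimate from an exceedance count.

    For exponentially distributed power samples, P(X > T) = exp(-T / p),
    so the mean power p can be recovered from the fraction of samples
    exceeding a fixed threshold T.
    """
    frac = np.mean(np.asarray(samples) > threshold)
    if frac <= 0.0 or frac >= 1.0:
        raise ValueError("threshold outside the usable dynamic range")
    return -threshold / np.log(frac)
```

The guard clause reflects the limited dynamic range of any one threshold: if nearly all or nearly no samples exceed T, the count carries no usable information, which is why several thresholds are run in parallel.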
Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...
The estimation method on diffusion spot energy concentration of the detection system
NASA Astrophysics Data System (ADS)
Gao, Wei; Song, Zongxi; Liu, Feng; Dan, Lijun; Sun, Zhonghan; Du, Yunfei
2016-09-01
We propose a method to estimate the diffusion spot energy concentration of a detection system. We performed outdoor observation experiments at Xinglong Observatory using a detection system whose diffusion spot energy concentration had been estimated (the correlation coefficient is approximately 0.9926). The aperture of the system is 300 mm and its limiting magnitude is 14.15 Mv. The observation experiments show that the highest detected magnitude of the estimated system is 13.96 Mv, and the average detected magnitude is about 13.5 Mv. The results indicate that this method can be used to evaluate the diffusion spot energy concentration of a detection system efficiently.
Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.
Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen
2015-05-01
Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.
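The flavor of the multiple imputation approach can be conveyed with a deliberately simplified sketch for one left-censored predictor in a logistic regression. This is not the authors' estimator: their procedure imputes from the correct conditional distribution and pools variances with Rubin's rules, whereas here the imputation model is fit naively to the detects only and only the point estimate is pooled.

```python
import numpy as np
from scipy import stats

def fit_logistic(x, y, iters=25):
    """Newton-Raphson logistic regression with a single predictor."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1.0 - p))[:, None])   # Fisher information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b

def mi_logistic_censored_x(x_obs, detected, dl, y, m=20, seed=0):
    """Multiple-imputation sketch for a predictor left-censored at dl.

    Censored x values are drawn from a normal fitted to the detects and
    truncated above at dl; a logistic regression of y on the completed x
    is fit for each draw and the coefficients are averaged. A proper
    implementation would fit the imputation model by censored MLE and
    condition the draws on y as well.
    """
    rng = np.random.default_rng(seed)
    x_obs = np.asarray(x_obs, float)
    detected = np.asarray(detected, bool)
    mu, sd = x_obs[detected].mean(), x_obs[detected].std(ddof=1)
    a = (dl - mu) / sd                    # truncation point in z units
    coefs = []
    for _ in range(m):
        x = x_obs.copy()
        x[~detected] = stats.truncnorm.rvs(-np.inf, a, loc=mu, scale=sd,
                                           size=int((~detected).sum()),
                                           random_state=rng)
        coefs.append(fit_logistic(x, np.asarray(y, float)))
    return np.mean(coefs, axis=0)
```

With a small censored fraction the naive imputation model introduces little bias; as censoring grows, the shortcuts taken here are exactly what the paper's full procedure corrects.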
Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen
2014-01-01
Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
Shen, Yi
2015-01-01
Purpose Gap detection and the temporal modulation transfer function (TMTF) are 2 common methods to obtain behavioral estimates of auditory temporal acuity. However, the agreement between the 2 measures is not clear. This study compares results from these 2 methods and their dependencies on listener age and hearing status. Method Gap detection thresholds and the parameters that describe the TMTF (sensitivity and cutoff frequency) were estimated for young and older listeners who were naive to the experimental tasks. Stimuli were 800-Hz-wide noises with upper frequency limits of 2400 Hz, presented at 85 dB SPL. A 2-track procedure (Shen & Richards, 2013) was used for the efficient estimation of the TMTF. Results No significant correlation was found between gap detection threshold and the sensitivity or the cutoff frequency of the TMTF. No significant effect of age and hearing loss on either the gap detection threshold or the TMTF cutoff frequency was found, while the TMTF sensitivity improved with increasing hearing threshold and worsened with increasing age. Conclusion Estimates of temporal acuity using gap detection and TMTF paradigms do not seem to provide a consistent description of the effects of listener age and hearing status on temporal envelope processing. PMID:25087722
Censoring approach to the detection limits in X-ray fluorescence analysis
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.
2004-10-01
We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called "nondetects", can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can be further analyzed using the Kaplan-Meier method to correct the data for the presence of nondetects. Using this approach, the measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner including the correction for nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using Monte Carlo simulation we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated to within a few percent of the simulated, uncensored values. In practice, this means that the final uncertainties of the estimated means are limited by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
Estimating the resolution limit of the map equation in community detection
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro; Rosvall, Martin
2015-01-01
A community detection algorithm is considered to have a resolution limit if the scale of the smallest modules that can be resolved depends on the size of the analyzed subnetwork. The resolution limit is known to prevent some community detection algorithms from accurately identifying the modular structure of a network. In fact, any global objective function for measuring the quality of a two-level assignment of nodes into modules must have some sort of resolution limit or an external resolution parameter. However, it is yet unknown how the resolution limit affects the so-called map equation, which is known to be an efficient objective function for community detection. We derive an analytical estimate and conclude that the resolution limit of the map equation is set by the total number of links between modules instead of the total number of links in the full network, as for modularity. This mechanism makes the resolution limit much less restrictive for the map equation than for modularity; in practice, it is orders of magnitude smaller. Furthermore, we argue that the effect of the resolution limit often results from shoehorning multilevel modular structures into two-level descriptions. As we show, the hierarchical map equation effectively eliminates the resolution limit for networks with nested multilevel modular structures.
Browne, Richard W; Whitcomb, Brian W
2010-07-01
Problems in the analysis of laboratory data commonly arise in epidemiologic studies in which biomarkers subject to lower detection thresholds are used. Various thresholds exist, including the limit of detection (LOD), limit of quantification (LOQ), and limit of blank (LOB). Choosing appropriate strategies for dealing with data affected by such limits relies on a proper understanding of the nature of the detection limit and its determination. In this paper, we demonstrate the experimental and statistical procedures generally used for estimating different detection limits according to standard procedures, in the context of analysis of fat-soluble vitamins and micronutrients in human serum. Fat-soluble vitamins and micronutrients were analyzed by high-performance liquid chromatography with diode array detection. A simulated serum matrix blank was repeatedly analyzed for determination of the LOB, both parametrically, using the observed blank distribution, and nonparametrically, using ranks. The LOD was determined by combining information on the LOB with data from repeated analysis of standard reference materials (SRMs) diluted to low levels, from the LOB to 2-3 times the LOB. The LOQ was determined experimentally by plotting the observed relative standard deviation (RSD) of SRM replicates against concentration, where the LOQ is the concentration at an RSD of 20%. Experimental approaches and example statistical procedures are given for determination of the LOB, LOD, and LOQ, and these quantities are reported for each measured analyte. For many analyses, there is considerable information available below the LOQ. Epidemiologic studies must account for the nature of these detection limits and how they have been estimated for appropriate treatment of affected data.
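The parametric LOB/LOD recipe described above amounts to two lines of arithmetic; a sketch assuming Gaussian blanks and the conventional 5% false-positive and false-negative rates (z = 1.645):

```python
import numpy as np

def lob_lod(blanks, low_level, z=1.645):
    """Parametric LOB and LOD from blank and low-level replicates.

    LOB = mean(blank) + z * sd(blank)         (95th percentile of the blank)
    LOD = LOB + z * sd(low-level replicates)
    The z = 1.645 default assumes 5% false-positive / false-negative rates.
    """
    blanks = np.asarray(blanks, float)
    low = np.asarray(low_level, float)
    lob = blanks.mean() + z * blanks.std(ddof=1)
    lod = lob + z * low.std(ddof=1)
    return lob, lod
```

The nonparametric LOB alternative mentioned in the abstract would simply take the empirical 95th percentile of the ranked blank measurements instead of assuming a Gaussian blank distribution.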
Bio-Inspired Distributed Decision Algorithms for Anomaly Detection
2017-03-01
Keywords: DIAMoND, Local Anomaly Detector, Total Impact Estimation, Threat Level Estimator. The report covers the performance of the DIAMoND algorithm for DNS-server-level attack detection and mitigation, including hierarchical two-level topologies.
Lawryk, Nicholas J; Feng, H Amy; Chen, Bean T
2009-07-01
Recent advances in field-portable X-ray fluorescence (FP XRF) spectrometer technology have made it a potentially valuable screening tool for the industrial hygienist to estimate worker exposures to airborne metals. Although recent studies have shown that FP XRF technology may be better suited for qualitative or semiquantitative analysis of airborne lead in the workplace, these studies have not extensively addressed its ability to measure other elements. This study involved a laboratory-based evaluation of a representative model FP XRF spectrometer to measure elements commonly encountered in workplace settings that may be collected on air sample filter media, including chromium, copper, iron, manganese, nickel, lead, and zinc. The evaluation included assessments of (1) response intensity with respect to location on the probe window, (2) limits of detection for five different filter media, (3) limits of detection as a function of analysis time, and (4) bias, precision, and accuracy estimates. Teflon, polyvinyl chloride, polypropylene, and mixed cellulose ester filter media all had similarly low limits of detection for the set of elements examined. Limits of detection, bias, and precision generally improved with increasing analysis time. Bias, precision, and accuracy estimates generally improved with increasing element concentration. Accuracy estimates met the National Institute for Occupational Safety and Health criterion for nearly all the element and concentration combinations. Based on these results, FP XRF spectrometry shows potential to be useful in the assessment of worker inhalation exposures to other metals in addition to lead.
Scientists, especially environmental scientists, often encounter trace-level concentrations that are typically reported as less than a certain limit of detection, L. Type 1, left-censored data arise when certain low values lying below L are ignored or unknown because they cannot be measured...
Detection limit for rate fluctuations in inhomogeneous Poisson processes
NASA Astrophysics Data System (ADS)
Shintani, Toshiaki; Shinomoto, Shigeru
2012-04-01
Estimations of an underlying rate from data points are inevitably disturbed by the irregular occurrence of events. Proper estimation methods are designed to avoid overfitting by discounting the irregular occurrence of data, and to determine a constant rate from irregular data derived from a constant probability distribution. However, it can occur that rapid or small fluctuations in the underlying density are undetectable when the data are sparse. For an estimation method, the maximum degree of undetectable rate fluctuations is uniquely determined as a phase transition, when considering an infinitely long series of events drawn from a fluctuating density. In this study, we analytically examine an optimized histogram and a Bayesian rate estimator with respect to their detectability of rate fluctuation, and determine whether their detectable-undetectable phase transition points are given by an identical formula defining a degree of fluctuation in an underlying rate. In addition, we numerically examine the variational Bayes hidden Markov model in its detectability of rate fluctuation, and determine whether the numerically obtained transition point is comparable to those of the other two methods. Such consistency among these three principled methods suggests the presence of a theoretical limit for detecting rate fluctuations.
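The optimized histogram mentioned above refers to cost-based bin-size selection; the following is a sketch of one standard selector (the Shimazaki-Shinomoto cost), offered as an illustration of the idea rather than the exact estimator analyzed in the paper.

```python
import numpy as np

def optimal_bin_count(spike_times, t_total, max_bins=50):
    """Shimazaki-Shinomoto cost for choosing a histogram bin size.

    For each candidate bin count m, the cost C = (2*k_mean - k_var) / width^2
    is computed from the per-bin event counts; the minimum-cost m balances
    sampling noise against over-smoothing of the underlying rate.
    """
    best, best_cost = 1, np.inf
    for m in range(1, max_bins + 1):
        width = t_total / m
        counts, _ = np.histogram(spike_times, bins=m, range=(0.0, t_total))
        k_mean, k_var = counts.mean(), counts.var()
        cost = (2 * k_mean - k_var) / width**2
        if cost < best_cost:
            best, best_cost = m, cost
    return best
```

For a constant-rate process the cost favors the coarsest binning, while a strongly clustered event train drives the selector toward finer bins; the transition between those regimes is the detectability threshold the paper analyzes.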
Geiss, S; Einax, J W
2001-07-01
Detection limit, reporting limit and limit of quantitation are analytical parameters that describe the power of analytical methods. These parameters are used internally for quality assurance and externally for comparison, especially in the case of trace analysis in environmental compartments. The wide variety of ways of computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms has been used within the analytical community to describe detection and quantitation capabilities. Without attempting to impose order on this variety of terms, this paper aims to provide a practical proposal for answering the analyst's main questions concerning the quality measures above. These main questions and the related parameters are explained and demonstrated graphically. Estimation and verification are the two steps required to obtain reliable values for these parameters. A rule for practical verification is given in a table, from which the analyst can read what to measure, what to estimate and which criteria have to be fulfilled. Detection limits, reporting limits and limits of quantitation verified in this manner are comparable, and the analyst is responsible for their unambiguity and reliability.
Hunter, Margaret; Dorazio, Robert M.; Butterfield, John S.; Meigs-Friend, Gaia; Nico, Leo; Ferrante, Jason A.
2017-01-01
A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species’ presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty – indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications also could benefit from a standardized LOD such as GMO food analysis, and forensic and clinical diagnostics.
NASA Astrophysics Data System (ADS)
Prudhomme, G.; Berthe, L.; Bénier, J.; Bozier, O.; Mercier, P.
2017-01-01
Photonic Doppler Velocimetry (PDV) is a plug-and-play, versatile diagnostic used in dynamic physics experiments to measure velocities. When signals are analyzed using a short-time Fourier transform, multiple velocities can be distinguished: for example, the velocities of a moving particle cloud appear on spectrograms. In order to estimate the back-scattered fluxes of the target, we propose an original approach, "PDV radiometric analysis", resulting in an expression of time-velocity spectrograms coded in power units. Experiments involving micron-sized particles raise the issue of the detection limit; the particle-size limit is very difficult to evaluate. From the quantification of noise sources, we derive an estimate of the spectrogram noise leading to a detectivity limit, which may be compared to the fraction of the incoming power that is back-scattered by a particle and then collected by the probe; this fraction increases with particle size. Finally, results from laser-shock accelerated particles obtained with two different PDV systems are compared, showing the improvement of detectivity with respect to the effective number of bits (ENOB) of the digitizer.
Burgués, Javier; Jiménez-Soto, Juan Manuel; Marco, Santiago
2018-07-12
The limit of detection (LOD) is a key figure of merit in chemical sensing. However, estimation of this figure of merit is hindered by the non-linear calibration curves characteristic of semiconductor gas sensor technologies such as metal oxide (MOX) sensors, gasFETs or thermoelectric sensors. Additionally, chemical sensors suffer from cross-sensitivities and temporal stability problems. The application of the International Union of Pure and Applied Chemistry (IUPAC) recommendations for univariate LOD estimation to non-linear semiconductor gas sensors is not straightforward due to the strong statistical requirements of the IUPAC methodology (linearity, homoscedasticity, normality). Here, we propose a methodological approach to LOD estimation through linearized calibration models. As an example, the methodology is applied to the detection of low concentrations of carbon monoxide using MOX gas sensors in a scenario where the main source of error is the presence of uncontrolled levels of humidity.
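A common linearized-calibration LOD estimate, in the spirit of the IUPAC recommendations discussed above, can be sketched as follows. The k = 3.3 factor and the assumption that the sensor response has already been linearized are conventions, not details from the paper.

```python
import numpy as np

def lod_from_calibration(conc, signal, k=3.3):
    """LOD from a linear (or linearized) calibration curve.

    Fits signal = b0 + b1 * conc by least squares and returns
    LOD = k * s_residual / slope, with k = 3.3 for ~1% error rates.
    Non-linear sensor responses must be linearized before use.
    """
    conc = np.asarray(conc, float)
    signal = np.asarray(signal, float)
    A = np.column_stack([np.ones_like(conc), conc])
    (b0, b1), *_ = np.linalg.lstsq(A, signal, rcond=None)
    resid = signal - (b0 + b1 * conc)
    s = np.sqrt((resid**2).sum() / (len(conc) - 2))  # residual std, 2 d.o.f. lost
    return k * s / b1
```

The residual standard deviation stands in for the blank standard deviation, which is convenient when true blanks are hard to measure; heteroscedastic noise, one of the IUPAC requirements the paper highlights, would invalidate this shortcut.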
Limits of detection and decision. Part 4
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
Probability density functions (PDFs) have been derived for a number of commonly used limit of detection definitions, including several variants of the Relative Standard Deviation of the Background-Background Equivalent Concentration (RSDB-BEC) method, for a simple linear chemical measurement system (CMS) having homoscedastic, Gaussian measurement noise and using ordinary least squares (OLS) processing. All of these detection limit definitions serve as both decision and detection limits, thereby implicitly resulting in 50% rates of Type 2 errors. It has been demonstrated that these are closely related to Currie decision limits, if the coverage factor, k, is properly defined, and that all of the PDFs are scaled reciprocals of noncentral t variates. All of the detection limits have well-defined upper and lower limits, thereby resulting in finite moments and confidence limits, and the problem of estimating the noncentrality parameter has been addressed. As in Parts 1-3, extensive Monte Carlo simulations were performed and all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Specific recommendations for harmonization of detection limit methodology have also been made.
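For the simplest case treated in the series, known homoscedastic Gaussian noise, the Currie decision and detection limits mentioned above reduce to closed-form expressions:

```python
from scipy import stats

def currie_limits(sigma0, alpha=0.05, beta=0.05):
    """Currie decision and detection limits for known, homoscedastic noise.

    L_C = z_{1-alpha} * sigma0        (decision: false-positive rate alpha)
    L_D = L_C + z_{1-beta} * sigma0   (detection: false-negative rate beta)
    A definition that uses a single limit for both roles implicitly sets
    beta = 0.5, i.e. a 50% rate of Type 2 errors at the limit.
    """
    z_a = stats.norm.ppf(1 - alpha)
    z_b = stats.norm.ppf(1 - beta)
    lc = z_a * sigma0
    return lc, lc + z_b * sigma0
```

Setting beta = 0.5 makes z_b = 0 and collapses L_D onto L_C, which is exactly the 50% Type 2 error rate the abstract attributes to definitions that conflate the two roles.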
Methodologies for Adaptive Flight Envelope Estimation and Protection
NASA Technical Reports Server (NTRS)
Tang, Liang; Roemer, Michael; Ge, Jianhua; Crassidis, Agamemnon; Prasad, J. V. R.; Belcastro, Christine
2009-01-01
This paper reports the latest development of several techniques for an adaptive flight envelope estimation and protection system for aircraft under damage upset conditions. Through the integration of advanced fault detection algorithms, real-time system identification of the damaged/faulted aircraft and flight envelope estimation, real-time decision support can be executed autonomously to improve damage tolerance and flight recoverability. In particular, a bank of adaptive nonlinear fault detection and isolation estimators was developed for flight control actuator faults; a real-time system identification method was developed for assessing the dynamics and performance limitations of the impaired aircraft; and online learning neural networks were used to approximate selected aircraft dynamics, which were then inverted to estimate command margins. As off-line training of network weights is not required, the method has the advantage of adapting to varying flight conditions and different vehicle configurations. The key benefit of the envelope estimation and protection system is that it allows the aircraft to fly close to its limit boundary by constantly updating the controller command limits during flight. The developed techniques were demonstrated in NASA's Generic Transport Model (GTM) simulation environment with simulated actuator faults. Simulation results and remarks on future work are presented.
NASA Astrophysics Data System (ADS)
Naderi, E.; Khorasani, K.
2018-02-01
In this work, a data-driven fault detection, isolation, and estimation (FDI&E) methodology is proposed and developed specifically for monitoring aircraft gas turbine engine actuators and sensors. The proposed FDI&E filters are constructed directly from the available system I/O data at each operating point of the engine. The healthy gas turbine engine is stimulated by a sinusoidal input containing a limited number of frequencies. First, the associated system Markov parameters are estimated by using the FFT of the input and output signals to obtain the frequency response of the gas turbine engine. These data are then used for direct design and realization of the fault detection, isolation and estimation filters. Our proposed scheme therefore does not require any a priori knowledge of the system linear model or its number of poles and zeros at each operating point. We have investigated the effects of the size of the frequency response data on the performance of our proposed schemes. We have shown through comprehensive case-study simulations that desirable fault detection, isolation and estimation performance metrics, defined in terms of the confusion matrix criterion, can be achieved with access to the frequency response of the system at only a limited number of frequencies.
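The first step described above, recovering the frequency response from I/O records, can be sketched as an FFT ratio at the excited frequencies. This is a simplification that ignores leakage, noise averaging, and transient removal, which a practical identification scheme would have to handle.

```python
import numpy as np

def frequency_response(u, y, freqs_hz, fs):
    """Empirical frequency response from I/O records at excited frequencies.

    u, y:     input and output samples (steady-state response)
    freqs_hz: the frequencies contained in the sinusoidal excitation
    fs:       sampling rate in Hz
    Returns complex H(f) = Y(f) / U(f) at each requested frequency.
    """
    n = len(u)
    U = np.fft.rfft(u)
    Y = np.fft.rfft(y)
    bins = np.round(np.asarray(freqs_hz) * n / fs).astype(int)
    return Y[bins] / U[bins]
```

Because only the excited bins are evaluated, the record length should be chosen so that each excitation frequency falls on an FFT bin; otherwise spectral leakage smears the estimate.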
CANDID: Companion Analysis and Non-Detection in Interferometric Data
NASA Astrophysics Data System (ADS)
Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.
2015-05-01
CANDID finds faint companions around stars in interferometric data in the OIFITS format. It systematically searches for faint companions in OIFITS data and, if none is found, estimates the detection limit. The tool is based on model fitting and χ² minimization, with a grid of starting points for the companion position. It ensures that all positions are explored by estimating a posteriori whether the grid is dense enough, and provides an estimate of the optimum grid density.
Hunter, Margaret E; Dorazio, Robert M; Butterfield, John S S; Meigs-Friend, Gaia; Nico, Leo G; Ferrante, Jason A
2017-03-01
A set of universal guidelines is needed to determine the limit of detection (LOD) in PCR-based analyses of low-concentration DNA. In particular, environmental DNA (eDNA) studies require sensitive and reliable methods to detect rare and cryptic species through shed genetic material in environmental samples. Current strategies for assessing detection limits of eDNA are either too stringent or subjective, possibly resulting in biased estimates of species' presence. Here, a conservative LOD analysis grounded in analytical chemistry is proposed to correct for overestimated DNA concentrations predominantly caused by the concentration plateau, a nonlinear relationship between expected and measured DNA concentrations. We have used statistical criteria to establish formal mathematical models for both quantitative and droplet digital PCR. To assess the method, a new Grass Carp (Ctenopharyngodon idella) TaqMan assay was developed and tested on both PCR platforms using eDNA in water samples. The LOD adjustment reduced Grass Carp occupancy and detection estimates while increasing uncertainty, indicating that caution needs to be applied to eDNA data without LOD correction. Compared to quantitative PCR, digital PCR had higher occurrence estimates due to increased sensitivity and dilution of inhibitors at low concentrations. Without accurate LOD correction, species occurrence and detection probabilities based on eDNA estimates are prone to a source of bias that cannot be reduced by an increase in sample size or PCR replicates. Other applications, such as GMO food analysis and forensic and clinical diagnostics, could also benefit from a standardized LOD. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Statistical behavior of ten million experimental detection limits
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-02-01
Using a lab-constructed laser-excited fluorimeter, together with bootstrapping methodology, the authors have generated many millions of experimental linear calibration curves for the detection of rhodamine 6G tetrafluoroborate in ethanol solutions. The detection limits computed from them are in excellent agreement with both previously published theory and comprehensive Monte Carlo computer simulations. Currie decision levels and Currie detection limits, each in the theoretical, chemical content domain, were found to be simply scaled reciprocals of the non-centrality parameter of the non-central t distribution that characterizes univariate linear calibration curves that have homoscedastic, additive Gaussian white noise. Accurate and precise estimates of the theoretical, content domain Currie detection limit for the experimental system, with 5% (each) probabilities of false positives and false negatives, are presented.
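As an illustration of the Currie quantities discussed above, the decision level and detection limit for an idealized homoscedastic linear calibration with a known noise level can be sketched as follows. The slope, noise level, and error rates are assumed values; the paper's full treatment uses the non-central t distribution to account for estimated calibration parameters, which this known-σ sketch deliberately omits.

```python
from statistics import NormalDist

# Assumed calibration parameters (illustrative, not from the paper)
slope = 2.0          # response units per (ug/mL)
sigma_blank = 0.5    # additive Gaussian white noise std dev (response units)
alpha = beta = 0.05  # 5% false positives, 5% false negatives

z_alpha = NormalDist().inv_cdf(1 - alpha)
z_beta = NormalDist().inv_cdf(1 - beta)

# Currie decision level and detection limit in the response domain
y_C = z_alpha * sigma_blank
y_D = (z_alpha + z_beta) * sigma_blank

# Content-domain equivalents via the calibration slope
x_C = y_C / slope
x_D = y_D / slope

print(f"yC = {y_C:.3f}, yD = {y_D:.3f}, xC = {x_C:.3f}, xD = {x_D:.3f}")
```

With equal false-positive and false-negative rates, the detection limit is exactly twice the decision level, a relation the experimental values in this body of work approximately follow.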
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based methods for background estimation and player detection in badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking and further studies are warranted for automated match analysis.
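A minimal sketch of a histogram-based background estimate, taking the per-pixel mode of intensities over frames, on synthetic data. The frame values and detection threshold below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic "video": 20 frames of a 4x4 gray background (value 100),
# with a transient "player" pixel of value 200 moving along the diagonal
frames = np.full((20, 4, 4), 100, dtype=np.uint8)
for t in range(20):
    frames[t, t % 4, t % 4] = 200  # foreground occludes one pixel per frame

def histogram_background(frames, nbins=256):
    """Background = most frequent intensity per pixel across all frames."""
    t, h, w = frames.shape
    bg = np.empty((h, w), dtype=frames.dtype)
    for i in range(h):
        for j in range(w):
            counts = np.bincount(frames[:, i, j], minlength=nbins)
            bg[i, j] = counts.argmax()
    return bg

bg = histogram_background(frames)

# Player detection: pixels deviating strongly from the background estimate
mask = np.abs(frames[0].astype(int) - bg.astype(int)) > 50
print(bg)
print(mask)
```

Because the player occludes any given pixel in only a minority of frames, the per-pixel mode recovers the background even where simple averaging would be biased by the foreground.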
Bedrossian, Manuel; Lindensmith, Chris
2017-01-01
Detection of extant microbial life on Earth and elsewhere in the Solar System requires the ability to identify and enumerate micrometer-scale, essentially featureless cells. On Earth, bacteria are usually enumerated by culture plating or epifluorescence microscopy. Culture plates require long incubation times and can only count culturable strains, and epifluorescence microscopy requires extensive staining and concentration of the sample and instrumentation that is not readily miniaturized for space. Digital holographic microscopy (DHM) represents an alternative technique with no moving parts and higher throughput than traditional microscopy, making it potentially useful in space for detection of extant microorganisms provided that sufficient numbers of cells can be collected. Because sample collection is expected to be the limiting factor for space missions, especially to outer planets, it is important to quantify the limits of detection of any proposed technique for extant life detection. Here we use both laboratory and field samples to measure the limits of detection of an off-axis digital holographic microscope (DHM). A statistical model is used to estimate any instrument's probability of detection at various bacterial concentrations based on the optical performance characteristics of the instrument, as well as estimate the confidence interval of detection. This statistical model agrees well with the limit of detection of 10³ cells/mL that was found experimentally with laboratory samples. In environmental samples, active cells were immediately evident at concentrations of 10⁴ cells/mL. Published estimates of cell densities for Enceladus plumes yield up to 10⁴ cells/mL, which are well within the off-axis DHM's limits of detection to confidence intervals greater than or equal to 95%, assuming sufficient sample volumes can be collected. The quantitative phase imaging provided by DHM allowed minerals to be distinguished from cells. 
Off-axis DHM's ability for rapid low-level bacterial detection and counting shows its viability as a technique for detection of extant microbial life provided that the cells can be captured intact and delivered to the sample chamber in a sufficient volume of liquid for imaging. Key Words: In situ life detection—Extant microorganisms—Holographic microscopy—Ocean Worlds—Enceladus—Imaging. Astrobiology 17, 913–925. PMID:28708412
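The statistical reasoning above can be illustrated with a simple Poisson sampling model of the probability of detection: if cells at concentration c are distributed randomly, the chance that at least one lands in the imaged volume V is 1 − exp(−cV). The imaged volume below is an assumed value for illustration, not the instrument's specification.

```python
import math

# Hypothetical imaged sample volume per measurement (assumed, not the DHM's
# actual value): 3 microliters
volume_ml = 3e-3

def detection_probability(conc_cells_per_ml, volume_ml):
    """P(at least one cell in the imaged volume), Poisson sampling model."""
    expected = conc_cells_per_ml * volume_ml
    return 1.0 - math.exp(-expected)

# Concentration needed for >= 95% probability of seeing at least one cell:
# solve 1 - exp(-c*V) = 0.95  =>  c = ln(20)/V
conc_95 = math.log(20) / volume_ml

print(detection_probability(1e3, volume_ml))
print(conc_95)
```

Under this assumed volume, a concentration near 10³ cells/mL gives roughly a 95% chance of at least one cell per image, showing how an instrument's optical throughput translates into a concentration detection limit at a stated confidence level.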
Two biased estimation techniques in linear regression: Application to aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav
1988-01-01
Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least-squares estimates, two estimation techniques which can limit the damaging effects of collinearity are presented. These two techniques, principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be promising tools for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least-squares technique.
Development of an enzyme-linked immunosorbent assay for the detection of dicamba.
Clegg, B S; Stephenson, G R; Hall, J C
2001-05-01
A competitive indirect enzyme-linked immunosorbent assay (CI-ELISA) was developed to quantitate the herbicide dicamba (3,6-dichloro-2-methoxybenzoic acid) in water. The CI-ELISA has a detection limit of 2.3 µg/L and a linear working range of 10-10000 µg/L with an IC50 value of 195 µg/L. The dicamba polyclonal antisera did not cross-react with a number of other herbicides tested but did cross-react with a dicamba metabolite, 5-hydroxydicamba, and structurally related chlorobenzoic acids. The assay was used to quantitatively estimate dicamba concentrations in water samples. Water samples were analyzed directly, and no sample preparation was required. To improve detection limits, a C18 (reversed-phase) column concentration step was devised prior to analysis, and the detection limits were improved by at least 10-fold. After the sample preconcentration, the detection limit, IC50, and linear working range were 0.23, 19.5, and 5-200 µg/L, respectively. The CI-ELISA estimations in water correlated well with those from gas chromatography-mass spectrometry (GC-MS) analysis (r² = 0.9991). This assay contributes to reducing laboratory costs associated with the conventional GC-MS residue analysis techniques for the quantitation of dicamba in water.
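A hedged sketch of how an IC50 can be read off a competitive-ELISA calibration curve. The four-parameter logistic values below are invented for illustration (only the 195 µg/L IC50 mirrors the abstract), and the interpolation is a simple stand-in for a full curve fit.

```python
import numpy as np

# Assumed 4-parameter logistic response of a competitive ELISA
# (illustrative parameters, not the published assay's fitted values)
ic50, hill, top, bottom = 195.0, 1.0, 1.8, 0.1  # ug/L; absorbance units

def response(x):
    """Competitive assay: signal falls as analyte concentration rises."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Standard concentrations spanning the assay's working range
conc = np.array([10, 30, 100, 300, 1000, 3000, 10000], dtype=float)
absorb = response(conc)

# Estimate IC50 by interpolating, on a log-concentration scale, where the
# signal falls to 50% of the top-to-bottom span
half = bottom + 0.5 * (top - bottom)
ic50_est = np.exp(np.interp(half, absorb[::-1], np.log(conc)[::-1]))

print(ic50_est)
```

The arrays are reversed before `np.interp` because the competitive signal decreases with concentration, while `np.interp` requires an increasing abscissa; interpolating on log-concentration reflects the sigmoidal shape of the calibration.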
Gray, Brian R.; Holland, Mark D.; Yi, Feng; Starcevich, Leigh Ann Harrod
2013-01-01
Site occupancy models are commonly used by ecologists to estimate the probabilities of species site occupancy and of species detection. This study addresses the influence on site occupancy and detection estimates of variation in species availability among surveys within sites. Such variation in availability may result from temporary emigration, nonavailability of the species for detection, and sampling sites spatially when species presence is not uniform within sites. We demonstrate, using Monte Carlo simulations and aquatic vegetation data, that variation in availability and heterogeneity in the probability of availability may yield biases in the expected values of the site occupancy and detection estimates that have traditionally been associated with low-detection probabilities and heterogeneity in those probabilities. These findings confirm that the effects of availability may be important for ecologists and managers, and that where such effects are expected, modification of sampling designs and/or analytical methods should be considered. Failure to limit the effects of availability may preclude reliable estimation of the probability of site occupancy.
Prognostic Fusion for Uncertainty Reduction
2007-02-01
Damage estimates are arrived at using sensor information such as oil debris monitoring data as well as vibration data. The method detects the onset of ...
Prediction of the limit of detection of an optical resonant reflection biosensor.
Hong, Jongcheol; Kim, Kyung-Hyun; Shin, Jae-Heon; Huh, Chul; Sung, Gun Yong
2007-07-09
A prediction of the limit of detection of an optical resonant reflection biosensor is presented. An optical resonant reflection biosensor using a guided-mode resonance filter is one of the most promising label-free optical immunosensors due to a sharp reflectance peak and a high sensitivity to the changes of optical path length. We have simulated this type of biosensor using rigorous coupled wave theory to calculate the limit of detection of the thickness of the target protein layer. Theoretically, our biosensor has an estimated ability to detect thickness change approximately the size of typical antigen proteins. We have also investigated the effects of the absorption and divergence of the incident light on the detection ability of the biosensor.
Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems
Connolly, Patrick J.
2010-01-01
With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
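The idea of estimating detection efficiency without knowing the true number of passing fish can be sketched with a two-array setup: a fish detected at the downstream array must have passed the upstream array, so the upstream efficiency is the fraction of those fish that it detected. This is a simplified illustration of the general approach, not the exact Connolly et al. (2008) estimator; the tag IDs are hypothetical.

```python
# Hypothetical PIT tag IDs detected at two arrays of one interrogation system
detected_up = {101, 102, 104, 107, 108, 110}    # upstream array
detected_down = {101, 102, 103, 104, 105, 107}  # downstream array

both = detected_up & detected_down

# Fish seen downstream must have passed the upstream array (and vice versa,
# assuming all fish traverse both arrays), so each array's efficiency can be
# estimated without a known number of passing fish:
eff_up = len(both) / len(detected_down)
eff_down = len(both) / len(detected_up)

# System-level efficiency: probability a passing fish is seen at either array
eff_system = 1 - (1 - eff_up) * (1 - eff_down)

print(eff_up, eff_down, eff_system)
```

This also shows why adding an array raises overall efficiency: two arrays that each miss a third of fish jointly miss only about a ninth, which is the kind of gain the guidelines quantify.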
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Cedric J., E-mail: cedric.powell@nist.gov; Werner, Wolfgang S. M.; Smekal, Werner
2014-09-01
The authors show that the National Institute of Standards and Technology database for the simulation of electron spectra for surface analysis (SESSA) can be used to determine detection limits for thin-film materials such as a thin film on a substrate or buried at varying depths in another material for common x-ray photoelectron spectroscopy (XPS) measurement conditions. Illustrative simulations were made for a W film on or in a Ru matrix and for a Ru film on or in a W matrix. In the former case, the thickness of a W film at a given depth in the Ru matrix was varied so that the intensity of the W 4d5/2 peak was essentially the same as that for a homogeneous RuW0.001 alloy. Similarly, the thickness of a Ru film at a selected depth in the W matrix was varied so that the intensity of the Ru 3p3/2 peak matched that from a homogeneous WRu0.01 alloy. These film thicknesses correspond to the detection limits of each minor component for measurement conditions where the detection limits for a homogeneous sample varied between 0.1 at. % (for the RuW0.001 alloy) and 1 at. % (for the WRu0.01 alloy). SESSA can be similarly used to convert estimates of XPS detection limits for a minor species in a homogeneous solid to the corresponding XPS detection limits for that species as a thin film on or buried in the chosen solid.
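A much-simplified sketch of the thickness-versus-depth matching that SESSA performs: with a single effective attenuation length (an assumed value below, not from the paper), the photoelectron signal of a buried film decays exponentially with depth, so a deeper film must be thicker to reproduce a surface film's peak intensity.

```python
import math

# Assumed effective attenuation length and geometry (illustrative only;
# SESSA's simulation accounts for much more than this one-parameter model)
L_eff = 2.0      # nm
cos_theta = 1.0  # normal emission

def film_signal(thickness_nm, depth_nm):
    """Relative photoelectron intensity of a film buried at a given depth."""
    lam = L_eff * cos_theta
    return math.exp(-depth_nm / lam) * (1.0 - math.exp(-thickness_nm / lam))

def matching_thickness(target_signal, depth_nm):
    """Film thickness at a given depth that reproduces a target intensity."""
    lam = L_eff * cos_theta
    return -lam * math.log(1.0 - target_signal * math.exp(depth_nm / lam))

target = film_signal(0.010, 0.0)         # a 0.01 nm film at the surface
t_2nm = matching_thickness(target, 2.0)  # equivalent thickness 2 nm deep

print(t_2nm)
```

`matching_thickness` inverts `film_signal` analytically, so the buried-film thickness it returns gives exactly the target intensity; the point of the sketch is only that the detection limit, expressed as a thickness, grows with burial depth.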
Detection limit used for early warning in public health surveillance.
Kobari, Tsuyoshi; Iwaki, Kazuo; Nagashima, Tomomi; Ishii, Fumiyoshi; Hayashi, Yuzuru; Yajima, Takehiko
2009-06-01
A theory of detection limit, developed in analytical chemistry, is applied to public health surveillance to detect outbreaks of national emergencies such as natural disasters and bioterrorism. In this investigation, the influenza epidemic around the Tokyo area from 2003 to 2006 is taken as a model of normal and large-scale epidemics. The detection limit of the normal epidemic is used as a threshold with a specified level of significance to identify a sign of an abnormal epidemic among the daily variation in anti-influenza drug sales at community pharmacies. While auto-correlation of data is often an obstacle to an unbiased estimator of the standard deviation involved in the detection limit, the analytical theory (FUMI) can successfully treat the auto-correlation of the drug sales in the same way as the auto-correlation appearing as 1/f noise in many analytical instruments.
Brown, Gary S.; Betty, Rita G.; Brockmann, John E.; Lucero, Daniel A.; Souza, Caroline A.; Walsh, Kathryn S.; Boucher, Raymond M.; Tezak, Mathew; Wilson, Mollye C.; Rudolph, Todd
2007-01-01
Polyester-rayon blend wipes were evaluated for efficiency of extraction and recovery of powdered Bacillus atrophaeus spores from stainless steel and painted wallboard surfaces. Method limits of detection were also estimated for both surfaces. The observed mean efficiency of polyester-rayon blend wipe recovery from stainless steel was 0.35 with a standard deviation of ±0.12, and for painted wallboard it was 0.29 with a standard deviation of ±0.15. Evaluation of a sonication extraction method for the polyester-rayon blend wipes produced a mean extraction efficiency of 0.93 with a standard deviation of ±0.09. Wipe recovery quantitative limits of detection were estimated at 90 CFU per unit of stainless steel sample area and 105 CFU per unit of painted wallboard sample area. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling following the release of a biological agent such as Bacillus anthracis. PMID:17122390
USDA-ARS?s Scientific Manuscript database
A simple, rapid and sensitive immunogold chromatographic strip test based on a monoclonal antibody was developed for the detection of melamine (MEL) residues in raw milk, milk products and animal feed. The limit of detection was estimated to be 0.05 µg/mL in raw milk, since the detection test line ...
Label-free optical biosensing with slot-waveguides.
Barrios, Carlos A; Bañuls, María José; González-Pedro, Victoria; Gylfason, Kristinn B; Sánchez, Benito; Griol, Amadeu; Maquieira, A; Sohlström, H; Holgado, M; Casquel, R
2008-04-01
We demonstrate label-free molecule detection by using an integrated biosensor based on a Si₃N₄/SiO₂ slot-waveguide microring resonator. Bovine serum albumin (BSA) and anti-BSA molecular binding events on the sensor surface are monitored through the measurement of resonant wavelength shifts with varying biomolecule concentrations. The biosensor exhibited sensitivities of 1.8 and 3.2 nm/(ng/mm²) for the detection of anti-BSA and BSA, respectively. The estimated detection limits are 28 and 16 pg/mm², respectively, limited by wavelength resolution.
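A small consistency check on the reported figures: when a sensor is resolution-limited, the surface-mass detection limit is roughly the wavelength resolution divided by the sensitivity. The resolution itself is not stated in the abstract, so this sketch infers it independently from each analyte.

```python
# Reported slot-waveguide figures (from the abstract)
sens_bsa = 3.2    # nm per (ng/mm^2)
sens_anti = 1.8   # nm per (ng/mm^2)
lod_bsa = 16e-3   # ng/mm^2  (16 pg/mm^2)
lod_anti = 28e-3  # ng/mm^2  (28 pg/mm^2)

# Implied wavelength resolution: LOD ~ resolution / sensitivity
res_bsa = sens_bsa * lod_bsa     # nm
res_anti = sens_anti * lod_anti  # nm

print(res_bsa, res_anti)
```

Both analytes imply a wavelength resolution near 0.05 nm, consistent with a single instrument resolution limiting both detection limits, as the abstract states.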
NASA Technical Reports Server (NTRS)
Johnston, P. H.
2008-01-01
This activity seeks to estimate a theoretical upper bound of detectability for a layer of oxide embedded in a friction stir weld in aluminum. The oxide is theoretically modeled as an ideal planar layer of aluminum oxide, oriented normal to an interrogating ultrasound beam. Experimentally-measured grain scattering level is used to represent the practical noise floor. Echoes from naturally-occurring oxides will necessarily fall below this theoretical limit, and must be above the measurement noise to be potentially detectable.
ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT
The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...
Nakashima, Shinya; Hayashi, Yuzuru
2016-01-01
The aim of this paper is to propose a stochastic method for estimating the detection limits (DLs) and quantitation limits (QLs) of compounds registered in a database of a GC/MS system and to prove its validity with experiments. The approach described in ISO 11843 Part 7 is adopted here as an estimation means of DL and QL, and the decafluorotriphenylphosphine (DFTPP) tuning and retention time locking are carried out for adjusting the system. Coupled with the data obtained from the system adjustment experiments, the information (noise and signal of chromatograms and calibration curves) stored in the database is used for the stochastic estimation, dispensing with repeated measurements. For sixty-six pesticides, the DL values obtained by the ISO method were compared with those from the statistical approach, and the correlation between them was excellent, with a correlation coefficient of 0.865. The accuracy of the proposed method was also examined and concluded to be satisfactory as well. The samples used are commercial products of pesticide mixtures, and the uncertainty from sample preparation processes is not taken into account. PMID:27162706
Mass spectrometer characterization of halogen gases in air at atmospheric pressure.
Ivey, Michelle M; Foster, Krishna L
2005-03-01
We have developed a new interface for a commercial ion trap mass spectrometer equipped with atmospheric pressure chemical ionization (APCI), capable of real-time measurements of gaseous compounds with limits of detection on the order of pptv. The new interface has been tested using the detection of Br2 and Cl2 over synthetic seawater ice at atmospheric pressure as a model system. A mechanical pump is used to draw gaseous mixtures through a glass manifold into the corona discharge area, where the molecules are ionized. Analysis of bromine and chlorine in dry air shows that ion intensity is affected by the pumping rate and the position of the glass manifold. The mass spectrometer signals for Br2 are linear in the 0.1-10.6 ppbv range, and the estimated 3σ detection limit is 20 pptv. The MS signals for Cl2 are linear in the 0.2-25 ppbv range, and the estimated 3σ detection limit is 1 ppbv. This new interface advances the field of analytical chemistry by introducing a practical modification to a commercially available ion trap mass spectrometer that expands the available methods for performing highly specific and sensitive measurements of gases in air at atmospheric pressure.
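A generic sketch of how a 3σ detection limit follows from a linear calibration and blank noise. The ion counts and blank standard deviation below are assumed values invented for illustration, not the paper's data.

```python
import numpy as np

# Assumed Br2 calibration standards and responses (illustrative only)
conc_ppbv = np.array([0.1, 0.5, 1.0, 5.0, 10.6])
signal = np.array([12.0, 55.0, 108.0, 520.0, 1090.0])  # ion counts, assumed

# Ordinary least-squares line through the calibration points
slope, intercept = np.polyfit(conc_ppbv, signal, 1)

# Assumed standard deviation of repeated blank measurements
sigma_blank = 0.7

# 3-sigma detection limit in the concentration domain
lod_3sigma = 3.0 * sigma_blank / slope  # ppbv
print(lod_3sigma * 1000)                # in pptv
```

The detection limit is simply three times the blank noise expressed in concentration units via the calibration slope; the quality of the estimate therefore depends on both the blank replication and the linearity of the calibration.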
Cauchemez, Simon; Epperson, Scott; Biggerstaff, Matthew; Swerdlow, David; Finelli, Lyn; Ferguson, Neil M
2013-01-01
Prior to emergence in human populations, zoonoses such as SARS cause occasional infections in human populations exposed to reservoir species. The risk of widespread epidemics in humans can be assessed by monitoring the reproduction number R (average number of persons infected by a human case). However, until now, estimating R required detailed outbreak investigations of human clusters, for which resources and expertise are not always available. Additionally, existing methods do not correct for important selection and under-ascertainment biases. Here, we present simple estimation methods that overcome many of these limitations. Our approach is based on a parsimonious mathematical model of disease transmission and only requires data collected through routine surveillance and standard case investigations. We apply it to assess the transmissibility of swine-origin influenza A H3N2v-M virus in the US, Nipah virus in Malaysia and Bangladesh, and also present a non-zoonotic example (cholera in the Dominican Republic). Estimation is based on two simple summary statistics, the proportion infected by the natural reservoir among detected cases (G) and among the subset of the first detected cases in each cluster (F). If detection of a case does not affect detection of other cases from the same cluster, we find that R can be estimated by 1-G; otherwise R can be estimated by 1-F when the case detection rate is low. In more general cases, bounds on R can still be derived. We have developed a simple approach with limited data requirements that enables robust assessment of the risks posed by emerging zoonoses. We illustrate this by deriving transmissibility estimates for the H3N2v-M virus, an important step in evaluating the possible pandemic threat posed by this virus.
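The two summary-statistic estimators described above can be sketched directly. The cluster data below are invented for illustration; each case is flagged by whether it was infected by the natural reservoir.

```python
# Hypothetical transmission clusters: each list is one cluster of detected
# cases, True = infected by the natural reservoir, False = human-to-human
clusters = [
    [True],                # spillover with no onward transmission
    [True, False, False],  # spillover plus two human-to-human cases
    [True, False],
    [True],
]

# G: proportion reservoir-infected among all detected cases
cases = [c for cluster in clusters for c in cluster]
G = sum(cases) / len(cases)
R_hat = 1 - G  # valid when detecting one case does not affect detecting others

# F: proportion reservoir-infected among the first detected case per cluster
first_cases = [cluster[0] for cluster in clusters]
F = sum(first_cases) / len(first_cases)
R_hat_low_detection = 1 - F  # estimator when the case detection rate is low

print(R_hat, R_hat_low_detection)
```

In this toy example all clusters begin with a spillover case, so the 1-F estimator returns zero while 1-G reflects the three onward transmissions among seven cases; the contrast shows why the appropriate estimator depends on the detection process.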
Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel
2016-07-01
Several methods for reporting outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by the probability density distribution resembling shifting, and the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average using any of these five procedures. It was shown that the primary measurement results modified according to the quantification limit could lead to an underestimation of the annual dose. On the other hand, the best estimates lead to an overestimation of the annual dose. The annual doses calculated from the measurement outcomes obtained according to the EC's recommended procedure, which does not cope with the uncertainties, fluctuate between an under- and overestimation, depending on the frequency of the measurement results that are larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the procedure resembling shifting overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Novel trace chemical detection algorithms: a comparative study
NASA Astrophysics Data System (ADS)
Raz, Gil; Murphy, Cara; Georgan, Chelsea; Greenwood, Ross; Prasanth, R. K.; Myers, Travis; Goyal, Anish; Kelley, David; Wood, Derek; Kotidis, Petros
2017-05-01
Algorithms for standoff detection and estimation of trace chemicals in hyperspectral images in the IR band are a key component for a variety of applications relevant to law-enforcement and the intelligence communities. Performance of these methods is impacted by the spectral signature variability due to presence of contaminants, surface roughness, nonlinear dependence on abundances as well as operational limitations on the compute platforms. In this work we provide a comparative performance and complexity analysis of several classes of algorithms as a function of noise levels, error distribution, scene complexity, and spatial degrees of freedom. The algorithm classes we analyze and test include adaptive cosine estimator (ACE and modifications to it), compressive/sparse methods, Bayesian estimation, and machine learning. We explicitly call out the conditions under which each algorithm class is optimal or near optimal as well as their built-in limitations and failure modes.
Jannati, Ali; McDonald, John J; Di Lollo, Vincent
2015-06-01
The capacity of visual short-term memory (VSTM) is commonly estimated by K scores obtained with a change-detection task. Contrary to common belief, K may be influenced not only by capacity but also by the rate at which stimuli are encoded into VSTM. Experiment 1 showed that, contrary to earlier conclusions, estimates of VSTM capacity obtained with a change-detection task are constrained by temporal limitations. In Experiment 2, we used change-detection and backward-masking tasks to obtain separate within-subject estimates of K and of rate of encoding, respectively. A median split based on rate of encoding revealed significantly higher K estimates for fast encoders. Moreover, a significant correlation was found between K and the estimated rate of encoding. The present findings raise the prospect that the reported relationships between K and such cognitive concepts as fluid intelligence may be mediated not only by VSTM capacity but also by rate of encoding. (c) 2015 APA, all rights reserved.
Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.
Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo
2015-01-01
We have proposed an assessment method to estimate the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines by the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 was within the 95% confidence interval of the RSD obtained statistically by repetitive measurements (n = 6). Thus, our findings show that the method is applicable to estimating the repeatability of HPLC-UV for determining baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. By the present repeatability assessment method, reliable measurement RSDs were obtained stochastically, and the experimental time was remarkably reduced.
Detecting isotopic ratio outliers
NASA Astrophysics Data System (ADS)
Bayne, C. K.; Smith, D. H.
An alternative method is proposed for improving isotopic ratio estimates. This method mathematically models pulse-count data and uses iterative reweighted Poisson regression to estimate model parameters to calculate the isotopic ratios. This computer-oriented approach provides theoretically better methods than conventional techniques to establish error limits and to identify outliers.
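A sketch of the proposed approach under simplifying assumptions: pulse counts for each isotope are modeled as Poisson, a log-linear model is fit by iteratively reweighted least squares (IRLS), and the isotopic ratio is recovered as the exponential of the group contrast. The data are synthetic and the model is a minimal two-group version of what such a method would use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pulse-count data for two isotopes (true ratio 0.05)
counts_a = rng.poisson(50.0, size=200)    # minor isotope
counts_b = rng.poisson(1000.0, size=200)  # major isotope

# Log-linear Poisson model: log(mu) = b0 + b1 * x, with x = 1 for isotope A;
# the isotopic ratio is exp(b1)
y = np.concatenate([counts_b, counts_a]).astype(float)
X = np.column_stack([np.ones_like(y),
                     np.concatenate([np.zeros(200), np.ones(200)])])

beta = np.zeros(2)
beta[0] = np.log(y.mean())  # reasonable starting point
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                        # IRLS weights for Poisson with log link
    z = X @ beta + (y - mu) / mu  # working response
    WX = X * W[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)  # weighted least squares

ratio = np.exp(beta[1])
print(ratio)
```

For this saturated two-group model the IRLS fit converges to the ratio of group means, but the same machinery extends to models with drift or outlier-downweighting terms, which is where a regression formulation pays off over simply ratioing raw totals.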
True detection limits in an experimental linearly heteroscedastic system. Part 1
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Using a lab-constructed laser-excited filter fluorimeter deliberately designed to exhibit linearly heteroscedastic, additive Gaussian noise, it has been shown that accurate estimates may be made of the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD) for the detection of rhodamine 6G tetrafluoroborate in ethanol. The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.1 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.294 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.372 μg/mL. These decision levels and corresponding detection limits were shown to pass the ultimate test: they resulted in observed probabilities of false positives and false negatives that were statistically equivalent to the a priori specified values.
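For orientation, the simpler homoscedastic textbook case of Currie's expressions can be sketched as follows (the paper treats the harder linearly heteroscedastic system, so these formulas are illustrative, with made-up blank noise and calibration slope):

```python
from statistics import NormalDist

def currie_limits(sigma_blank, slope, p_fp=0.05, p_fn=0.05):
    """Homoscedastic Currie decision level and detection limit in the
    response (Y) and content (X) domains. Illustrative only: the system
    in the paper is linearly heteroscedastic, which these formulas ignore."""
    z_p = NormalDist().inv_cdf(1 - p_fp)   # critical value for false positives
    z_q = NormalDist().inv_cdf(1 - p_fn)   # critical value for false negatives
    y_c = z_p * sigma_blank                # decision level, net response
    y_d = (z_p + z_q) * sigma_blank        # detection limit, net response
    return y_c, y_d, y_c / slope, y_d / slope

# made-up blank noise (mV) and calibration slope (mV per ug/mL)
yc, yd, xc, xd = currie_limits(sigma_blank=34.1, slope=425.0)
```

In the homoscedastic 5%/5% case YD is exactly 2 YC; the gap between that ratio and the paper's measured YD/YC reflects the heteroscedastic noise.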
NASA Astrophysics Data System (ADS)
Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzyński, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.
2015-07-01
Context. Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars in a few hours of observations. Aims: We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. Methods: We developed the code CANDID (Companion Analysis and Non-Detection in Interferometric Data), a set of Python tools that allows us to search systematically for point-source, high-contrast companions and estimate the detection limit using all interferometric observables, i.e., the squared visibilities, closure phases, and bispectrum amplitudes. The search procedure is performed on an N × N grid of fits, whose minimum required resolution is estimated a posteriori. It includes a tool to estimate the detection level of the companion in units of sigma. The code CANDID also incorporates a robust method to set a 3σ detection limit on the flux ratio, which is based on the analytical injection of a fake companion at each point in the grid. Our injection method also allows us to analytically remove a detected component to 1) search for a second companion; and 2) set an unbiased detection limit. Results: We used CANDID to search for companions around the binary Cepheids V1334 Cyg, AX Cir, RT Aur, AW Per, SU Cas, and T Vul. First, we showed that our previous discoveries of the components orbiting V1334 Cyg and AX Cir were detected at >25σ and >13σ, respectively. The astrometric positions and flux ratios provided by CANDID for these two stars are in good agreement with our previously published values. The companion around AW Per is detected at more than 15σ with a flux ratio of f = 1.22 ± 0.30%, and it is located at ρ = 32.16 ± 0.29 mas and PA = 67.1 ± 0.3°.
We made a possible detection of the companion orbiting RT Aur with f = 0.22 ± 0.11%, at ρ = 2.10 ± 0.23 mas and PA = -136 ± 6°. It was detected at 3.8σ using the closure phases only, and so more observations are needed to confirm the detection. No companions were detected around SU Cas and T Vul. We also set detection limits for possible undetected companions around these stars. We found that there is no companion with a spectral type earlier than B7V, A5V, F0V, B9V, A0V, and B9V orbiting the Cepheids V1334 Cyg, AX Cir, RT Aur, AW Per, SU Cas, and T Vul, respectively. This work also demonstrates the capabilities of the MIRC and PIONIER instruments, which can reach a dynamic range of 1:200, depending on the angular distance of the companion and the (u,v) plane coverage. In the future, we plan to work on improving the sensitivity limits for realistic data through better handling of the correlations. The current version of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/579/A68
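The grid-search-with-injection idea can be sketched in miniature: a toy point-source binary visibility model, a one-dimensional offset grid, and synthetic noise-free squared visibilities with a companion injected at a known offset (CANDID itself fits all observables on a 2D grid; all numbers here are invented):

```python
import cmath, math

MAS = math.radians(1 / 3600e3)   # one milliarcsecond in radians

def binary_visibility(u, v, f, dx, dy):
    """Complex visibility of an unresolved primary plus a fainter
    point-source companion with flux ratio f at offset (dx, dy) rad."""
    phase = -2.0 * math.pi * (u * dx + v * dy)
    return (1.0 + f * cmath.exp(1j * phase)) / (1.0 + f)

def chi2_v2(uv, v2_obs, sigma, f, dx, dy):
    """Chi-square of observed squared visibilities against the model."""
    return sum(((obs - abs(binary_visibility(u, v, f, dx, dy)) ** 2) / sigma) ** 2
               for (u, v), obs in zip(uv, v2_obs))

# synthetic, noise-free V^2 with a companion injected at 5 mas (f = 5%)
uv = [(5e7, 0.0), (1e8, 3e7), (6e7, 9e7), (1.5e8, 5e7), (2e7, 1.2e8)]
f_true, dx_true = 0.05, 5.0 * MAS
v2_obs = [abs(binary_visibility(u, v, f_true, dx_true, 0.0)) ** 2 for u, v in uv]

# 1D offset grid search (CANDID searches a 2D grid over all observables)
grid = [i * 0.5 for i in range(0, 21)]        # offsets in mas
best = min(grid, key=lambda d: chi2_v2(uv, v2_obs, 0.01, f_true, d * MAS, 0.0))
```

Because the injected companion is analytic in the model, the same machinery can subtract it from the data and repeat the search, which is the paper's route to unbiased detection limits.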
Minimum Detectable Activity for Tomographic Gamma Scanning System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkataraman, Ram; Smith, Susan; Kirkpatrick, J. M.
2015-01-01
For any radiation measurement system, it is useful to explore and establish the detection limits and a minimum detectable activity (MDA) for the radionuclides of interest, even if the system is to be used at far higher values. The MDA serves as an important figure of merit, and often a system is optimized and configured so that it can meet the MDA requirements of a measurement campaign. The non-destructive assay (NDA) systems based on gamma ray analysis are no exception, and well-established conventions, such as the Currie method, exist for estimating the detection limits and the MDA. However, the Tomographic Gamma Scanning (TGS) technique poses some challenges for the estimation of detection limits and MDAs. The TGS combines high resolution gamma ray spectrometry (HRGS) with low spatial resolution image reconstruction techniques. In non-imaging gamma ray based NDA techniques, measured counts in a full energy peak can be used to estimate the activity of a radionuclide, independently of other counting trials. However, in the case of the TGS each "view" is a full spectral grab (each a counting trial), and each scan consists of 150 spectral grabs in the transmission and emission scans per vertical layer of the item. The set of views in a complete scan are then used to solve for the radionuclide activities on a voxel-by-voxel basis, over 16 layers of a 10x10 voxel grid. Thus, the raw count data are no longer independent trials, but rather constitute input to a matrix solution for the emission image values at the various locations inside the item volume used in the reconstruction. So the validity of the methods used to estimate MDA for an imaging technique such as TGS warrants close scrutiny, because the pair-counting concept of Currie is not directly applicable.
One can also raise questions as to whether the TGS, along with other image reconstruction techniques which heavily intertwine data, is a suitable method if one expects to measure samples whose activities are at or just above MDA levels. The paper examines methods used to estimate MDAs for a TGS system, and explores possible solutions that can be rigorously defended.
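The pair-counting convention the paper questions is the conventional Currie MDA, commonly written as L_D = 2.71 + 4.65*sqrt(B) counts over a measured background B and then scaled to activity. A sketch with illustrative numbers (not from a real TGS configuration):

```python
import math

def currie_mda(blank_counts, efficiency, gamma_yield, live_time_s):
    """Conventional Currie MDA (Bq) for a non-imaging gamma measurement:
    L_D = 2.71 + 4.65*sqrt(B) counts over the blank/background B, scaled
    by detection efficiency, gamma yield, and live time. Numbers below
    are illustrative, not from a real TGS configuration."""
    l_d = 2.71 + 4.65 * math.sqrt(blank_counts)   # detection limit, counts
    return l_d / (efficiency * gamma_yield * live_time_s)

mda = currie_mda(blank_counts=400, efficiency=0.05, gamma_yield=0.85,
                 live_time_s=600)
```

The paper's point is that this formula presumes independent counting trials, an assumption the TGS matrix reconstruction breaks.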
Achieving metrological precision limits through postselection
NASA Astrophysics Data System (ADS)
Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos
2017-01-01
Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.
Identification of Patient Zero in Static and Temporal Networks: Robustness and Limitations
NASA Astrophysics Data System (ADS)
Antulov-Fantulin, Nino; Lančić, Alen; Šmuc, Tomislav; Štefančić, Hrvoje; Šikić, Mile
2015-06-01
Detection of patient zero can give new insights to epidemiologists about the nature of first transmissions into a population. In this Letter, we study the statistical inference problem of detecting the source of epidemics from a snapshot of spreading on an arbitrary network structure. By using exact analytic calculations and Monte Carlo estimators, we demonstrate the detectability limits for the susceptible-infected-recovered model, which primarily depend on the spreading process characteristics. Finally, we demonstrate the applicability of the approach in the case of a simulated sexually transmitted infection spreading over an empirical temporal network of sexual interactions.
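The Monte Carlo estimator idea can be sketched on a toy graph: simulate epidemics from each candidate source and score how often the simulation reproduces the observed snapshot (an SI process and exact snapshot matching here, deliberately simpler than the Letter's SIR treatment):

```python
import random

def simulate_si(graph, source, beta, steps, rng):
    """One susceptible-infected epidemic from `source`; each infected
    node infects each susceptible neighbour with probability beta per step."""
    infected = {source}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in graph[u]:
                if v not in infected and rng.random() < beta:
                    new.add(v)
        infected |= new
    return infected

def source_scores(graph, snapshot, beta, steps, n_sim=400, seed=7):
    """Monte Carlo source score: the fraction of simulated epidemics that
    exactly reproduce the observed snapshot (the Letter uses SIR dynamics
    and softer comparisons; this is a deliberately small sketch)."""
    rng = random.Random(seed)
    return {cand: sum(simulate_si(graph, cand, beta, steps, rng) == snapshot
                      for _ in range(n_sim)) / n_sim
            for cand in sorted(snapshot)}

# path graph 0-1-2-3-4 with observed snapshot {1, 2, 3}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
scores = source_scores(graph, {1, 2, 3}, beta=0.5, steps=2)
best = max(scores, key=scores.get)
```

On this path graph the central node reproduces the snapshot far more often than the boundary nodes, which is the intuition behind likelihood-based source detection.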
Of Detection Limits and Effective Mitigation: The Use of Infrared Cameras for Methane Leak Detection
NASA Astrophysics Data System (ADS)
Ravikumar, A. P.; Wang, J.; McGuire, M.; Bell, C.; Brandt, A. R.
2017-12-01
Mitigating methane emissions, a short-lived and potent greenhouse gas, is critical to limiting global temperature rise to two degrees Celsius as outlined in the Paris Agreement. A major source of anthropogenic methane emissions in the United States is the oil and gas sector. To this effect, state and federal governments have recommended the use of optical gas imaging (OGI) systems in periodic leak detection and repair (LDAR) surveys to detect fugitive emissions, or leaks. The most commonly used OGI systems are infrared cameras. In this work, we systematically evaluate the limits of infrared (IR) camera-based OGI systems for use in methane leak detection programs. We analyze the effect of various parameters that influence the minimum detectable leak rates of infrared cameras. Blind leak detection tests were carried out at the Department of Energy's MONITOR natural gas test facility in Fort Collins, CO. Leak sources included natural gas wellheads, separators, and tanks. With an EPA-mandated 60 g/hr leak detection threshold for IR cameras, we test leak rates ranging from 4 g/hr to over 350 g/hr at imaging distances between 5 ft and 70 ft from the leak source. We perform these experiments over the course of a week, encompassing a wide range of wind and weather conditions. Using repeated measurements at a given leak rate and imaging distance, we generate detection probability curves as a function of leak size for various imaging distances and measurement conditions. In addition, we estimate the median detection threshold (the leak size at which the probability of detection is 50%) under various scenarios to reduce uncertainty in mitigation effectiveness. Preliminary analysis shows that the median detection threshold varies from 3 g/hr at an imaging distance of 5 ft to over 150 g/hr at 50 ft (ambient temperature: 80 F, winds < 4 m/s).
Results from this study can be directly used to improve OGI based LDAR protocols and reduce uncertainty in estimated mitigation effectiveness. Furthermore, detection limits determined in this study can be used as standards to compare new detection technologies.
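The median detection threshold described above is just the 50% crossing of the empirical detection probability curve. A minimal sketch with hypothetical curve values (the paper derives its curves from repeated blind tests at each imaging distance):

```python
def median_detection_threshold(leak_rates, probs):
    """Leak size at which the empirical detection probability curve
    crosses 0.5, by linear interpolation between test points."""
    pairs = list(zip(leak_rates, probs))
    for (x0, p0), (x1, p1) in zip(pairs, pairs[1:]):
        if p0 < 0.5 <= p1:
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None   # curve never crosses 0.5 in the tested range

# hypothetical detection probabilities vs leak rate (g/hr) at one distance
rates = [5, 25, 50, 100, 200]
probs = [0.05, 0.30, 0.55, 0.85, 0.98]
threshold = median_detection_threshold(rates, probs)
```

Repeating this at each imaging distance yields the threshold-versus-distance behaviour the abstract reports.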
True detection limits in an experimental linearly heteroscedastic system. Part 2
NASA Astrophysics Data System (ADS)
Voigtman, Edward; Abraham, Kevin T.
2011-11-01
Despite markedly different processing of the experimental fluorescence detection data presented in Part 1, essentially the same estimates were obtained for the true theoretical Currie decision levels (YC and XC) and true Currie detection limits (YD and XD). The obtained experimental values, for 5% probability of false positives and 5% probability of false negatives, were YC = 56.0 mV, YD = 125 mV, XC = 0.132 μg/mL and XD = 0.293 μg/mL. For 5% probability of false positives and 1% probability of false negatives, the obtained detection limits were YD = 158 mV and XD = 0.371 μg/mL. Furthermore, by using bootstrapping methodology on the experimental data for the standards and the analytical blank, it was possible to validate previously published experimental-domain expressions for the decision levels (yC and xC) and detection limits (yD and xD). This was demonstrated by testing the generated decision levels and detection limits for their performance with regard to false positives and false negatives. In every case, the obtained numbers of false negatives and false positives were as specified a priori.
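The bootstrapping idea can be sketched in a simplified homoscedastic form: resample the measured blank, recompute the experimental-domain decision level each time, and summarize the resulting distribution (the paper's heteroscedastic expressions are more elaborate; the blank readings below are invented):

```python
import random
from statistics import NormalDist, stdev

def bootstrap_decision_level(blank, p_fp=0.05, n_boot=2000, seed=1):
    """Bootstrap distribution of the decision level y_C = z_p * s_blank
    (a simplified homoscedastic sketch of the validation idea; the paper
    bootstraps its heteroscedastic experimental-domain expressions)."""
    rng = random.Random(seed)
    z_p = NormalDist().inv_cdf(1 - p_fp)
    levels = sorted(z_p * stdev([rng.choice(blank) for _ in blank])
                    for _ in range(n_boot))
    return levels[len(levels) // 2]   # bootstrap median of y_C

# hypothetical blank readings (mV, baseline-subtracted)
blank = [0.10, -0.30, 0.20, 0.05, -0.15, 0.25, -0.10, 0.00, 0.12, -0.22]
y_c = bootstrap_decision_level(blank)
```

Applying each bootstrapped decision level back to held-out blank measurements gives the observed false-positive rate used for validation.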
Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.
Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi
2013-12-01
Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Data generated by mass spectrometry commonly have many missing values, which result when a compound is absent from a sample or is present at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation of a mixture model to those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point-mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates from the mixture model were unbiased unless all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
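The censoring component of both models can be illustrated with a left-censored normal log-likelihood: observed values contribute the density, and each nondetect contributes the CDF at the detection limit. A sketch with sigma assumed known and invented data (the AFT model in the paper estimates all parameters, typically on a log scale):

```python
import math
from statistics import NormalDist

def censored_loglik(mu, sigma, observed, n_censored, lod):
    """Left-censored normal log-likelihood: observed values contribute the
    density, each nondetect the CDF at the detection limit."""
    nd = NormalDist(mu, sigma)
    ll = sum(math.log(nd.pdf(x)) for x in observed)
    return ll + n_censored * math.log(nd.cdf(lod))

def censored_mean_mle(observed, n_censored, lod, sigma, lo=-10.0, hi=10.0):
    """Golden-section maximisation over mu; the likelihood is unimodal in mu.
    Sigma is assumed known for this sketch."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(200):
        c, d = b - g * (b - a), a + g * (b - a)
        if censored_loglik(c, sigma, observed, n_censored, lod) < \
           censored_loglik(d, sigma, observed, n_censored, lod):
            a = c
        else:
            b = d
    return (a + b) / 2

# five measured intensities plus three nondetects below LOD = 1.0
mu_hat = censored_mean_mle(observed=[1.2, 1.5, 2.0, 1.1, 1.8],
                           n_censored=3, lod=1.0, sigma=0.6)
```

The estimate sits below the naive mean of the observed values because the censored observations pull the fitted mean down, which is precisely the bias the naive complete-case mean fails to correct.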
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods.
The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
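The core of such models is a likelihood in which an all-zero detection history can arise either from absence or from missed detection. A single-scale, single-season sketch with method-specific detection probabilities (hypothetical values; the models above add the second spatial scale):

```python
import math

def occupancy_loglik(psi, p_methods, histories):
    """Single-season occupancy likelihood with method-specific detection:
    an occupied site yields its detection history with probability
    prod(p or 1-p); an all-zero history can also come from absence (1-psi).
    A single-scale sketch of the two-scale model described above."""
    ll = 0.0
    for h in histories:
        p_hist = 1.0
        for detected, p in zip(h, p_methods):
            p_hist *= p if detected else (1.0 - p)
        prob = psi * p_hist + (0.0 if any(h) else (1.0 - psi))
        ll += math.log(prob)
    return ll

# four sites, three detection methods (hypothetical histories and rates)
histories = [(1, 0, 1), (0, 0, 0), (1, 1, 0), (0, 1, 0)]
ll = occupancy_loglik(psi=0.7, p_methods=(0.5, 0.4, 0.6), histories=histories)
```

Maximizing this likelihood over psi and the method-specific p values is what lets multi-method data separate true absence from detection failure.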
McNew, Lance B.; Handel, Colleen M.
2015-01-01
Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods, however, ignore spatial and temporal variation in species detection probabilities related to survey or site conditions, which may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness, and to assess whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multi-species occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness.
Overall, our results indicate that neglecting the effects of site covariates on species detection probabilities may lead to significant bias in estimation of species richness, as well as the inferred relationships between community size and environmental covariates.
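One of the single-visit estimators evaluated, Chao1, has a closed form built from singleton and doubleton frequencies. A sketch with hypothetical per-species detection counts:

```python
from collections import Counter

def chao1(detections_per_species):
    """Chao1 richness estimate: S_obs + f1^2/(2*f2), where f1 and f2 are
    the numbers of species seen exactly once and exactly twice (one of the
    single-visit estimators found biased under variable detection)."""
    s_obs = len(detections_per_species)
    freq = Counter(detections_per_species)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected variant
    return s_obs + f1 * f1 / (2.0 * f2)

# hypothetical detection counts for the 8 species observed at a site
estimate = chao1([1, 1, 1, 2, 2, 3, 5, 8])
```

Because the correction depends only on rare-species frequencies, it cannot account for detection probabilities that vary with site covariates, which is the source of the bias reported above.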
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is treated in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved by using a Bayesian approach. The spatial distributions of the optical properties of the imaged target are estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
Specific determination of bromate in bread by ion chromatography with ICP-MS.
Akiyama, Takumi; Yamanaka, Michiko; Date, Yukiko; Kubota, Hiroki; Nagaoka, Megumi Hamano; Kawasaki, Yoko; Yamazaki, Takeshi; Yomota, Chikako; Maitani, Tamio
2002-12-01
A sensitive method for detecting bromate in bread by ion chromatography with inductively-coupled plasma mass spectrometry (IC/ICP-MS) was developed. Bromate was extracted from bread with water. The clean-up procedure included a 0.2 micron filter, a C18 cartridge for defatting, a silver cartridge to remove halogen anions, a centrifugal ultrafiltration unit to remove proteins, and a cation-exchange cartridge to remove silver ions. A 500 microL sample solution was applied to IC/ICP-MS. The detection limit and the quantitation limit of bromate in the solution were 0.3 ng/mL and 1.0 ng/mL, expressed as HBrO3, respectively, which corresponded to 2 ng/g and 5 ng/g, respectively, in bread. Recovery of bromate was about 90%, and the CV was about 2%. Based on the detection limit in solution and recovery from bread, the detection limit of bromate in bread was estimated to be 2 ng/g.
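The conversion from a solution detection limit to one in the bread matrix follows from the extraction arithmetic. A sketch in which the extract volume, sample mass, and recovery are illustrative assumptions chosen to reproduce the quoted 2 ng/g (the paper's actual preparation details differ):

```python
def lod_in_matrix(lod_solution_ng_per_ml, extract_volume_ml,
                  sample_mass_g, recovery):
    """Convert a solution detection limit (ng/mL) to a matrix detection
    limit (ng/g) via the extraction dilution factor and recovery. The
    figures used below are illustrative assumptions, not the paper's
    documented preparation."""
    return lod_solution_ng_per_ml * extract_volume_ml / (sample_mass_g * recovery)

lod_bread = lod_in_matrix(0.3, extract_volume_ml=30.0,
                          sample_mass_g=5.0, recovery=0.9)
```

Dividing by the recovery inflates the matrix limit to account for analyte lost during clean-up, which is why the abstract bases the bread LOD on both the solution LOD and the measured recovery.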
Dong, Zehua; Ye, Shengbo; Gao, Yunze; Fang, Guangyou; Zhang, Xiaojuan; Xue, Zhongjun; Zhang, Tao
2016-01-01
The thickness estimation of the top surface layer and surface layer, as well as the detection of road defects, are of great importance to the quality conditions of asphalt pavement. Although ground penetrating radar (GPR) methods have been widely used in non-destructive detection of pavements, the thickness estimation of the thin top surface layer is still a difficult problem due to the limitations of GPR resolution and the similar permittivity of asphalt sub-layers. Moreover, the detection of some road defects, including inadequate compaction and delamination at interfaces, requires further practical study. In this paper, a newly-developed vehicle-mounted GPR detection system is introduced. We used a horizontal high-pass filter and a modified layer localization method to extract the underground layers. In addition, based on laboratory experiments and simulation analysis, we proposed theoretical methods for detecting the degree of compaction and delamination at the interface, respectively. Finally, a field test was carried out and the estimated results showed a satisfactory accuracy of the system and methods. PMID:27929409
Optical detection of glyphosate in water
NASA Astrophysics Data System (ADS)
de Góes, R. E.; Possetti, G. R. C.; Muller, M.; Fabris, J. L.
2017-04-01
This work shows preliminary results of the detection of glyphosate in water using optical fiber spectroscopy. A colloid with citrate-capped silver nanoparticles was employed as the substrate for the measurements. A cross-analysis between optical absorption and inelastic scattering evidenced a controlled aggregation of the sample constituents, leading to the possibility of quantitative detection of the analyte. The estimated limit of detection for glyphosate in water with the proposed sensing scheme was about 1.7 mg/L.
Stochastic fluctuations and the detectability limit of network communities.
Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo
2013-12-01
We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdős–Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior has also been found with further modularity-optimization methods and with methods based on different heuristics, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.
A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.
Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A
2018-02-08
Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model and exploits the benefits of using multiple measurements obtained with a radiation detector from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
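The multiple-measurement idea can be sketched as a grid search: a point source at depth d produces a lateral count profile on the surface governed by exp(-mu*r)/r^2 attenuation, and the estimated depth is the value whose normalized profile best matches the measurements (a simplified stand-in for the paper's approximate 3D linear attenuation model; all numbers are illustrative):

```python
import math

def surface_profile(amp, depth, mu, offsets):
    """Counts at lateral surface offsets x from a point source at `depth`,
    with exp(-mu*r)/r^2 attenuation (simplified stand-in for the paper's
    approximate 3D linear attenuation model)."""
    return [amp * math.exp(-mu * math.hypot(x, depth)) / (x * x + depth * depth)
            for x in offsets]

def estimate_depth(counts, offsets, mu, depth_grid):
    """Pick the depth whose best-scaled model profile matches the data."""
    best_d, best_err = None, float("inf")
    for d in depth_grid:
        m = surface_profile(1.0, d, mu, offsets)
        scale = sum(c * mi for c, mi in zip(counts, m)) / sum(mi * mi for mi in m)
        err = sum((c - scale * mi) ** 2 for c, mi in zip(counts, m))
        if err < best_err:
            best_d, best_err = d, err
    return best_d

offsets = [1.0, 2.0, 3.0, 4.0, 5.0]                 # cm, illustrative
mu = 0.1                                            # 1/cm, assumed attenuation
counts = surface_profile(500.0, 3.0, mu, offsets)   # noise-free synthetic data
d_hat = estimate_depth(counts, offsets, mu, [i * 0.5 for i in range(1, 21)])
```

Normalizing by the fitted scale makes the depth estimate independent of the unknown source activity, which is why multiple surface positions suffice.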
Ship heading and velocity analysis by wake detection in SAR images
NASA Astrophysics Data System (ADS)
Graziano, Maria Daniela; D'Errico, Marco; Rufino, Giancarlo
2016-11-01
With the aim of ship-route estimation, a wake detection method is developed and applied to COSMO/SkyMed and TerraSAR-X Stripmap SAR images over the Gulf of Naples, Italy. In order to mitigate the intrinsic limitations of threshold logic, the algorithm identifies wake features according to hydrodynamic theory. A post-detection validation phase is performed to classify the features as real wake structures by means of merit indexes defined in the intensity domain. After wake reconstruction, ship heading is evaluated on the basis of the turbulent wake direction, and ship velocity is estimated by both the azimuth shift technique and the Kelvin pattern wavelength technique. The method is tested on 34 ship wakes identified by visual inspection in both HH and VV images at different incidence angles. For all wakes, no missed detections are reported, and at least the turbulent wake and one narrow-V wake are correctly identified, with ship heading successfully estimated. The azimuth shift method is also applied to estimate velocity for the 10 ships whose routes had sufficient angular separation from the satellite ground track. In one case, ship velocity is successfully estimated with both methods, which agree within 14%.
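The azimuth shift technique exploits the fact that a target with slant-range velocity v_r appears displaced in azimuth by dx = R*v_r/V for platform velocity V and slant range R, so v_r = dx*V/R. A sketch with illustrative satellite values (not figures from the paper):

```python
def radial_velocity_from_azimuth_shift(shift_m, platform_velocity_ms,
                                       slant_range_m):
    """SAR azimuth-shift ("train off the track") velocity estimate: a
    target with slant-range velocity v_r is displaced in azimuth by
    dx = R*v_r/V, so v_r = dx*V/R. All values below are illustrative."""
    return shift_m * platform_velocity_ms / slant_range_m

v_r = radial_velocity_from_azimuth_shift(shift_m=300.0,
                                         platform_velocity_ms=7600.0,
                                         slant_range_m=620000.0)
```

The shift is measured in the image as the offset between the detected ship and the apex of its reconstructed wake, which is why reliable wake detection is a prerequisite for this velocity estimate.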
Cryogen-free heterodyne-enhanced mid-infrared Faraday rotation spectrometer
Wang, Yin; Nikodem, Michal; Wysocki, Gerard
2013-01-01
A new detection method for Faraday rotation spectra of paramagnetic molecular species is presented. Near shot-noise limited performance in the mid-infrared is demonstrated using a heterodyne-enhanced Faraday rotation spectroscopy (H-FRS) system without any cryogenic cooling. Theoretical analysis is performed to estimate the ultimate sensitivity to polarization rotation for both heterodyne and conventional FRS. Sensing of nitric oxide (NO) has been performed with an H-FRS system based on a thermoelectrically cooled 5.24 μm quantum cascade laser (QCL) and a mercury-cadmium-telluride photodetector. The QCL relative intensity noise that dominates at low frequencies is largely avoided by performing the heterodyne detection in the radio frequency range. H-FRS exhibits a total noise level of only 3.7 times the fundamental shot noise. The achieved sensitivity to polarization rotation of 1.8 × 10^-8 rad/Hz^1/2 is only 5.6 times higher than the ultimate theoretical sensitivity limit estimated for this system. The path- and bandwidth-normalized NO detection limit of 3.1 ppbv·m/Hz^1/2 was achieved using the R(17/2) transition of NO at 1906.73 cm^-1. PMID:23388967
Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa
NASA Astrophysics Data System (ADS)
Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.
2014-12-01
Evaluating the performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations of photosynthesis are amongst the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis vary between four Brassica rapa genotypes: the stomatal response to vapor pressure deficit, the maximum carboxylation rate by Rubisco (Ac), the rate of photosynthetic electron transport (Aj), and triose phosphate use (At). Leaf gas-exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate-limiting factors for plant growth. A Bayesian framework integrated in TREES used net assimilation (A) as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of the model estimates eliminated. Next, parameter estimation enabled the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included the maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown not to play a major role in limiting A in many plants, the inclusion of At in models was evaluated using the deviance information criterion (DIC). Outlier detection narrowed the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show that genotypes vary in how these limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 μmol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497.
The added complexity of the TPU limitation did not improve model performance in the genotypes assessed, based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis, genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.
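The three co-limiting rates (Ac, Aj, At) can be illustrated with a textbook Farquhar-type sketch. This is not the TREES implementation; the kinetic constants are standard 25 °C literature values assumed here for illustration:

```python
def farquhar_a(ci, vcmax, j, tpu,
               gamma_star=42.75, kc=404.9, ko=278.4, o=210.0):
    """Gross assimilation as the minimum of three limiting rates.

    ci, kc, gamma_star in umol/mol; vcmax, j, tpu in umol m-2 s-1;
    ko, o in mmol/mol (assumed standard 25 C kinetic constants).
    """
    km = kc * (1.0 + o / ko)                                      # effective Michaelis constant
    ac = vcmax * (ci - gamma_star) / (ci + km)                    # Rubisco-limited
    aj = (j / 4.0) * (ci - gamma_star) / (ci + 2.0 * gamma_star)  # electron-transport-limited
    at = 3.0 * tpu                                                # triose-phosphate-use-limited
    return min(ac, aj, at)
```

At typical intercellular CO2, which rate wins depends on the parameter values, which is exactly the genotype contrast the study estimates.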
Richman, Nadia I.; Gibbons, James M.; Turvey, Samuel T.; Akamatsu, Tomonari; Ahmed, Benazir; Mahabub, Emile; Smith, Brian D.; Jones, Julia P. G.
2014-01-01
Detection of animals during visual surveys is rarely perfect or constant, and failure to account for imperfect detectability affects the accuracy of abundance estimates. Freshwater cetaceans are among the most threatened group of mammals, and visual surveys are a commonly employed method for estimating population size despite concerns over imperfect and unquantified detectability. We used a combined visual-acoustic survey to estimate detectability of Ganges River dolphins (Platanista gangetica gangetica) in four waterways of southern Bangladesh. The combined visual-acoustic survey resulted in consistently higher detectability than a single observer-team visual survey, thereby improving power to detect trends. Visual detectability was particularly low for dolphins close to meanders where these habitat features temporarily block the view of the preceding river surface. This systematic bias in detectability during visual-only surveys may lead researchers to underestimate the importance of heavily meandering river reaches. Although the benefits of acoustic surveys are increasingly recognised for marine cetaceans, they have not been widely used for monitoring abundance of freshwater cetaceans due to perceived costs and technical skill requirements. We show that acoustic surveys are in fact a relatively cost-effective approach for surveying freshwater cetaceans, once it is acknowledged that methods that do not account for imperfect detectability are of limited value for monitoring. PMID:24805782
Dou, Z; Chen, J; Jiang, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To understand the distribution of population viral load (PVL) data in HIV-infected men who have sex with men (MSM), fit a distribution function, and explore appropriate parameters for summarizing PVL. Methods: The detection limit of viral load (VL) was ≤50 copies/ml. Box-Cox transformation and normal distribution tests were used to describe the general distribution characteristics of the original and transformed PVL data; a stable distribution function was then fitted and assessed with a goodness-of-fit test. Results: The original PVL data followed a skewed distribution with a coefficient of variation of 622.24%, and had a multimodal distribution after Box-Cox transformation with an optimal parameter (λ) of -0.11. The distribution of the PVL data over the detection limit was skewed and heavy-tailed when transformed by Box-Cox with optimal λ = 0. The transformed data over the detection limit matched a stable distribution (SD) function (α = 1.70, β = -1.00, γ = 0.78, δ = 4.03). Conclusions: The original PVL data included censored values below the detection limit, and the data over the detection limit had a non-normal distribution with a large degree of variation. When the proportion of censored data is large, it is inappropriate to replace the censored values with half the detection limit. The log-transformed data over the detection limit fitted the SD. The median (M) and interquartile range (IQR) of the log-transformed data can be used to describe the central tendency and dispersion of the data over the detection limit.
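The λ-selection step behind the reported optimal values (λ = -0.11, λ = 0) can be sketched as a grid search maximizing the Box-Cox profile log-likelihood; a minimal version, not the authors' code:

```python
import math

def box_cox(x, lam):
    # Box-Cox power transform; lam = 0 reduces to the natural log.
    if lam == 0.0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1.0) / lam for v in x]

def box_cox_loglik(x, lam):
    # Profile log-likelihood of the normal model for the transformed
    # data, including the Jacobian term (lam - 1) * sum(log x).
    y = box_cox(x, lam)
    n = len(y)
    mean = sum(y) / n
    var = sum((v - mean) ** 2 for v in y) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in x)

def best_lambda(x, grid):
    # Pick the grid value of lambda that maximizes the log-likelihood.
    return max(grid, key=lambda lam: box_cox_loglik(x, lam))
```

For data that are approximately lognormal, the search returns λ near 0, i.e. the log transform, matching the λ = 0 result reported above for the data over the detection limit.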
Sabo, M; Malásková, M; Matejčík, S
2014-10-21
We present a new highly sensitive technique for the detection of explosives directly from surfaces using laser desorption-corona discharge-ion mobility spectrometry (LD-CD-IMS). The LD source was developed from laser diode modules (LDMs), and the technique was tested using three different LDMs (445, 532 and 665 nm). The explosives were detected directly from the surface without any further preparation. We discuss the mechanism of the LD and the limitations of this technique, such as desorption time, transport time and desorption area. After evaluating the experimental data, we estimated the potential limits of detection of this method to be 0.6 pg for TNT, 2.8 pg for RDX and 8.4 pg for PETN.
Fu, Liya; Wang, You-Gan
2011-02-15
Environmental data usually include measurements, such as water-quality data, that fall below detection limits because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in statistical analysis of such data. However, it is well known that analyzing a data set with detection limits is challenging, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justification of distributions is often not possible when the data are correlated and a large proportion of the data falls below detection limits. The extent of bias is usually unknown. To draw valid conclusions and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To account for temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water-quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
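One way to see how rank-based methods accommodate nondetects is through Gehan-type pairwise scores, where a value censored at its detection limit is treated as the interval [0, DL] and ambiguous comparisons score zero. This illustrates the censored-ranking idea only; the paper's estimating-function machinery is considerably more involved:

```python
def gehan_scores(obs):
    # obs: list of (lo, hi) intervals; a detected value v is (v, v),
    # a value censored at detection limit dl is (0.0, dl). The Gehan
    # score of observation i is (# observations clearly below it) minus
    # (# clearly above it); overlapping intervals contribute zero.
    scores = []
    for lo_i, hi_i in obs:
        s = 0
        for lo_j, hi_j in obs:
            if lo_i > hi_j:
                s += 1
            elif hi_i < lo_j:
                s -= 1
        scores.append(s)
    return scores
```

A nondetect below several detection limits simply ties with every value it cannot be ordered against, so no substitution value is ever needed.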
Upper limits on the rates of BNS and NSBH mergers from Advanced LIGO's first observing run
NASA Astrophysics Data System (ADS)
Lackey, Benjamin; LIGO Collaboration
2017-01-01
Last year the Advanced LIGO detectors finished their first observing run and detected two binary black hole mergers with high significance, but no binary neutron star (BNS) or neutron-star-black-hole (NSBH) mergers. We present upper limits on the rates of BNS and NSBH mergers in the universe based on their non-detection with two modeled searches. With zero detections, the upper limits depend on the choice of prior, but using a conservative prior we find 90% upper limits of 12,000 Gpc−3 yr−1 for BNS mergers and 1,000-3,000 Gpc−3 yr−1 for NSBH mergers, depending on the black hole mass. Comparing these upper limits to several rate predictions in the literature, we find our upper limits are close to the more optimistic rate estimates. Further non-detections in the second and third observing runs should be able to rule out several rate predictions. Using the observed rate of short gamma-ray bursts (GRBs), we can also place lower limits on the average beaming angle of short GRBs. Assuming all short GRBs come from BNS mergers, we find a 90% lower limit of 1-4 degrees on the GRB beaming angle, with the range coming from the uncertainty in short GRB rates.
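For zero detections the simplest prior-free sketch is the frequentist Poisson upper limit: with surveyed volume-time ⟨VT⟩, the C-confidence limit solves exp(-R⟨VT⟩) = 1 - C, giving R90 ≈ 2.3/⟨VT⟩. The limits above additionally fold in priors and search sensitivity, so this is illustrative only:

```python
import math

def rate_upper_limit(vt, confidence=0.90):
    # Frequentist Poisson upper limit on an event rate given zero
    # detections over surveyed volume-time vt: solve exp(-R*vt) = 1 - C.
    return -math.log(1.0 - confidence) / vt
```

At 90% confidence the limit is ln(10)/⟨VT⟩ ≈ 2.30/⟨VT⟩, so the limit tightens inversely with accumulated volume-time, which is why further non-detections in later observing runs strengthen the constraint.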
Feng, Hanzhou; Bondi, Robert W; Anderson, Carl A; Drennen, James K; Igne, Benoît
2017-08-01
Polymorph detection is critical for ensuring pharmaceutical product quality in drug substances exhibiting polymorphism. Conventional analytical techniques such as X-ray powder diffraction and solid-state nuclear magnetic resonance are utilized primarily for characterizing the presence and identity of specific polymorphs in a sample. These techniques have encountered challenges in analyzing the constitution of polymorphs in the presence of other components commonly found in pharmaceutical dosage forms. Laborious sample preparation procedures are usually required to achieve satisfactory data interpretability. There is a need for alternative techniques capable of probing pharmaceutical dosage forms rapidly and nondestructively, which is dictated by the practical requirements of applications such as quality monitoring on production lines or quantifying product shelf life. The sensitivity of transmission Raman spectroscopy for detecting polymorphs in final tablet cores was investigated in this work. Carbamazepine was chosen as a model drug: form III is the commercial polymorph, whereas form I is an undesired polymorph that requires effective detection. The concentration of form I in a direct-compression tablet formulation containing 20% w/w of carbamazepine, 74.00% w/w of fillers (mannitol and microcrystalline cellulose), and 6% w/w of croscarmellose sodium, silicon dioxide, and magnesium stearate was estimated using transmission Raman spectroscopy. Quantitative models were generated and optimized using multivariate regression and data preprocessing. Prediction uncertainty was estimated for each validation sample by accounting for all the main variables contributing to the prediction. Multivariate detection limits were calculated based on statistical hypothesis testing. The transmission Raman spectroscopic model had an absolute prediction error of 0.241% w/w for the independent validation set. The method detection limit was estimated at 1.31% w/w.
The results demonstrate that transmission Raman spectroscopy is a sensitive tool for polymorph detection in pharmaceutical tablets.
Digital photo monitoring for tree crown
Neil Clark; Sang-Mook Lee
2007-01-01
Assessing change in the amount of foliage within a tree's crown is the goal of crown transparency estimation, a component in many forest health assessment programs. Many sources of variability limit analysis and interpretation of crown condition data. Increased precision is needed to detect more subtle changes that are important for detection of health problems…
NASA Technical Reports Server (NTRS)
Scholtz, P.; Smyth, P.
1992-01-01
This article describes an investigation of a statistical hypothesis testing method for detecting changes in the characteristics of an observed time series. The work is motivated by the need for practical automated methods for on-line monitoring of Deep Space Network (DSN) equipment to detect failures and changes in behavior. In particular, on-line monitoring of the motor current in a DSN 34-m beam waveguide (BWG) antenna is used as an example. The algorithm is based on a measure of the information-theoretic distance between two autoregressive models: one estimated with data from a fixed reference window and one estimated with data from a sliding test window. The Hinkley cumulative sum stopping rule is utilized to detect a change in the mean of this distance measure, corresponding to the detection of a change in the underlying process. The basic theory behind this two-model test is presented, and the problem of practical implementation is addressed, examining windowing methods, model estimation, and detection parameter assignment. Results from five fault-transition simulations are presented to show the possible limitations of the detection method, and suggestions for future implementation are given.
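The Hinkley cumulative-sum stopping rule applied to the distance series can be sketched as follows, assuming the pre- and post-change means are known; parameter names are illustrative:

```python
def hinkley_cusum(xs, mu0, mu1, threshold):
    # Hinkley's cumulative-sum stopping rule for detecting a jump in
    # the mean from mu0 to mu1: accumulate deviations from the midpoint
    # and alarm when the cusum rises more than `threshold` above its
    # running minimum.
    s, s_min = 0.0, 0.0
    for k, x in enumerate(xs):
        s += x - (mu0 + mu1) / 2.0
        s_min = min(s_min, s)
        if s - s_min > threshold:
            return k  # sample index at which the change is declared
    return None  # no change detected
```

The threshold trades detection delay against false-alarm rate, which is the "detection parameter assignment" problem the article examines.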
Detecting local diversity-dependence in diversification.
Xu, Liang; Etienne, Rampal S
2018-04-06
Whether there are ecological limits to species diversification is a hotly debated topic. Molecular phylogenies show slowdowns in lineage accumulation, suggesting that speciation rates decline with increasing diversity. A maximum-likelihood (ML) method to detect diversity-dependent (DD) diversification from phylogenetic branching times exists, but it assumes that diversity-dependence is a global phenomenon and therefore ignores that the underlying species interactions are mostly local, and not all species in the phylogeny co-occur locally. Here, we explore whether this ML method based on the nonspatial diversity-dependence model can detect local diversity-dependence, by applying it to phylogenies, simulated with a spatial stochastic model of local DD speciation, extinction, and dispersal between two local communities. We find that type I errors (falsely detecting diversity-dependence) are low, and the power to detect diversity-dependence is high when dispersal rates are not too low. Interestingly, when dispersal is high the power to detect diversity-dependence is even higher than in the nonspatial model. Moreover, estimates of intrinsic speciation rate, extinction rate, and ecological limit strongly depend on dispersal rate. We conclude that the nonspatial DD approach can be used to detect diversity-dependence in clades of species that live in not too disconnected areas, but parameter estimates must be interpreted cautiously. © 2018 The Author(s). Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
Vision-Based Detection and Distance Estimation of Micro Unmanned Aerial Vehicles
Gökçe, Fatih; Üçoluk, Göktürk; Şahin, Erol; Kalkan, Sinan
2015-01-01
Detection and distance estimation of micro unmanned aerial vehicles (mUAVs) is crucial for (i) the detection of intruder mUAVs in protected environments; (ii) sense-and-avoid purposes on mUAVs or on other aerial vehicles and (iii) multi-mUAV control scenarios, such as environmental monitoring, surveillance and exploration. In this article, we evaluate vision algorithms as alternatives for detection and distance estimation of mUAVs, since other sensing modalities entail certain limitations on the environment or on the distance. For this purpose, we test Haar-like features, histogram of gradients (HOG) and local binary patterns (LBP) using cascades of boosted classifiers. Cascaded boosted classifiers allow fast processing by performing detection tests at multiple stages, where only candidates passing the earlier, simpler stages are processed at the subsequent, more complex stages. We also integrate a distance estimation method into our system, utilizing geometric cues with support vector regressors. We evaluated each method on indoor and outdoor videos that were collected in a systematic way, and also on videos having motion blur. Our experiments show that, using boosted cascaded classifiers with LBP, near real-time detection and distance estimation of mUAVs are possible in about 60 ms indoors (1032×778 resolution) and 150 ms outdoors (1280×720 resolution) per frame, with a detection rate of 0.96 F-score. However, the cascaded classifiers using Haar-like features lead to better distance estimation since they can position the bounding boxes on mUAVs more accurately. On the other hand, our time analysis yields that the cascaded classifiers using HOG train and run faster than the other algorithms. PMID:26393599
Cheng, Hong; Macaluso, Maurizio; Vermund, Sten H.; Hook, Edward W.
2001-01-01
Published estimates of the sensitivity and specificity of PCR and ligase chain reaction (LCR) for detecting Chlamydia trachomatis are potentially biased because of study design limitations (confirmation of test results was limited to subjects who were PCR or LCR positive but culture negative). Relative measures of test accuracy are less prone to bias in incomplete study designs. We estimated the relative sensitivity (RSN) and relative false-positive rate (RFP) for PCR and LCR versus cell culture among 1,138 asymptomatic men and evaluated the potential bias of the RSN and RFP estimates. PCR and LCR testing of urine was compared to culture of urethral specimens. Discordant results (PCR or LCR positive, but culture negative) were confirmed by using a sequence including the other DNA amplification test, direct fluorescent antibody testing, and a DNA amplification test to detect chlamydial major outer membrane protein. The RSN estimates for PCR and LCR were 1.45 (95% confidence interval [CI] = 1.3 to 1.7) and 1.49 (95% CI = 1.3 to 1.7), respectively, indicating that both methods are more sensitive than culture. Very few false-positive results were found, indicating that the specificities of PCR, LCR, and culture are high. The potential bias in the RSN and RFP estimates was <5% and <20%, respectively. The estimation of bias is based on the most likely, and probably conservative, parameter settings. If the sensitivity of culture is between 60 and 65%, then the true sensitivity of PCR and LCR is between 90 and 97%. Our findings indicate that PCR and LCR are significantly more sensitive than culture, while the three tests have similar specificities. PMID:11682509
Baker, Ronald J.; Chepiga, Mary M.; Cauller, Stephen J.
2015-01-01
The Kaplan-Meier method of estimating summary statistics from left-censored data was applied in order to include nondetects (left-censored data) in median nitrate-concentration calculations. Median concentrations also were determined using three alternative methods of handling nondetects. Treatment of the 23 percent of samples that were nondetects had little effect on estimated median nitrate concentrations because method detection limits were mostly less than median values.
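The Kaplan-Meier treatment of nondetects rests on a flip trick: subtracting every value from a constant turns left-censored data into right-censored data, to which the standard product-limit estimator applies. A minimal sketch, with each nondetect represented by its detection limit:

```python
def km_mean_left_censored(obs, is_nd):
    # obs[i] is a measured concentration, or the detection limit when
    # is_nd[i] is True (a nondetect). Flipping x -> M - x turns left-
    # censoring into right-censoring, so the standard Kaplan-Meier
    # product-limit estimator applies; the mean is the area under the
    # estimated survival curve, flipped back at the end.
    flip = max(obs) + 1.0
    data = sorted(zip((flip - v for v in obs), is_nd))  # events sort before ties with censored
    at_risk = len(data)
    surv, area, prev_t = 1.0, 0.0, 0.0
    for t, censored in data:
        area += surv * (t - prev_t)  # accumulate area under the step function
        prev_t = t
        if not censored:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return flip - area
```

With no nondetects this reduces to the ordinary sample mean; with nondetects it uses only the ordering information in the detection limits, with no substitution values.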
Saikali, Melody; Tanios, Alain; Saab, Antoine
2017-11-21
The aim of the study was to evaluate the sensitivity and resource efficiency of a partially automated adverse event (AE) surveillance system for routine patient safety efforts in hospitals with limited resources. Twenty-eight automated triggers from the hospital information system's clinical and administrative databases identified cases that were then filtered by exclusion criteria per trigger and then reviewed by an interdisciplinary team. The system, developed and implemented using in-house resources, was applied for 45 days of surveillance, for all hospital inpatient admissions (N = 1107). Each trigger was evaluated for its positive predictive value (PPV). Furthermore, the sensitivity of the surveillance system (overall and by AE category) was estimated relative to incidence ranges in the literature. The surveillance system identified a total of 123 AEs among 283 reviewed medical records, yielding an overall PPV of 52%. The tool showed variable levels of sensitivity across and within AE categories when compared with the literature, with a relatively low overall sensitivity estimated between 21% and 44%. Adverse events were detected in 23 of the 36 AE categories defined by an established harm classification system. Furthermore, none of the detected AEs were voluntarily reported. The surveillance system showed variable sensitivity levels across a broad range of AE categories with an acceptable PPV, overcoming certain limitations associated with other harm detection methods. The number of cases captured was substantial, and none had been previously detected or voluntarily reported. For hospitals with limited resources, this methodology provides valuable safety information from which interventions for quality improvement can be formulated.
Venkateswarlu, Kambham; Rangareddy, Ardhgeri; Narasimhaiah, Kanaka; Sharma, Hemraj; Bandi, Naga Mallikarjuna Raja
2017-01-01
The main objective of the present study was to develop an RP-HPLC method for the estimation of Armodafinil in pharmaceutical dosage forms and to characterize its base hydrolytic products. Separation was carried out on a C18 column using a mobile phase of water and methanol (45:55% v/v). Eluents were detected at 220 nm at a flow rate of 1 ml/min. Stress studies were performed with milder conditions followed by stronger conditions so as to obtain sufficient degradation, around 20%. A total of five degradation products were detected and separated from the analyte. The linearity of the proposed method was investigated in the range of 20-120 µg/ml for Armodafinil. The detection limit and quantification limit were found to be 0.01183 µg/ml and 0.035 µg/ml, respectively. The precision (% RSD) was less than 2%, and recovery was between 98 and 102%. Armodafinil was found to be most sensitive to base hydrolysis, yielding its carboxylic acid as the degradant. The developed method is a stability-indicating assay, suitable for quantifying Armodafinil in the presence of possible degradants. The drug was sensitive to acid, base, and photolytic stress and resistant to thermal and oxidative stress.
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
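The core of ROS for a single detection limit can be sketched as below; the actual package handles multiple detection limits and provides plotting and summary utilities, so this sketch is illustrative only:

```python
from statistics import NormalDist
import math

def ros_impute(detects, n_censored):
    # Simplified regression on order statistics (ROS) for one detection
    # limit: all censored values rank below every detect. Fit
    # log(value) ~ normal quantile over the detects, then impute the
    # censored values from the fitted line at their plotting positions.
    n = len(detects) + n_censored
    nd = NormalDist()
    xs, ys = [], []
    for i, v in enumerate(sorted(detects)):
        p = (n_censored + i + 1 - 0.375) / (n + 0.25)  # Blom plotting position
        xs.append(nd.inv_cdf(p))
        ys.append(math.log(v))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return [math.exp(intercept + slope * nd.inv_cdf((i + 1 - 0.375) / (n + 0.25)))
            for i in range(n_censored)]
```

Summary statistics are then computed from the detects plus the imputed values, so individual imputed numbers are never reported as measurements.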
Increased efficacy for in-house validation of real-time PCR GMO detection methods.
Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H
2010-03-01
To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Decision limits were also calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that, for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
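The bias of the substitution treatments compared above is easy to demonstrate on a toy dataset (values and detection limit hypothetical): zero-substitution biases the mean low and detection-limit substitution biases it high.

```python
def substitution_means(values, dl):
    # Mean under three common substitution treatments of values below
    # the detection limit dl, alongside the true (uncensored) mean.
    def sub(repl):
        return sum(v if v >= dl else repl for v in values) / len(values)
    return {"true": sum(values) / len(values),
            "zero": sub(0.0), "dl": sub(dl), "half_dl": sub(dl / 2.0)}
```

Half-DL substitution lands between the two extremes, which is why it is only adequate when the laboratory's censoring convention is well understood and the censored fraction is modest.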
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
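For equal-length intervals the removal model has a simple likelihood: a bird is first detected in interval j (j = 0, 1, ...) with probability (1-p)^j * p. A grid-search MLE sketch; the unequal 3/2/5-min intervals described above would instead require a per-minute parameterization:

```python
import math

def removal_mle(counts):
    # Grid-search MLE for the removal model with equal-length count
    # intervals: counts[j] is the number of birds first detected in
    # interval j. The multinomial likelihood conditions on the bird
    # being detected at some point during the survey.
    def loglik(p):
        probs = [(1.0 - p) ** j * p for j in range(len(counts))]
        total = sum(probs)
        return sum(c * math.log(pr / total) for c, pr in zip(counts, probs))
    grid = [i / 1000.0 for i in range(1, 1000)]
    return max(grid, key=loglik)
```

With two intervals the MLE has the closed form p = 1 - c2/c1, so counts of 50 then 25 give p = 0.5; frequently singing species deplete the undetected pool quickly, matching the high detection probabilities reported for wrens and flycatchers.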
Jain, Deepika; Sheth, Heena; Bender, Filitsa H; Weisbord, Steven D; Green, Jamie A
2014-01-01
Studies have shown that a single-item question might be useful in identifying patients with limited health literacy. However, the utility of the approach has not been studied in patients receiving maintenance peritoneal dialysis (PD). We assessed health literacy in a cohort of 31 PD patients by administering the Rapid Estimate of Adult Literacy in Medicine (REALM) and a single-item health literacy (SHL) screening question "How confident are you filling out medical forms by yourself?" (Extremely, Quite a bit, Somewhat, A little bit, or Not at all). To determine the accuracy of the single-item question for detecting limited health literacy, we performed sensitivity and specificity analyses of the SHL and plotted the area under the receiver operating characteristic (AUROC) curve using the REALM as a reference standard. Using a cut-off of "Somewhat" or less confident, the sensitivity of the SHL for detecting limited health literacy was 80%, and the specificity was 88%. The positive likelihood ratio was 6.9. The SHL had an AUROC of 0.79 (95% confidence interval: 0.52 to 1.00). Our results show that the SHL could be effective in detecting limited health literacy in PD patients.
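The reported accuracy figures follow from a standard 2x2 screening table. The helper below reproduces them from one hypothetical table that is consistent with the abstract's 31 patients, 80% sensitivity, 88% specificity and positive likelihood ratio of 6.9 (the cell counts are a reconstruction, not published data):

```python
def screening_stats(tp, fn, fp, tn):
    """Sensitivity, specificity and positive likelihood ratio
    from true/false positive and negative counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1.0 - spec)
    return sens, spec, lr_pos

# hypothetical table: 5 patients with limited literacy, 26 without
sens, spec, lr = screening_stats(tp=4, fn=1, fp=3, tn=23)
```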
Al-Chokhachy, R.; Budy, P.; Conner, M.
2009-01-01
Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
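The power-versus-effort trade-off the authors quantify can be illustrated with a generic Monte Carlo sketch: simulate noisy abundance estimates under a fixed decline, test the trend, and count rejections. Everything here (lognormal observation error with a 30% CV, a one-sided z-test on the log-linear slope) is illustrative and is not the authors' bull trout simulation:

```python
import math
import random

def power_decline(n_years=6, total_decline=0.25, cv=0.30, reps=4000, seed=7):
    """Monte Carlo power to detect a decline by regressing log abundance
    estimates on year and applying a one-sided test on the slope."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cv * cv))        # lognormal sd on log scale
    slope_true = math.log(1.0 - total_decline) / (n_years - 1)
    years = list(range(n_years))
    xbar = sum(years) / n_years
    sxx = sum((t - xbar) ** 2 for t in years)
    z_crit = -1.645                                   # one-sided 5% level
    hits = 0
    for _ in range(reps):
        logs = [slope_true * t + rng.gauss(0.0, sigma) for t in years]
        ybar = sum(logs) / n_years
        slope = sum((t - xbar) * (y - ybar) for t, y in zip(years, logs)) / sxx
        resid = [y - ybar - slope * (t - xbar) for t, y in zip(years, logs)]
        se = math.sqrt(sum(r * r for r in resid) / (n_years - 2) / sxx)
        if se > 0 and slope / se < z_crit:
            hits += 1
    return hits / reps

power = power_decline()   # low power, consistent with the abstract's finding
```

Even this toy version shows that a 25% decline over 5 years is hard to detect with 30% observation error.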
Smoothing of Gaussian quantum dynamics for force detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhishen; Sarovar, Mohan
2018-04-10
Building on recent work by Gammelmark et al. we develop a formalism for prediction and retrodiction of Gaussian quantum systems undergoing continuous measurements. We apply the resulting formalism to study the advantage of incorporating a full measurement record and retrodiction for impulselike force detection and accelerometry. Here, we find that using retrodiction can only increase accuracy in a limited parameter regime, but that the reduction in estimation noise that it yields results in better detection of impulselike forces.
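The prediction/retrodiction distinction has a familiar classical analogue: a forward Kalman filter uses only past measurements, while a Rauch-Tung-Striebel smoothing pass folds in the later record as well. A minimal scalar sketch (the dynamics, noise levels and data are illustrative, not the quantum formalism of the paper):

```python
def rts_smoother(ys, a=0.95, c=1.0, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter (prediction) plus a Rauch-Tung-Striebel
    backward pass (retrodiction) over measurements ys."""
    xf, pf, xp, pp = [], [], [], []       # filtered and predicted moments
    x, p = x0, p0
    for y in ys:
        x_pred, p_pred = a * x, a * a * p + q
        k = p_pred * c / (c * c * p_pred + r)        # Kalman gain
        x = x_pred + k * (y - c * x_pred)
        p = (1.0 - k * c) * p_pred
        xp.append(x_pred); pp.append(p_pred); xf.append(x); pf.append(p)
    xs = xf[:]                             # backward pass uses later data too
    for t in range(len(ys) - 2, -1, -1):
        g = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xf, xs

filtered, smoothed = rts_smoother([0.2, 0.5, 0.4, 0.8, 1.0])
```

At the final time step the smoothed and filtered estimates coincide, since no later measurements exist; retrodiction only helps at earlier times.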
Przybyla, Jay; Taylor, Jeffrey; Zhou, Xuesong
2010-01-01
In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. In order to ship the nuclear materials from a source location with SNM production to a target city, the smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity-and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes-suggesting that the whole-display task is measuring different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another additional experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
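The storage-capacity estimates K discussed above are conventionally computed with Pashler's formula for whole-display change detection and Cowan's formula for single-probe tasks; a minimal sketch (the hit and false-alarm rates are illustrative):

```python
def cowan_k(n_items, hit_rate, fa_rate):
    """Cowan's K for single-probe change detection: K = N * (H - FA)."""
    return n_items * (hit_rate - fa_rate)

def pashler_k(n_items, hit_rate, fa_rate):
    """Pashler's K, the usual estimator for whole-display change
    detection: K = N * (H - FA) / (1 - FA)."""
    return n_items * (hit_rate - fa_rate) / (1.0 - fa_rate)

k_single = cowan_k(6, 0.80, 0.20)    # 3.6 items
k_whole = pashler_k(6, 0.80, 0.20)   # 4.5 items
```

The reliability problems the paper reports concern how stable such K values are across set sizes, not the formulas themselves.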
Improved Sensor Fault Detection, Isolation, and Mitigation Using Multiple Observers Approach
Wang, Zheng; Anand, D. M.; Moyne, J.; Tilbury, D. M.
2017-01-01
Traditional Fault Detection and Isolation (FDI) methods analyze a residual signal to detect and isolate sensor faults. The residual signal is the difference between the sensor measurements and the estimated outputs of the system based on an observer. The traditional residual-based FDI methods, however, have some limitations. First, they require that the observer has reached its steady state. In addition, residual-based methods may not detect some sensor faults, such as faults on critical sensors that result in an unobservable system. Furthermore, the system may be in jeopardy if actions required for mitigating the impact of the faulty sensors are not taken before the faulty sensors are identified. The contribution of this paper is to propose three new methods to address these limitations. Faults that occur during the observers' transient state can be detected by analyzing the convergence rate of the estimation error. Open-loop observers, which do not rely on sensor information, are used to detect faults on critical sensors. By switching among different observers, we can potentially mitigate the impact of the faulty sensor during the FDI process. These three methods are systematically integrated with a previously developed residual-based method to provide an improved FDI and mitigation capability framework. The overall approach is validated mathematically, and the effectiveness of the overall approach is demonstrated through simulation on a 5-state suspension system.
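The residual-based baseline that these methods extend can be sketched with a scalar Luenberger observer: flag any step where the residual |y - C*x_hat| exceeds a threshold. Note how the observer's initial transient produces false flags at the first steps, which is exactly the steady-state limitation the paper's convergence-rate analysis addresses (all gains, thresholds and signals here are illustrative):

```python
def residual_fdi(ys, a=0.9, c=1.0, gain=0.5, threshold=0.3, x0=0.0):
    """Flag time steps where the observer residual exceeds the threshold
    for a scalar system x' = a*x, y = c*x."""
    x = x0
    flags = []
    for t, y in enumerate(ys):
        resid = y - c * x              # measurement minus estimated output
        if abs(resid) > threshold:
            flags.append(t)
        x = a * x + gain * resid       # Luenberger observer update
    return flags

# a decaying state whose sensor sticks at 2.0 from step 5 onward
ys = [0.9 ** t for t in range(5)] + [2.0] * 5
faulty_steps = residual_fdi(ys)        # flags the fault, but also steps 0-1
```

Steps 0 and 1 are transient false alarms from the observer starting far from the true state.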
Fetal QRS detection and heart rate estimation: a wavelet-based approach.
Almeida, Rute; Gonçalves, Hernâni; Bernardes, João; Rocha, Ana Paula
2014-08-01
Fetal heart rate monitoring is used for pregnancy surveillance in obstetric units all over the world but in spite of recent advances in analysis methods, there are still inherent technical limitations that bound its contribution to the improvement of perinatal indicators. In this work, a previously published wavelet transform based QRS detector, validated over standard electrocardiogram (ECG) databases, is adapted to fetal QRS detection over abdominal fetal ECG. Maternal ECG waves were first located using the original detector and afterwards a version with parameters adapted for fetal physiology was applied to detect fetal QRS, excluding signal singularities associated with maternal heartbeats. Single lead (SL) based marks were combined in a single annotator with post processing rules (SLR) from which fetal RR and fetal heart rate (FHR) measures can be computed. Data from PhysioNet with reference fetal QRS locations was considered for validation, with SLR outperforming SL including ICA based detections. The error in estimated FHR using SLR was lower than 20 bpm for more than 80% of the processed files. The median error in 1 min based FHR estimation was 0.13 bpm, with a correlation between reference and estimated FHR of 0.48, which increased to 0.73 when considering only records for which estimated FHR > 110 bpm. This allows us to conclude that the proposed methodology is able to provide a clinically useful estimation of the FHR.
Measurement of radon progenies using the Timepix detector.
Bulanek, Boris; Jilek, Karel; Cermak, Pavel
2014-07-01
After an introduction to the Timepix detector, results obtained with detectors having silicon and cadmium telluride (CdTe) detection layers in the assessment of the activity of short-lived radon decay products are presented. The products were collected on an open-face filter by means of a one-grab sampling method from the NRPI radon chamber. The activity of the short-lived radon decay products was estimated from the measured alpha decays of 218Po and 214Po. The results indicate very good agreement between both Timepix detectors and an NRPI reference instrument, the continuous monitor Fritra 4. The low-level detection limit for EEC was estimated to be 41 Bq m(-3) for the silicon detection layer and 184 Bq m(-3) for the CdTe detection layer.
Breast cancer detection using time reversal
NASA Astrophysics Data System (ADS)
Sheikh Sajjadieh, Mohammad Hossein
Breast cancer is the second leading cause of cancer death after lung cancer among women. Mammography and magnetic resonance imaging (MRI) have certain limitations in detecting breast cancer, especially during its early stage of development. A number of studies have shown that microwave breast cancer detection has the potential to become a successful clinical complement to conventional X-ray mammography. Microwave breast imaging is performed by illuminating the breast tissues with an electromagnetic waveform and recording its reflections (backscatters) emanating from variations in the normal breast tissues and tumour cells, if present, using an antenna array. These backscatters, referred to as the overall (tumour and clutter) response, are processed to estimate the tumour response, which is applied as input to array imaging algorithms used to estimate the location of the tumour. Due to changes in the breast profile over time, the commonly utilized background subtraction procedures used to estimate the target (tumour) response in array processing are impractical for breast cancer detection. The thesis proposes a new tumour estimation algorithm based on a combination of the data adaptive filter with the envelope detection filter (DAF/EDF), which collectively do not require a training step. After establishing the superiority of the DAF/EDF based approach, the thesis shows that the time reversal (TR) array imaging algorithms outperform their conventional counterparts in detecting and localizing tumour cells in breast tissues at SNRs ranging from 15 to 30 dB.
[Urine levels of fenethylline and amphetamine after administration of Captagon].
Iffland, R
1982-01-01
The limit for detecting fenethylline and its metabolite amphetamine by GLC with N-FID is in the nanogram range. The elimination of these substances in urine was measured after giving different quantities of Captagon to six volunteers. The concentrations of fenethylline and amphetamine in urine allow the time and amount of Captagon consumption to be estimated, with some limitations, for forensic purposes.
Estimation of walrus populations on sea ice with infrared imagery and aerial photography
Udevitz, M.S.; Burn, D.M.; Webber, M.A.
2008-01-01
Population sizes of ice-associated pinnipeds have often been estimated with visual or photographic aerial surveys, but these methods require relatively slow speeds and low altitudes, limiting the area they can cover. Recent developments in infrared imagery and its integration with digital photography could allow substantially larger areas to be surveyed and more accurate enumeration of individuals, thereby solving major problems with previous survey methods. We conducted a trial survey in April 2003 to estimate the number of Pacific walruses (Odobenus rosmarus divergens) hauled out on sea ice around St. Lawrence Island, Alaska. The survey used high altitude infrared imagery to detect groups of walruses on strip transects. Low altitude digital photography was used to determine the number of walruses in a sample of detected groups and calibrate the infrared imagery for estimating the total number of walruses. We propose a survey design incorporating this approach with satellite radio telemetry to estimate the proportion of the population in the water and additional low-level flights to estimate the proportion of the hauled-out population in groups too small to be detected in the infrared imagery. We believe that this approach offers the potential for obtaining reliable population estimates for walruses and other ice-associated pinnipeds.
ON COMPUTING UPPER LIMITS TO SOURCE INTENSITIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashyap, Vinay L.; Siemiginowska, Aneta; Van Dyk, David A.
2010-08-10
A common problem in astrophysics is determining how bright a source could be and still not be detected in an observation. Despite the simplicity with which the problem can be stated, the solution involves complicated statistical issues that require careful analysis. In contrast to the more familiar confidence bound, this concept has never been formally analyzed, leading to a great variety of often ad hoc solutions. Here we formulate and describe the problem in a self-consistent manner. Detection significance is usually defined by the acceptable proportion of false positives (background fluctuations that are claimed as detections, or Type I error), and we invoke the complementary concept of false negatives (real sources that go undetected, or Type II error), based on the statistical power of a test, to compute an upper limit to the detectable source intensity. To determine the minimum intensity that a source must have for it to be detected, we first define a detection threshold and then compute the probabilities of detecting sources of various intensities at the given threshold. The intensity that corresponds to the specified Type II error probability defines that minimum intensity and is identified as the upper limit. Thus, an upper limit is a characteristic of the detection procedure rather than the strength of any particular source. It should not be confused with confidence intervals or other estimates of source intensity. This is particularly important given the large number of catalogs that are being generated from increasingly sensitive surveys. We discuss, with examples, the differences between these upper limits and confidence bounds. Both measures are useful quantities that should be reported in order to extract the most science from catalogs, though they answer different statistical questions: an upper bound describes an inference range on the source intensity, while an upper limit calibrates the detection process. We provide a recipe for computing upper limits that applies to all detection algorithms.
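For Poisson counts this recipe reduces to two steps: choose the smallest counts threshold whose Type I error under the background alone is at most alpha, then find the smallest source intensity that is detected with probability at least 1 - beta. A self-contained sketch (the background level, alpha and beta are illustrative choices, not values from the paper):

```python
import math

def poisson_sf(k, mu):
    """P(X >= k) for X ~ Poisson(mu), via the complementary CDF."""
    term, cdf = math.exp(-mu), 0.0
    for i in range(k):                 # accumulate P(X <= k - 1)
        cdf += term
        term *= mu / (i + 1)
    return 1.0 - cdf

def upper_limit(background, alpha=0.05, beta=0.5, step=0.01):
    """Counts threshold with Type I error <= alpha, then the minimum
    source intensity with Type II error <= beta at that threshold."""
    k = 0
    while poisson_sf(k, background) > alpha:
        k += 1
    s = 0.0
    while poisson_sf(k, background + s) < 1.0 - beta:
        s += step
    return k, s

threshold, s_min = upper_limit(background=3.0)
```

The result characterizes the detection procedure itself, as the abstract stresses: it depends only on the background, threshold and error rates, not on any particular observed source.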
Combining markers with and without the limit of detection
Dong, Ting; Liu, Catherine Chunling; Petricoin, Emanuel F.; Tang, Liansheng Larry
2014-01-01
In this paper, we consider the combination of markers with and without the limit of detection (LOD). LOD is often encountered when measuring proteomic markers. Because of the limited detecting ability of an equipment or instrument, it is difficult to measure markers at a relatively low level. Suppose that after some monotonic transformation, the marker values approximately follow multivariate normal distributions. We propose to estimate distribution parameters while taking the LOD into account, and then combine markers using the results from the linear discriminant analysis. Our simulation results show that the ROC curve parameter estimates generated from the proposed method are much closer to the truth than simply using the linear discriminant analysis to combine markers without considering the LOD. In addition, we propose a procedure to select and combine a subset of markers when many candidate markers are available. The procedure based on the correlation among markers is different from a common understanding that a subset of the most accurate markers should be selected for the combination. The simulation studies show that the accuracy of a combined marker can be largely impacted by the correlation of marker measurements. Our methods are applied to a protein pathway dataset to combine proteomic biomarkers to distinguish cancer patients from non-cancer patients.
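The combination step itself is ordinary linear discriminant analysis: score each subject with the direction w = Sigma^-1 (mu1 - mu0). The sketch below assumes the class means and covariance have already been estimated with the LOD taken into account, which is the step the paper contributes; the two-marker numbers are illustrative:

```python
def lda_weights(mu0, mu1, cov):
    """Fisher LDA direction w = Sigma^-1 (mu1 - mu0) for two markers,
    with the 2x2 covariance inverted in closed form."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    diff = (mu1[0] - mu0[0], mu1[1] - mu0[1])
    return (inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1])

# illustrative LOD-corrected parameter estimates
w = lda_weights(mu0=(0.0, 0.0), mu1=(1.0, 0.5), cov=((1.0, 0.3), (0.3, 1.0)))
score = lambda x: w[0] * x[0] + w[1] * x[1]   # the combined marker
```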
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Malmberg, Filip; Sandberg-Melin, Camilla
2017-02-01
The present study aimed to elucidate if comparison of angular segments of Pigment epithelium central limit- Inner limit of the retina Minimal Distance, measured over 2π radians in the frontal plane (PIMD-2π) between visits of a patient, renders sufficient precision for detection of loss of nerve fibers in the optic nerve head. An optic nerve head raster scanned cube was captured with a TOPCON 3D OCT 2000 (Topcon, Japan) device in one early to moderate stage glaucoma eye of each of 13 patients. All eyes were recorded at two visits less than 1 month apart. At each visit, 3 volumes were captured. Each volume was extracted from the OCT device for analysis. Then, angular PIMD was segmented three times over 2π radians in the frontal plane, resolved with a semi-automatic algorithm in 500 equally separated steps, PIMD-2π. It was found that individual segmentations within volumes, within visits, within subjects can be phase adjusted to each other in the frontal plane using cross-correlation. Cross correlation was also used to phase adjust volumes within visits within subjects and visits to each other within subjects. Then, PIMD-2π for each subject was split into 250 bundles of 2 adjacent PIMDs. Finally, the sources of variation for estimates of segments of PIMD-2π were derived with analysis of variance assuming a mixed model. The variation among adjacent PIMDS was found very small in relation to the variation among segmentations. The variation among visits was found insignificant in relation to the variation among volumes and the variance for segmentations was found to be on the order of 20 % of that for volumes. The estimated variances imply that, if 3 segmentations are averaged within a volume and at least 10 volumes are averaged within a visit, it is possible to estimate around a 10 % reduction of a PIMD-2π segment from baseline to a subsequent visit as significant. 
Considering a loss rate for a PIMD-2π segment of 23 μm/yr., 4 visits per year, and averaging 3 segmentations per volume and 3 volumes per visit, a significant reduction from baseline can be detected with a power of 80 % in about 18 months. At higher loss rates for a PIMD-2π segment, a significant difference from baseline can be detected earlier. Averaging over more volumes per visit considerably decreases the time for detection of a significant reduction of a segment of PIMD-2π. Increasing the number of segmentations averaged per visit only slightly reduces the time for detection of a significant reduction. It is concluded that phase adjustment in the frontal plane with cross correlation allows high-precision estimates of a segment of PIMD-2π that imply substantially shorter follow-up time for detection of a significant change than mean deviation (MD) in a visual field estimated with the Humphrey perimeter or neural rim area (NRA) estimated with the Heidelberg retinal tomograph.
Theoretical detection limit of PIXE analysis using 20 MeV proton beams
NASA Astrophysics Data System (ADS)
Ishii, Keizo; Hitomi, Keitaro
2018-02-01
Particle-induced X-ray emission (PIXE) analysis is usually performed using proton beams with energies in the range 2∼3 MeV because at these energies the detection limit is low. The detection limit of PIXE analysis depends on the X-ray production cross-section, the continuous background of the PIXE spectrum, and experimental parameters such as the beam current and the solid angle and efficiency of the X-ray detector. Though the continuous background increases as the projectile energy increases, the characteristic X-ray production cross-section increases as well. Therefore, the detection limit of high-energy proton PIXE is not expected to increase significantly. We calculated the cross sections of continuous X-rays produced in several bremsstrahlung processes and estimated the detection limit of a 20 MeV proton PIXE analysis by modelling the Compton tail of the γ-rays produced in the nuclear reactions and the escape effect on the secondary electron bremsstrahlung. We found that the Compton tail does not affect the detection limit when a thin X-ray detector is used, but the secondary electron bremsstrahlung escape effect does have an impact. We also confirmed that the detection limit of the PIXE analysis, when used with 4 μm polyethylene backing film and an integrated beam current of 1 μC, is 0.4∼2.0 ppm for proton energies in the range 10∼30 MeV and elements with Z = 16-90. This result demonstrates the usefulness of several-tens-of-MeV cyclotrons for performing PIXE analysis. Cyclotrons with these properties are currently installed in positron emission tomography (PET) centers.
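The detection limit quoted above is conventionally defined as the concentration whose characteristic-line counts equal three times the statistical fluctuation of the continuous background under the peak; a one-line sketch of that criterion with illustrative counts (not values from the paper):

```python
import math

def detection_limit_ppm(bg_counts, sensitivity_counts_per_ppm):
    """3-sigma detection limit commonly used in PIXE: the concentration
    whose peak counts equal 3 * sqrt(background counts under the peak)."""
    return 3.0 * math.sqrt(bg_counts) / sensitivity_counts_per_ppm

# illustrative: 900 background counts under the peak, 225 counts per ppm
lod = detection_limit_ppm(bg_counts=900, sensitivity_counts_per_ppm=225.0)
```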
Iron pentacarbonyl detection limits in the cigarette smoke matrix using FT-IR spectroscopy
NASA Astrophysics Data System (ADS)
Parrish, Milton E.; Plunkett, Susan E.; Harward, Charles N.
2005-11-01
Endogenous metals present in tobacco from agricultural practices have been purported to generate metal carbonyls in cigarette smoke. Transition metal catalysts, such as iron oxide, have been investigated for the reduction of carbon monoxide (CO) in cigarette smoke. These studies motivated the development of an analytical method to determine if iron pentacarbonyl [Fe(CO)5] is present in mainstream smoke from cigarette models having cigarette paper made with iron oxide. An FT-IR puff-by-puff method was developed and the detection limit was determined using two primary reference spectra from different sources to estimate the amount of Fe(CO)5 present in a high-pressure steel cylinder of CO. We do not detect Fe(CO)5 in a single 35 mL puff from reference cigarettes or from those cigarette models having cigarette paper made with iron oxide, with a 30-ppbV limit of detection (LOD). Also, it was shown that a filter containing activated carbon would remove Fe(CO)5.
2D and 3D Terahertz Imaging and X-Rays CT for Sigillography Study
NASA Astrophysics Data System (ADS)
Fabre, M.; Durand, R.; Bassel, L.; Recur, B.; Balacey, H.; Bou Sleiman, J.; Perraud, J.-B.; Mounaix, P.
2017-04-01
Seals are part of our cultural heritage, but the study of these objects is limited because of their fragility. Terahertz and X-ray imaging are used to analyze a collection of wax seals from the fourteenth to eighteenth centuries. In this work, both techniques are compared in order to discuss their advantages, limits and complementarity for studying the conservation state of the samples. Thanks to 3D analysis and reconstructions, defects and fractures are detected with an estimation of their depth position. The path of the parchment tongue inside the seals is also detected.
Sidor, Inga F; Dunn, J Lawrence; Tsongalis, Gregory J; Carlson, Jolene; Frasca, Salvatore
2013-01-01
Brucellosis has emerged as a disease of concern in marine mammals in the last 2 decades. Molecular detection techniques have the potential to address limitations of other methods for detecting infection with Brucella in these species. Presented herein is a real-time polymerase chain reaction (PCR) method targeting the Brucella genus-specific bcsp31 gene. The method also includes a target to a conserved region of the eukaryotic mitochondrial 16S ribosomal RNA gene to assess suitability of extracted DNA and a plasmid-based internal control to detect failure of PCR due to inhibition. This method was optimized and validated to detect Brucella spp. in multiple sample matrices, including fresh or frozen tissue, blood, and feces. The analytical limit of detection was low, with 95% amplification at 24 fg, or an estimated 7 bacterial genomic copies. When Brucella spp. were experimentally added to tissue or fecal homogenates, the assay detected an estimated 1-5 bacteria/µl. An experiment simulating tissue autolysis showed relative persistence of bacterial DNA compared to host mitochondrial DNA. When used to screen 1,658 field-collected marine mammal tissues in comparison to microbial culture, diagnostic sensitivity and specificity were 70.4% and 98.3%, respectively. In addition to amplification in fresh and frozen tissues, Brucella spp. were detected in feces and formalin-fixed, paraffin-embedded tissues from culture-positive animals. Results indicate the utility of this real-time PCR for the detection of Brucella spp. in marine species, which may have applications in surveillance or epidemiologic investigations.
Pesticides in streams in the Tar-Pamlico drainage basin, North Carolina, 1992-94
Woodside, Michael D.; Ruhl, Kelly E.
2001-01-01
From 1992 to 1994, 147 water samples were collected at 5 sites in the Tar-Pamlico drainage basin in North Carolina and analyzed for 46 herbicides, insecticides, and pesticide metabolites as part of the U.S. Geological Survey's National Water-Quality Assessment Program. Based on a common adjusted detection limit of 0.01 microgram per liter, the most frequently detected herbicides were metolachlor (84 percent), atrazine (78 percent), alachlor (72 percent), and prometon (57 percent). The insecticides detected most frequently were carbaryl (12 percent), carbofuran (7 percent), and diazinon (4 percent). Although the pesticides with the highest estimated uses generally were the compounds detected most frequently, there was not a strong correlation between estimated use and detection frequency. The development of statistical correlations between pesticide use and detection frequency was limited by the lack of information on pesticides commonly applied in urban and agricultural areas, such as prometon, chlorpyrifos, and diazinon, and the small number of basins included in this study. For example, prometon had the fourth highest detection frequency, but use information was not available. Nevertheless, the high detection frequency of prometon indicates that nonagricultural uses also contribute to pesticide levels in streams in the Tar-Pamlico drainage basin.Concentrations of the herbicides atrazine, alachlor, and trifluralin varied seasonally, with elevated concentrations generally occurring in the spring, during and immediately following application periods, and in the summer. Seasonal concentration patterns were less evident for prometon, diazinon, and chlorpyrifos. Alachlor is the only pesticide detected in concentrations that exceeded current (2000) drinking-water standards.
Novel utilisation of a circular multi-reflection cell applied to materials ageing experiments
NASA Astrophysics Data System (ADS)
Knox, D. A.; King, A. K.; McNaghten, E. D.; Brooks, S. J.; Martin, P. A.; Pimblott, S. M.
2015-04-01
We report on the novel utilisation of a circular multi-reflection (CMR) cell applied to materials ageing experiments. This enabled trace gas detection within a narrow interfacial region located between two sample materials, remotely interrogated with near-infrared sources combined with fibre-optic coupling. Tunable diode laser absorption spectroscopy was used to detect water vapour and carbon dioxide at wavelengths near 1,358 and 2,004 nm, respectively, with corresponding detection limits of 7 and 1,139 ppm m Hz(-1/2). The minimum detectable absorption was estimated to be 2.82 × 10(-3) over a 1-s average. In addition, broadband absorption spectroscopy was carried out for the detection of acetic acid, using a super-luminescent light emitting diode centred around 1,430 nm. The 69 cm measurement pathlength was limited by poor manufacturing tolerances of the spherical CMR mirrors and the consequent difficulty of collecting all the cell output light.
How to limit false positives in environmental DNA and metabarcoding?
Ficetola, Gentile Francesco; Taberlet, Pierre; Coissac, Eric
2016-05-01
Environmental DNA (eDNA) and metabarcoding are boosting our ability to acquire data on species distribution in a variety of ecosystems. Nevertheless, like most sampling approaches, eDNA is not perfect. It can fail to detect species that are actually present, and even false positives are possible: a species may be apparently detected in areas where it is actually absent. Controlling false positives remains a main challenge for eDNA analyses: in this issue of Molecular Ecology Resources, Lahoz-Monfort et al. test the performance of multiple statistical modelling approaches to estimate the rate of detection and false positives from eDNA data. Here, we discuss the importance of controlling for false detection from the early steps of eDNA analyses (laboratory, bioinformatics) to improve the quality of results and allow an efficient use of the site occupancy-detection modelling (SODM) framework for limiting false presences in eDNA analysis.
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction-limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm, varying across the detected molecules, depending mainly on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley's L(r) - r or pair correlation function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization.
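The correction methods operate on the Ripley's L(r) - r statistic mentioned above. A minimal, edge-correction-free version for a 2-D point pattern follows (the points, radius and window area are illustrative): L(r) = sqrt(K(r)/pi), where K(r) is the mean number of neighbours within distance r, scaled by the pattern intensity.

```python
import math

def ripley_l_minus_r(points, r, area):
    """Ripley's L(r) - r for a 2-D point pattern, without edge correction.
    Near zero for complete spatial randomness; positive for clustering."""
    n = len(points)
    lam = n / area                          # intensity estimate
    pairs = 0
    for i in range(n):
        for j in range(n):
            if i != j:
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                if dx * dx + dy * dy <= r * r:
                    pairs += 1
    k = pairs / (lam * n)                   # K(r) estimate
    return math.sqrt(k / math.pi) - r

# two nearby points and one distant point in a unit square: clustered
stat = ripley_l_minus_r([(0.1, 0.1), (0.12, 0.11), (0.8, 0.9)], r=0.1, area=1.0)
```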
Zhu, Hai-Feng; Zele, Andrew J; Suheimat, Marwan; Lambert, Andrew J; Atchison, David A
2016-08-01
This study compared neural resolution and detection limits of the human mid-/long-wavelength and short-wavelength cone systems with anatomical estimates of photoreceptor and retinal ganglion cell spacings and sizes. Detection and resolution limits were measured from central fixation out to 35° eccentricity across the horizontal visual field using a modified Lotmar interferometer. The mid-/long-wavelength cone system was studied using a green (550 nm) test stimulus to which S-cones have low sensitivity. To bias resolution and detection to the short-wavelength cone system, a blue (450 nm) test stimulus was presented against a bright yellow background that desensitized the M- and L-cones. Participants were three trichromatic males with normal visual functions. With green stimuli, resolution showed a steep central-peripheral gradient that was similar between participants, whereas the detection gradient was shallower and patterns were different between participants. Detection and resolution with blue stimuli were poorer than for green stimuli. The detection of blue stimuli was superior to resolution across the horizontal visual field and the patterns were different between participants. The mid-/long-wavelength cone system's resolution is limited by midget ganglion cell spacing and its detection is limited by the size of the M- and L-cone photoreceptors, consistent with previous observations. We found that no such simple relationships occur for the short-wavelength cone system between resolution and the bistratified ganglion cell spacing, nor between detection and the S-cone photoreceptor sizes.
Krupčík, Ján; Májek, Pavel; Gorovenko, Roman; Blaško, Jaroslav; Kubinec, Robert; Sandra, Pat
2015-05-29
Methods based on the blank signal, as proposed by the IUPAC procedure, and on the signal-to-noise ratio (S/N), as listed in the ISO 11843-1 norm, for determination of the limit of detection (LOD) and limit of quantitation (LOQ) in one-dimensional capillary gas chromatography (1D-GC) and comprehensive two-dimensional capillary gas chromatography (GC×GC) are described in detail and compared for both techniques. Flame ionization detection was applied, and the variables were the data acquisition frequency and, for GC×GC, also the modulation time. LOD and LOQ estimated according to the IUPAC procedure can be used successfully for the 1D-GC-FID method. Moreover, LOD and LOQ decrease with decreasing data acquisition frequency (DAF). For GC×GC-FID, estimation of LOD by the IUPAC procedure gave poor reproducibility, while for LOQ reproducibility was acceptable (within ±10% rel.). The LOD and LOQ determined by the S/N concept for both the 1D-GC-FID and GC×GC-FID methods are ca. three times higher than the values estimated from the standard deviation of the blank. Since the distribution pattern of modulated peaks for any analyte separated by GC×GC is random and cannot be predicted, LOQ and LOD may vary within 30% for a 3 s modulation time. Concerning sensitivity, comparison of 1D-GC-FID at 2 Hz with GC×GC-FID at 50 Hz shows a ca. 5-fold enhancement of sensitivity in the modulated signal output. Copyright © 2015 Elsevier B.V. All rights reserved.
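The two LOD/LOQ conventions compared above can be illustrated with a small sketch. The function names and the peak-to-peak noise convention (S/N = 2H/h_noise) are illustrative assumptions commonly used in chromatography, not details taken from the paper:

```python
import numpy as np

def lod_loq_from_blank(blank_signals, slope):
    """IUPAC-style limits from replicate blank measurements.

    LOD = 3*s_blank/slope and LOQ = 10*s_blank/slope, where s_blank is
    the standard deviation of the blank signal and slope is the
    calibration slope (signal per unit concentration).
    """
    s_blank = np.std(blank_signals, ddof=1)
    return 3.0 * s_blank / slope, 10.0 * s_blank / slope

def lod_loq_from_sn(peak_height, noise_pp, concentration):
    """S/N-based limits: the concentration giving S/N = 3 (LOD) or
    S/N = 10 (LOQ), assuming a linear response through the origin.

    noise_pp is the peak-to-peak baseline noise; S/N = 2*H/noise_pp is
    a common chromatographic convention.
    """
    sn = 2.0 * peak_height / noise_pp
    return 3.0 * concentration / sn, 10.0 * concentration / sn
```

The abstract's observation that S/N-based limits come out roughly three times higher than blank-based ones reflects the different noise quantities the two conventions use, not an error in either.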
The NASA Meter Class Autonomous Telescope: Ascension Island
2013-09-01
understand the debris environment by providing high fidelity data in a timely manner to protect satellites and spacecraft in orbit around the Earth...gigabytes of image data nightly. With fainter detection limits, precision detection, acquisition and tracking of targets, multi-color photometry ...
Gjerde, Hallvard; Normann, Per T; Christophersen, Asbjørg S; Mørland, Jørg
2011-07-15
To estimate the prevalence of driving with blood drug concentrations above the recently proposed Norwegian legal limits for drugged driving in random traffic. The results from a roadside survey of 10,816 drivers were used as the basis for the estimation, and the most prevalent drugs were included. Three approaches were used to estimate the prevalence of drug concentrations above the proposed legal limits in blood based on drug concentrations in oral fluid: comparison with drug concentrations observed in oral fluid and blood in pharmacokinetic studies; calculating the prevalence of oral fluid drug concentrations larger than the blood limit multiplied by the mean oral fluid/blood ratio; and a mathematical simulation mimicking the relationship between drug concentration distributions in blood and oral fluid for populations of drug users. In total, alcohol or drugs were detected in 5.7% of the samples of oral fluid from drivers in normal traffic; 3.8% (n=410) were positive for the drugs included in the assessment. The estimation of drug concentrations in blood suggested that about 1.5% had concentrations above the proposed legal limits for the studied drugs, which is about 40% of those who were positive for the drugs in oral fluid. The estimated prevalence of driving with blood concentrations of psychoactive drugs above the proposed legal limits was 0.4% for illegal drugs and 1.1% for medicinal drugs. These may be regarded as minimum estimates, as some drugs were not included in the assessment. These prevalences are higher than the prevalence of driving with a blood alcohol concentration above the Norwegian legal limit of 0.2 g/kg. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were covered in this work to evaluate whether heteroskedasticity has a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity. Copyright © 2016 Elsevier B.V. All rights reserved.
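The weighted regression with a power model of variance recommended above can be sketched as follows. This is a generic weighted least-squares line fit, not the authors' code; the function and parameter names are invented for illustration:

```python
import numpy as np

def power_model_weights(signal, a, b):
    """Weights w = 1/variance under the power model var(y) = a * signal**b.

    a and b would be estimated from replicate measurements across the
    calibration range; b = 0 recovers ordinary (unweighted) regression.
    """
    return 1.0 / (a * np.asarray(signal, dtype=float) ** b)

def weighted_linear_fit(x, y, w):
    """Weighted least-squares line y = m*x + c with weights w = 1/var."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    W = np.sum(w)
    xbar = np.sum(w * x) / W  # weighted means
    ybar = np.sum(w * y) / W
    m = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
    c = ybar - m * xbar
    return m, c
```

Because the weights only enter as relative values, even a roughly estimated power model improves on unweighted regression, which is consistent with the abstract's finding that 30% uncertainty in the variance function still leaves weighted regression ahead.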
Krizkova, Sona; Krystofova, Olga; Trnkova, Libuse; Hubalek, Jaromir; Adam, Vojtech; Beklova, Miroslava; Horna, Ales; Havel, Ladislav; Kizek, Rene
2009-01-01
We used carbon paste electrodes and a standard potentiostat to detect silver ions. The detection limit (3 × signal/noise ratio) was estimated as 0.5 μM. Microanalysis of silver(I) ions with a standard electrochemical instrument was then suggested, using a carbon tip (1 mL) or carbon pencil as the working electrode. Limits of detection estimated by dilution of a standard were 1 nM (carbon tip) or 10 nM (carbon pencil). Further, we employed flow injection analysis coupled with the carbon tip to detect silver(I) ions released into various beverages and mineral waters. During the first, second and third weeks, the amount of silver(I) ions released into the water samples was below the detection limit of the technique used for their quantification. At the end of the thirteen-week experiment the content of silver(I) ions was several times higher than at the beginning of detectable release in the third week, and was on the order of tens of nanomoles. In subsequent experiments the influence of silver(I) ions (0, 5 and 10 μM) on a plant model system (tobacco BY-2 cells) during a four-day exposure was investigated. Silver(I) ions were highly toxic to the cells, as revealed by a double-staining viability assay. Moreover, we investigated the effect of silver(I) ions (0, 0.3, 0.6, 1.2 and 2.5 μM) on guppies (Poecilia reticulata). The Ag(I) content in fish tissues increased with treatment time and applied concentration. It can be concluded that a carbon tip or carbon pencil coupled with a miniaturized potentiostat can be used for the detection of silver(I) ions in environmental samples and thus represents a small, portable, low-cost and easy-to-use instrument for such purposes. PMID:22399980
NASA Technical Reports Server (NTRS)
Meslin, P.-Y.; Cicutto, L.; Forni, O.; Drouet, C.; Rapin, W.; Nachon, M.; Cousin, A.; Blank, J. G.; McCubbin, F. M.; Gasnault, O.;
2016-01-01
Determining the composition of apatites is important for understanding the behavior of volatiles during planetary differentiation. Apatite is a ubiquitous magmatic mineral in the SNC meteorites. It is a significant reservoir of halogens in these meteorites and has been used to estimate the halogen budget of Mars. Apatites have been identified in sandstones and pebbles at Gale crater by ChemCam, a Laser-Induced Breakdown Spectrometer (LIBS) instrument onboard the Curiosity rover. Their presence was inferred from correlations between calcium, fluorine (using the CaF molecular band centered near 603 nm, whose detection limit is much lower than that of atomic or ionic lines) and, in some cases, phosphorus (whose detection limit is much higher). An initial quantification of fluorine, based on fluorite (CaF2)/basalt mixtures and obtained at the LANL laboratory, indicated that the excess of F/Ca (compared to the stoichiometry of pure fluorapatite) found on Mars could in some cases be explained by the presence of fluorite. Chlorine was not detected in these targets, at least above a detection limit estimated at 0.6 wt%. Fluorapatite was later also detected by X-ray diffraction (with CheMin) at a level of approx. 1 wt% in the Windjana drill sample (Kimberley area), and several points analyzed by ChemCam in this area also revealed a correlation between Ca and F. The in situ detection of F-rich, Cl-poor apatites contrasts with the Cl-rich, F-poor compositions of apatites found in basaltic shergottites and in gabbroic clasts from the martian meteorite NWA 7034, which were also found to be more Cl-rich than apatites from basalts on Earth, the Moon, or Vesta. The in situ observations could call into question one of the few possible explanations brought forward to explain the SNC results, namely that Mars may be highly depleted in fluorine.
The purpose of the present study is to refine the calibration of the F, Cl, OH and P signals measured by the ChemCam LIBS instrument, initiated for F, for Cl in soils, and for P, and to estimate their limits of detection. For this purpose, different types of apatites and mixtures of basalt powder and apatites were analyzed using the ChemCam Engineering Qualification Model (EQM) at IRAP, Toulouse. This abstract presents the initial results from the laboratory analyses. Differences between the response functions of the EQM and the Flight Model of ChemCam still have to be characterized in order to apply these new results to the Martian dataset.
Making great leaps forward: Accounting for detectability in herpetological field studies
Mazerolle, Marc J.; Bailey, Larissa L.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Nichols, James D.
2007-01-01
Detecting individuals of amphibian and reptile species can be a daunting task. Detection can be hindered by various factors such as cryptic behavior, color patterns, or observer experience. These factors complicate the estimation of state variables of interest (e.g., abundance, occupancy, species richness) as well as the vital rates that induce changes in these state variables (e.g., survival probabilities for abundance; extinction probabilities for occupancy). Although ad hoc methods (e.g., counts uncorrected for detection, return rates) typically perform poorly in the face of imperfect detection, they continue to be used extensively in various fields, including herpetology. However, formal approaches that estimate and account for the probability of detection, such as capture-mark-recapture (CMR) methods and distance sampling, are available. In this paper, we present classical approaches and recent advances in methods accounting for detectability that are particularly pertinent for herpetological data sets. Through examples, we illustrate the use of several methods, discuss their performance compared to that of ad hoc methods, and suggest software available to perform these analyses. The methods we discuss control for imperfect detection and reduce bias in estimates of demographic parameters such as population size, survival, or, at other levels of biological organization, species occurrence. Among these methods, recently developed approaches that no longer require marked or resighted individuals should be of particular interest to field herpetologists. We hope that our effort will encourage practitioners to implement some of the estimation methods presented herein instead of relying on ad hoc methods that make more limiting assumptions.
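One of the classical capture-mark-recapture estimators alluded to above is the two-sample Lincoln-Petersen abundance estimator. A minimal sketch of Chapman's bias-corrected form, which is standard in the CMR literature rather than specific to this paper:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    n1: number of individuals captured and marked in the first sample
    n2: number captured in the second sample
    m2: number of marked individuals recaptured in the second sample

    Assumes a closed population, equal catchability, and no mark loss;
    the +1 terms reduce the small-sample bias of n1*n2/m2.
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

Unlike a raw count, this estimator uses the recapture fraction m2/n2 to correct for the individuals that were present but never detected.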
Herbicide Orange Site Characterization Study Naval Construction Battalion Center
1987-01-01
U.S. Testing Laboratories for analysis. Over 200 additional analyses were performed for a variety of quality assurance criteria. The resultant data... [Table 9, NCBC performance audit sample analysis summary (Series 1): sample number, reported TCDD concentration (ppb), detection limit, relative...] ...limit rather than estimating the variance of the results. The sample results were transformed using the natural logarithm. The Shapiro-Wilk W test
Cautela, Domenico; Laratta, Bruna; Santelli, Francesca; Trifirò, Antonio; Servillo, Luigi; Castaldo, Domenico
2008-07-09
The chemical composition of 30 samples of juices obtained from bergamot (Citrus bergamia Risso and Poit.) fruits is reported and compared with the genuineness parameters adopted by the Association of the Industry of Juices and Nectars (AIJN) for lemon juice. It was found that the compositional differences between the two juices are distinguishable, although with difficulty. However, these differences are not strong enough to detect the fraudulent addition of bergamot juice to lemon juice. Instead, we found that high-performance liquid chromatography (HPLC) analysis of the flavanones naringin, neohesperidin, and neoeriocitrin, which are present in bergamot juice and practically absent in lemon juice, is a convenient way to detect and quantify the fraudulent addition of bergamot juice. The method has been validated by calculating the detection and quantification limits according to Eurachem procedures. Employing neoeriocitrin (detection limit = 0.7 mg/L) and naringin (detection limit = 1 mg/L) as markers, it is possible to detect the addition of bergamot juice to lemon juice at the 1% level. When using neohesperidin as a marker (detection limit = 1 mg/L), the minimal percentage of detectable addition of bergamot juice was about 2%. Finally, it is reported that the flavonoid pattern of bergamot juice is similar to those of chinotto (Citrus myrtifolia Raf.) and bitter orange (Citrus aurantium L.) juices and that it is possible to distinguish the three kinds of juices by HPLC analysis.
Internal seismological stations for monitoring a comprehensive test ban treaty
NASA Astrophysics Data System (ADS)
Dahlman, O.; Israelson, H.
1980-06-01
Verification of compliance with a Comprehensive Test Ban on nuclear explosions is expected to be carried out by a seismological verification system of some fifty globally distributed teleseismic stations designed to monitor underground explosions at large distances (beyond 2000 km). An attempt is made to assess the various technical purposes that internal stations might serve in relation to such a global network of seismological stations. The assessment is based on estimates of the detection capabilities of hypothetical networks of internal stations. Estimates pertaining to currently used detection techniques (P waves) indicate that a limited number (fewer than 30) of such stations would not improve significantly upon the detection capability that a global network of stations would have throughout the territories of the US and the USSR. Recently available and not yet fully analyzed data indicate, however, that very high detection capabilities might be obtained in certain regions.
Bautista, Leonelo E; Herrera, Víctor M
2018-05-24
We evaluated whether outbreaks of Zika virus (ZIKV) infection, newborn microcephaly, and Guillain-Barré syndrome (GBS) in Latin America may be detected through current surveillance systems, and how cases detected through surveillance may increase health care burden. We estimated the sensitivity and specificity of surveillance case definitions using published data. We assumed a 10% ZIKV infection risk during a non-outbreak period and hypothetical increases in risk during an outbreak period. We used the sensitivity and specificity estimates to correct for non-differential misclassification, and calculated a misclassification-corrected relative risk comparing both periods. To identify the smallest hypothetical increase in risk resulting in a detectable outbreak, we compared the misclassification-corrected relative risk to the relative risk corresponding to the upper limit of the endemic channel (mean + 2 SD). We also estimated the proportion of false positive cases detected during the outbreak. We followed the same approach for microcephaly and GBS, but assumed the risk of ZIKV infection doubled during the outbreak and that ZIKV infection increased the risk of both diseases. ZIKV infection outbreaks were not detectable through non-serological surveillance. Outbreaks were detectable through serological surveillance if infection risk increased by at least 10%, but more than 50% of all cases were false positives. Outbreaks of severe microcephaly were detected if ZIKV infection increased the prevalence of this condition by at least 24.0 times. When ZIKV infection did not increase the prevalence of severe microcephaly, 34.7 to 82.5% of all cases were false positives, depending on diagnostic accuracy. GBS outbreaks were detected if ZIKV infection increased the GBS risk by at least seven times. For optimal GBS diagnostic accuracy, the proportion of false positive cases ranged from 29 to 54% or from 45 to 56%, depending on the incidence of GBS mimics.
Current surveillance systems have a low probability of detecting outbreaks of ZIKV infection, severe microcephaly, and GBS, and could result in significant increases in health care burden, due to the detection of large numbers of false positive cases. In view of these limitations, Latin American countries should consider alternative options for surveillance.
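The misclassification correction described above, which adjusts an apparent prevalence for imperfect sensitivity and specificity before forming the relative risk, corresponds to the standard Rogan-Gladen estimator. A minimal sketch with hypothetical inputs, not the authors' code:

```python
def rogan_gladen(apparent_prev, sens, spec):
    """Rogan-Gladen corrected prevalence from an apparent (test-positive)
    proportion, given case-definition sensitivity and specificity:
    p = (p_apparent + spec - 1) / (sens + spec - 1).
    Valid when sens + spec > 1."""
    return (apparent_prev + spec - 1.0) / (sens + spec - 1.0)

def corrected_relative_risk(p_outbreak, p_baseline, sens, spec):
    """Misclassification-corrected relative risk comparing an outbreak
    period with a baseline period, assuming the same non-differential
    misclassification operates in both periods."""
    return rogan_gladen(p_outbreak, sens, spec) / rogan_gladen(p_baseline, sens, spec)
```

With poor specificity, the uncorrected relative risk is pulled toward 1, which is why an outbreak can fail to exceed the endemic channel even when the true risk has increased substantially.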
DETECTION AND DISINFECTION OF PATHOGENS IN STORM-GENERATED FLOWS
The disease-producing potential of recreational waters is currently estimated through the use of certain bacterial indicators that are believed to be positively correlated with the presence of fecal contamination. In general, these indicators and their recommended limiting values...
DOT National Transportation Integrated Search
2001-01-01
The National Highway Traffic Safety Administration (NHTSA), the federal agency responsible for reducing accidents, deaths, and injuries resulting from motor vehicle crashes on the nation's highways, estimates that over 6 million automobile accidents ...
Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M
2016-01-01
Due to the low fluorine background signal in vivo, 19F is a good marker for studying the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility already at the design stage by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" accounting for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived from standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.
NASA Astrophysics Data System (ADS)
Maučec, M.; de Meijer, R. J.; Rigollet, C.; Hendriks, P. H. G. M.; Jones, D. G.
2004-06-01
A joint research project between the British Geological Survey and Nuclear Geophysics Division of the Kernfysisch Versneller Instituut, Groningen, the Netherlands, was commissioned by the United Kingdom Atomic Energy Authority to establish the efficiency of a towed seabed γ-ray spectrometer for the detection of 137Cs-containing radioactive particles offshore Dounreay, Scotland. Using the MCNP code, a comprehensive Monte Carlo feasibility study was carried out to model various combinations of geological matrices, particle burial depth and lateral displacement, source activity and detector material. To validate the sampling and absolute normalisation procedures of MCNP for geometries including multiple (natural and induced) heterogeneous sources in environmental monitoring, a benchmark experiment was conducted. The study demonstrates the ability of seabed γ-ray spectrometry to locate radioactive particles offshore and to distinguish between γ count rate increases due to particles from those due to enhanced natural radioactivity. The information presented in this study will be beneficial for estimation of the inventory of 137Cs particles and their activity distribution and for the recovery of particles from the sea floor. In this paper, the Monte Carlo assessment of the detection limits is presented. The estimation of the required towing speed and acquisition times and their application to radioactive particle detection and discrimination offshore formed a supplementary part of this study.
Effects of loss on the phase sensitivity with parity detection in an SU(1,1) interferometer
NASA Astrophysics Data System (ADS)
Li, Dong; Yuan, Chun-Hua; Yao, Yao; Jiang, Wei; Li, Mo; Zhang, Weiping
2018-05-01
We theoretically study the effects of loss on the phase sensitivity of an SU(1,1) interferometer with parity detection for various input states. We show that although the sensitivity of phase estimation decreases in the presence of loss, it can still beat the shot-noise limit for small loss. To examine the performance of parity detection, a comparison is made among homodyne detection, intensity detection, and parity detection. Compared with homodyne and intensity detection, parity detection has a slightly better optimal phase sensitivity in the absence of loss, but a worse optimal phase sensitivity in the presence of a significant amount of loss, for a one-coherent-state or coherent ⊗ squeezed-state input.
The Chandra Source Catalog: Background Determination and Source Detection
NASA Astrophysics Data System (ADS)
McCollough, Michael; Rots, Arnold; Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Danny G. Gibbs, II; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula
2009-09-01
The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory are used to generate one of the most extensive X-ray source catalogs produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).
Chandra Source Catalog: Background Determination and Source Detection
NASA Astrophysics Data System (ADS)
McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
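The spatial-versus-temporal replication trade-off discussed above hinges on the cumulative detection probability across repeated surveys. A minimal sketch of this standard relationship (not the GENPRES software itself; the function names are illustrative):

```python
import math

def cumulative_detection(p, k):
    """Probability that an occupying species is detected at least once
    in k surveys, each with per-survey detection probability p:
    p* = 1 - (1 - p)**k."""
    return 1.0 - (1.0 - p) ** k

def surveys_needed(p, target=0.95):
    """Smallest number of repeat surveys per site giving cumulative
    detection probability >= target; this drives the temporal side of
    the spatial-vs-temporal replication trade-off."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))
```

For a fixed total survey budget, sites with low per-survey detection probability demand more repeat visits, leaving fewer sites for spatial replication, which is exactly the design tension the abstract describes.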
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, He; Cao, Zhoujian; Zhang, Bing, E-mail: gaohe@bnu.edu.cn
Neutron stars may sustain a non-axisymmetric deformation due to magnetic distortion and are potential sources of continuous gravitational waves (GWs) for ground-based interferometric detectors. With decades of searches using available GW detectors, no evidence of a GW signal from any pulsar has been observed. Progressively stringent upper limits of ellipticity have been placed on Galactic pulsars. In this work, we use the ellipticity inferred from the putative millisecond magnetars in short gamma-ray bursts (SGRBs) to estimate their detectability by current and future GW detectors. For ∼1 ms magnetars inferred from the SGRB data, the detection horizon is ∼30 Mpc and ∼600 Mpc for the advanced LIGO (aLIGO) and Einstein Telescope (ET), respectively. Using the ellipticity of SGRB millisecond magnetars as calibration, we estimate the ellipticity and GW strain of Galactic pulsars and magnetars assuming that the ellipticity is magnetic-distortion-induced. We find that the results are consistent with the null detection results of Galactic pulsars and magnetars with the aLIGO O1. We further predict that the GW signals from these pulsars/magnetars may not be detectable by the currently designed aLIGO detector. The ET detector may be able to detect some relatively low-frequency signals (<50 Hz) from some of these pulsars. Limited by its design sensitivity, the eLISA detector seems to not be suitable for detecting the signals from Galactic pulsars and magnetars.
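The detectability estimates above rest on the standard expression for the characteristic strain of a rotating, magnetically distorted neutron star, h0 = (4π²G/c⁴) I ε f_gw²/d with f_gw = 2 f_rot. A minimal sketch, assuming the canonical moment of inertia of 10³⁸ kg m² and illustrative values for ellipticity and distance; the paper's actual horizon calculation folds in detector noise curves, which are omitted here.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s
MPC = 3.086e22  # one megaparsec in metres

def gw_strain(epsilon, f_rot_hz, dist_mpc, moment_inertia=1e38):
    """Characteristic strain h0 = (4 pi^2 G / c^4) * I * eps * f_gw^2 / d,
    with f_gw = 2 * f_rot for a triaxial rotator."""
    f_gw = 2.0 * f_rot_hz
    return (4 * math.pi ** 2 * G / C ** 4) * moment_inertia * epsilon \
        * f_gw ** 2 / (dist_mpc * MPC)
```

For a ~1 ms rotator with ε ~ 10⁻³ at 30 Mpc this gives h0 of order 10⁻²⁵, the regime the quoted aLIGO horizon refers to.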
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by the weighted linear least-squares method, with an additional step in which uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also when calculating detection limits, which can be improved by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
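The core weighted least-squares step can be sketched as follows. This is the generic textbook formulation, not the ISO 11929 procedure itself: the extra step propagating design-matrix uncertainties and the decision-threshold/detection-limit derivation are omitted, and the matrix values in the test are hypothetical.

```python
import numpy as np

def wls_activities(y, u_y, X):
    """Weighted linear least squares y ≈ X @ a for nuclide activities a.
    y:   net peak count rates (one entry per fitted peak)
    u_y: their standard uncertainties (weights are 1/u^2)
    X:   design matrix (peaks x nuclides), e.g. efficiency * gamma emission
         intensity for each peak/nuclide pair.
    Returns the activity estimates and their covariance matrix."""
    y = np.asarray(y, dtype=float)
    W = np.diag(1.0 / np.asarray(u_y, dtype=float) ** 2)
    cov = np.linalg.inv(X.T @ W @ X)
    a = cov @ (X.T @ W @ y)
    return a, cov
```

A row of X with two nonzero entries represents an interfered peak shared by two nuclides; the least-squares solution resolves the interference exactly as the abstract describes.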
A search for SiO, OH, CO and HCN radio emission from silicate-carbon stars
NASA Technical Reports Server (NTRS)
Little-Marenin, I. R.; Sahai, R.; Wannier, P. G.; Benson, P. J.; Gaylard, M.; Omont, A.
1994-01-01
We report upper limits for radio emission of SiO at 86 and 43 GHz, of OH at 1612 and 1665/1667 MHz, of CO at 115 GHz, and of HCN at 88.6 GHz in the silicate-carbon stars. These upper limits of SiO imply that oxygen-rich material has not been detected within 2R(sub star) of a central star, even though the detected emission from silicate dust grains and from H2O and OH masers establishes the presence of oxygen-rich material from about tens to thousands of AU of a central star. The upper limit of the SiO abundance is consistent with that found in oxygen-rich envelopes. Upper limits of the mass loss rate (based on the CO data) are estimated to be between 10(exp -6) and 10(exp -7) solar mass/yr assuming a distance of 1.5 kpc for these stars. The absence of HCN microwave emission implies that no carbon-rich material can be detected at large distances (thousands of AU) from a central star. The lack of detections of SiO, CO, and HCN emission is most likely due to the large distances of these stars. A number of C stars were detected in CO and HCN, but only the M supergiant VX Sgr was detected in CO.
NASA Astrophysics Data System (ADS)
Zapiór, Maciej; Martínez-Gómez, David
2016-02-01
Based on the data collected by the Vacuum Tower Telescope located in the Teide Observatory in the Canary Islands, we analyzed the three-dimensional (3D) motion of so-called knots in a solar prominence of 2014 June 9. Trajectories of seven knots were reconstructed, giving information about the 3D geometry of the magnetic field. Helical motion was detected. From the equipartition principle, we estimated the lower limit of the magnetic field in the prominence at ≈1-3 G and, from Ampère's law, the lower limit of the electric current at ≈1.2 × 10⁹ A.
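The equipartition lower limit on the field follows from requiring the magnetic energy density to be at least the kinetic energy density, B²/8π ≥ ρv²/2 (cgs units), i.e. B_min = v√(4πρ). A rough sketch, with hypothetical prominence density and knot speed chosen only to show the order of magnitude; these are not the paper's measured values.

```python
import math

M_H = 1.6726e-24  # proton mass, g (cgs)

def equipartition_field(n_cm3, v_cm_s, mu=1.3):
    """Lower limit on B (gauss) from B^2 / (8 pi) >= 0.5 * rho * v^2,
    i.e. B_min = v * sqrt(4 pi rho), with rho = mu * n * m_H."""
    rho = mu * n_cm3 * M_H
    return v_cm_s * math.sqrt(4.0 * math.pi * rho)
```

With an assumed density n ~ 10¹¹ cm⁻³ and knot speed ~10 km/s, the limit lands in the 1-3 G range quoted above.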
Estimate of Cosmic Muon Background for Shallow Underground Neutrino Detectors
NASA Astrophysics Data System (ADS)
Casimiro, E.; Simão, F. R. A.; Anjos, J. C.
One of the severe limitations in detecting neutrino signals from nuclear reactors is that the copious cosmic ray background imposes the use of a time veto upon the passage of muons to reduce the number of fake signals due to muon-induced spallation neutrons. For this reason neutrino detectors are usually located underground, with a large overburden. However, practical limitations prevent locating the detectors at large depths underground. In order to decide the depth at which the Neutrino Angra Detector (currently in preparation) should be installed, an estimate of the cosmogenic background in the detector as a function of depth is required. We report here a simple analytical estimation of the muon rates in the detector volume for different plausible depths, assuming a simple flat overburden geometry. We extend the calculation to the San Onofre neutrino detector and to the Double Chooz neutrino detector, where other estimates or measurements have been performed. Our estimated rates are consistent with these.
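The depth dependence of the muon rate can be illustrated with a toy single-exponential attenuation model under a flat overburden. Both constants below are rough illustrative values (a surface flux of roughly one muon per cm² per minute, and an assumed effective attenuation length of 0.5 km w.e.), not the paper's parameterization.

```python
import math

SURFACE_FLUX = 1.7e-2  # muons cm^-2 s^-1 at the surface (~1 per cm^2 per minute)
ATTEN_KMWE = 0.5       # assumed effective attenuation length, km w.e.

def muon_rate(depth_kmwe, area_cm2):
    """Toy muon rate (s^-1) through a horizontal detector of the given area,
    assuming a flat overburden and single-exponential attenuation."""
    return SURFACE_FLUX * math.exp(-depth_kmwe / ATTEN_KMWE) * area_cm2
```

Even this crude model captures the design trade-off: each additional attenuation length of overburden buys a factor ~e reduction in veto-inducing muons.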
Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are, however, not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity was correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20-30% error range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples: overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates of the Belgian incidence for cases aged 50 years and older ranged from 44 to 58/100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8/100,000.
Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
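For the simplest two-source case, a classical capture-recapture estimate is Chapman's nearly unbiased variant of the Lincoln-Petersen estimator. It is shown here only to fix ideas: it embodies exactly the independence and homogeneity assumptions whose violation the simulation study above explores, and the case counts in the test are hypothetical.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-sample capture-recapture estimator:
        N_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    n1, n2: cases reported by each surveillance source; m: reported by both.
    Valid only under independent sources and homogeneous detection."""
    if not 0 <= m <= min(n1, n2):
        raise ValueError("need 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1
```

Positive dependence between sources inflates m and biases this estimator downward, which is one of the failure modes the study quantifies.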
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
NASA Astrophysics Data System (ADS)
Khatri, Rishi; Sunyaev, Rashid
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4 × 10⁻⁸ < ⟨y⟩ < 2.2 × 10⁻⁶. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15 × 10⁻⁶. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27-σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10⁻⁶.
Van De Gucht, Tim; Saeys, Wouter; Van Meensel, Jef; Van Nuffel, Annelies; Vangeyte, Jurgen; Lauwers, Ludwig
2018-01-01
Although prototypes of automatic lameness detection systems for dairy cattle exist, information about their economic value is lacking. In this paper, a conceptual and operational framework for simulating the farm-specific economic value of automatic lameness detection systems was developed and tested on 4 system types: walkover pressure plates, walkover pressure mats, camera systems, and accelerometers. The conceptual framework maps essential factors that determine economic value (e.g., lameness prevalence, incidence and duration, lameness costs, detection performance, and their relationships). The operational simulation model links treatment costs and avoided losses with detection results and farm-specific information, such as herd size and lameness status. Results show that detection performance, herd size, discount rate, and system lifespan have a large influence on economic value. In addition, lameness prevalence influences the economic value, stressing the importance of an adequate prior estimation of the on-farm prevalence. The simulations provide first estimates of the upper limits for purchase prices of automatic detection systems. The framework allowed for identification of knowledge gaps obstructing more accurate economic value estimation. These include insights into cost reductions due to early detection and treatment, and links between specific lameness causes and their related losses. Because this model provides insight into the trade-offs between automatic detection systems' performance and investment price, it is a valuable tool to guide future research and developments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
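The influence of purchase price, discount rate, and lifespan on economic value can be illustrated with a plain net-present-value calculation. This is a generic sketch with hypothetical figures, not the paper's farm-specific simulation model: the annual benefit would in practice come from the avoided lameness losses that the detection system enables.

```python
def npv_detection_system(annual_benefit, annual_cost, purchase_price,
                         discount_rate, lifespan_years):
    """Net present value of buying the system: discounted net benefits
    over its lifespan minus the purchase price."""
    return sum((annual_benefit - annual_cost) / (1 + discount_rate) ** t
               for t in range(1, lifespan_years + 1)) - purchase_price

def max_purchase_price(annual_benefit, annual_cost, discount_rate,
                       lifespan_years):
    """Upper limit on the purchase price: the price at which NPV = 0,
    the quantity the paper's simulations provide first estimates of."""
    return npv_detection_system(annual_benefit, annual_cost, 0.0,
                                discount_rate, lifespan_years)
```

Raising the discount rate or shortening the lifespan lowers the break-even price, matching the sensitivity the abstract reports.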
Progress in standoff surface contaminant detector platform
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Giblin, Jay; Dixon, John; Hensley, Joel; Mansur, David; Marinelli, William J.
2017-05-01
Progress towards the development of a longwave infrared quantum cascade laser (QCL) based standoff surface contaminant detection platform is presented. The detection platform utilizes reflectance spectroscopy with application to optically thick and thin materials including solid and liquid phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. The platform employs an ensemble of broadband QCLs with a spectrally selective detector to interrogate target surfaces at tens of meters standoff. A version of the Adaptive Cosine Estimator (ACE) featuring class-based screening is used for detection and discrimination in high-clutter environments. Detection limits approaching 0.1 μg/cm² are projected through speckle reduction methods enabling detector-noise-limited performance. The design, build, and validation of a breadboard version of the QCL-based surface contaminant detector are discussed. Functional test results specific to the QCL illuminator are presented with particular emphasis on speckle reduction.
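The Adaptive Cosine Estimator mentioned above is, in its common form, the squared cosine of the angle between the background-whitened test spectrum and target signature: ACE(x) = (sᵀΣ⁻¹(x−μ))² / [(sᵀΣ⁻¹s)((x−μ)ᵀΣ⁻¹(x−μ))]. A minimal sketch of that standard form; the paper's class-based screening variant is not reproduced.

```python
import numpy as np

def ace_statistic(x, s, bg_mean, bg_cov):
    """ACE detector score in [0, 1]: squared cosine of the angle between
    the whitened test spectrum (x - mu) and the target signature s."""
    xc = np.asarray(x, dtype=float) - bg_mean
    si = np.linalg.solve(bg_cov, s)   # Sigma^-1 s
    xi = np.linalg.solve(bg_cov, xc)  # Sigma^-1 (x - mu)
    return float(s @ xi) ** 2 / (float(s @ si) * float(xc @ xi))
```

Because the score is invariant to the amplitude of the test spectrum, ACE is well suited to high-clutter scenes where illumination and abundance vary; a threshold on the score gives the detection decision.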
Zhang, Yun; Liu, Fang; Nie, Jinfang; Jiang, Fuyang; Zhou, Caibin; Yang, Jiani; Fan, Jinlong; Li, Jianping
2014-05-07
In this paper, we report for the first time an electrochemical biosensor for single-step, reagentless, and picomolar detection of a sequence-specific DNA-binding protein using a double-stranded, electrode-bound DNA probe terminally modified with a redox active label close to the electrode surface. This new methodology is based upon local repression of electrolyte diffusion associated with protein-DNA binding that leads to reduction of the electrochemical response of the label. In the proof-of-concept study, the resulting electrochemical biosensor was quantitatively sensitive to the concentrations of the TATA binding protein (TBP, a model analyte) ranging from 40 pM to 25.4 nM with an estimated detection limit of ∼10.6 pM (∼80 to 400-fold improvement on the detection limit over previous electrochemical analytical systems).
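Detection limits like the ~10.6 pM figure above are conventionally estimated from the blank noise and the calibration slope, LOD = 3·s_blank/slope (the "3 S/N" convention several of these abstracts cite). A generic sketch; the paper does not state that this exact recipe produced its number, and the values in the test are hypothetical.

```python
def detection_limit(blank_signals, slope):
    """IUPAC-style estimate: LOD = 3 * sd(blank) / calibration slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sd = (sum((b - mean) ** 2 for b in blank_signals) / (n - 1)) ** 0.5
    return 3.0 * sd / slope
```

A steeper calibration slope or quieter blank directly lowers the LOD, which is why sensor optimizations report both quantities.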
Observing Strategies for the Detection of Jupiter Analogs
NASA Astrophysics Data System (ADS)
Wittenmyer, Robert A.; Tinney, C. G.; Horner, J.; Butler, R. P.; Jones, H. R. A.; O'Toole, S. J.; Bailey, J.; Carter, B. D.; Salter, G. S.; Wright, D.
2013-04-01
To understand the frequency, and thus the formation and evolution, of planetary systems like our own solar system, it is critical to detect Jupiter-like planets in Jupiter-like orbits. For long-term radial-velocity monitoring, it is useful to estimate the observational effort required to reliably detect such objects, particularly in light of severe competition for limited telescope time. We perform detailed simulations of observational campaigns, maximizing the realism of the sampling of a set of simulated observations. We then compute the detection limits for each campaign to quantify the effect of increasing the number of observational epochs and varying their time coverage. We show that once there is sufficient time baseline to detect a given orbital period, it becomes less effective to add further time coverage—rather, the detectability of a planet scales roughly as the square root of the number of observations, independently of the number of orbital cycles included in the data string. We also show that no noise floor is reached, with a continuing improvement in detectability at the maximum number of observations N = 500 tested here.
Smoothing of Gaussian quantum dynamics for force detection
NASA Astrophysics Data System (ADS)
Huang, Zhishen; Sarovar, Mohan
2018-04-01
Building on recent work by Gammelmark et al. [Phys. Rev. Lett. 111, 160401 (2013), 10.1103/PhysRevLett.111.160401] we develop a formalism for prediction and retrodiction of Gaussian quantum systems undergoing continuous measurements. We apply the resulting formalism to study the advantage of incorporating a full measurement record and retrodiction for impulselike force detection and accelerometry. We find that using retrodiction can only increase accuracy in a limited parameter regime, but that the reduction in estimation noise that it yields results in better detection of impulselike forces.
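The classical-statistics analogue of combining prediction (filtering) with retrodiction (smoothing) is a Kalman filter followed by a Rauch-Tung-Striebel backward pass. A scalar sketch, not the Gaussian quantum formalism of the paper, but it shows how the backward pass uses the full measurement record to reduce estimation noise at earlier times.

```python
def kalman_rts(y, a, q, r, x0=0.0, p0=1.0):
    """Scalar linear-Gaussian Kalman filter (forward) + RTS smoother
    (backward) for x_k = a*x_{k-1} + w, y_k = x_k + v,
    with process variance q and measurement variance r.
    Returns smoothed means and variances."""
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for yk in y:
        x, p = a * x, a * a * p + q          # predict
        xp.append(x); pp.append(p)
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (yk - x), (1 - k) * p  # update
        xf.append(x); pf.append(p)
    xs, ps = list(xf), list(pf)               # backward smoothing pass
    for k in range(len(y) - 2, -1, -1):
        g = pf[k] * a / pp[k + 1]
        xs[k] = xf[k] + g * (xs[k + 1] - xp[k + 1])
        ps[k] = pf[k] + g * g * (ps[k + 1] - pp[k + 1])
    return xs, ps
```

For impulse-like force detection the smoothed trajectory sharpens the estimate around the impulse time, which is the advantage the paper quantifies in the quantum setting.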
Detecting terrorist nuclear weapons at sea: The 10th door problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slaughter, D R
2008-09-15
While screening commercial cargo containers for the possible presence of WMD is important and necessary, smugglers have successfully exploited the many other vessels transporting cargo into the US, including medium and small vessels at sea. These vessels provide a venue that is currently not screened and widely used. The physics limits that make screening of large vessels impractical do not prohibit effective screening of the smaller vessels. While passive radiation detection is probably ineffective at sea, active interrogation may provide a successful approach. The physics limits of active interrogation of ships at sea from standoff platforms are discussed. Autonomous platforms that could carry interrogation systems at sea, both airborne and submersible, are summarized and their utilization discussed. An R&D program to investigate the limits of this approach to screening ships at sea is indicated and limitations estimated.
Schottky-contact plasmonic rectenna for biosensing
NASA Astrophysics Data System (ADS)
Alavirad, Mohammad; Siadat Mousavi, Saba; Roy, Langis; Berini, Pierre
2013-10-01
We propose a plasmonic gold nanodipole array on silicon, forming a Schottky contact thereon, and covered by water. The behavior of this array under normal excitation has been extensively investigated. Trends have been found and confirmed by identification of the mode propagating in the nanodipoles and its properties. This device can be used to detect infrared radiation below the bandgap energy of the substrate via the internal photoelectric effect (IPE). We also estimate its responsivity and detection limit. Finally, we assess the potential of the structure for bulk and surface (bio)chemical sensing. Based on the modal results, an analytical model is proposed to estimate the sensitivity of the device. Results show good agreement between the numerical and analytical interpretations.
Comet Encke: radar detection of nucleus.
Kamoun, P G; Campbell, D B; Ostro, S J; Pettengill, G H; Shapiro, I I
1982-04-16
The nucleus of the periodic comet Encke was detected in November 1980 with the Arecibo Observatory's radar system (wavelength, 12.6 centimeters). The echoes received in one sense of circular polarization imply a radar cross section of 1.1 +/- 0.7 square kilometers. The estimated bandwidth of these echoes combined with an estimate of the rotation vector of Encke yields a radius for the nucleus of 1.5(+2.3/-1.0) kilometers. The uncertainties given depend primarily on the range of models considered for the comet and for the manner in which its nucleus backscatters radio waves. Should this range prove inadequate, the true value of the radius of the nucleus might lie outside the limits given.
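The radius inference rests on the limb-to-limb Doppler bandwidth of a rotating sphere, B = 8πR cos δ/(λP), where P is the rotation period and δ the sub-radar latitude. A sketch of the relation and its inversion; the numbers in the test are illustrative placeholders, not the rotation state adopted in the paper.

```python
import math

def limb_bandwidth(radius_m, period_s, wavelength_m, sub_radar_lat_deg=0.0):
    """Limb-to-limb Doppler bandwidth (Hz) of echoes from a rotating sphere:
    B = 8 * pi * R * cos(delta) / (lambda * P)."""
    return 8 * math.pi * radius_m * math.cos(math.radians(sub_radar_lat_deg)) \
        / (wavelength_m * period_s)

def radius_from_bandwidth(b_hz, period_s, wavelength_m, sub_radar_lat_deg=0.0):
    """Invert the relation to estimate the radius from a measured bandwidth."""
    return b_hz * wavelength_m * period_s \
        / (8 * math.pi * math.cos(math.radians(sub_radar_lat_deg)))
```

The strong dependence on the assumed rotation vector (through both P and δ) is exactly why the quoted radius carries such asymmetric, model-dependent uncertainties.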
The use and misuse of aircraft and missile RCS statistics
NASA Astrophysics Data System (ADS)
Bishop, Lee R.
1991-07-01
Both static and dynamic radar cross-section (RCS) measurements are used for RCS predictions, but the static data are less complete than the dynamic. Integrated dynamic RCS data also have limitations for predicting radar detection performance. When raw static data are properly used, good first-order detection estimates are possible. The research to develop more usable RCS statistics is reviewed, and windowing techniques for creating probability density functions from static RCS data are discussed.
A source-attractor approach to network detection of radiation sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Qishi; Barry, M. L.; Grieme, M.
Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. The SRD method models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike the localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.
Krejcova, Ludmila; Dospivova, Dana; Ryvolova, Marketa; Kopel, Pavel; Hynek, David; Krizkova, Sona; Hubalek, Jaromir; Adam, Vojtech; Kizek, Rene
2012-11-01
Currently, the influenza virus infects millions of individuals every year. Since the influenza virus represents one of the greatest threats, it is necessary to develop a diagnostic technique that can quickly, inexpensively, and accurately detect the virus to effectively treat and control seasonal and pandemic strains. This study presents an alternative to current detection methods. The flow-injection analysis-based biosensor, which can rapidly and economically analyze a wide panel of influenza virus strains by using paramagnetic particles modified with glycan, can selectively bind to specific viral A/H5N1/Vietnam/1203/2004 protein-labeled quantum dots. Optimized detection of cadmium sulfide quantum dot (CdS QD)-protein complexes connected to paramagnetic microbeads was performed using differential pulse voltammetry on the surface of a hanging mercury drop electrode (HMDE) and/or glassy carbon electrode (GCE). Detection limit (3 S/N) estimations based on cadmium(II) ion quantification were 0.1 μg/mL or 10 μg/mL of viral protein at HMDE or GCE, respectively. Viral protein detection was directly determined using the differential pulse voltammetry Brdicka reaction. The detection limit (3 S/N) of viral protein was estimated as 0.1 μg/mL. Streptavidin-modified paramagnetic particles were mixed with biotinylated selective glycan to modify their surfaces. Under optimized conditions (250 μg/mL of glycan, 30-min interaction with viral protein, 25°C and 400 rpm), the viral protein labeled with quantum dots was selectively isolated and its cadmium(II) content was determined. Cadmium was present in detectable amounts of 10 ng per mg of protein. Using this method, submicrogram concentrations of viral proteins can be identified. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An intelligent detection method for high-field asymmetric waveform ion mobility spectrometry.
Li, Yue; Yu, Jianwen; Ruan, Zhiming; Chen, Chilai; Chen, Ran; Wang, Han; Liu, Youjiang; Wang, Xiaozhi; Li, Shan
2018-04-01
In conventional high-field asymmetric waveform ion mobility spectrometry signal acquisition, multi-cycle detection is time-consuming and somewhat limits the technique's scope for rapid field detection. In this study, a novel intelligent detection approach has been developed in which a threshold is set on the relative error of the α parameters, eliminating unnecessary time spent on detection. In this method, two full-spectrum scans are made in advance to obtain the estimated compensation voltage at different dispersion voltages, narrowing the whole scan area down to just the peak area(s) of interest. This intelligent detection method can reduce the detection time to 5-10% of that of the original full-spectrum scan in a single cycle.
Everatt, Kristoffer T.; Andresen, Leah; Somers, Michael J.
2014-01-01
The African lion (Panthera leo) has suffered drastic population and range declines over the last few decades and is listed by the IUCN as vulnerable to extinction. Conservation management requires reliable population estimates; however, these data are lacking for many of the continent's remaining populations. It is possible to estimate lion abundance using a trophic scaling approach. However, such inferences assume that a predator population is subject only to bottom-up regulation, and are thus likely to produce biased estimates in systems experiencing top-down anthropogenic pressures. Here we provide baseline data on the status of lions in a developing National Park in Mozambique that is impacted by humans and livestock. We compare a direct density estimate with an estimate derived from trophic scaling. We then use replicated detection/non-detection surveys to estimate the proportion of area occupied by lions, and hierarchical ranking of covariates to provide inferences on the relative contribution of prey resources and anthropogenic factors influencing lion occurrence. The direct density estimate was less than 1/3 of the estimate derived from prey resources (0.99 lions/100 km² vs. 3.05 lions/100 km²). The proportion of area occupied by lions was Ψ = 0.439 (SE = 0.121), or approximately 44% of a 2,400 km² sample of potential habitat. Although lions were strongly predicted by a greater probability of encountering prey resources, the greatest contributing factor to lion occurrence was a strong negative association with settlements. Finally, our empirical abundance estimate is approximately 1/3 of a published abundance estimate derived from opinion surveys. Altogether, our results describe a lion population held below resource-based carrying capacity by anthropogenic factors and highlight the limitations of trophic scaling and opinion surveys for estimating predator populations exposed to anthropogenic pressures.
Our study provides the first empirical quantification of a population that future change can be measured against. PMID:24914934
NASA Astrophysics Data System (ADS)
Calderón Bustillo, Juan; Salemi, Francesco; Dal Canton, Tito; Jani, Karan P.
2018-01-01
The sensitivity of gravitational wave searches for binary black holes is estimated via the injection and subsequent recovery of simulated gravitational wave signals in the detector data streams. When a search reports no detections, the estimated sensitivity is then used to place upper limits on the coalescence rate of the target source. In order to obtain correct sensitivity and rate estimates, the injected waveforms must be faithful representations of the real signals. To date, however, injected waveforms have neglected radiation modes of order higher than the quadrupole, potentially biasing sensitivity and coalescence rate estimates. In particular, higher-order modes are known to have a large impact on the gravitational waves emitted by intermediate-mass black hole binaries. In this work, we evaluate the impact of this approximation in the context of two search algorithms run by the LIGO Scientific Collaboration in their search for intermediate-mass black hole binaries in the O1 LIGO Science Run data: a matched-filter-based pipeline and a coherent unmodeled one. To this end, we estimate the sensitivity of both searches to simulated signals for nonspinning binaries including and omitting higher-order modes. We find that omission of higher-order modes leads to biases in the sensitivity estimates which depend on the masses of the binary, the search algorithm, and the required level of significance for detection. In addition, we compare the sensitivity of the two search algorithms across the studied parameter space. We conclude that the most recent LIGO-Virgo upper limits on the rate of coalescence of intermediate-mass black hole binaries are conservative for the case of highly asymmetric binaries. However, the tightest upper limits, placed for nearly equal-mass sources, remain unchanged due to the small contribution of higher modes to the corresponding sources.
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.
2017-01-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.
Ribeiro, T; Depres, S; Couteau, G; Pauss, A
2003-01-01
An alternative method for the estimation of nitrate and other nitrogen forms in vegetables is proposed. Nitrate can be directly estimated by UV spectrophotometry after an extraction step with water. The other nitrogen compounds are photo-oxidized into nitrate and then estimated by UV spectrophotometry. An oxidative solution of sodium persulfate and a Hg-UV lamp are used. Preliminary assays were performed with vegetables such as lettuce, spinach, artichokes, small peas, broccoli, carrots, and watercress; acceptable correlations between expected and experimental values of nitrate amounts were obtained, although the detection limit still needs to be lowered. The optimization of the method is underway.
Targeted Analyte Detection by Standard Addition Improves Detection Limits in MALDI Mass Spectrometry
Eshghi, Shadi Toghi; Li, Xingde; Zhang, Hui
2014-01-01
Matrix-assisted laser desorption/ionization has proven an effective tool for fast and accurate determination of many molecules. However, the detector sensitivity and chemical noise compromise the detection of many invaluable low-abundance molecules from biological and clinical samples. To challenge this limitation, we developed a targeted analyte detection (TAD) technique. In TAD, the target analyte is selectively elevated by spiking a known amount of that analyte into the sample, thereby raising its concentration above the noise level, where we take advantage of the improved sensitivity to detect the presence of the endogenous analyte in the sample. We assessed TAD on three peptides in simple and complex background solutions with various exogenous analyte concentrations in two MALDI matrices. TAD successfully improved the limit of detection (LOD) of target analytes when the target peptides were added to the sample in a concentration close to optimum concentration. The optimum exogenous concentration was estimated through a quantitative method to be approximately equal to the original LOD for each target. Also, we showed that TAD could achieve LOD improvements on an average of 3-fold in a simple and 2-fold in a complex sample. TAD provides a straightforward assay to improve the LOD of generic target analytes without the need for costly hardware modifications. PMID:22877355
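TAD is a variant of the standard-addition idea. The classical multi-point version, sketched below, fits signal against spiked concentration and reads the endogenous level off the x-intercept; TAD instead uses a single spike near the LOD, so this generic code only illustrates the underlying principle, and the numbers in the test are hypothetical.

```python
def standard_addition(spiked_conc, signals):
    """Classical standard-addition estimate: fit signal = m*c + b over the
    spiked concentrations by least squares; the endogenous concentration
    is the magnitude of the x-intercept, b/m (assumes linear response)."""
    n = len(spiked_conc)
    mx = sum(spiked_conc) / n
    my = sum(signals) / n
    m = sum((x - mx) * (y - my) for x, y in zip(spiked_conc, signals)) \
        / sum((x - mx) ** 2 for x in spiked_conc)
    b = my - m * mx
    return b / m  # endogenous concentration estimate
```

Because the spike is measured in the same matrix as the analyte, the method cancels matrix effects, the same property TAD exploits to lift a target analyte above the MALDI noise floor.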
Williams, Shannon D.; Aycock, Robert A.
2001-01-01
Arnold Air Force Base (AAFB) occupies about 40,000 acres in Coffee and Franklin Counties, Tennessee. Numerous site-specific ground-water contamination investigations have been conducted at designated solid waste management units (SWMUs) at AAFB. Several synthetic volatile organic compounds (VOCs), primarily chlorinated solvents, have been identified in groundwater samples collected from monitoring wells near SWMU 8 in the Spring Creek area. During April and May 2000, a study of the groundwater resources in the Spring Creek area was conducted to determine if VOCs from AAFB have affected local private water supplies and to advance understanding of the ground-water-flow system in this area. The study focused on sampling private wells located within the Spring Creek area that are used as a source of drinking water. Ground-water-flow directions were determined by measuring water levels in wells and constructing a potentiometric-surface map of the Manchester aquifer in the study area. Data were collected from a total of 35 private wells and 22 monitoring wells during the period of study. Depths to ground water were determined for 22 of the private wells and all 22 of the monitoring wells. The wells ranged in depth from 21 to 105 feet. Water-level altitudes ranged from 930 to 1,062 feet above sea level. Depths to water ranged from 8 to 83 feet below land surface. Water-quality samples were collected from 29 private wells which draw water from either gravel zones in the upper part of the Manchester aquifer, fractured bedrock in the lower part of the Manchester aquifer, or a combination of these two zones. Concentrations of 50 of the 55 VOCs analyzed for were less than method detection limits. Chloroform, acetone, chloromethane, 2-butanone, and tetrachloroethylene were detected in concentrations exceeding the method detection limits. Only chloroform and acetone were detected in concentrations equal to or exceeding reporting limits.
Chloroform was detected in a sample from one well at a concentration of 1.2 micrograms per liter (µg/L). Acetone was detected in a sample from another well at a concentration of 10 µg/L. Acetone also was detected in a duplicate sample from the same well at an estimated concentration of 7.2 µg/L, which is less than the reporting limit for acetone. The only contaminant of concern detected was tetrachloroethylene. Tetrachloroethylene was detected in only one sample, and this detection was at an estimated concentration below the reporting limit. None of the VOC concentrations exceeded drinking water maximum contaminant levels for public water systems.
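Concentrations reported as "below the detection limit," as above, are the nondetects discussed in the censoring literature; a toy simulation, under an assumed lognormal concentration distribution and an assumed detection limit, of how substituting a fixed value for nondetects can bias a naive mean:

```python
import random

# Illustrative simulation of substitution bias for nondetects: concentrations
# below an assumed detection limit are replaced by half that limit. The
# distribution and limit are assumptions for demonstration only.
random.seed(1)
detection_limit = 2.0
true = [random.lognormvariate(0.0, 1.0) for _ in range(10000)]
substituted = [x if x >= detection_limit else detection_limit / 2 for x in true]

true_mean = sum(true) / len(true)
sub_mean = sum(substituted) / len(substituted)
bias = sub_mean - true_mean   # positive here: substitution inflates the mean
```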
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem, and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology, and to compare with the non-parametric Trim and Fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and correcting publication bias under various situations.
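The truncated-distribution formulation can be sketched as follows, assuming a normal effect-size model and a hard publication cutoff (both illustrative choices, not the paper's exact models): effects below the cutoff are never observed, the naive mean of "published" effects is biased upward, and maximum likelihood on the truncated density recovers the underlying mean.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Toy truncation model: effects are "published" only if they exceed a cutoff.
# We recover the underlying mean by MLE on the left-truncated normal density.
rng = np.random.default_rng(0)
cutoff = 0.0
effects = rng.normal(loc=0.0, scale=1.0, size=50000)
published = effects[effects > cutoff]          # left-truncated sample

def neg_log_lik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    z = (published - mu) / sigma
    # normalizer of a normal truncated below at `cutoff`: P(X > cutoff)
    log_tail = norm.logsf((cutoff - mu) / sigma)
    return np.sum(np.log(sigma) + 0.5 * z**2 + log_tail)

res = minimize(neg_log_lik, x0=[published.mean(), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
naive_mean = published.mean()                  # biased upward by truncation
```

Under this setup the naive mean of the published effects is near 0.8 even though the true mean is 0, while the MLE recovers a mean near 0.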
Limits on radio emission from meteors using the MWA
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Hancock, Paul; Devillepoix, Hadrien A. R.; Wayth, Randall B.; Beardsley, A.; Crosse, B.; Emrich, D.; Franzen, T. M. O.; Gaensler, B. M.; Horsley, L.; Johnston-Hollitt, M.; Kaplan, D. L.; Kenney, D.; Morales, M. F.; Pallot, D.; Steele, K.; Tingay, S. J.; Trott, C. M.; Walker, M.; Williams, A.; Wu, C.; Ji, Jianghui; Ma, Yuehua
2018-04-01
Recently, low frequency, broadband radio emission has been observed accompanying bright meteors by the Long Wavelength Array (LWA). The broadband spectra between 20 and 60 MHz were captured for several events, while the spectral index (dependence of flux density on frequency, with S_ν ∝ ν^α) was estimated to be -4 ± 1 during the peak of meteor afterglows. Here we present a survey of meteor emission and other transient events using the Murchison Widefield Array (MWA) at 72-103 MHz. In our 322-hour survey, down to a 5σ detection threshold of 3.5 Jy/beam, no transient candidates were identified as intrinsic emission from meteors. We derived an upper limit of -3.7 (95% confidence limit) on the spectral index in our frequency range. We also report detections of other transient events, like reflected FM broadcast signals from small satellites, conclusively demonstrating the ability of the MWA to detect and track space debris on scales as small as 0.1 m in low Earth orbits.
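A spectral index in the power-law relation S_ν ∝ ν^α follows directly from two flux-density measurements; a minimal sketch with made-up flux values:

```python
import math

# Spectral index alpha from two flux-density measurements, using the
# power-law relation S_nu ∝ nu**alpha quoted in the abstract; the example
# flux values are made up for illustration.
def spectral_index(s1, nu1, s2, nu2):
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# E.g. a steep spectrum: flux falling from 100 Jy at 20 MHz to 1.7 Jy at 60 MHz
# corresponds to alpha of roughly -3.7.
alpha = spectral_index(100.0, 20e6, 1.7, 60e6)
```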
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) method for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method, begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick" and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute error in the rates of false positives and false negatives was only 0.3%.
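Currie's decision level in the simplest homoscedastic Gaussian case can be checked by Monte Carlo as below; this is a simplified sketch, not the paper's heteroscedastic WLS treatment:

```python
import numpy as np

# Monte Carlo check of the false-positive rate at Currie's decision level for
# a homoscedastic Gaussian blank: a blank is declared "detected" when it
# exceeds z_crit * sigma0, which should happen ~5% of the time by design.
rng = np.random.default_rng(42)
sigma0 = 1.0                       # blank noise standard deviation (assumed)
z_crit = 1.645                     # one-sided 5% critical value
decision_level = z_crit * sigma0

blanks = rng.normal(0.0, sigma0, size=1_000_000)
false_positive_rate = np.mean(blanks > decision_level)   # close to 0.05
```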
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, S. R.; Vallisneri, M.; Ellis, J. A.
2016-03-01
Decade-long timing observations of arrays of millisecond pulsars have placed highly constraining upper limits on the amplitude of the nanohertz gravitational-wave stochastic signal from the mergers of supermassive black hole binaries (∼10^−15 strain at f = 1 yr^−1). These limits suggest that binary merger rates have been overestimated, or that environmental influences from nuclear gas or stars accelerate orbital decay, reducing the gravitational-wave signal at the lowest, most sensitive frequencies. This prompts the question whether nanohertz gravitational waves (GWs) are likely to be detected in the near future. In this Letter, we answer this question quantitatively using simple statistical estimates, deriving the range of true signal amplitudes that are compatible with current upper limits, and computing expected detection probabilities as a function of observation time. We conclude that small arrays consisting of the pulsars with the least timing noise, which yield the tightest upper limits, have discouraging prospects of making a detection in the next two decades. By contrast, we find large arrays are crucial to detection because the quadrupolar spatial correlations induced by GWs can be well sampled by many pulsar pairs. Indeed, timing programs that monitor a large and expanding set of pulsars have an ∼80% probability of detecting GWs within the next 10 years, under assumptions on merger rates and environmental influences ranging from optimistic to conservative. Even in the extreme case where 90% of binaries stall before merger and environmental coupling effects diminish low-frequency gravitational-wave power, detection is delayed by at most a few years.
NASA Astrophysics Data System (ADS)
Gross, W.; Boehler, J.; Twizer, K.; Kedem, B.; Lenz, A.; Kneubuehler, M.; Wellig, P.; Oechslin, R.; Schilling, H.; Rotman, S.; Middelmann, W.
2016-10-01
Hyperspectral remote sensing data can be used for civil and military applications to robustly detect and classify target objects. High spectral resolution of hyperspectral data can compensate for the comparatively low spatial resolution, which allows for detection and classification of small targets, even below image resolution. Hyperspectral data sets are prone to considerable spectral redundancy, affecting and limiting data processing and algorithm performance. As a consequence, data reduction strategies become increasingly important, especially in view of near-real-time data analysis. The goal of this paper is to analyze different strategies for hyperspectral band selection algorithms and their effect on subpixel classification for different target and background materials. Airborne hyperspectral data is used in combination with linear target simulation procedures to create a representative amount of target-to-background ratios for evaluation of detection limits. Data from two different airborne hyperspectral sensors, AISA Eagle and Hawk, are used to evaluate transferability of band selection when using different sensors. The same target objects were recorded to compare the calculated detection limits. To determine subpixel classification results, pure pixels from the target materials are extracted and used to simulate mixed pixels with selected background materials. Target signatures are linearly combined with different background materials in varying ratios. The commonly used Adaptive Coherence Estimator (ACE) classification algorithm is used to compare the detection limit for the original data with several band selection and data reduction strategies. The evaluation of the classification results is done by assuming a fixed false alarm ratio and calculating the mean target-to-background ratio of correctly detected pixels. The results allow drawing conclusions about specific band combinations for certain target and background combinations.
Additionally, generally useful wavelength ranges are determined, and the optimal number of principal components is analyzed.
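The linear target simulation described above can be sketched as follows, with random spectra standing in for real library signatures:

```python
import numpy as np

# Linear mixing of a target spectrum into a background spectrum at a given
# target fill fraction, as used to simulate subpixel targets; the spectra
# here are random stand-ins for real library signatures.
rng = np.random.default_rng(0)
bands = 100
target = rng.random(bands)
background = rng.random(bands)

def mix(target, background, fraction):
    """Simulated subpixel spectrum: fraction*target + (1-fraction)*background."""
    return fraction * target + (1.0 - fraction) * background

pixel = mix(target, background, 0.1)   # a 10% subpixel target
```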
Mali, Ivana; Duarte, Adam; Forstner, Michael R J
2018-01-01
Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter (Pseudemys gorzugi) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species.
Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...
Bioaccumulation Study at Puffer Pond
1994-10-01
organism if the rate of intake of the pollutant is greater than the rate of excretion and/or metabolism. The result is an increase in body burden. [Table excerpt: Parameters for Fish Tissue Samples — estimated method detection limits (pg/g) for organophosphorus pesticides: atrazine 2.5; Vapona 4.5; malathion …]
Spatially explicit dynamic N-mixture models
Zhao, Qing; Royle, Andy; Boomer, G. Scott
2017-01-01
Knowledge of demographic parameters such as survival, reproduction, emigration, and immigration is essential to understand metapopulation dynamics. Traditionally the estimation of these demographic parameters requires intensive data from marked animals. The development of dynamic N-mixture models makes it possible to estimate demographic parameters from count data of unmarked animals, but the original dynamic N-mixture model does not distinguish emigration and immigration from survival and reproduction, limiting its ability to explain important metapopulation processes such as movement among local populations. In this study we developed a spatially explicit dynamic N-mixture model that estimates survival, reproduction, emigration, local population size, and detection probability from count data under the assumption that movement only occurs among adjacent habitat patches. Simulation studies showed that the inference of our model depends on detection probability, local population size, and the implementation of robust sampling design. Our model provides reliable estimates of survival, reproduction, and emigration when detection probability is high, regardless of local population size or the type of sampling design. When detection probability is low, however, our model only provides reliable estimates of survival, reproduction, and emigration when local population size is moderate to high and robust sampling design is used. A sensitivity analysis showed that our model is robust against the violation of the assumption that movement only occurs among adjacent habitat patches, suggesting wide applications of this model. Our model can be used to improve our understanding of metapopulation dynamics based on count data that are relatively easy to collect in many systems.
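The observation model underlying N-mixture approaches can be sketched as below; the abundance and detection values are illustrative, and the sketch omits the paper's spatial dynamics:

```python
import numpy as np

# Minimal sketch of the N-mixture observation model: latent local abundance
# is Poisson, and each repeated survey detects individuals binomially.
# Parameter values are illustrative only.
rng = np.random.default_rng(7)
n_sites, n_surveys = 200, 4
lam, p = 5.0, 0.5                       # mean abundance and detection prob.

N = rng.poisson(lam, size=n_sites)      # latent abundance per site
counts = rng.binomial(N[:, None], p, size=(n_sites, n_surveys))
# Each count is marginally Poisson(lam * p), so the mean count is ~2.5 here.
```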
NASA Astrophysics Data System (ADS)
Zhao, Mingkang; Wi, Hun; Lee, Eun Jung; Woo, Eung Je; In Oh, Tong
2014-10-01
Electrical impedance imaging has the potential to detect an early stage of breast cancer because cancerous tissue exhibits higher admittivity values than normal breast tissue. The tumor size and extent of axillary lymph node involvement are important parameters for evaluating the breast cancer survival rate, and anomaly characterization is required to distinguish a malignant tumor from a benign tumor. To overcome the limitations of breast cancer detection using impedance measurement probes, we developed a high-density trans-admittance mammography (TAM) system with a 60 × 60 electrode array and produced trans-admittance maps obtained at several frequency pairs. We applied an anomaly detection algorithm to the high-density TAM system to estimate the volume and position of breast tumors. We tested four different sizes of anomaly with three different conductivity contrasts at four different depths. From multifrequency trans-admittance maps, we can readily observe the transversal position and estimate volume and depth. In particular, the depth estimates were accurate and independent of anomaly size and conductivity contrast when applying a new formula using the Laplacian of the trans-admittance map. The volume estimate depended on the conductivity contrast between anomaly and background in the breast phantom. We characterized two test anomalies using frequency-difference trans-admittance data to eliminate the dependency on anomaly position and size. We confirmed the anomaly detection and characterization algorithm with the high-density TAM system on bovine breast tissue. Both results show the feasibility of detecting the size and position of an anomaly and of tissue characterization for breast cancer screening.
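The abstract does not give the depth formula itself, but its core operation, a discrete Laplacian of a 2-D map, can be sketched on a synthetic anomaly:

```python
import numpy as np

# Discrete Laplacian of a 2-D map with a 5-point stencil, the kind of
# operation the abstract applies to trans-admittance maps; the Gaussian
# "anomaly" below is synthetic, not measured data.
def laplacian(field):
    lap = np.zeros_like(field)
    lap[1:-1, 1:-1] = (field[2:, 1:-1] + field[:-2, 1:-1] +
                       field[1:-1, 2:] + field[1:-1, :-2] -
                       4.0 * field[1:-1, 1:-1])
    return lap

x, y = np.meshgrid(np.linspace(-1, 1, 60), np.linspace(-1, 1, 60))
anomaly_map = np.exp(-((x - 0.2)**2 + (y + 0.1)**2) / 0.02)
lap = laplacian(anomaly_map)   # strongly negative at the anomaly peak
```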
Deep learning-based depth estimation from a synthetic endoscopy image training set
NASA Astrophysics Data System (ADS)
Mahmood, Faisal; Durr, Nicholas J.
2018-03-01
Colorectal cancer is the fourth leading cause of cancer deaths worldwide. The detection and removal of premalignant lesions through an endoscopic colonoscopy is the most effective way to reduce colorectal cancer mortality. Unfortunately, conventional colonoscopy has an almost 25% polyp miss rate, in part due to the lack of depth information and contrast of the surface of the colon. Estimating depth using conventional hardware and software methods is challenging in endoscopy due to limited endoscope size and deformable mucosa. In this work, we use a joint deep learning and graphical model-based framework for depth estimation from endoscopy images. Since depth is an inherently continuous property of an object, it can easily be posed as a continuous graphical learning problem. Unlike previous approaches, this method does not require hand-crafted features. Large amounts of augmented data are required to train such a framework. Since there is limited availability of colonoscopy images with ground-truth depth maps and colon texture is highly patient-specific, we generated training images using a synthetic, texture-free colon phantom to train our models. Initial results show that our system can estimate depths for phantom test data with a relative error of 0.164. The resulting depth maps could prove valuable for 3D reconstruction and automated Computer Aided Detection (CAD) to assist in identifying lesions.
Detection of single nano-defects in photonic crystals between crossed polarizers.
Grepstad, Jon Olav; Kaspar, Peter; Johansen, Ib-Rune; Solgaard, Olav; Sudbø, Aasmund
2013-12-16
We investigate, by simulations and experiments, the light scattering of small particles trapped in photonic crystal membranes supporting guided resonance modes. Our results show that, due to amplified Rayleigh small-particle scattering, such membranes can be utilized to make a sensor that can detect single nano-particles. We have designed a biomolecule sensor that uses cross-polarized excitation and detection for increased sensitivity. Based on Rayleigh scattering theory and simulation results, the detection limit of the fabricated sensor is estimated to be 26 nm, corresponding to the size of a single virus. The sensor can potentially be made both cheap and compact, to facilitate use at point-of-care.
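The difficulty of pushing such detection limits downward comes from the sixth-power dependence of Rayleigh scattering on particle diameter; a quick illustrative ratio:

```python
# Rayleigh scattering scales as the sixth power of particle diameter, which
# is what makes small-particle detection hard; a quick illustrative ratio.
def rayleigh_intensity_ratio(d1, d2):
    """Relative scattered intensity of two small particles of diameter d1, d2."""
    return (d1 / d2) ** 6

# A 26 nm particle scatters 64x less light than a 52 nm particle.
ratio = rayleigh_intensity_ratio(26.0, 52.0)   # 1/64
```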
Hypersensitivity to Cold Stimuli in Symptomatic Contact Lens Wearers
Situ, Ping; Simpson, Trefford; Begley, Carolyn
2016-01-01
Purpose To examine the cooling thresholds and the estimated sensation magnitude at stimulus detection in controls and symptomatic and asymptomatic contact lens (CL) wearers, in order to determine whether detection thresholds depend on the presence of symptoms of dryness and discomfort. Methods 49 adapted CL wearers and 15 non-lens wearing controls had room temperature pneumatic thresholds measured using a custom Belmonte esthesiometer, during Visits 1 and 2 (Baseline CL), Visit 3 (2 weeks no CL wear) and Visit 4 (2 weeks after resuming CL wear). CL wearers were subdivided into symptomatic and asymptomatic groups based on comfortable wearing time (CWT) and CLDEQ-8 score (<8 hours CWT and ≥14 CLDEQ-8 stratified the symptom groups). Detection thresholds were estimated using an ascending method of limits and each threshold was the average of the three first-reported flow rates. The magnitude of intensity, coolness, irritation and pain at detection of the stimulus were estimated using a 1-100 scale (1 very mild, 100 very strong). Results In all measurement conditions, the symptomatic CL wearers were the most sensitive, the asymptomatic CL wearers were the least sensitive and the control group was between the two CL wearing groups (group factor p < 0.001, post hoc asymptomatic vs. symptomatic group, all p’s < 0.015). Similar patterns were found for the estimated magnitude of intensity and irritation (group effect p=0.027 and 0.006 for intensity and irritation, respectively) but not for cooling (p>0.05) at detection threshold. Conclusions Symptomatic CL wearers have higher cold detection sensitivity and report greater intensity and irritation sensation at stimulus detection than the asymptomatic wearers. Room temperature pneumatic esthesiometry may help to better understand the process of sensory adaptation to CL wear. PMID:27046090
Transfer of estradiol to human milk. [Radioimmunoassay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nilsson, S.; Nygren, K.G.; Johansson, E.D.B.
1978-11-15
A radioimmunoassay for the measurement of estradiol in human milk is evaluated. The detection limit was found to be 25 pg of estradiol per milliliter of milk. In milk samples collected from four lactating women during three to four months and from one pregnant and lactating woman, the concentration of estradiol was found to be below the detection limit of the assay. When six lactating women were given vaginal suppositories containing 50 or 100 mg of estradiol, it was possible to estimate the estradiol concentration in milk. A ratio of transfer of estradiol from plasma to milk during physiologic conditions is calculated to be less than 100:10.
From nature to MEMS: towards the detection-limit of crickets' hair sensors
NASA Astrophysics Data System (ADS)
Dagamseh, A. M. K.
2013-05-01
Crickets use highly sensitive mechanoreceptor hairs to detect approaching spiders. The high sensitivity of these hairs enables them to perceive tiny air movements that are only just distinguishable from noise. This forms our source of inspiration for designing sensitive arrays of artificial hair sensors for flow pattern observation, i.e., a "flow camera". Realizing such highly sensitive hair sensors requires designs with low thermo-mechanical noise to match the detection limit of crickets' hairs. Here we investigate the damping factor in our artificial hair sensor using different models, as it is the source of the thermo-mechanical noise in MEMS structures. The results show that the damping factor estimated in air is in the range of 10^−12 N·m/(rad·s^−1), which translates into a 52 μm/s threshold flow velocity.
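A standard Nyquist-relation estimate of the thermo-mechanical torque noise for the quoted damping can be sketched as follows; the temperature and bandwidth are assumptions, not values from the paper:

```python
import math

# Nyquist-relation estimate of the rms thermo-mechanical torque noise,
# sqrt(4*kB*T*R*bandwidth), for a rotational damping R. R is taken from the
# abstract; the temperature and bandwidth are illustrative assumptions.
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                  # temperature, K (assumption)
R = 1e-12                  # rotational damping, N*m/(rad/s), from the abstract
bandwidth = 1000.0         # detection bandwidth, Hz (assumption)

torque_noise = math.sqrt(4.0 * kB * T * R * bandwidth)   # rms torque, N*m
```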
A 20-year catalog comparing smooth and sharp estimates of slow slip events in Cascadia
NASA Astrophysics Data System (ADS)
Molitors Bergman, E. G.; Evans, E. L.; Loveless, J. P.
2017-12-01
Slow slip events (SSEs) are a form of aseismic strain release at subduction zones resulting in a temporary reversal in interseismic upper plate motion over a period of weeks, frequently accompanied in time and space by seismic tremor at the Cascadia subduction zone. Locating SSEs spatially along the subduction zone interface is essential to understanding the relationship between SSEs, earthquakes, and tremor and assessing megathrust earthquake hazard. We apply an automated slope comparison-based detection algorithm to single continuously recording GPS stations to determine dates and surface displacement vectors of SSEs, then apply network-based filters to eliminate false detections. The main benefits of this algorithm are its ability to detect SSEs while they are occurring and track the spatial migration of each event. We invert geodetic displacement fields for slip distributions on the subduction zone interface for SSEs between 1997 and 2017 using two estimation techniques: spatial smoothing and total variation regularization (TVR). Smoothing has been frequently used in determining the location of interseismic coupling, earthquake rupture, and SSE slip and yields spatially coherent but inherently blurred solutions. TVR yields compact, sharply bordered slip estimates of similar magnitude and along-strike extent to previously studied events, while fitting the constraining geodetic data as well as corresponding smoothing-based solutions. Slip distributions estimated using TVR have up-dip limits that align well with down-dip limits of interseismic coupling on the plate interface and spatial extents that approximately correspond to the distribution of tremor concurrent with each event. TVR gives a unique view of slow slip distributions that can contribute to understanding of the physical properties that govern megathrust slip processes.
Wagner, John H; Miskelly, Gordon M
2003-05-01
The combination of photographs taken at wavelengths at and bracketing the peak of a narrow absorbance band can lead to enhanced visualization of the substance causing the narrow absorbance band. This concept can be used to detect putative bloodstains by dividing a linear photographic image taken at or near 415 nm by an image obtained by averaging linear photographs taken at or near 395 and 435 nm. Nonlinear images can also be background corrected by substituting subtraction for the division. This paper details experimental applications and limitations of this technique, including wavelength selection of the illuminant and at the camera. Characterization of a digital camera to be used in such a study is also detailed. Detection limits for blood using the three-wavelength correction method under optimum conditions have been determined to be as low as a 1 in 900 dilution, although on strongly patterned substrates blood diluted more than twenty-fold is difficult to detect. Use of only the 435 nm photograph to estimate the background in the 415 nm image led to a twofold improvement in detection limit on unpatterned substrates compared with the three-wavelength method, with the particular camera and lighting system used, but it gave poorer background correction on patterned substrates.
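The three-wavelength correction described above can be sketched on synthetic images; the stain position and absorbance values below are made up for illustration:

```python
import numpy as np

# The three-wavelength correction from the abstract: divide the 415 nm image
# by the average of the 395 nm and 435 nm images, so a narrow absorbance band
# at 415 nm (e.g. blood's Soret band) stands out against the substrate.
def three_wavelength_ratio(img_415, img_395, img_435):
    background = (img_395 + img_435) / 2.0
    return img_415 / background

rng = np.random.default_rng(0)
base = rng.uniform(0.5, 1.0, size=(32, 32))   # patterned substrate
img_395, img_435 = base.copy(), base.copy()
img_415 = base.copy()
img_415[10:14, 10:14] *= 0.6                  # simulated stain absorbs at 415 nm

ratio = three_wavelength_ratio(img_415, img_395, img_435)
# The substrate pattern divides out (~1.0 everywhere except the stain, ~0.6).
```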
N-mixture models for estimating population size from spatially replicated counts
Royle, J. Andrew
2004-01-01
Spatial replication is a common theme in count surveys of animals. Such surveys often generate sparse count data from which it is difficult to estimate population size while formally accounting for detection probability. In this article, I describe a class of models (N-mixture models) which allow for estimation of population size from such data. The key idea is to view site-specific population sizes, N, as independent random variables distributed according to some mixing distribution (e.g., Poisson). Prior parameters are estimated from the marginal likelihood of the data, having integrated over the prior distribution for N. Carroll and Lombard (1985, Journal of the American Statistical Association 80, 423-426) proposed a class of estimators based on mixing over a prior distribution for detection probability. Their estimator can be applied in limited settings, but is sensitive to prior parameter values that are fixed a priori. Spatial replication provides additional information regarding the parameters of the prior distribution on N that is exploited by the N-mixture models and which leads to reasonable estimates of abundance from sparse data. A simulation study demonstrates superior operating characteristics (bias, confidence interval coverage) of the N-mixture estimator compared to the Carroll and Lombard estimator. Both estimators are applied to point count data on six species of birds, illustrating the sensitivity to choice of prior on p and substantially different estimates of abundance as a consequence.
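The key computation, the marginal likelihood of repeated counts at a site after integrating over the prior on N, can be sketched didactically as a truncated sum; the parameter values are illustrative:

```python
import math

# Marginal likelihood of repeated counts at one site under the N-mixture
# model: sum over latent abundance N of Binomial(y | N, p) * Poisson(N | lam),
# truncated at a large upper bound. A didactic sketch of the key idea.
def site_marginal_likelihood(counts, lam, p, n_max=200):
    total = 0.0
    for n in range(max(counts), n_max + 1):
        # Poisson pmf in log space to avoid overflow for large n or lam
        log_poisson = -lam + n * math.log(lam) - math.lgamma(n + 1)
        binom = 1.0
        for y in counts:
            binom *= math.comb(n, y) * p**y * (1 - p)**(n - y)
        total += math.exp(log_poisson) * binom
    return total

lik = site_marginal_likelihood([2, 3, 1], lam=4.0, p=0.5)
```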
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Achieving Real-Time Tracking Mobile Wireless Sensors Using SE-KFA
NASA Astrophysics Data System (ADS)
Kadhim Hoomod, Haider, Dr.; Al-Chalabi, Sadeem Marouf M.
2018-05-01
Nowadays, real-time performance is very important in different fields, such as auto transport control, some medical applications, celestial body tracking, controlling agent movements, and detection and monitoring. It can be achieved with different kinds of detection devices, called "sensors", such as infrared sensors, ultrasonic sensors, radar in general, and laser light sensors. The ultrasonic sensor is the most fundamental of these, and it poses greater challenges than the others, especially when navigating (as an agent). In this paper, ultrasonic sensors detect and delimit an area by themselves and then navigate inside it, estimating position in real time using the speed equation together with the Kalman filter algorithm as an intelligent estimation algorithm. The error is then calculated against the actual tracking rate. This paper used the Ultrasonic Sensor HC-SR04 with an Arduino UNO as microcontroller.
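The abstract does not give the filter equations, so the idea can be illustrated with a generic one-dimensional constant-velocity Kalman filter over noisy ultrasonic range readings (HC-SR04-class sensors are good to roughly ±2-3 cm); the sample interval and noise parameters below are assumptions.

```python
import numpy as np

def kalman_track(z, dt=0.1, q=0.5, r=4.0):
    """1-D constant-velocity Kalman filter for noisy ultrasonic ranges (cm).
    z: range measurements; q: process-noise scale; r: measurement variance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])           # initialize at first reading
    P = np.eye(2) * 10.0
    out = []
    for zk in z:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update with the new range reading
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[zk]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

Feeding the filter ranges from a target receding at constant speed shows the smoothed track converging on the true trajectory despite the sensor noise.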
Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.
Saiki, Jun; Miyatsuji, Hirofumi
2009-03-23
Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasui, Chikako; Kobayashi, Naoto; Izumi, Natsuko
To study star formation in low-metallicity environments ([M/H] ∼ −1 dex), we obtained deep near-infrared (NIR) images of Sh 2-207 (S207), which is an H ii region in the outer Galaxy with a spectroscopically determined metallicity of [O/H] ≃ −0.8 dex. We identified a young cluster in the western region of S207 with a limiting magnitude of K_S = 19.0 mag (10σ) that corresponds to a mass detection limit of ≲0.1 M_⊙ and enables the comparison of star-forming properties under low metallicity with those of the solar neighborhood. From the fitting of the K-band luminosity function (KLF), the age and distance of the S207 cluster are estimated at 2–3 Myr and ∼4 kpc, respectively. The estimated age is consistent with the small extinctions of stars in the cluster (A_V ∼ 3 mag) and the non-detection of molecular clouds. The reasonably good fit between the observed KLF and the model KLF suggests that the underlying initial mass function (IMF) of the cluster down to the detection limit is not significantly different from typical IMFs at solar metallicity. From the fraction of stars with NIR excesses, a low disk fraction (<10%) in the cluster despite its relatively young age is suggested, as we had previously proposed.
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Lee, Sangyeop; Choo, Jaebum
2016-06-01
A novel surface-enhanced Raman scattering (SERS)-based lateral flow immunoassay (LFA) biosensor was developed to resolve problems associated with conventional LFA strips (e.g., limits in quantitative analysis and low sensitivity). In our SERS-based biosensor, Raman reporter-labeled hollow gold nanospheres (HGNs) were used as SERS detection probes instead of gold nanoparticles. With the proposed SERS-based LFA strip, the presence of a target antigen can be identified through a colour change in the test zone. Furthermore, highly sensitive quantitative evaluation is possible by measuring SERS signals from the test zone. To verify the feasibility of the SERS-based LFA strip platform, an immunoassay of staphylococcal enterotoxin B (SEB) was performed as a model reaction. The limit of detection (LOD) for SEB, as determined with the SERS-based LFA strip, was estimated to be 0.001 ng mL-1. This value is approximately three orders of magnitude more sensitive than that achieved with the corresponding ELISA-based method. The proposed SERS-based LFA strip sensor shows significant potential for the rapid and sensitive detection of target markers in a simplified manner. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07243c
Results and evaluation of a survey to estimate Pacific walrus population size, 2006
Speckman, Suzann G.; Chernook, Vladimir I.; Burn, Douglas M.; Udevitz, Mark S.; Kochnev, Anatoly A.; Vasilev, Alexander; Jay, Chadwick V.; Lisovsky, Alexander; Fischbach, Anthony S.; Benter, R. Bradley
2011-01-01
In spring 2006, we conducted a collaborative U.S.-Russia survey to estimate abundance of the Pacific walrus (Odobenus rosmarus divergens). The Bering Sea was partitioned into survey blocks, and a systematic random sample of transects within a subset of the blocks was surveyed with airborne thermal scanners using standard strip-transect methodology. Counts of walruses in photographed groups were used to model the relation between thermal signatures and the number of walruses in groups, which was used to estimate the number of walruses in groups that were detected by the scanner but not photographed. We also modeled the probability of thermally detecting various-sized walrus groups to estimate the number of walruses in groups undetected by the scanner. We used data from radio-tagged walruses to adjust on-ice estimates to account for walruses in the water during the survey. The estimated area of available habitat averaged 668,000 km2 and the area of surveyed blocks was 318,204 km2. The number of Pacific walruses within the surveyed area was estimated at 129,000 with 95% confidence limits of 55,000 to 507,000 individuals. This value can be used by managers as a minimum estimate of the total population size.
Dual linear structured support vector machine tracking method via scale correlation filter
NASA Astrophysics Data System (ADS)
Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen
2018-01-01
Adaptive tracking-by-detection methods based on structured support vector machine (SVM) performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy of object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, comprising a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and scale estimation. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
Babamoradi, Hamid; van den Berg, Frans; Rinnan, Åsmund
2016-02-18
In Multivariate Statistical Process Control (MSPC), when a fault is expected or detected in the process, contribution plots are essential for operators and optimization engineers in identifying those process variables that were affected by or might be the cause of the fault. The traditional way of interpreting a contribution plot is to examine the largest contributing process variables as the most probable faulty ones. This might result in false readings purely due to differences in natural variation, measurement uncertainties, etc. It is more reasonable to compare variable contributions for new process runs with historical results achieved under Normal Operating Conditions (NOC), where confidence limits (CLs) for contribution plots estimated from training data are used to judge new production runs. Asymptotic methods cannot provide confidence limits for contribution plots, leaving re-sampling methods as the only option. We suggest bootstrap re-sampling to build confidence limits for all contribution plots in online PCA-based MSPC. The new strategy to estimate CLs is compared to previously reported CLs for contribution plots. An industrial batch-process dataset was used to illustrate the concepts. Copyright © 2016 Elsevier B.V. All rights reserved.
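The bootstrap idea can be sketched minimally: resample the NOC training runs with replacement, recompute a residual-space (squared-error) contribution per variable for each resample, and take a percentile as the upper confidence limit. This is a sketch under assumptions; the authors' exact contribution definition and bootstrap scheme may differ.

```python
import numpy as np

def contribution_limits(X, n_boot=500, alpha=0.05, n_comp=2, seed=0):
    """Bootstrap upper confidence limits for PCA residual contributions.
    X: NOC training data (runs x variables). Returns one upper CL per variable."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    mu, sd = X.mean(0), X.std(0, ddof=1)
    Xs = (X - mu) / sd
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:n_comp].T                    # loadings of the retained components
    boot = np.empty((n_boot, m))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)      # resample NOC runs with replacement
        Xb = (X[idx] - mu) / sd
        E = Xb - Xb @ P @ P.T            # residual (not-modelled) part
        boot[b] = (E**2).mean(0)         # mean squared contribution per variable
    return np.quantile(boot, 1 - alpha, axis=0)
```

A new run whose per-variable residual contributions exceed these limits would be flagged, rather than simply reading off the largest bars in the plot.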
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data-perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply-censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
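The flipping trick that converts left-censored concentrations into the right-censored "survival" data K-M expects can be sketched as follows. This is an illustrative sketch of the standard method described above, not the authors' S-language routines.

```python
import numpy as np

def km_mean_left_censored(values, censored):
    """Kaplan-Meier estimate of the mean for left-censored data.
    values:   measured concentrations; for nondetects, the detection limit.
    censored: True where the value is a nondetect (< detection limit).
    Left-censored data are flipped (y = M - x) into right-censored data,
    the standard K-M product-limit estimator is applied, and the mean of
    the flipped curve is flipped back."""
    x = np.asarray(values, float)
    cens = np.asarray(censored, bool)
    M = x.max() + 1.0                  # flip constant, larger than any value
    y = M - x
    order = np.lexsort((cens, y))      # sort by y; events precede censored ties
    y, cens = y[order], cens[order]
    n = len(y)
    S, mean_y, prev_t = 1.0, 0.0, 0.0
    for i in range(n):
        if not cens[i]:                # detected value: survival curve drops
            mean_y += S * (y[i] - prev_t)
            S *= (n - i - 1) / (n - i)
            prev_t = y[i]
        # censored values only leave the risk set; if the smallest x is a
        # nondetect, its mass is placed at the smallest detected value
        # (a "restricted" mean, slightly biased high)
    return M - mean_y
```

With no censoring the estimator reduces to the ordinary sample mean, which is one way to check an implementation.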
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated 850 IC Professional (Metrohm) system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that it meets the requirements of current water legislation. The main analytical characteristics were estimated for each analyte: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was assessed by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. Measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
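The abstract does not state how the limits of detection and quantification were computed; a common choice for chromatographic validation (the ICH-style calibration approach, an assumption here, not necessarily the authors' procedure) derives them from the calibration regression as LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope.

```python
import numpy as np

def lod_loq(conc, signal):
    """LOD and LOQ from a linear calibration (ICH-style: 3.3*sigma/S, 10*sigma/S).
    conc: standard concentrations; signal: detector response (e.g. peak area)."""
    slope, intercept = np.polyfit(conc, signal, 1)
    resid = np.asarray(signal) - (slope * np.asarray(conc) + intercept)
    sigma = resid.std(ddof=2)          # sd of the regression residuals
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```

For a perfectly linear calibration the residuals vanish and both limits go to zero; real chromatographic data give small positive values.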
Continuous-variable quantum probes for structured environments
NASA Astrophysics Data System (ADS)
Bina, Matteo; Grasselli, Federico; Paris, Matteo G. A.
2018-01-01
We address parameter estimation for structured environments and suggest an effective estimation scheme based on continuous-variable quantum probes. In particular, we investigate the use of a single bosonic mode as a probe for Ohmic reservoirs, and obtain the ultimate quantum limits to the precise estimation of their cutoff frequency. We assume the probe prepared in a Gaussian state and determine the optimal working regime, i.e., the conditions for the maximization of the quantum Fisher information in terms of the initial preparation, the reservoir temperature, and the interaction time. Upon investigating the Fisher information of feasible measurements, we arrive at a remarkably simple result: homodyne detection of canonical variables allows one to achieve the ultimate quantum limit to precision under suitable, mild conditions. Finally, exploiting a perturbative approach, we find the invariant sweet spots of the (tunable) characteristic frequency of the probe, which drive the probe towards the optimal working regime.
Koneff, M.D.; Royle, J. Andrew; Forsell, D.J.; Wortham, J.S.; Boomer, G.S.; Perry, M.C.
2005-01-01
Survey design for wintering scoters (Melanitta sp.) and other sea ducks that occur in offshore waters is challenging because these species have large ranges, are subject to distributional shifts among years and within a season, and can occur in aggregations. Interest in winter sea duck population abundance surveys has grown in recent years. This interest stems from concern over the population status of some sea ducks, limitations of extant breeding waterfowl survey programs in North America and logistical challenges and costs of conducting surveys in northern breeding regions, high winter area philopatry in some species and potential conservation implications, and increasing concern over offshore development and other threats to sea duck wintering habitats. The efficiency and practicality of statistically rigorous monitoring strategies for mobile, aggregated wintering sea duck populations have not been sufficiently investigated. This study evaluated a 2-phase adaptive stratified strip transect sampling plan to estimate wintering population size of scoters, long-tailed ducks (Clangula hyemalis), and other sea ducks and provide information on distribution. The sampling plan results in an optimal allocation of a fixed sampling effort among offshore strata in the U.S. mid-Atlantic coast region. Phase 1 transect selection probabilities were based on historic distribution and abundance data, while Phase 2 selection probabilities were based on observations made during Phase 1 flights. Distance sampling methods were used to estimate detection rates. Environmental variables thought to affect detection rates were recorded during the survey, and post-stratification and covariate modeling were investigated to reduce the effect of heterogeneity on detection estimation. We assessed cost-precision tradeoffs under a number of fixed-cost sampling scenarios using Monte Carlo simulation.
We discuss advantages and limitations of this sampling design for estimating wintering sea duck abundance and mapping distribution and suggest improvements for future surveys.
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics.
Williams, Perry J; Hooten, Mevin B; Womble, Jamie N; Esslinger, George G; Bower, Michael R; Hefley, Trevor J
2017-02-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska. © 2016 by the Ecological Society of America.
WE-H-207A-09: Theoretical Limits to Molecular Biomarker Detection Using Magnetic Nanoparticles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weaver, J; Geisel School of Medicine, Dartmouth College, Hanover, NH
Purpose: Estimate the limits of molecular biomarker detection using magnetic nanoparticle methods like in vivo ELISA. Methods: Magnetic nanoparticles in an alternating magnetic field produce a magnetization that can be detected at exceedingly low levels because the signal at the harmonic frequencies is uniquely produced by the nanoparticles. Because the magnetization can also be used to characterize the nanoparticle rotational freedom, the bound state can be found. If the nanoparticles are coated with molecules that bind the desired biomarker, the rotational freedom reflects the biomarker concentration. The irreducible noise limit is the thermal noise, or Johnson noise, of the tissue, and the contrast that can be measured must be larger than that limit. The contrast produced is a function of the applied field and depends strongly on nanoparticle volume. We have estimated the contrast using a Langevin function of a single composite variable to approximate the full stochastic Langevin equation for nanoparticle dynamics. Results: The thermal noise for a bandwidth reasonable for spectroscopy suggests mid-zeptomolar (10^-21) to low-attomolar (10^-18) concentrations can be measured in a volume that is 10 cm in scale. The suggested sensitivity is far below the physiological concentrations of almost all critical biomarkers, including cytokines (picomolar), hormones (nanomolar) and heat shock proteins. Conclusion: The sensitivity of in vivo ELISA concentration measurements should be sufficient to measure physiological concentrations of critical biomarkers like cytokines in vivo. Further, the sensitivity should be sufficient to measure concentrations of other biomarkers that are six to eight orders of magnitude lower in concentration than immune signaling molecules like cytokines. NIH - 1U54CA151662-01 Department of Radiology.
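The Langevin function of a single composite variable mentioned in Methods is the standard L(x) = coth(x) − 1/x describing equilibrium magnetization versus the ratio of magnetic to thermal energy; a minimal sketch of its limiting behavior (linear growth x/3 at small x, saturation at large x) is below. How the composite variable is assembled from field, moment, and temperature is not specified in the abstract.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x: normalized equilibrium
    magnetization of a nanoparticle ensemble vs. the composite variable x
    (ratio of magnetic to thermal energy)."""
    x = np.asarray(x, float)
    return 1.0 / np.tanh(x) - 1.0 / x

# small-x limit: L(x) ~ x/3 (linear response); large x: L(x) -> 1 (saturation)
```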
QCL-based standoff and proximal chemical detectors
NASA Astrophysics Data System (ADS)
Dupuis, Julia R.; Hensley, Joel; Cosofret, Bogdan R.; Konno, Daisei; Mulhall, Phillip; Schmit, Thomas; Chang, Shing; Allen, Mark; Marinelli, William J.
2016-05-01
The development of two longwave infrared quantum cascade laser (QCL) based surface contaminant detection platforms supporting government programs will be discussed. The detection platforms utilize reflectance spectroscopy with application to optically thick and thin materials including solid and liquid phase chemical warfare agents, toxic industrial chemicals and materials, and explosives. Operation at standoff (10s of m) and proximal (1 m) ranges will be reviewed with consideration given to the spectral signatures contained in the specular and diffusely reflected components of the signal. The platforms comprise two variants: Variant 1 employs a spectrally tunable QCL source with a broadband imaging detector, and Variant 2 employs an ensemble of broadband QCLs with a spectrally selective detector. Each variant employs a version of the Adaptive Cosine Estimator for detection and discrimination in high clutter environments. Detection limits of 5 μg/cm2 have been achieved through speckle reduction methods enabling detector noise limited performance. Design considerations for QCL-based standoff and proximal surface contaminant detectors are discussed with specific emphasis on speckle-mitigated and detector noise limited performance sufficient for accurate detection and discrimination regardless of the surface coverage morphology or underlying surface reflectivity. Prototype sensors and developmental test results will be reviewed for a range of application scenarios. Future development and transition plans for the QCL-based surface detector platforms are discussed.
Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory
Parpia, Zaheer A.; Kelso, David M.
2010-01-01
Ekins’ ambient analyte theory predicts, counter-intuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. In addition, it also anticipates that results should be insensitive to the volume of sample as well as the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles, since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and their sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1·V·K_d. As predicted, such assays exhibited superior signal/noise levels and limits of detection, and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
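The ambient analyte condition can be illustrated by solving the binding equilibrium exactly: when the number of binding sites is far below 0.1·V·K_d (i.e., the site concentration is well below 0.1·K_d), fractional occupancy approaches the antibody-independent value [An]/([An] + K_d), while a large excess of sites depletes the analyte and pulls occupancy away from it. A sketch under those assumptions:

```python
def fractional_occupancy(analyte, sites, kd):
    """Exact equilibrium fractional occupancy of capture binding sites.
    analyte, sites, kd: total analyte concentration, total binding-site
    concentration, and dissociation constant, all in the same units.
    Solves the quadratic mass-action equation for the bound complex."""
    s = analyte + sites + kd
    bound = (s - (s**2 - 4.0 * analyte * sites) ** 0.5) / 2.0
    return bound / sites

# ambient regime (sites << 0.1*Kd): occupancy ~ analyte / (analyte + Kd),
# insensitive to the amount of capture antibody and to sample volume
```

With analyte at K_d, a site concentration of 0.001·K_d gives occupancy very close to the ambient value 0.5, whereas 10·K_d of sites depletes the analyte and collapses the occupancy, which is the regime Ekins’ theory warns against.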
Plasmodium vivax molecular diagnostics in community surveys: pitfalls and solutions.
Gruenberg, Maria; Moniz, Clara Antunes; Hofmann, Natalie Ellen; Wampfler, Rahel; Koepfli, Cristian; Mueller, Ivo; Monteiro, Wuelton Marcelo; Lacerda, Marcus; de Melo, Gisely Cardoso; Kuehn, Andrea; Siqueira, Andre M; Felger, Ingrid
2018-01-30
A distinctive feature of Plasmodium vivax infections is the overall low parasite density in peripheral blood. Thus, identifying asymptomatic infected individuals in endemic communities requires diagnostic tests with high sensitivity. The detection limits of molecular diagnostic tests are primarily defined by the volume of blood analysed and by the copy number of the amplified molecular marker serving as the template for amplification. By using mitochondrial DNA as the multi-copy template, the detection limit can be improved more than tenfold, compared to standard 18S rRNA targets, thereby allowing detection of lower parasite densities. In a very low transmission area in Brazil, application of a mitochondrial DNA-based assay increased prevalence from 4.9 to 6.5%. The usefulness of molecular tests in malaria epidemiological studies is widely recognized, especially when precise prevalence rates are desired. Of concern, however, is the challenge of demonstrating test accuracy and quality control for samples with very low parasite densities. In this case, chance effects in template distribution around the detection limit constrain reproducibility. Rigorous assessment of false positive and false negative test results is, therefore, required to prevent over- or under-estimation of parasite prevalence in epidemiological studies or when monitoring interventions.
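The "chance effects in template distribution around the detection limit" follow from Poisson sampling: the number of template copies in an aliquot is Poisson-distributed, so even a perfect assay returns stochastic positives when the mean copy number per reaction is low. A sketch:

```python
import math

def p_positive(mean_copies_per_reaction):
    """Probability that an aliquot contains at least one template copy,
    assuming Poisson-distributed template molecules. Near the detection
    limit this sampling effect, not assay chemistry, caps reproducibility."""
    return 1.0 - math.exp(-mean_copies_per_reaction)
```

At a mean of 1 copy per reaction roughly a third of replicates are template-free, and at 0.1 copies per reaction over 90% are, which is why duplicate testing of low-density samples near the detection limit gives poorly reproducible results.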
Surface Plasmon Resonance Based Sensitive Immunosensor for Benzaldehyde Detection
NASA Astrophysics Data System (ADS)
Onodera, Takeshi; Shimizu, Takuzo; Miura, Norio; Matsumoto, Kiyoshi; Toko, Kiyoshi
Fragrant compounds used to add flavor to beverages remain in the manufacturing line after the beverage manufacturing process. Line cleanliness before the next manufacturing cycle is difficult to estimate by sensory analysis, making excessive washing necessary. A new measurement system to determine line cleanliness is desired. In this study, we attempted to detect benzaldehyde (Bz) using an anti-Bz monoclonal antibody (Bz-Ab) and a surface plasmon resonance (SPR) sensor. We fabricated two types of sensor chips using self-assembled monolayers (SAMs) and investigated which sensor surface exhibited higher sensitivity. In addition, anti-Bz antibody conjugated with horseradish peroxidase (HRP-Bz-Ab) was used to enhance the SPR signal. A detection limit of ca. 9 ng/mL (ppb) was achieved using an immobilized 4-carboxybenzaldehyde sensor surface using SAMs containing ethylene glycol. When the HRP-Bz-Ab concentration was reduced to 30 ng/mL, a detection limit of ca. 4 ng/mL (ppb) was achieved for Bz.
López-Calleja, Inés María; de la Cruz, Silvia; González, Isabel; García, Teresa; Martín, Rosario
2015-06-15
Two real-time polymerase chain reaction (PCR)-based assays for detection of walnut (Juglans regia) and pecan (Carya illinoinensis) traces in a wide range of processed foods are described here. The method consists of a real-time PCR assay targeting the ITS1 region, using a nuclease (TaqMan) probe labeled with FAM and BBQ. The assays were positive for walnut and pecan, respectively, and negative for all other heterologous plants and animals tested. Using a series of model samples with defined raw walnut in wheat flour and heat-treated walnut in wheat flour over a range of concentrations of 0.1-100,000 mg kg(-1), a practical detection limit of 0.1 mg kg(-1) of walnut content was estimated. Identical binary mixtures were prepared for pecan, reaching the same limit of detection of 0.1 mg kg(-1). The assay was successfully trialed on a total of 232 commercial foodstuffs. Copyright © 2015 Elsevier Ltd. All rights reserved.
Remote monitoring of fish in small streams: A unified approach using PIT tags
Zydlewski, G.B.; Horton, G.; Dubreuil, T.; Letcher, B.; Casey, S.; Zydlewski, Joseph D.
2006-01-01
Accurate assessments of fish populations are often limited by re-observation or recapture events. Since the early 1990s, passive integrated transponders (PIT tags) have been used to understand the biology of many fish species. Until recently, PIT applications in small streams have been limited to physical recapture events. To maximize recapture probability, we constructed PIT antenna arrays in small streams to remotely detect individual fish. Experiences from two different laboratories (three case studies) allowed us to develop a unified approach to applying PIT technology for enhancing data assessments. Information on equipment, its installation, tag considerations, and array construction is provided. Theoretical and practical definitions are introduced to standardize metrics for assessing detection efficiency. We demonstrate how certain conditions (stream discharge, vibration, and ambient radio frequency noise) affect the detection efficiency and suggest that by monitoring these conditions, expectations of efficiency can be modified. We emphasize the importance of consistently estimating detection efficiency for fisheries applications.
Flux of Kilogram-Sized Meteoroids from Lunar Impact Monitoring
NASA Technical Reports Server (NTRS)
Suggs, Robert; Suggs, Ron; Cooke, William; McNamara, Heather; Diekmann, Anne; Moser, Danielle; Swift, Wesley
2008-01-01
Routine lunar impact monitoring has harvested over 110 impacts in 2 years of observations using 0.25, 0.36 and 0.5 m telescopes and low-light-level video cameras. The night side of the lunar surface provides a large collecting area for detecting these impacts and allows estimation of the flux of meteoroids down to a limiting luminous energy. In order to determine the limiting mass for these observations, models of the sporadic meteoroid environment were used to determine the velocity distribution and new measurements of luminous efficiency were made at the Ames Vertical Gun Range. The flux of meteoroids in this size range has implications for Near Earth Object populations as well as for estimating impact ejecta risk for future lunar missions.
Tran, Damien; Ciret, Pierre; Ciutat, Aurélie; Durrieu, Gilles; Massabuau, Jean-Charles
2003-04-01
Bivalve closure responses to contaminants have often been studied in ecotoxicology as an aquatic pollution biosensor. We present a new laboratory procedure to estimate the potential and limits of this response for various contaminants and for animals susceptible to stress. The study was performed in the Asiatic clam Corbicula fluminea and applied to cadmium. To take into account the rate of spontaneous closures, we addressed the stress associated with fixation by a valve in common apparatus and the spontaneous rhythm associated with circadian activity, so as to focus on conditions with the lowest probability of spontaneous closing. Moreover, we developed an original system based on impedance valvometry, using light-weight impedance electrodes, to study free-ranging animals in low-stress conditions, and a new analytical approach to describe valve closure behavior as a function of response time and contaminant concentration. In C. fluminea, we show that cadmium concentrations above 50 microg/L can be detected within less than 1 h, concentrations down to 16 microg/L require 5 h of integration time, and values lower than 16 microg/L cannot be distinguished from background noise. Our procedure improved the cadmium sensitivity threshold reported in the literature by a factor of six. Problems of field application are discussed.
Lewis, Jesse S.; Gerber, Brian D.
2014-01-01
Motion-activated cameras are a versatile tool that wildlife biologists can use for sampling wild animal populations to estimate species occurrence. Occupancy modelling provides a flexible framework for the analysis of these data; explicitly recognizing that given a species occupies an area the probability of detecting it is often less than one. Despite the number of studies using camera data in an occupancy framework, there is only limited guidance from the scientific literature about survey design trade-offs when using motion-activated cameras. A fuller understanding of these trade-offs will allow researchers to maximise available resources and determine whether the objectives of a monitoring program or research study are achievable. We use an empirical dataset collected from 40 cameras deployed across 160 km2 of the Western Slope of Colorado, USA to explore how survey effort (number of cameras deployed and the length of sampling period) affects the accuracy and precision (i.e., error) of the occupancy estimate for ten mammal and three virtual species. We do this using a simulation approach where species occupancy and detection parameters were informed by empirical data from motion-activated cameras. A total of 54 survey designs were considered by varying combinations of sites (10–120 cameras) and occasions (20–120 survey days). Our findings demonstrate that increasing total sampling effort generally decreases error associated with the occupancy estimate, but changing the number of sites or sampling duration can have very different results, depending on whether a species is spatially common or rare (occupancy = ψ) and easy or hard to detect when available (detection probability = p). For rare species with a low probability of detection (i.e., raccoon and spotted skunk) the required survey effort includes maximizing the number of sites and the number of survey days, often to a level that may be logistically unrealistic for many studies. 
For common species with low detection (i.e., bobcat and coyote) the most efficient sampling approach was to increase the number of occasions (survey days). However, for common species that are moderately detectable (i.e., cottontail rabbit and mule deer), occupancy could reliably be estimated with comparatively low numbers of cameras over a short sampling period. We provide general guidelines for reliably estimating occupancy across a range of terrestrial species (rare to common: ψ = 0.175–0.970, and low to moderate detectability: p = 0.003–0.200) using motion-activated cameras. Wildlife researchers/managers with limited knowledge of the relative abundance and likelihood of detection of a particular species can apply these guidelines regardless of location. We emphasize the importance of prior biological knowledge, defined objectives and detailed planning (e.g., simulating different study-design scenarios) for designing effective monitoring programs and research studies. PMID:25210658
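The design trade-off the authors describe can be illustrated with a back-of-the-envelope calculation (a sketch, not the paper's simulation): at an occupied site, the chance of detecting the species at least once over k survey occasions is p* = 1 - (1 - p)^k.

```python
def cumulative_detection_prob(p, k):
    """Probability of at least one detection at an occupied site over k
    independent survey occasions, each with detection probability p."""
    return 1.0 - (1.0 - p) ** k

# A moderately detectable species is secured quickly:
print(cumulative_detection_prob(0.2, 20))    # ~0.988 after 20 days
# A hard-to-detect species remains poorly sampled even after 120
# survey days (p = 0.003 is the paper's lower detectability bound):
print(cumulative_detection_prob(0.003, 120))  # ~0.30
```

This is why, for rare species with low detectability, the required effort climbs to logistically unrealistic levels: no feasible number of occasions pushes p* near 1 when p is tiny.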
Underwater linear polarization: physical limitations to biological functions
Shashar, Nadav; Johnsen, Sönke; Lerner, Amit; Sabbah, Shai; Chiao, Chuan-Chin; Mäthger, Lydia M.; Hanlon, Roger T.
2011-01-01
Polarization sensitivity is documented in a range of marine animals. The variety of tasks for which animals can use this sensitivity, and the range over which they do so, are confined by the visual systems of these animals and by the propagation of the polarization information in the aquatic environment. We examine the environmental physical constraints in an attempt to reveal the depth, range and other limitations to the use of polarization sensitivity by marine animals. In clear oceanic waters, navigation that is based on the polarization pattern of the sky appears to be limited to shallow waters, while solar-based navigation is possible down to 200–400 m. When combined with intensity difference, polarization sensitivity allows an increase in target detection range by 70–80% with an upper limit of 15 m for large-eyed animals. This distance will be significantly smaller for small animals, such as plankton, and in turbid waters. Polarization-contrast detection, which is relevant to object detection and communication, is strongly affected by water conditions and in clear waters its range limit may reach 15 m as well. We show that polarization sensitivity may also serve for target distance estimation, when examining point source bioluminescent objects in the photic mesopelagic depth range. PMID:21282168
A score to estimate the likelihood of detecting advanced colorectal neoplasia at colonoscopy.
Kaminski, Michal F; Polkowski, Marcin; Kraszewska, Ewa; Rupinski, Maciej; Butruk, Eugeniusz; Regula, Jaroslaw
2014-07-01
This study aimed to develop and validate a model to estimate the likelihood of detecting advanced colorectal neoplasia in Caucasian patients. We performed a cross-sectional analysis of database records for 40-year-old to 66-year-old patients who entered a national primary colonoscopy-based screening programme for colorectal cancer in 73 centres in Poland in the year 2007. We used multivariate logistic regression to investigate the associations between clinical variables and the presence of advanced neoplasia in a randomly selected test set, and confirmed the associations in a validation set. We used model coefficients to develop a risk score for detection of advanced colorectal neoplasia. Advanced colorectal neoplasia was detected in 2544 of the 35,918 included participants (7.1%). In the test set, a logistic-regression model showed that independent risk factors for advanced colorectal neoplasia were: age, sex, family history of colorectal cancer, cigarette smoking (p<0.001 for these four factors), and body mass index (p=0.033). In the validation set, the model was well calibrated (ratio of expected to observed risk of advanced neoplasia: 1.00 (95% CI 0.95 to 1.06)) and had moderate discriminatory power (c-statistic 0.62). We developed a score that estimated the likelihood of detecting advanced neoplasia in the validation set, from 1.32% for patients scoring 0, to 19.12% for patients scoring 7-8. The developed and internally validated score, consisting of simple clinical factors, successfully estimates the likelihood of detecting advanced colorectal neoplasia in asymptomatic Caucasian patients. Once externally validated, it may be useful for counselling or designing primary prevention studies.
Sorbic and benzoic acid in non-preservative-added food products in Turkey.
Cakir, Ruziye; Cagri-Mehmetoglu, Arzu
2013-01-01
Sorbic acid (SA) and benzoic acid (BA) were determined in yoghurt, tomato and pepper paste, fruit juices, chocolates, soups and chips in Turkey by using high-pressure liquid chromatography (HPLC). Levels were compared with Turkish Food Codex limits. SA was detected in only 2 of 21 yoghurt samples; in contrast, BA was found in all yoghurt samples but one, ranging from 10.5 to 159.9 mg/kg. SA and BA were also detected in 3 and 6 of 23 paste samples, in ranges of 18.1-526.4 and 21.7-1933.5 mg/kg, respectively. Only 1 of 23 fruit juices contained BA. SA was not detected in any chips, fruit juice, soup, or chocolate sample. Although 16.51% of the samples were not compliant with the Turkish Food Codex limits, the estimated daily intake of BA or SA was below the acceptable daily intake.
Photon statistics in scintillation crystals
NASA Astrophysics Data System (ADS)
Bora, Vaibhav Joga Singh
Scintillation-based gamma-ray detectors are widely used in medical imaging, high-energy physics, astronomy and national security. Scintillation gamma-ray detectors are field-tested, relatively inexpensive, and have good detection efficiency. Semiconductor detectors are gaining popularity because of their superior capability to resolve gamma-ray energies. However, they are relatively hard to manufacture and therefore, at this time, not available in as large formats and are much more expensive than scintillation gamma-ray detectors. Scintillation gamma-ray detectors consist of a scintillator, a material that emits optical (scintillation) photons when it interacts with ionizing radiation, and an optical detector that detects the emitted scintillation photons and converts them into an electrical signal. Compared to semiconductor gamma-ray detectors, scintillation gamma-ray detectors have relatively poor capability to resolve gamma-ray energies. This is in large part attributed to the "statistical limit" on the number of scintillation photons. The origin of this statistical limit is the assumption that scintillation photons are either Poisson distributed or super-Poisson distributed. This statistical limit is often defined by the Fano factor. The Fano factor of an integer-valued random process is defined as the ratio of its variance to its mean. Therefore, a Poisson process has a Fano factor of one. The classical theory of light limits the Fano factor of the number of photons to a value greater than or equal to one (Poisson case). However, the quantum theory of light allows for Fano factors to be less than one. We used two methods to look at the correlations between two detectors observing the same scintillation pulse to estimate the Fano factor of the scintillation photons. 
The relationship between the Fano factor and the correlation between the integrals of the two detected signals was analytically derived, and the Fano factor was estimated using the measurements for SrI2:Eu, YAP:Ce and CsI:Na. We also found an empirical relationship between the Fano factor and the covariance as a function of time between two detectors observing the same scintillation pulse. This empirical model was used to estimate the Fano factor of LaBr3:Ce and YAP:Ce using the experimentally measured timing covariance. The estimates of the Fano factor from the timing-covariance results were consistent with the estimates from the correlation between the integral signals. We found scintillation light from some scintillators to be sub-Poisson. For the same mean number of total scintillation photons, sub-Poisson light has lower noise. We then conducted a simulation study to investigate whether this low-noise sub-Poisson light can be used to improve spatial resolution. We calculated the Cramer-Rao bound for different detector geometries, positions of interaction and Fano factors. The Cramer-Rao calculations were verified by generating simulated data and estimating the variance of the maximum likelihood estimator. We found that the Fano factor has no impact on the spatial resolution in gamma-ray imaging systems.
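As a minimal illustration of the central quantity (the textbook definition, not the paper's two-detector correlation estimator), the Fano factor can be computed directly from repeated photon counts:

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio of an integer-valued random process.
    F = 1 for Poisson counts; F < 1 indicates sub-Poisson light."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Poisson-distributed counts should give a Fano factor near one:
rng = np.random.default_rng(0)
poisson_counts = rng.poisson(lam=1000, size=50_000)
print(fano_factor(poisson_counts))  # close to 1
```

In practice the paper cannot use this directly because a single detector's counts are corrupted by gain and collection noise, which is why the correlation between two detectors viewing the same pulse is used instead.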
Adaptive Detection and ISI Mitigation for Mobile Molecular Communication.
Chang, Ge; Lin, Lin; Yan, Hao
2018-03-01
Current studies on modulation and detection schemes in molecular communication mainly focus on the scenarios with static transmitters and receivers. However, mobile molecular communication is needed in many envisioned applications, such as target tracking and drug delivery. Until now, investigations about mobile molecular communication have been limited. In this paper, a static transmitter and a mobile bacterium-based receiver performing random walk are considered. In this mobile scenario, the channel impulse response changes due to the dynamic change of the distance between the transmitter and the receiver. Detection schemes based on fixed distance fail in signal detection in such a scenario. Furthermore, the intersymbol interference (ISI) effect becomes more complex due to the dynamic character of the signal which makes the estimation and mitigation of the ISI even more difficult. In this paper, an adaptive ISI mitigation method and two adaptive detection schemes are proposed for this mobile scenario. In the proposed scheme, adaptive ISI mitigation, estimation of dynamic distance, and the corresponding impulse response reconstruction are performed in each symbol interval. Based on the dynamic channel impulse response in each interval, two adaptive detection schemes, concentration-based adaptive threshold detection and peak-time-based adaptive detection, are proposed for signal detection. Simulations demonstrate that the ISI effect is significantly reduced and the adaptive detection schemes are reliable and robust for mobile molecular communication.
Estimation of Arctic Sea Ice Freeboard and Thickness Using CryoSat-2
NASA Astrophysics Data System (ADS)
Lee, S.; Im, J.; Kim, J. W.; Kim, M.; Shin, M.
2014-12-01
Arctic sea ice is one of the significant components of the global climate system, as it plays a key role in driving global ocean circulation. Sea ice extent has constantly declined since the 1980s, and Arctic sea ice thickness has been diminishing along with the decreasing extent. Extent and thickness, the two main characteristics of sea ice, are thus important indicators of the polar response to on-going climate change. Sea ice thickness has been measured with numerous field techniques such as surface drilling and deploying buoys. These techniques provide sparse and discontinuous data in the spatiotemporal domain. Spaceborne radar and laser altimeters can overcome these limitations and have been used to estimate sea ice thickness. The Ice, Cloud, and land Elevation Satellite (ICESat), a laser altimeter, provided data to detect polar area elevation change between 2003 and 2009. CryoSat-2, launched with the Synthetic Aperture Radar (SAR)/Interferometric Radar Altimeter (SIRAL) in April 2010, can provide data to estimate time-series of Arctic sea ice thickness. In this study, Arctic sea ice freeboard and thickness between 2011 and 2014 were estimated using CryoSat-2 SAR and SARIn mode data that contain sea ice surface height relative to the reference ellipsoid WGS84. In order to estimate sea ice thickness, the freeboard, i.e., the elevation difference between the top of the sea ice and the local sea surface, should first be calculated. Freeboard can be estimated through detecting leads. We proposed a novel lead detection approach. CryoSat-2 profiles such as pulse peakiness, backscatter sigma-0, stack standard deviation, skewness and kurtosis were examined to distinguish leads from sea ice. Near-real-time cloud-free MODIS images corresponding to the CryoSat-2 measurements were used to visually identify leads. Rule-based machine learning approaches such as See5.0 and random forest were used to identify leads. The proposed lead detection approach better distinguished leads from sea ice than the existing approaches. 
With the freeboard height calculated using the lead detection approach, sea ice thickness was finally estimated using the Archimedes' buoyancy principle. The estimated sea ice freeboard and thickness were validated using ESA airborne Ku-band interferometric radar and Airborne Electromagnetic (AEM) data.
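The freeboard-to-thickness step rests on hydrostatic equilibrium. A sketch under assumed, typical density values (the paper's exact density choices are not stated here):

```python
def sea_ice_thickness(freeboard, snow_depth=0.0,
                      rho_water=1024.0, rho_ice=917.0, rho_snow=320.0):
    """Convert ice freeboard F (m) to total ice thickness T (m) via
    Archimedes' principle: rho_w * (T - F) = rho_i * T + rho_s * h_s,
    hence T = (rho_w * F + rho_s * h_s) / (rho_w - rho_i).
    Densities (kg/m^3) are typical literature values, not the paper's."""
    return (rho_water * freeboard + rho_snow * snow_depth) / (rho_water - rho_ice)

# 30 cm of snow-free freeboard implies roughly 2.9 m of ice:
print(round(sea_ice_thickness(0.30), 2))
```

Because the density contrast rho_w - rho_i is small, thickness is roughly ten times the freeboard, which is why small errors in lead detection (and hence in the sea surface reference) propagate strongly into the thickness estimate.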
Shaikh, K A; Patil, S D; Devkhile, A B
2008-12-15
A simple, precise and accurate reversed-phase liquid chromatographic method has been developed for the simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet formulations. The chromatographic separation was achieved on an Xterra RP18 (250 mm x 4.6 mm, 5 microm) analytical column. A mixture of acetonitrile and 30 mM dipotassium phosphate (50:50, v/v; pH 9.0) was used as the mobile phase, at a flow rate of 1.7 ml/min and a detector wavelength of 215 nm. The retention times of ambroxol and azithromycin were found to be 5.0 and 11.5 min, respectively. The validation of the proposed method was carried out for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation and robustness. The linear dynamic ranges were 30-180 and 250-1500 microg/ml for ambroxol hydrochloride and azithromycin, respectively. The percentage recoveries obtained for ambroxol hydrochloride and azithromycin were 99.40 and 99.90%, respectively. The limits of detection and quantification were 0.8 and 2.3 microg/ml for azithromycin, and 0.004 and 0.01 microg/ml for ambroxol hydrochloride, respectively. The developed method can be used for routine quality control analysis of the titled drugs in combination in tablet formulation.
Helsel, D.R.
2006-01-01
The most commonly used method in environmental chemistry to deal with values below detection limits is to substitute a fraction of the detection limit for each nondetect. Two decades of research has shown that this fabrication of values produces poor estimates of statistics, and commonly obscures patterns and trends in the data. Papers using substitution may conclude that significant differences, correlations, and regression relationships do not exist, when in fact they do. The reverse may also be true. Fortunately, good alternative methods for dealing with nondetects already exist, and are summarized here with references to original sources. Substituting values for nondetects should be used rarely, and should generally be considered unacceptable in scientific research. There are better ways.
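One of the alternatives Helsel points to, and the approach taken in the censoring paper above, is the Kaplan-Meier product-limit estimator adapted to left-censored data. A minimal sketch, assuming the standard trick of mirroring left-censored values (x -> K - x) so they become right-censored:

```python
import numpy as np

def km_mean_left_censored(values, censored):
    """Kaplan-Meier (product-limit) estimate of the mean for
    left-censored data. `values` holds the measurement, or the
    detection limit for nondetects; `censored` flags nondetects.
    The data are mirrored into right-censored form, the product-limit
    survival curve is built, and the mean is recovered as K minus the
    area under that curve."""
    x = np.asarray(values, dtype=float)
    c = np.asarray(censored, dtype=bool)
    K = x.max() + 1.0                  # any constant above all values
    y, event = K - x, ~c               # detects become "events"
    order = np.argsort(y)
    y, event = y[order], event[order]
    n = len(y)
    at_risk = n - np.arange(n)         # observations still "at risk"
    surv = np.cumprod(np.where(event, (at_risk - 1) / at_risk, 1.0))
    # mean of a nonnegative variable = area under the survival curve
    widths = np.diff(np.concatenate(([0.0], y)))
    mean_y = np.sum(np.concatenate(([1.0], surv[:-1])) * widths)
    return K - mean_y
```

With no nondetects this reduces exactly to the sample mean; with nondetects present, it uses only the observed values and the detection limits, rather than fabricating substituted values whose bias depends on the unknown shape of the censored tail.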
Analysis of single ion channel data incorporating time-interval omission and sampling
The, Yu-Kai; Timmer, Jens
2005-01-01
Hidden Markov models are widely used to describe single channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, and therefore the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
NASA Astrophysics Data System (ADS)
Wang, Fei
2013-09-01
Geiger-mode detectors have single-photon sensitivity and picosecond timing resolution, which makes them good candidates for low-light-level ranging applications, especially flash three-dimensional imaging, where the received laser power is extremely limited. Another advantage of Geiger-mode APDs is their capability of large output current, which can drive CMOS timing circuits directly, meaning that large-format focal plane arrays can be easily fabricated using mature CMOS technology. However, Geiger-mode detector based FPAs can only measure the range information of a scene, not its reflectivity. Reflectivity is a major characteristic which can help target classification and identification. According to Poisson statistics, the detection probability is tightly connected to the incident number of photons. Employing this relation, a signal intensity estimation method based on probability inversion is proposed. Instead of measuring intensity directly, several detections are conducted; the detection probability is then obtained and the intensity is estimated from it. The relations between the estimator's accuracy, the measuring range and the number of detections are discussed based on statistical theory. Finally, a Monte-Carlo simulation is conducted to verify the correctness of this theory. Using 100 detections, a signal intensity equal to 4.6 photons per detection can be measured with this method. With slight modification of the measuring strategy, intensity information can be obtained using current Geiger-mode detector based FPAs, which can enrich the information acquired and broaden the application field of the current technology.
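The probability-inversion idea can be sketched directly (function name and interface are illustrative, not the paper's): a Geiger-mode pixel fires with probability P = 1 - exp(-mu) for a mean of mu incident photons per gate, so the empirical firing frequency over N gates can be inverted for mu.

```python
import math

def mean_photons_from_detections(fires, gates):
    """Invert Poisson detection statistics: with P(fire) = 1 - exp(-mu),
    the empirical firing frequency p = fires/gates gives
    mu_hat = -ln(1 - p). Requires at least one non-firing gate."""
    p = fires / gates
    if not 0.0 <= p < 1.0:
        raise ValueError("firing frequency must lie in [0, 1)")
    return -math.log1p(-p)

# 99 fires out of 100 gates inverts to about 4.6 photons per gate,
# consistent with the dynamic range quoted in the abstract:
print(round(mean_photons_from_detections(99, 100), 2))
```

The saturation at p -> 1 is the practical ceiling the abstract alludes to: once nearly every gate fires, small changes in p map to huge changes in mu, so the accuracy degrades beyond a few photons per gate for a fixed number of trials.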
Long-term kinematics and sediment flux of an active earthflow, Eel River, California
B. H. Mackey; J. J. Roering; J. A. McKean
2009-01-01
Although earthflows are the dominant erosion mechanism in many mountainous landscapes, estimates of long-term earthflow-driven sediment flux remain elusive because landslide displacement data are typically limited to contemporary time periods. Combining high-resolution topography from airborne LiDAR (light detection and ranging), total station surveying, orthorectified...
Program SimAssem: software for simulating species assemblages and estimating species richness
Gordon C. Reese; Kenneth R. Wilson; Curtis H. Flather
2013-01-01
1. Species richness, the number of species in a defined area, is the most frequently used biodiversity measure. Despite its intuitive appeal and conceptual simplicity, species richness is often difficult to quantify, even in well surveyed areas, because of sampling limitations such as survey effort and species detection probability....
An experiment to detect GUT monopoles
NASA Technical Reports Server (NTRS)
Macneill, G.; Fegan, D. J.
1985-01-01
Recent advances in the development of Grand Unification Theories have led to several interesting predictions. One of these states that Grand Unification Monopoles (GUMs) exist as solutions in many non-abelian gauge theories. Another consequence of Unification is the possibility of baryon decay. The efficiency of the water tank detector in registering a Rubakov-type decay will vary with both the interaction length and the GUM's velocity, expressed in terms of beta (~0.01). The efficiency decreases at large values of beta because of the limited resolving time of the detector (approx. 50 ns). At lower values of beta the time between interactions is such that the criterion of 4 events in 2 microseconds can no longer be satisfied. The Rubakov experiment has now been in operation for almost 2 years with an estimated live time of 80%. During this time no candidate events have been observed, leading to an estimated upper limit on the flux of 7.82 x 10^-5 m^-2 d^-1 sr^-1. The ionization loss detection system has only recently come on line and as yet no results are available from this experiment.
Resolution Limits of Nanoimprinted Patterns by Fluorescence Microscopy
NASA Astrophysics Data System (ADS)
Kubo, Shoichi; Tomioka, Tatsuya; Nakagawa, Masaru
2013-06-01
The authors investigated optical resolution limits to identify minimum distances between convex lines of fluorescent dye-doped nanoimprinted resist patterns by fluorescence microscopy. Fluorescent ultraviolet (UV)-curable resin and thermoplastic resin films were transformed into line-and-space patterns by UV nanoimprinting and thermal nanoimprinting, respectively. Fluorescence immersion observation needed an immersion medium immiscible to the resist films, and an ionic liquid of triisobutyl methylphosphonium tosylate was appropriate for soluble thermoplastic polystyrene patterns. Observation with various numerical aperture (NA) values and two detection wavelength ranges showed that the resolution limits were smaller than the values estimated by the Sparrow criterion. The space width to identify line patterns became narrower as the line width increased. The space width of 100 nm was demonstrated to be sufficient to resolve 300-nm-wide lines in the detection wavelength range of 575-625 nm using an objective lens of NA = 1.40.
Broadband noise limit in the photodetection of ultralow jitter optical pulses.
Sun, Wenlu; Quinlan, Franklyn; Fortier, Tara M; Deschenes, Jean-Daniel; Fu, Yang; Diddams, Scott A; Campbell, Joe C
2014-11-14
Applications with optical atomic clocks and precision timing often require the transfer of optical frequency references to the electrical domain with extremely high fidelity. Here we examine the impact of photocarrier scattering and distributed absorption on the photocurrent noise of high-speed photodiodes when detecting ultralow jitter optical pulses. Despite its small contribution to the total photocurrent, this excess noise can determine the phase noise and timing jitter of microwave signals generated by detecting ultrashort optical pulses. A Monte Carlo simulation of the photodetection process is used to quantitatively estimate the excess noise. Simulated phase noise on the 10 GHz harmonic of a photodetected pulse train shows good agreement with previous experimental data, leading to the conclusion that the lowest phase noise photonically generated microwave signals are limited by photocarrier scattering well above the quantum limit of the optical pulse train.
Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km2) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. 
These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
Limits on radio emission from meteors using the MWA
NASA Astrophysics Data System (ADS)
Zhang, X.; Hancock, P.; Devillepoix, H. A. R.; Wayth, R. B.; Beardsley, A.; Crosse, B.; Emrich, D.; Franzen, T. M. O.; Gaensler, B. M.; Horsley, L.; Johnston-Hollitt, M.; Kaplan, D. L.; Kenney, D.; Morales, M. F.; Pallot, D.; Steele, K.; Tingay, S. J.; Trott, C. M.; Walker, M.; Williams, A.; Wu, C.; Ji, Jianghui; Ma, Yuehua
2018-07-01
Recently, low-frequency, broad-band radio emission has been observed accompanying bright meteors by the Long Wavelength Array (LWA). The broad-band spectra between 20 and 60 MHz were captured for several events, while the spectral index (dependence of flux density on frequency, with Sν ∝ ν^α) was estimated to be -4 ± 1 during the peak of meteor afterglows. Here we present a survey of meteor emission and other transient events using the Murchison Widefield Array (MWA) at 72-103 MHz. In our 322 h survey, down to a 5σ detection threshold of 3.5 Jy beam^-1, no transient candidates were identified as intrinsic emission from meteors. We derived an upper limit of -3.7 (95 per cent confidence limit) on the spectral index in our frequency range. We also report detections of other transient events, such as reflected FM broadcast signals from small satellites, conclusively demonstrating the ability of the MWA to detect and track space debris on scales as small as 0.1 m in low Earth orbits.
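The spectral-index limit translates between observing bands via the power law Sν ∝ ν^α quoted above; a small helper makes the scaling concrete (illustrative, not the survey's pipeline):

```python
def extrapolate_flux(s_ref, nu_ref, nu, alpha):
    """Scale a flux density measured at frequency nu_ref to frequency nu,
    assuming a power-law spectrum S_nu proportional to nu**alpha."""
    return s_ref * (nu / nu_ref) ** alpha

# With alpha = -4, doubling the frequency cuts the flux density 16-fold,
# so LWA-band meteor emission fades steeply toward MWA frequencies:
print(extrapolate_flux(1.0, 40.0, 80.0, -4.0))  # 0.0625
```

This steep fading is what lets a non-detection at 72-103 MHz, combined with the LWA detections at lower frequency, place an upper limit on α.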
Information theoretic analysis of canny edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2011-06-01
In general edge detection evaluation, edge detectors are examined, analyzed, and compared either visually or with a metric for a specific application. This analysis is usually independent of the characteristics of the image-gathering, transmission and display processes that do impact the quality of the acquired image and thus the resulting edge image. We propose a new information theoretic analysis of edge detection that unites the different components of the visual communication channel and assesses edge detection algorithms in an integrated manner based on Shannon's information theory. The edge detection algorithm here is considered to achieve high performance only if the information rate from the scene to the edge approaches the maximum possible. Thus, by holding the initial conditions of the visual communication system constant, different edge detection algorithms can be evaluated. This analysis is normally limited to linear shift-invariant filters, so in order to examine the Canny edge operator in our proposed system, we need to estimate its "power spectral density" (PSD). Since the Canny operator is non-linear and shift-variant, we perform the estimation for a set of different system environment conditions using simulations. In our paper we first introduce the PSD of the Canny operator for a range of system parameters. Then, using the estimated PSD, we assess the Canny operator using information theoretic analysis. The information-theoretic metric is also used to compare the performance of the Canny operator with other edge-detection operators. This also provides a simple tool for selecting appropriate edge-detection algorithms based on system parameters, and for adjusting their parameters to maximize information throughput.
Improvements in estimating proportions of objects from multispectral data
NASA Technical Reports Server (NTRS)
Horwitz, H. M.; Hyde, P. D.; Richardson, W.
1974-01-01
Methods for estimating proportions of objects and materials imaged within the instantaneous field of view of a multispectral sensor were developed further. Improvements in the basic proportion estimation algorithm were devised as well as improved alien object detection procedures. Also, a simplified signature set analysis scheme was introduced for determining the adequacy of signature set geometry for satisfactory proportion estimation. Averaging procedures used in conjunction with the mixtures algorithm were examined theoretically and applied to artificially generated multispectral data. A computationally simpler estimator was considered and found unsatisfactory. Experiments conducted to find a suitable procedure for setting the alien object threshold yielded little definitive result. Mixtures procedures were used on a limited amount of ERTS data to estimate wheat proportion in selected areas. Results were unsatisfactory, partly because of the ill-conditioned nature of the pure signature set.
VLBI imaging of a flare in the Crab nebula: more than just a spot
NASA Astrophysics Data System (ADS)
Lobanov, A. P.; Horns, D.; Muxlow, T. W. B.
2011-09-01
We report on very long baseline interferometry (VLBI) observations of the radio emission from the inner region of the Crab nebula, made at 1.6 GHz and 5 GHz after a recent high-energy flare in this object. The 5 GHz data have provided only upper limits of 0.4 milli-Jansky (mJy) on the flux density of the pulsar and 0.4 mJy/beam on the brightness of the putative flaring region. The 1.6 GHz data have enabled imaging the inner regions of the nebula on scales of up to ≈ 40''. The emission from the inner "wisps" is detected for the first time with VLBI observations. A likely radio counterpart (designated "C1") of the putative flaring region observed with Chandra and HST is detected in the radio image, with an estimated flux density of 0.5 ± 0.3 mJy and a size of 0.2 arcsec - 0.6 arcsec. Another compact feature ("C2") is also detected in the VLBI image closer to the pulsar, with an estimated flux density of 0.4 ± 0.2 mJy and a size smaller than 0.2 arcsec. Combined with the broad-band SED of the flare, the radio properties of C1 yield a lower limit of ≈ 0.5 mG for the magnetic field and a total minimum energy of 1.2 × 10^41 erg vested in the flare (corresponding to using about 0.2% of the pulsar spin-down power). The 1.6 GHz observations provide upper limits for the brightness (0.2 mJy/beam) and total flux density (0.4 mJy) of the optical Knot 1 located at 0.6 arcsec from the pulsar. The absolute position of the Crab pulsar is determined, and an estimate of the pulsar proper motion (μα = -13.0 ± 0.2 mas/yr, μδ = +2.9 ± 0.1 mas/yr) is obtained.
Verification of Minimum Detectable Activity for Radiological Threat Source Search
NASA Astrophysics Data System (ADS)
Gardiner, Hannah; Myjak, Mitchell; Baciak, James; Detwiler, Rebecca; Seifert, Carolyn
2015-10-01
The Department of Homeland Security's Domestic Nuclear Detection Office is working to develop advanced technologies that will improve the ability to detect, localize, and identify radiological and nuclear sources from airborne platforms. The Airborne Radiological Enhanced-sensor System (ARES) program is developing advanced data fusion algorithms for analyzing data from a helicopter-mounted radiation detector. This detector platform provides a rapid, wide-area assessment of radiological conditions at ground level. The NSCRAD (Nuisance-rejection Spectral Comparison Ratios for Anomaly Detection) algorithm was developed to distinguish low-count sources of interest from benign naturally occurring radiation and irrelevant nuisance sources. It uses a number of broad, overlapping regions of interest to statistically compare each newly measured spectrum with the current estimate for the background to identify anomalies. We recently developed a method to estimate the minimum detectable activity (MDA) of NSCRAD in real time. We present this method here and report on the MDA verification using both laboratory measurements and simulated injects on measured backgrounds at or near the detection limits. This work is supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-12-X-00376. This support does not constitute an express or implied endorsement on the part of the Gov't.
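The abstract does not give the MDA expression used by NSCRAD. As a point of reference, a widely used baseline for counting measurements is Currie's detection-limit formula for Poisson statistics; the sketch below is that generic formula, not the ARES/NSCRAD method itself, and all numbers are hypothetical.

```python
import math

def currie_mda(background_counts: float, efficiency: float,
               branching_ratio: float, live_time_s: float) -> float:
    """Currie-type minimum detectable activity (Bq).

    L_D = 2.71 + 4.65 * sqrt(B) is the detection limit in counts for a
    5% false-positive / 5% false-negative criterion; dividing by the
    detection efficiency, gamma branching ratio, and live time converts
    counts to activity.
    """
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (efficiency * branching_ratio * live_time_s)

# Hypothetical values: 400 background counts, 20% efficiency,
# 85% gamma yield, 100 s live time
mda = currie_mda(400.0, 0.20, 0.85, 100.0)
```

In a real-time setting such as ARES, the background estimate B would be updated continuously, so the MDA itself becomes a time-varying quantity.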
Global Biomass Variation and its Geodynamic Effects, 1982-1998
NASA Technical Reports Server (NTRS)
Rodell, M.; Chao, B. F.; Au, A. Y.; Kimball, J. S.; McDonald, K. C.
2005-01-01
Redistribution of mass near Earth's surface alters its rotation, gravity field, and geocenter location. Advanced techniques for measuring these geodetic variations now exist, but the ability to attribute the observed modes to individual Earth system processes has been hampered by a shortage of reliable global data on such processes, especially hydrospheric processes. To address one aspect of this deficiency, 17 yrs of monthly, global maps of vegetation biomass were produced by applying field-based relationships to satellite-derived vegetation type and leaf area index. The seasonal variability of biomass was estimated to be as large as 5 kg m^-2. Of this amount, approximately 4 kg m^-2 is due to vegetation water storage variations. The time series of maps was used to compute geodetic anomalies, which were then compared with existing geodetic observations as well as the estimated measurement sensitivity of the Gravity Recovery and Climate Experiment (GRACE). For gravity, the seasonal amplitude of biomass variations may be just within GRACE's limits of detectability, but it is still an order of magnitude smaller than current observation uncertainty using the satellite-laser-ranging technique. The contribution of total biomass variations to seasonal polar motion amplitude is detectable in today's measurements, but it is obscured by contributions from various other sources, some of which are two orders of magnitude larger. The influence on the length of day is below current limits of detectability. Although the nonseasonal geodynamic signals show clear interannual variability, they are too small to be detected.
McAnally, Ken I.; Morris, Adam P.; Best, Christopher
2017-01-01
Metacognitive monitoring and control of situation awareness (SA) are important for a range of safety-critical roles (e.g., air traffic control, military command and control). We examined the factors affecting these processes using a visual change detection task that included representative tactical displays. SA was assessed by asking novice observers to detect changes to a tactical display. Metacognitive monitoring was assessed by asking observers to estimate the probability that they would correctly detect a change, either after study of the display and before the change (judgement of learning; JOL) or after the change and detection response (judgement of performance; JOP). In Experiment 1, observers failed to detect some changes to the display, indicating imperfect SA, but JOPs were reasonably well calibrated to objective performance. Experiment 2 examined JOLs and JOPs in two task contexts: with study-time limits imposed by the task or with self-pacing to meet specified performance targets. JOPs were well calibrated in both conditions as were JOLs for high performance targets. In summary, observers had limited SA, but good insight about their performance and learning for high performance targets and allocated study time appropriately. PMID:28915244
Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach
NASA Astrophysics Data System (ADS)
Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.
2017-12-01
One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is not only important but also challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations including pressure. However, the accuracy of these methods is limited because the information they provide is indirect and requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. Compared to conventional data-driven approaches, which can usually be seen as "black box" procedures, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
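The abstract does not give the regression details. As a rough, self-contained stand-in for the RBF-kernel machinery behind support vector regression, the sketch below fits a kernel ridge regressor mapping pressure-anomaly observations to a leakage rate (squared loss instead of SVR's ε-insensitive loss, synthetic data, all parameter values hypothetical).

```python
import numpy as np

def rbf_kernel(A: np.ndarray, B: np.ndarray, gamma: float) -> np.ndarray:
    """Gaussian (RBF) kernel matrix from pairwise squared distances."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(X_train, y_train, X_test, gamma=10.0, lam=1e-6):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then
    predict with the cross-kernel. This is a simplified surrogate for
    the paper's SVR, sharing the RBF feature space."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# Synthetic example: leakage rate as a smooth function of two
# pressure-anomaly features (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 2))
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1])
pred = fit_predict(X, y, X)
```

A physics-guided variant would constrain or augment this purely data-driven fit with terms derived from the governing flow equations, as the paper describes.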
Nejdl, Lukas; Kudr, Jiri; Cihalova, Kristyna; Chudobova, Dagmar; Zurek, Michal; Zalud, Ludek; Kopecny, Lukas; Burian, Frantisek; Ruttkay-Nedecky, Branislav; Krizkova, Sona; Konecna, Marie; Hynek, David; Kopel, Pavel; Prasek, Jan; Adam, Vojtech; Kizek, Rene
2014-08-01
Remote-controlled robotic systems are being used for the analysis of various types of analytes in hostile environments, including extraterrestrial ones. The aim of our study was to develop a remote-controlled robotic platform (ORPHEUS-HOPE) for bacterial detection. For the ORPHEUS-HOPE platform, a 3D-printed flow chip was designed and created with a culture chamber of 600 μL volume. The flow rate was optimized to 500 μL/min. The chip was tested primarily for the detection of 1-naphthol by differential pulse voltammetry, with a detection limit (S/N = 3) of 20 nM. Further, the bacterial capture procedure was optimized. To capture bacterial cells (Staphylococcus aureus), maghemite nanoparticles (1 mg/mL) were prepared and modified with collagen, glucose, graphene, gold, hyaluronic acid, and graphene with gold or graphene with glucose (20 mg/mL). The highest capture efficiency, up to 50% of the bacteria, was achieved with nanoparticles modified with graphene and glucose. The detection limit of the whole assay, which included capturing the bacteria and detecting them under remote-controlled operation, was estimated as 30 bacteria per μL. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
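The S/N = 3 convention used above is commonly computed as three times the standard deviation of blank measurements divided by the calibration slope. A minimal sketch with hypothetical blank readings and slope (chosen only to reproduce the order of magnitude quoted in the abstract, not taken from the paper):

```python
import statistics

def lod_3sigma(blank_signals, slope):
    """Detection limit by the S/N = 3 (IUPAC 3-sigma) convention:
    three times the blank standard deviation over the calibration
    slope, in concentration units set by the slope."""
    return 3.0 * statistics.stdev(blank_signals) / slope

# Hypothetical blank currents (nA) and calibration slope (nA per nM)
blanks = [1.02, 0.98, 1.05, 0.97, 1.01, 0.99]
lod_nM = lod_3sigma(blanks, slope=0.0045)
```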
NASA Astrophysics Data System (ADS)
Wang, Jun; Yue, Yun; Wang, Yi; Ichoku, Charles; Ellison, Luke; Zeng, Jing
2018-01-01
Largely used in several independent estimates of fire emissions, fire products based on MODIS sensors aboard the Terra and Aqua polar-orbiting satellites have a number of inherent limitations, including (a) inability to detect fires below clouds, (b) significant decrease of detection sensitivity at the edge of scan where pixel sizes are much larger than at nadir, and (c) gaps between adjacent swaths in tropical regions. To remedy these limitations, an empirical method is developed here and applied to correct fire emission estimates based on MODIS pixel level fire radiative power measurements and emission coefficients from the Fire Energetics and Emissions Research (FEER) biomass burning emission inventory. The analysis was performed for January 2010 over the northern sub-Saharan African region. Simulations from the WRF-Chem model using original and adjusted emissions are compared with the aerosol optical depth (AOD) products from MODIS and AERONET as well as aerosol vertical profiles from CALIOP data. The comparison confirmed a 30-50% improvement in the model simulation performance (in terms of correlation, bias, and spatial pattern of AOD with respect to observations) by the adjusted emissions, which not only increase the original emission amount by a factor of two but also result in spatially continuous estimates of instantaneous fire emissions at daily time scales. Such improvement cannot be achieved by simply scaling the original emission across the study domain. Even with this improvement, a factor-of-two underestimation still exists in the modeled AOD, which is within the current global fire emissions uncertainty envelope.
Detection of the lunar body tide by the Lunar Orbiter Laser Altimeter.
Mazarico, Erwan; Barker, Michael K; Neumann, Gregory A; Zuber, Maria T; Smith, David E
2014-04-16
The Lunar Orbiter Laser Altimeter instrument onboard the Lunar Reconnaissance Orbiter spacecraft collected more than 5 billion measurements in the nominal 50 km orbit over ∼10,000 orbits. The data precision, geodetic accuracy, and spatial distribution enable two-dimensional crossovers to be used to infer relative radial position corrections between tracks to better than ∼1 m. We use nearly 500,000 altimetric crossovers to separate remaining high-frequency spacecraft trajectory errors from the periodic radial surface tidal deformation. The unusual sampling of the lunar body tide from polar lunar orbit limits the size of the typical differential signal expected at ground track intersections to ∼10 cm. Nevertheless, we reliably detect the topographic tidal signal and estimate the associated Love number h2 to be 0.0371 ± 0.0033, which is consistent with but lower than recent results from lunar laser ranging. Altimetric data are used to create radial constraints on the tidal deformation. The body tide amplitude is estimated from the crossover data. The estimated Love number is consistent with previous estimates but more precise.
Kapoor, Upasana; Srivastava, M K; Srivastava, Ashutosh Kumar; Patel, D K; Garg, Veena; Srivastava, L P
2013-03-01
A total of 250 samples, including fruits, fruit juices, and baby foods (50 samples each), vegetables (70 samples), and cereals (30 samples), were collected from Lucknow, India, and analyzed for the presence of imidacloprid residues. The QuEChERS (quick, easy, cheap, effective, rugged, and safe) method of extraction coupled with high-performance liquid chromatographic analysis was carried out, and imidacloprid residues were qualitatively confirmed by liquid chromatography-mass spectrometry. Imidacloprid was not detected in samples of fruit juices and baby foods. It was, however, detected in 38 samples of fruits, vegetables, and cereals, which is about 15.20% of the total samples. Of the fruit samples, 22% showed the presence of imidacloprid, and 2% showed residues above the maximum residue limit. Although imidacloprid was detected in 24% of vegetable samples, only 5.71% showed the presence of imidacloprid above the maximum residue limit. However, 33% of cereal samples showed the presence of imidacloprid, and about 3% of samples were above the maximum residue limit. The calculated estimated daily intake ranged between 0.004 and 0.131 µg/kg body weight, and the hazard indices ranged from 0.007 to 0.218 for these food commodities. It is therefore indicated that lifetime consumption of vegetables, fruits, fruit juices, baby foods, wheat, rice, and pulses may not pose a health hazard for the population of Lucknow because the hazard indices for imidacloprid residues were below one. Copyright © 2012 SETAC.
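The hazard index quoted above is the ratio of estimated daily intake (EDI) to a toxicological reference dose; values below 1 indicate intake below the level of concern. A minimal sketch of the arithmetic, using a hypothetical reference dose of 0.6 µg/kg bw/day chosen so the abstract's EDI range maps near its reported hazard-index range (the reference value actually used in the paper is not stated in the abstract):

```python
def hazard_index(edi_ug_per_kg: float, rfd_ug_per_kg: float) -> float:
    """Hazard index (hazard quotient): estimated daily intake divided
    by the reference dose, both in ug per kg body weight per day."""
    return edi_ug_per_kg / rfd_ug_per_kg

# EDI endpoints from the abstract; 0.6 ug/kg bw/day is a hypothetical
# reference dose for illustration only.
his = [hazard_index(edi, 0.6) for edi in (0.004, 0.131)]
```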
Nagarale, Girish; Ravindra, S; Thakur, Srinath; Setty, Swati
2010-10-01
C-reactive protein [CRP] levels increase to hundreds of mg/L within hours following infection. Studies have shown that serum CRP levels were elevated in periodontal disease. However, in all the previous studies, CRP levels were measured by using high-sensitivity CRP assay kits with minimal detection limits of 0.1 to 3 mg/L, well below the normal cutoff value of 10 mg/L. These high-sensitivity CRP assays need a proper laboratory setup, and these methods cannot be used as a routine chair-side test in the dental office. The purpose of this study was to investigate the serum CRP levels in subjects with periodontal disease by using a rapid chair-side diagnostic test kit with a lower detection limit of 6 mg/L and to compare the CRP levels before and after periodontal therapy. A total of 45 systemically healthy subjects were selected for the study. Subjects were divided into three groups: group A: healthy controls, group B: gingivitis, group C: periodontitis. Serum levels of CRP were determined by using a latex slide agglutination method with a commercially available kit with a lower detection limit of 6 mg/L. CRP was negative in all 15 subjects in groups A and B at baseline, 7th and 30th day. CRP was positive only in 2 subjects in group C at baseline and 7th day. Estimation of serum CRP by using a rapid chair-side diagnostic test kit is not of any significance in subjects with periodontitis.
Nagarale, Girish; Ravindra, S.; Thakur, Srinath; Setty, Swati
2010-01-01
Background: C-reactive protein [CRP] levels increase to hundreds of mg/L within hours following infection. Studies have shown that serum CRP levels were elevated in periodontal disease. However, in all the previous studies, CRP levels were measured by using high-sensitivity CRP assay kits with minimal detection limits of 0.1 to 3 mg/L, well below the normal cutoff value of 10 mg/L. These high-sensitivity CRP assays need a proper laboratory setup, and these methods cannot be used as a routine chair-side test in the dental office. Aim: The purpose of this study was to investigate the serum CRP levels in subjects with periodontal disease by using a rapid chair-side diagnostic test kit with a lower detection limit of 6 mg/L and to compare the CRP levels before and after periodontal therapy. Materials and Methods: A total of 45 systemically healthy subjects were selected for the study. Subjects were divided into three groups: group A: healthy controls, group B: gingivitis, group C: periodontitis. Serum levels of CRP were determined by using a latex slide agglutination method with a commercially available kit with a lower detection limit of 6 mg/L. Results: CRP was negative in all 15 subjects in groups A and B at baseline, 7th and 30th day. CRP was positive only in 2 subjects in group C at baseline and 7th day. Conclusion: Estimation of serum CRP by using a rapid chair-side diagnostic test kit is not of any significance in subjects with periodontitis. PMID:21731244
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Seong-Wook; Tian, Chao; Martini, Rainer, E-mail: rmartini@stevens.edu
We demonstrated highly sensitive detection of an explosive dissolved in solvent with a portable spectroscopy system (Q-MACS) by tracing the explosive byproduct N2O, in combination with a pulsed electric discharge system for safe explosive decomposition. Using octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), the evolved gas was monitored and analyzed by Q-MACS, and the presence of the dissolved explosive was clearly detected. While the presence of HMX could be identified directly in the air above the solutions even without plasma, much better results were achieved under decomposition. The experimental results give an estimated detection limit of 10 ppb, which corresponds to 15 pg of HMX.
NASA Astrophysics Data System (ADS)
Liu, Xinyu; Chen, Si; Luo, Yuemei; Bo, En; Wang, Nanshuo; Yu, Xiaojun; Liu, Linbo
2016-02-01
The evaluation of endothelium coverage on the vessel wall is highly sought after by cardiologists. Arterial endothelial cells play a crucial role in keeping low-density lipoprotein and leukocytes from entering the intima. Damage to endothelial cells is considered the first step of atherosclerosis development, and the presence of endothelial cells is an indicator of arterial healing after stent implantation. Intravascular OCT (IVOCT) is the highest-resolution coronary imaging modality, but it is still limited by an axial resolution of 10-15 µm. This limitation in axial resolution hinders our ability to visualize cellular-level details associated with coronary atherosclerosis. Spectral estimation optical coherence tomography (SE-OCT) uses modern spectral estimation techniques and may help reveal microstructures below the resolution limit. In this presentation, we conduct an ex vivo study using SE-OCT to image the endothelial cells of fresh swine aorta. We find that in OCT images with an axial resolution of 10 µm, we may gain visibility of individual endothelial cells by applying autoregressive spectral estimation techniques to enhance the axial resolution. We believe SE-OCT has the potential to evaluate endothelial cell coverage using current IVOCT systems with a 10-µm axial resolution.
Quantifiable long-term monitoring on parks and nature preserves
Beck, Scott; Moorman, Christopher; DePerno, Christopher S.; Simons, Theodore R.
2013-01-01
Herpetofauna have declined globally, and monitoring is a useful approach to document local and long-term changes. However, monitoring efforts often fail to account for detectability or follow standardized protocols. We performed a case study at Hemlock Bluffs Nature Preserve in Cary, NC to model occupancy of focal species and demonstrate a replicable long-term protocol useful to parks and nature preserves. From March 2010 to 2011, we documented occupancy of Ambystoma opacum(Marbled Salamander), Plethodon cinereus (Red-backed Salamander), Carphophis amoenus (Eastern Worm Snake), and Diadophis punctatus (Ringneck Snake) at coverboard sites and estimated breeding female Ambystoma maculatum (Spotted Salamander) abundance via dependent double-observer egg-mass counts in ephemeral pools. Temperature influenced detection of both Marbled and Red-backed Salamanders. Based on egg-mass data, we estimated Spotted Salamander abundance to be between 21 and 44 breeding females. We detected 43 of 53 previously documented herpetofauna species. Our approach demonstrates a monitoring protocol that accounts for factors that influence species detection and is replicable by parks or nature preserves with limited resources.
[Analysis of phthalates in aromatic and deodorant aerosol products and evaluation of exposure risk].
Sato, Yoshiki; Sugaya, Naeko; Nakagawa, Tomoo; Morita, Masatoshi
2015-01-01
We established an analytical method for the detection of seven phthalates, dimethyl phthalate, diethyl phthalate (DEP), benzyl butyl phthalate, di-i-butyl phthalate, dibutyl phthalate (DBP), diethylhexyl phthalate (DEHP), and di-n-octyl phthalate, using an ultra high performance liquid chromatograph equipped with a photodiode array detector. This method is quick, with minimal contamination, and was applied to the analysis of aromatic and deodorant aerosol products. Phthalates were detected in 15 of 52 samples purchased from 1999 to 2012 in Yokohama. Three types of phthalate (DEP, DBP, DEHP) were detected, and their concentrations ranged from 0.0085-0.23% DEP in nine samples, 0.012-0.045% DBP in four samples, and 0.012-0.033% DEHP in four samples. No other phthalate esters were detected. Furthermore, we estimated phthalate exposure via inhalation during common use of aromatic and deodorant aerosol products, then evaluated the associated risk. The estimated levels of phthalate exposure were lower than the tolerable daily intake, but the results indicated that aromatic and deodorant aerosol products could be a significant source of phthalate exposure.
Evaluation of Shiryaev-Roberts Procedure for On-line Environmental Radiation Monitoring
NASA Astrophysics Data System (ADS)
Watson, Mara Mae
An on-line radiation monitoring system that simultaneously concentrates and detects radioactivity is needed to detect an accidental leakage from a nuclear waste disposal facility or clandestine nuclear activity. Previous studies have shown that classical control chart methods can be applied to on-line radiation monitoring data to quickly detect these events as they occur; however, Bayesian control chart methods were not included in these studies. This work will evaluate the performance of a Bayesian control chart method, the Shiryaev-Roberts (SR) procedure, compared to classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), for use in on-line radiation monitoring of 99Tc in water using extractive scintillating resin. Measurements were collected by pumping solutions containing 0.1-5 Bq/L of 99Tc, as 99TcO4-, through a flow cell packed with extractive scintillating resin coupled to a Beta-RAM Model 5 HPLC detector. While 99TcO4- accumulated on the resin, simultaneous measurements were acquired in 10-s intervals and then re-binned to 100-s intervals. The Bayesian statistical method, the Shiryaev-Roberts procedure, and the classical control chart methods, Shewhart 3-sigma and cumulative sum (CUSUM), were applied to the data using statistical algorithms developed in MATLAB. Two SR control charts were constructed using Poisson distributions and Gaussian distributions to estimate the likelihood ratio, and are referred to as Poisson SR and Gaussian SR to indicate the distribution used to calculate the statistic. The Poisson and Gaussian SR methods required as little as 28.9 mL less solution at 5 Bq/L and as much as 170 mL less solution at 0.5 Bq/L to exceed the control limit than the Shewhart 3-sigma method. The Poisson SR method needed as little as 6.20 mL less solution at 5 Bq/L and up to 125 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method.
The Gaussian SR and CUSUM methods required comparable solution volumes for test solutions containing at least 1.5 Bq/L of 99Tc. For activity concentrations less than 1.5 Bq/L, the Gaussian SR method required as much as 40.8 mL less solution at 0.5 Bq/L to exceed the control limit than the CUSUM method. Both SR methods were able to consistently detect test solutions containing 0.1 Bq/L, unlike the Shewhart 3-sigma and CUSUM methods. Although the Poisson SR method required as much as 178 mL less solution to exceed the control limit than the Gaussian SR method, the Gaussian SR false positive rate of 0% was much lower than the Poisson SR false positive rate of 1.14%. A lower false positive rate made it easier to differentiate between a false positive and an increase in mean count rate caused by activity accumulating on the resin. The SR procedure is thus the ideal tool for low-level on-line radiation monitoring using extractive scintillating resin, because it needed less volume in most cases to detect an upward shift in the mean count rate than the Shewhart 3-sigma and CUSUM methods and consistently detected lower activity concentrations. The desired results for the monitoring scheme, however, need to be considered prior to choosing between the Poisson and Gaussian distribution to estimate the likelihood ratio, because each was advantageous under different circumstances. Once the control limit was exceeded, activity concentrations were estimated from the SR control chart using the slope of the control chart on a semi-logarithmic plot. Five of nine test solutions for the Poisson SR control chart produced concentration estimates within 30% of the actual value, but the worst case differed from the actual value by 263.2%. The estimates for the Gaussian SR control chart were much more precise, with six of eight solutions producing estimates within 30%.
Although the activity concentration estimates were only mediocre for the Poisson SR control chart and satisfactory for the Gaussian SR control chart, these results demonstrate that a relationship exists between the activity concentration and the SR control chart magnitude that can be exploited to determine the activity concentration from the SR control chart. More complex methods should be investigated to improve activity concentration estimates from the SR control charts.
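The SR procedure evaluated above has a simple recursive form: R_n = (1 + R_{n-1}) · Λ_n, where Λ_n is the likelihood ratio of the new observation under the post-change versus pre-change distributions, with an alarm raised when R_n exceeds a control limit. A minimal sketch for Poisson counts (all rates, counts, and the threshold are hypothetical, not the thesis's data):

```python
import math

def poisson_lr(x: int, lam0: float, lam1: float) -> float:
    """Likelihood ratio P(x | lam1) / P(x | lam0) for Poisson counts;
    the x! terms cancel."""
    return math.exp(-(lam1 - lam0)) * (lam1 / lam0) ** x

def shiryaev_roberts(counts, lam0, lam1, threshold):
    """SR recursion R_n = (1 + R_{n-1}) * LR_n. Returns the first
    index at which R_n exceeds the control limit, or None."""
    r = 0.0
    for n, x in enumerate(counts):
        r = (1.0 + r) * poisson_lr(x, lam0, lam1)
        if r > threshold:
            return n
    return None

# Background of ~5 counts per interval shifting to ~8 at index 10
counts = [5, 4, 6, 5, 5, 4, 6, 5, 5, 4] + [8, 9, 8, 10, 9, 8, 9, 10]
alarm = shiryaev_roberts(counts, lam0=5.0, lam1=8.0, threshold=1e4)
```

The Gaussian variant studied in the thesis replaces `poisson_lr` with the ratio of normal densities; the recursion itself is unchanged.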
NASA Technical Reports Server (NTRS)
Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.
2002-01-01
Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide: these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult and no studies have developed either the quantitative physical basis or remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which in turn, has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil and; 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. 
Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.
Lensing bias to CMB polarization measurements of compensated isocurvature perturbations
NASA Astrophysics Data System (ADS)
Heinrich, Chen
2018-01-01
Compensated isocurvature perturbations (CIPs) are opposite spatial fluctuations in the baryon and dark matter (DM) densities. They arise in the curvaton model and some models of baryogenesis. While the gravitational effects of baryon fluctuations are compensated by those of DM, leaving no observable impacts on the cosmic microwave background (CMB) at first order, they modulate the sound horizon at recombination, thereby correlating CMB anisotropies at different multipoles. As a result, CIPs can be reconstructed using quadratic estimators similarly to CMB detection of gravitational lensing. Because of these similarities, however, the CIP estimators are biased with lensing contributions that must be subtracted. These lensing contributions for CMB polarization measurement of CIPs are found to roughly triple the noise power of the total CIP estimator on large scales. In addition, the cross powers with temperature and E-mode polarization are contaminated by lensing-ISW (integrated Sachs-Wolfe) correlations and reionization-lensing correlations, respectively. For a cosmic-variance-limited temperature and polarization experiment measuring out to multipoles lmax = 2500, the lensing noise raises the detection threshold by a factor of 1.5, leaving a 2.7σ detection possible for the maximal CIP signal in the curvaton model.
Joy, Abraham; Anim-Danso, Emmanuel; Kohn, Joachim
2009-01-01
Methods for the detection and estimation of diphosgene and triphosgene are described. These compounds are widely used phosgene precursors which produce an intensely colored purple pentamethine oxonol dye when reacted with 1,3-dimethylbarbituric acid (DBA) and pyridine (or a pyridine derivative). Two quantitative methods are described, based on either UV absorbance or fluorescence of the oxonol dye. Detection limits are ~ 4 µmol/L by UV and <0.4 µmol/L by fluorescence. The third method is a test strip for the simple and rapid detection and semi-quantitative estimation of diphosgene and triphosgene, using a filter paper embedded with dimethylbarbituric acid and poly(4-vinylpyridine). Addition of a test solution to the paper causes a color change from white to light blue at low concentrations and to pink at higher concentrations of triphosgene. The test strip is useful for quick on-site detection of triphosgene and diphosgene in reaction mixtures. The test strip is easy to perform and provides clear signal readouts indicative of the presence of phosgene precursors. The utility of this method was demonstrated by the qualitative determination of residual triphosgene during the production of poly(Bisphenol A carbonate). PMID:19782219
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Yang, Na; Liu, Yi
2018-04-01
A novel organic small molecule with a D-π-A structure was prepared, which was found to be a promising colorimetric and ratiometric UV-vis spectral probe for the detection of phosphorylated proteins with the help of tetravalent zirconium ion. This optical probe, based on the chromophore WYF-1, shows a rapid response (within 10 s) and high selectivity and sensitivity for phosphorylated proteins, giving distinct colorimetric and ratiometric UV-vis changes at 720 and 560 nm. The detection limit for phosphorylated proteins was estimated to be 100 nM. In addition, the probe was successfully applied to the detection of phosphorylated proteins in placental tissue samples, indicating that it holds great potential for phosphorylated protein detection.
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining automatic estimation of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat and backfat thickness or subcutaneous fat. Proposal An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two previously detected curves that limit the steak and the rib eye. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimates obtained by an expert using commercial software, and chemical analysis. Conclusions The proposed algorithms show good results in calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat. PMID:25734452
NASA Astrophysics Data System (ADS)
Tyul'Bashev, S. A.
2009-01-01
A complete sample of radio sources has been studied using the interplanetary scintillation method. In total, 32 sources were observed, with scintillations detected in 12 of them. The remaining sources have upper limits for the flux densities of their compact components. Integrated flux densities are estimated for 18 sources.
Integration of lidar and Landsat ETM+ data for estimating and mapping forest canopy height.
Andrew T. Hudak; Michael A. Lefsky; Warren B. Cohen; Mercedes Berterretche
2002-01-01
Light detection and ranging (LIDAR) data provide accurate measurements of forest canopy structure in the vertical plane; however, current LIDAR sensors have limited coverage in the horizontal plane. Landsat data provide extensive coverage of generalized forest structural classes in the horizontal plane but are relatively insensitive to variation in forest canopy height...
Lanier, Wendy E.; Bailey, Larissa L.; Muths, Erin L.
2016-01-01
Conservation of imperiled species often requires knowledge of vital rates and population dynamics. However, these can be difficult to estimate for rare species and small populations. This problem is further exacerbated when individuals are not available for detection during some surveys due to limited access, delaying surveys and creating mismatches between the breeding behavior and survey timing. Here we use simulations to explore the impacts of this issue using four hypothetical boreal toad (Anaxyrus boreas boreas) populations, representing combinations of logistical access (accessible, inaccessible) and breeding behavior (synchronous, asynchronous). We examine the bias and precision of survival and breeding probability estimates generated by survey designs that differ in effort and timing for these populations. Our findings indicate that the logistical access of a site and mismatch between the breeding behavior and survey design can greatly limit the ability to yield accurate and precise estimates of survival and breeding probabilities. Simulations similar to what we have performed can help researchers determine an optimal survey design(s) for their system before initiating sampling efforts.
Limits on the fluctuating part of y-type distortion monopole from Planck and SPT results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khatri, Rishi; Sunyaev, Rashid, E-mail: khatri@mpa-garching.mpg.de, E-mail: sunyaev@mpa-garching.mpg.de
2015-08-01
We use the published Planck and SPT cluster catalogs [1,2] and recently published y-distortion maps [3] to put strong observational limits on the contribution of the fluctuating part of the y-type distortions to the y-distortion monopole. Our bounds are 5.4×10^−8 < ⟨y⟩ < 2.2×10^−6. Our upper bound is a factor of 6.8 stronger than the currently best upper 95% confidence limit from COBE-FIRAS of ⟨y⟩ < 15×10^−6. In the standard cosmology, large scale structure is the only source of such distortions and our limits therefore constrain the baryonic physics involved in the formation of the large scale structure. Our lower limit, from the detected clusters in the Planck and SPT catalogs, also implies that a Pixie-like experiment should detect the y-distortion monopole at >27-σ. The biggest sources of uncertainty in our upper limit are the monopole offsets between different HFI channel maps, which we estimate to be <10^−6.
Detecting Close-In Extrasolar Giant Planets with the Kepler Photometer via Scattered Light
NASA Astrophysics Data System (ADS)
Jenkins, J. M.; Doyle, L. R.; Kepler Discovery Mission Team
2003-05-01
NASA's Kepler Mission will be launched in 2007 primarily to search for transiting Earth-sized planets in the habitable zones of solar-like stars. In addition, it will be poised to detect the reflected light component from close-in extrasolar giant planets (CEGPs) similar to 51 Peg b. Here we use the DIARAD/SOHO time series along with models for the reflected light signatures of CEGPs to evaluate Kepler's ability to detect such planets. We examine the detectability as a function of stellar brightness, stellar rotation period, planetary orbital inclination angle, and planetary orbital period, and then estimate the total number of CEGPs that Kepler will detect over its four year mission. The analysis shows that intrinsic stellar variability of solar-like stars is a major obstacle to detecting the reflected light from CEGPs. Monte Carlo trials are used to estimate the detection threshold required to limit the total number of expected false alarms to no more than one for a survey of 100,000 stellar light curves. Kepler will likely detect 100-760 51 Peg b-like planets by reflected light with orbital periods up to 7 days. LRD was supported by the Carl Sagan Chair at the Center for the Study of Life in the Universe, a division of the SETI Institute. JMJ received support from the Kepler Mission Photometer and Science Office at NASA Ames Research Center.
Detection of semi-volatile organic compounds in permeable ...
Abstract The Edison Environmental Center (EEC) has a research and demonstration permeable parking lot comprised of three different permeable systems: permeable asphalt, porous concrete and interlocking concrete permeable pavers. Water quality and quantity analysis has been ongoing since January 2010. This paper describes a subset of the water quality analysis: analysis of semivolatile organic compounds (SVOCs) to determine if hydrocarbons were present in water infiltrated through the permeable surfaces. SVOCs were analyzed in samples collected on 11 dates over a 3-year period, from 2/8/2010 to 4/1/2013. Results are broadly divided into three categories: 42 chemicals were never detected; 12 chemicals (11 chemical tests) were detected at a frequency of less than 10%; and 22 chemicals were detected at a frequency of 10% or greater (ranging from 10% to 66.5% detections). Fundamental and exploratory statistical analyses were performed on the latter results by grouping them by surface type. The statistical analyses were limited by the low frequency of detections and by sample dilutions, which impacted detection limits. The infiltrate data through the three permeable surfaces were analyzed as non-parametric data by the Kaplan-Meier estimation method for fundamental statistics; there were some statistically observable differences in concentration between pavement types when using the Tarone-Ware comparison hypothesis test. Additionally Spearman Rank order non-parame
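The Kaplan-Meier treatment of nondetects used above can be sketched in a few lines: left-censored concentrations (nondetects reported at their detection limits) are "flipped" into right-censored data, the standard product-limit estimator is applied, and the mean is read off the area under the survival curve. This is a minimal illustration with made-up concentrations, not the study's dataset.

```python
# Sketch: Kaplan-Meier mean for left-censored data (nondetects at their
# detection limits). Values and variable names are illustrative only.

def km_mean_left_censored(values, detected, flip=None):
    """Mean of left-censored data via the 'flipping' trick: x -> M - x
    turns left-censoring into right-censoring, where the standard
    Kaplan-Meier product-limit estimator applies."""
    M = flip if flip is not None else max(values) + 1.0
    # right-censored sample: nondetects become censored flipped times
    data = sorted((M - v, d) for v, d in zip(values, detected))
    n = len(data)
    s = 1.0               # current survival estimate S(t)
    mean_flipped = 0.0    # area under the KM curve = restricted mean
    prev_t = 0.0
    at_risk = n
    for t, event in data:
        mean_flipped += s * (t - prev_t)
        prev_t = t
        if event:                         # an actual detection
            s *= (at_risk - 1) / at_risk
        at_risk -= 1
    return M - mean_flipped

# concentrations; False marks a nondetect reported at its detection limit
conc     = [0.5, 1.2, 0.8, 2.0, 1.5, 0.9, 3.1, 0.5]
detected = [False, True, True, True, True, False, True, False]
print(round(km_mean_left_censored(conc, detected), 3))
```

Note that the result falls below the naive mean obtained by substituting each nondetect with its detection limit, which is the systematic bias the KM correction removes.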
Lidar-Based Navigation Algorithm for Safe Lunar Landing
NASA Technical Reports Server (NTRS)
Myers, David M.; Johnson, Andrew E.; Werner, Robert A.
2011-01-01
The purpose of Hazard Relative Navigation (HRN) is to provide measurements to the Navigation Filter so that it can limit errors on the position estimate after hazards have been detected. The hazards are detected by processing a hazard digital elevation map (HDEM). The HRN process takes lidar images as the spacecraft descends to the surface and matches these to the HDEM to compute relative position measurements. Since the HDEM has the hazards embedded in it, the position measurements are relative to the hazards, hence the name Hazard Relative Navigation.
Adam, Vojtech; Zitka, Ondrej; Dolezal, Petr; Zeman, Ladislav; Horna, Ales; Hubalek, Jaromir; Sileny, Jan; Krizkova, Sona; Trnkova, Libuse; Kizek, Rene
2008-01-01
Lactoferrin is a multifunctional protein with antimicrobial activity and other health-beneficial properties. The main aim of this work was to propose an easy-to-use technique for lactoferrin isolation from cow colostrum samples. Primarily we utilized sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) for isolation of lactoferrin from the real samples. Moreover, we tested the automated microfluidic Experion electrophoresis system to isolate lactoferrin from the colostrum sample. The well-developed signal of lactoferrin was determined with a detection limit (3 S/N) of 20 ng/ml. Although Experion is faster than SDS-PAGE, neither separation technique can be used in routine analysis. Therefore we tested a third separation technique, ion exchange chromatography, using a monolithic column coupled with a UV-VIS detector (LC-UV-VIS). We optimized the wavelength (280 nm), the ionic strength of the elution solution (1.5 M NaCl) and the flow rates of the retention and elution solutions (0.25 ml/min and 0.75 ml/min, respectively). Under the optimal conditions the detection limit was estimated as 0.1 μg/ml of lactoferrin. Using LC-UV-VIS we determined that lactoferrin concentration varied from 0.5 g/l to 1.1 g/l in cow colostrum collected at time intervals up to 72 hours after birth. Further we focused on miniaturization of the detection device. We tested amperometric detection at a carbon electrode. The results encouraged us to attempt to miniaturize the whole detection system and to test it on analysis of real samples of human faeces, because the lactoferrin level in faeces is closely associated with inflammation of the intestinal mucous membrane. For the purpose of miniaturization we employed the technology of printed electrodes. The detection limit of lactoferrin measured with the screen-printed electrodes fabricated by us was estimated as 10 μg/ml. The fabricated electrodes were compared with commercially available ones.
It follows from the obtained results that the responses measured by the commercial electrodes are approximately ten times higher than those measured by the electrodes fabricated by us. This relates to the smaller working electrode surface area of our electrodes (about 50%) compared to the commercial ones. The screen-printed electrodes fabricated by us were utilized for determination of lactoferrin in faeces. Given that the faeces sample was obtained from a young, healthy man, the amount of lactoferrin in the sample was under the limit of detection of this method. PMID:27879717
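The "3 S/N" detection limit quoted in this abstract follows the common 3-sigma convention: LOD = 3 × (blank noise standard deviation) / (calibration slope). A minimal sketch with invented calibration numbers (the blank readings and slope are assumptions, not the study's data):

```python
# 3-sigma detection limit from blank replicates and a calibration slope.
def lod_3sigma(blank_readings, slope):
    mean = sum(blank_readings) / len(blank_readings)
    # sample variance of the blank responses
    var = sum((x - mean) ** 2 for x in blank_readings) / (len(blank_readings) - 1)
    return 3.0 * var ** 0.5 / slope

blanks = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13]   # detector response, blank sample
slope = 0.05                                     # response per ng/ml (assumed)
print(round(lod_3sigma(blanks, slope), 2), "ng/ml")
```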
Non-invasive detection of cocaine dissolved in beverages using displaced Raman spectroscopy.
Eliasson, C; Macleod, N A; Matousek, P
2008-01-21
We demonstrate the potential of Raman spectroscopy to detect cocaine concealed inside transparent glass bottles containing alcoholic beverages. A clear Raman signature of cocaine with good signal-to-noise was obtained from an approximately 300 g solution of adulterated cocaine (purity 75%) in a 0.7 L authentic brown bottle of rum with 1 s acquisition time. The detection limit was estimated to be of the order of 9 g of pure cocaine per 0.7 L (approximately 0.04 mol L^-1) with 1 s acquisition time. The technique holds great promise for the fast, non-invasive detection of concealed illicit compounds inside beverages using portable Raman instruments, thus permitting drug trafficking to be combated more effectively.
Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection
NASA Astrophysics Data System (ADS)
Corvaja, Roberto
2017-02-01
In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection with standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and the unavoidable phase noise of lasers, which degrades the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states in which a local laser is employed that is not locked to the received signal; instead, a postdetection phase correction is applied. Here the reduction of the secure key rate caused by laser phase noise is analytically evaluated for both individual and collective attacks, and a pilot-assisted phase-estimation scheme is proposed, outlining the trade-off in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.
Constraints on the FRB rate at 700-900 MHz
NASA Astrophysics Data System (ADS)
Connor, Liam; Lin, Hsiu-Hsien; Masui, Kiyoshi; Oppermann, Niels; Pen, Ue-Li; Peterson, Jeffrey B.; Roman, Alexander; Sievers, Jonathan
2016-07-01
Estimating the all-sky rate of fast radio bursts (FRBs) has been difficult due to small-number statistics and the fact that they are seen by disparate surveys in different regions of the sky. In this paper we provide limits for the FRB rate at 800 MHz based on the only burst detected at frequencies below 1.4 GHz, FRB 110523. We discuss the difficulties in rate estimation, particularly in providing an all-sky rate above a single fluence threshold. We find an implied rate between 700 and 900 MHz that is consistent with the rate at 1.4 GHz, scaling to 6.4^{+29.5}_{-5.0} × 10^3 sky^{-1} day^{-1} for an HTRU-like survey. This is promising for upcoming experiments below a GHz like CHIME and UTMOST, for which we forecast detection rates. Given 110523's discovery at 32σ with nothing weaker detected, down to the threshold of 8σ, we find consistency with a Euclidean flux distribution but disfavour steep distributions, ruling out γ > 2.2.
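The small-number-statistics difficulty can be made concrete: with a single detected burst, the classical central Poisson confidence interval on the expected event count already spans more than two orders of magnitude, before any survey-exposure factors are folded in. A minimal sketch of the generic Neyman construction (not the paper's full rate calculation):

```python
# Central Poisson (Neyman) confidence interval for the mean, given
# n_obs observed events. Numbers are generic, not the paper's rates.
import math

def poisson_cdf(k, lam):
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_interval(n_obs, cl=0.95):
    alpha = 1.0 - cl

    def solve(f, lo, hi):          # bisection; bracket has f(lo)>0, f(hi)<0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # upper limit: P(X <= n_obs | lam_up) = alpha/2
    lam_up = solve(lambda l: poisson_cdf(n_obs, l) - alpha / 2, 0.0, 50.0)
    # lower limit: P(X >= n_obs | lam_lo) = alpha/2 (zero when n_obs == 0)
    lam_lo = 0.0 if n_obs == 0 else solve(
        lambda l: (1 - poisson_cdf(n_obs - 1, l)) - alpha / 2, 50.0, 0.0)
    return lam_lo, lam_up

lo, hi = poisson_interval(1)
print(round(lo, 3), round(hi, 3))   # -> 0.025 5.572
```

A single event thus constrains the true mean only to roughly 0.03-5.6 events at 95% confidence, which is why the quoted rate carries such asymmetric error bars.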
The importance of accounting for larval detectability in mosquito habitat-association studies.
Low, Matthew; Tsegaye, Admasu Tassew; Ignell, Rickard; Hill, Sharon; Elleby, Rasmus; Feltelius, Vilhelm; Hopkins, Richard
2016-05-04
Mosquito habitat-association studies are an important basis for disease control programmes and/or vector distribution models. However, studies do not explicitly account for incomplete detection during larval presence and abundance surveys, with potential for significant biases because of environmental influences on larval behaviour and sampling efficiency. Data were used from a dip-sampling study for Anopheles larvae in Ethiopia to evaluate the effect of six factors previously associated with larval sampling (riparian vegetation, direct sunshine, algae, water depth, pH and temperature) on larval presence and detectability. Comparisons were made between: (i) a presence-absence logistic regression where samples were pooled at the site level and detectability ignored, (ii) a success versus trials binomial model, and (iii) a presence-detection mixture model that separately estimated presence and detection, and fitted different explanatory variables to these estimations. Riparian vegetation was consistently highlighted as important, strongly suggesting it explains larval presence (-). However, depending on how larval detectability was estimated, the other factors showed large variations in their statistical importance. The presence-detection mixture model provided strong evidence that larval detectability was influenced by sunshine and water temperature (+), with weaker evidence for algae (+) and water depth (-). For larval presence, there was also some evidence that water depth (-) and pH (+) influenced site occupation. The number of dip-samples needed to determine if larvae were likely present at a site was condition dependent: with sunshine and warm water requiring only two dips, while cooler water and cloud cover required 11. Environmental factors influence true larval presence and larval detectability differentially when sampling in field conditions. 
Researchers need to be more aware of the limitations and possible biases in different analytical approaches used to associate larval presence or abundance with local environmental conditions. These effects can be disentangled using data that are routinely collected (i.e., multiple dip samples at each site) by employing a modelling approach that separates presence from detectability.
Video Salient Object Detection via Fully Convolutional Networks.
Wang, Wenguan; Shen, Jianbing; Shao, Ling
This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
Estimation of the limit of detection using information theory measures.
Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago
2014-01-31
Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, proposed methodologies to estimate the LOD are usually simplified to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of LOD utilizing information theory tools that deals with noise of any kind and allows the introduction of prior knowledge easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that the benchmark of analytical systems based on the ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
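The binary-channel analogy can be sketched numerically: model the analytical system as a blank/analyte source feeding a thresholded Gaussian-noise signal, compute the mutual information between source and decision, and define a LOD as the lowest concentration whose decision carries a chosen amount of information. All numbers here (noise sigma, decision threshold, prior, 0.5-bit criterion) are illustrative assumptions, not the paper's definitions.

```python
# Mutual information I(source; decision) for an analytical system
# viewed as a binary channel: blank/analyte -> negative/positive.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def h2(p):  # binary entropy in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(conc, sigma=1.0, threshold=1.645, p_analyte=0.5):
    """Signal = conc + Gaussian noise; decided positive above threshold."""
    p_fp = 1.0 - norm_cdf(threshold / sigma)            # false positive rate
    p_tp = 1.0 - norm_cdf((threshold - conc) / sigma)   # true positive rate
    p_pos = p_analyte * p_tp + (1 - p_analyte) * p_fp
    # I(X;Y) = H(Y) - H(Y|X)
    return h2(p_pos) - (p_analyte * h2(p_tp) + (1 - p_analyte) * h2(p_fp))

# LOD as the lowest concentration whose decision carries >= 0.5 bit
conc = 0.0
while mutual_information(conc) < 0.5:
    conc += 0.01
print(f"LOD ~ {conc:.2f} (in units of noise sigma)")
```

Changing the prior p_analyte shifts the resulting LOD, which is exactly the point the abstract makes: two systems with identical error-rate LODs can convey different amounts of information about the source.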
JAMES WEBB SPACE TELESCOPE CAN DETECT KILONOVAE IN GRAVITATIONAL WAVE FOLLOW-UP SEARCH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartos, I.; Márka, S.; Huard, T. L., E-mail: ibartos@phys.columbia.edu
Kilonovae represent an important electromagnetic counterpart for compact binary mergers, which could become the most commonly detected gravitational-wave (GW) source. Follow-up observations of kilonovae, triggered by GW events, are nevertheless difficult due to poor localization by GW detectors and due to their faint near-infrared peak emission, for which observational capability is limited. We show that the Near-Infrared Camera (NIRCam) on the James Webb Space Telescope will be able to detect kilonovae within the relevant GW-detection range of ∼200 Mpc in short (≲12 s) exposure times for a week following the merger. Despite this sensitivity, a kilonova search fully covering a fiducial localized area of 10 deg^2 will not be viable with NIRCam due to its limited field of view. However, targeted surveys may be developed to optimize the likelihood of discovering kilonovae efficiently within limited observing time. We estimate that a survey of 10 deg^2 focused on galaxies within 200 Mpc would require about 13 hr, dominated by overhead times; a survey further focused on galaxies exhibiting high star formation rates would require ∼5 hr. The characteristic time may be reduced to as little as ∼4 hr, without compromising the likelihood of detecting kilonovae, by surveying sky areas associated with the 50%, rather than 90%, confidence regions of 3 GW events, rather than a single event. Upon the detection and identification of a kilonova, a limited number of NIRCam follow-up observations could constrain the properties of matter ejected by the binary and the equation of state of dense nuclear matter.
Plucinski, Mateusz M; Rogier, Eric; Dimbu, Pedro Rafael; Fortes, Filomeno; Halsey, Eric S; Aidoo, Michael
2017-10-01
Most malaria testing is by rapid diagnostic tests (RDTs) that detect Plasmodium falciparum histidine-rich protein 2 (HRP2). Recently, several RDT manufacturers have developed highly sensitive RDTs (hsRDTs), promising a limit of detection (LOD) orders of magnitude lower than conventional RDTs. To model the added utility of hsRDTs, HRP2 concentration in Angolan outpatients was measured quantitatively using an ultrasensitive bead-based assay. The distribution of HRP2 concentration was bimodal in both afebrile and febrile patients. The conventional RDT was able to detect 81% of all HRP2-positive febrile patients and 52-77% of HRP2-positive afebrile patients. The added utility of hsRDTs was estimated to be greater in afebrile patients, where an hsRDT with a LOD of 200 pg/mL would detect an additional 50-60% of HRP2-positive persons compared with a conventional RDT with a LOD of 3,000 pg/mL. In febrile patients, the hsRDT would detect an additional 10-20% of cases. Conventional RDTs already capture the vast majority of symptomatic HRP2-positive individuals, and hsRDTs would have to reach a sufficiently low LOD approaching 200 pg/mL to provide added utility in identifying HRP2-positive, asymptomatic individuals.
Solvent cleaning of pole transformers containing PCB contaminated insulating oil.
Kanbe, H; Shibuya, M
2001-01-01
In 1989, it was discovered that the recycled insulation oil in pole transformers for electric power supply was contaminated with trace amounts of polychlorinated biphenyls (PCBs; maximum 50 mg-PCB/kg-insulation oil). In order to remove the PCBs from transformer components using n-hexane as a solvent, we investigated the relationship between progressive stages of dismantling and cleaning results. The results are summarized as follows: (1) Based on the cleaning test results, we made an estimate of the residual PCB amount on iron and copper components. By dismantling the test pole transformers into the "iron core and coil portion" and cleaning the components, we achieved a residual PCB amount that was below the limit of detection (0.05 mg-PCB/kg-material). To achieve a residual PCB amount below the limit of detection for the transformer paper component, it was necessary to cut the paper into pieces smaller than 5 mm. We were unable to achieve a residual PCB amount below the limit of detection for the wood component. (2) Compared to Japan's stipulated limited concentration standard values for PCBs, the results of the cleaning test show that cleaning iron or copper components with PCBs only on their surface with the solvent n-hexane will satisfy the limited concentration standard values when care is taken to ensure the component surfaces have adequate contact with the cleaning solvent.
El Aswad, Bahaa El Deen Wade; Doenhoff, Michael J; El Hadidi, Abeer Shawky; Schwaeble, Wilhelm J; Lynch, Nicholas J
2011-03-01
Schistosomiasis is traditionally diagnosed by microscopic detection of ova in stool samples, but this method is labour intensive and its sensitivity is limited by low and variable egg secretion in many patients. An alternative is an ELISA using Schistosoma mansoni soluble egg antigen (SEA) to detect anti-schistosome antibody in patient samples. SEA is a good diagnostic marker in non-endemic regions but is of limited value in endemic regions, mainly because of its high cost and limited specificity. Here we assess seven novel antigens for the detection of S. mansoni antibody in an endemic region (the Northern Nile Delta). Using recombinant S. mansoni calreticulin (CRT) and fragments thereof, anti-CRT antibodies were detected in the majority of sera from 97 patients. The diagnostic value of some of these antigens was, however, limited by the presence of cross-reacting antibody in the healthy controls, even those recruited in non-endemic areas. Cercarial transformation fluid (CTF), a supernatant that contains soluble material released by the cercariae upon transformation to the schistosomula, is cheaper and easier to produce than SEA. An ELISA using CTF as the detection antigen had a sensitivity of 89.7% and an estimated specificity of 100% when used in non-endemic regions, matching the performance of the established SEA ELISA. CTF was substantially more specific than SEA for diagnosis in the endemic region, and less susceptible than SEA to cross-reacting antibody in the sera of controls with other protozoan and metazoan infections. Copyright © 2010 Elsevier GmbH. All rights reserved.
Huang, Yang; Siwo, Geoffrey; Wuchty, Stefan; Ferdig, Michael T; Przytycka, Teresa M
2012-04-01
It is being increasingly recognized that many important phenotypic traits, including various diseases, are governed by a combination of weak genetic effects and their interactions. While the detection of epistatic interactions that involve a non-additive effect of two loci on a quantitative trait is particularly challenging, this interaction type is fundamental for the understanding of genome organization and gene regulation. However, current methods that detect epistatic interactions typically rely on the existence of a strong primary effect, considerably limiting the sensitivity of the search. To fill this gap, we developed a new method, SEE (Symmetric Epistasis Estimation), allowing the genome-wide detection of epistatic interactions without the need for a strong primary effect. We applied our approach to progeny crosses of the human malaria parasite P. falciparum and S. cerevisiae. We found an abundance of epistatic interactions in the parasite and a much smaller number of such interactions in yeast. The genome of P. falciparum also harboured several epistatic interaction hotspots that putatively play a role in drug resistance mechanisms. The abundance of observed epistatic interactions might suggest a mechanism of compensation for the extremely limited repertoire of transcription factors. Interestingly, epistatic interaction hotspots were associated with elevated levels of linkage disequilibrium, an observation that suggests selection pressure acting on P. falciparum, potentially reflecting host-pathogen interactions or drug-induced selection.
LOW-METALLICITY YOUNG CLUSTERS IN THE OUTER GALAXY. II. SH 2-208
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yasui, Chikako; Kobayashi, Naoto; Izumi, Natsuko
We obtained deep near-infrared images of Sh 2-208, one of the lowest-metallicity H ii regions in the Galaxy, [O/H] = −0.8 dex. We detected a young cluster in the center of the H ii region with a limiting magnitude of K = 18.0 mag (10σ), which corresponds to a mass detection limit of ∼0.2 M_⊙. This enables the comparison of star-forming properties under low metallicity with those of the solar neighborhood. We identified 89 cluster members. From the fitting of the K-band luminosity function (KLF), the age and distance of the cluster are estimated to be ∼0.5 Myr and ∼4 kpc, respectively. The estimated young age is consistent with the detection of strong CO emission in the cluster region and the estimated large extinction of cluster members (A_V ∼ 4-25 mag). The observed KLF suggests that the underlying initial mass function (IMF) of the low-metallicity cluster is not significantly different from canonical IMFs in the solar neighborhood in terms of both high-mass slope and IMF peak (characteristic mass). Despite the very young age, the disk fraction of the cluster is estimated at only 27% ± 6%, which is significantly lower than those at solar metallicity. These results are similar to Sh 2-207, another star-forming region close to Sh 2-208 with a separation of 12 pc, suggesting that star-forming activities in low-metallicity environments are essentially identical to those in the solar neighborhood, except for the disk dispersal timescale. From large-scale mid-infrared images, we suggest that sequential star formation is taking place in Sh 2-207, Sh 2-208, and the surrounding region, triggered by an expanding bubble with a ∼30 pc radius.
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
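The reported per-cell CoM content makes the biomass conversion a one-line calculation. A toy example using the study's mean of 0.39 fmol CoM/cell and an assumed measured concentration:

```python
# Convert a measured CoM concentration into an estimated methanogen
# cell density. The measured value is illustrative; 0.39 fmol/cell is
# the mean cellular CoM content reported in the study.
com_pmol_per_ml = 5.0                        # measured CoM, pmol per ml
com_fmol_per_cell = 0.39                     # mean cellular CoM content

com_fmol_per_ml = com_pmol_per_ml * 1000.0   # 1 pmol = 1000 fmol
cells_per_ml = com_fmol_per_ml / com_fmol_per_cell
print(f"{cells_per_ml:.2e} cells/ml")
```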
NASA Astrophysics Data System (ADS)
Zhou, Ping; Zev Rymer, William
2004-12-01
The number of motor unit action potentials (MUAPs) appearing in the surface electromyogram (EMG) signal is directly related to motor unit recruitment and firing rates and therefore offers potentially valuable information about the level of activation of the motoneuron pool. In this paper, based on morphological features of the surface MUAPs, we estimate the number of MUAPs present in the surface EMG by counting the negative peaks in the signal. Several signal processing procedures are applied to the surface EMG to facilitate this peak counting process. The MUAP number estimation performance of this approach is first illustrated using surface EMG simulations. Then, by evaluating peak counting results from EMG records detected with a highly selective surface electrode at different contraction levels of the first dorsal interosseous (FDI) muscle, the utility and limitations of such direct peak counts for MUAP number estimation in the surface EMG are further explored.
NASA Astrophysics Data System (ADS)
Zhang, Ziyang; Sun, Di; Han, Tongshuai; Guo, Chao; Liu, Jin
2016-10-01
In non-invasive blood component measurement using near-infrared spectroscopy, the useful signals caused by concentration variations in the components of interest, such as glucose, hemoglobin and albumin, are relatively weak and easily disturbed by noise. We improved the signals by using, for each wavelength, the optimum path-length that maximizes the variation of transmitted light intensity when the concentration of a component varies. After path-length optimization for every wavelength in 1000-2500 nm, we present the detection limits for glucose, hemoglobin and albumin when measuring them in a tissue phantom. The evaluated detection limits represent the best reachable precision level, since they assume measurement at a high signal-to-noise ratio (SNR) with the optimum path-length. From the results, suitable wavelengths in 1000-2500 nm for measuring the three components can be screened by comparing their detection limits with the corresponding measurement requirements. Detection limits for other blood components can be evaluated with the same method. Moreover, we use an equation to estimate the absorbance at the optimum path-length for every wavelength in 1000-2500 nm caused by the three components. This makes the evaluation easy to realize, because adjusting the sample cell's size to the precise path-length value for every wavelength is not necessary. The equation can also be applied to the measurement of other blood components using the optimum path-length for every used wavelength.
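The optimum path-length idea can be illustrated with the Beer-Lambert law: for transmitted intensity I = I0·exp(−μl), the sensitivity to a concentration change, |dI/dc|, is proportional to l·exp(−μl), which peaks at l = 1/μ. A minimal sketch under that assumption (the attenuation value below is an arbitrary illustration, not a value from the paper):

```python
import math

def concentration_sensitivity(path_len, mu, i0=1.0):
    """|dI/dc| is proportional to l * exp(-mu * l): the change in
    transmitted intensity per unit change in analyte concentration,
    for path-length l and background attenuation coefficient mu."""
    return i0 * path_len * math.exp(-mu * path_len)

# Differentiating l * exp(-mu * l) with respect to l gives a maximum at
# l = 1/mu, i.e. where the total (natural-log) absorbance equals 1 --
# the classical optimum path-length result.
mu = 2.0          # hypothetical attenuation coefficient, 1/mm
l_opt = 1.0 / mu  # optimum path-length, 0.5 mm under this assumption
```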
Estimation of the transmission dynamics of African swine fever virus within a swine house.
Nielsen, J P; Larsen, T S; Halasa, T; Christiansen, L E
2017-10-01
The spread of African swine fever virus (ASFV) threatens to reach further parts of Europe. In countries with a large swine production, an outbreak of ASF may result in devastating economic consequences for the swine industry. Simulation models can assist decision makers in setting up contingency plans, which creates a need for estimated transmission parameters. This study presents a new analysis of a previously published study. A full likelihood framework is presented, including the impact of model assumptions on the estimated transmission parameters. As animals were only tested every other day, an interpretation was introduced to cover the weighted infectiousness on unobserved days for the individual animals (WIU). Based on our model and the set of assumptions, the within- and between-pen transmission parameters were estimated to be βw = 1·05 (95% CI 0·62-1·72) and βb = 0·46 (95% CI 0·17-1·00), respectively, and the WIU to be 1·00 (95% CI 0-1). Furthermore, we simulated the spread of ASFV within a pig house using a modified SEIR model to establish the time from infection of one animal until ASFV is detected in the herd. Based on a chosen detection limit of 2·55%, equivalent to 10 dead pigs out of 360, the disease would be detected 13-19 days after introduction.
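A chain-binomial SEIR sketch can illustrate how transmission parameters of this kind translate into a detection delay. The β values and the 10-dead-of-360 detection rule come from the abstract; the pen layout, latent and infectious periods, mortality-on-recovery, and frequency-dependent mixing are hypothetical choices for illustration, not the authors' model:

```python
import math
import random

def simulate_asfv(n_pens=6, pigs_per_pen=60, beta_w=1.05, beta_b=0.46,
                  latent=4, infectious=5, detect_dead=10, seed=1):
    """Daily chain-binomial SEIR sketch of within-house ASFV spread.
    Returns the day on which `detect_dead` cumulative deaths are
    reached, or None if the epidemic fades out within a year."""
    rng = random.Random(seed)
    # each pig is [state, days_left_in_state]; states: S, E, I, D
    pens = [[["S", 0] for _ in range(pigs_per_pen)] for _ in range(n_pens)]
    pens[0][0] = ["I", infectious]  # single index case
    n_total = n_pens * pigs_per_pen
    dead = 0
    for day in range(1, 366):
        inf_per_pen = [sum(pig[0] == "I" for pig in pen) for pen in pens]
        total_inf = sum(inf_per_pen)
        newly_exposed = []
        for k, pen in enumerate(pens):
            # per-susceptible daily infection probability from within-
            # and between-pen forces of infection (frequency-dependent)
            foi = (beta_w * inf_per_pen[k] / pigs_per_pen
                   + beta_b * (total_inf - inf_per_pen[k]) / n_total)
            p_inf = 1.0 - math.exp(-foi)
            newly_exposed += [pig for pig in pen
                              if pig[0] == "S" and rng.random() < p_inf]
        # progress existing E and I stages, then seed the new exposures
        for pen in pens:
            for pig in pen:
                if pig[0] in ("E", "I"):
                    pig[1] -= 1
                    if pig[1] == 0:
                        if pig[0] == "E":
                            pig[0], pig[1] = "I", infectious
                        else:
                            pig[0] = "D"
                            dead += 1
        for pig in newly_exposed:
            pig[0], pig[1] = "E", latent
        if dead >= detect_dead:
            return day
    return None
```

With these assumed stage durations, single runs typically reach 10 deaths within roughly two to three weeks, of the same order as the 13-19 days reported.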
Hunter, Margaret; Meigs-Friend, Gaia; Ferrante, Jason; Takoukam Kamla, Aristide; Dorazio, Robert; Keith Diagne, Lucy; Luna, Fabia; Lanyon, Janet M.; Reid, James P.
2018-01-01
Environmental DNA (eDNA) detection is a technique used to non-invasively detect cryptic, low density, or logistically difficult-to-study species, such as imperiled manatees. For eDNA measurement, genetic material shed into the environment is concentrated from water samples and analyzed for the presence of target species. Cytochrome b quantitative PCR and droplet digital PCR eDNA assays were developed for the three Vulnerable manatee species: African, Amazonian, and both subspecies of the West Indian (Florida and Antillean) manatee. Environmental DNA assays can help to delineate manatee habitat ranges, high use areas, and seasonal population changes. To validate the assay, water was analyzed from Florida's east coast containing a high-density manatee population and produced 31,564 DNA molecules l-1 on average and high occurrence (ψ) and detection (p) estimates (ψ = 0.84 [0.40-0.99]; p = 0.99 [0.95-1.00]; limit of detection 3 copies µl-1). Similar occupancy estimates were produced in the Florida Panhandle (ψ = 0.79 [0.54-0.97]) and Cuba (ψ = 0.89 [0.54-1.00]), while occupancy estimates in Cameroon were lower (ψ = 0.49 [0.09-0.95]). The eDNA-derived detection estimates were higher than those generated using aerial survey data on the west coast of Florida and may be effective for population monitoring. Subsequent eDNA studies could be particularly useful in locations where manatees (1) are difficult to identify visually (e.g. the Amazon River and Africa), (2) are present in patchy distributions or on the verge of extinction (e.g. Jamaica, Haiti), or (3) are candidates for proposed repatriation efforts (e.g. Brazil, Guadeloupe). Extension of these eDNA techniques could be applied to other imperiled marine mammal populations such as African and Asian dugongs.
Hughes-Jones, N C; Hunt, V A; Maycock, W D; Wesley, E D; Vallet, L
1978-01-01
An analysis of the assay of 28 preparations of anti-D immunoglobulin using a radioisotope method carried out at 6-monthly intervals for 2-4.5 years showed an average fall in anti-D concentration of 10.6% each year, with 99% confidence limits of 6.8-14.7%. The fall in anti-D concentration after storage at 37 degrees C for 1 month was less than 8%, the minimum change that could be detected. No significant change in the physical characteristics of the immunoglobulin was detected. The error of a single estimate of anti-D by the radioisotope method (125I-labelled anti-IgG) used here was calculated to be such that the true value probably (p = 0.95) lay between 66 and 150% of the estimated value.
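The quoted 66-150% interval is consistent with a multiplicative (log-normal) error model, in which 95% limits are estimate/f and estimate·f for f = exp(1.96σ). A minimal sketch; the σ ≈ 0.21 value in the usage line is back-calculated from the interval, not stated in the abstract:

```python
import math

def multiplicative_limits(sigma_log, z=1.96):
    """95% multiplicative limits for a log-normally distributed assay
    error: the true value lies between estimate/f and estimate*f,
    where f = exp(z * sigma_log)."""
    factor = math.exp(z * sigma_log)
    return 1.0 / factor, factor
```

For example, `multiplicative_limits(0.21)` returns roughly (0.66, 1.51), reproducing the 66-150% range quoted above.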
Lodge, David M; Turner, Cameron R; Jerde, Christopher L; Barnes, Matthew A; Chadderton, Lindsay; Egan, Scott P; Feder, Jeffrey L; Mahon, Andrew R; Pfrender, Michael E
2012-06-01
Three mantras often guide species and ecosystem management: (i) for preventing invasions by harmful species, 'early detection and rapid response'; (ii) for conserving imperilled native species, 'protection of biodiversity hotspots'; and (iii) for assessing biosecurity risk, 'an ounce of prevention equals a pound of cure.' However, these and other management goals are elusive when traditional sampling tools (e.g. netting, traps, electrofishing, visual surveys) have poor detection limits, are too slow or are not feasible. One visionary solution is to use an organism's DNA in the environment (eDNA), rather than the organism itself, as the target of detection. In this issue of Molecular Ecology, Thomsen et al. (2012) provide new evidence demonstrating the feasibility of this approach, showing that eDNA is an accurate indicator of the presence of an impressively diverse set of six aquatic or amphibious taxa including invertebrates, amphibians, a fish and a mammal in a wide range of freshwater habitats. They are also the first to demonstrate that the abundance of eDNA, as measured by qPCR, correlates positively with population abundance estimated with traditional tools. Finally, Thomsen et al. (2012) demonstrate that next-generation sequencing of eDNA can quantify species richness. Overall, Thomsen et al. (2012) provide a revolutionary roadmap for using eDNA for detection of species, estimates of relative abundance and quantification of biodiversity. © 2012 Blackwell Publishing Ltd.
Elmore, Stacey A; Samelius, Gustaf; Al-Adhami, Batol; Huyvaert, Kathryn P; Bailey, Larissa L; Alisauskas, Ray T; Gajadhar, Alvin A; Jenkins, Emily J
2016-01-01
Although the protozoan parasite Toxoplasma gondii is ubiquitous in birds and mammals worldwide, the full suite of hosts and transmission routes is not completely understood, especially in the Arctic. Toxoplasma gondii occurrence in humans and wildlife can be high in Arctic regions, despite apparently limited opportunities for transmission of oocysts shed by felid definitive hosts. Arctic foxes (Vulpes lagopus) are under increasing anthropogenic and ecologic pressure, leading to population declines in parts of their range. Our understanding of T. gondii occurrence in arctic foxes is limited to only a few regions, but mortality events caused by this parasite have been reported. We investigated the exposure of arctic foxes to T. gondii in the Karrak Lake goose colony, Queen Maud Gulf Migratory Bird Sanctuary, Nunavut, Canada. Following an occupancy-modeling framework, we performed replicated antibody testing on serum samples by the direct agglutination test (DAT), the indirect fluorescent antibody test (IFAT), and an indirect enzyme-linked immunosorbent assay (ELISA) that can be used in multiple mammalian host species. As a metric of test performance, we then estimated the probability of detecting T. gondii antibodies for each of the tests. Occupancy estimates for T. gondii antibodies in arctic foxes under this framework were between 0.430 and 0.758. Detection probability was highest for IFAT (0.716) and lower for DAT (0.611) and ELISA (0.464), indicating that the test of choice for antibody detection in arctic foxes might be the IFAT. We document a new geographic record of T. gondii exposure in arctic foxes and demonstrate an emerging application of ecologic modeling techniques to account for imperfect performance of diagnostic tests in wildlife species.
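Replicated testing helps in an occupancy framework because the probability of at least one positive grows with the number of replicates. A minimal sketch, assuming independent replicates with a constant per-test detection probability (the helper is illustrative; the per-test probabilities are those reported above):

```python
def cumulative_detection(p, k):
    """Probability of at least one positive result in k replicated
    tests, each with single-test detection probability p, assuming
    independence between replicates."""
    return 1.0 - (1.0 - p) ** k
```

With the IFAT estimate p = 0.716, three replicates would give a cumulative detection probability of about 0.98 under this independence assumption.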
Low contrast detection in abdominal CT: comparing single-slice and multi-slice tasks
NASA Astrophysics Data System (ADS)
Ba, Alexandre; Racine, Damien; Viry, Anaïs.; Verdun, Francis R.; Schmidt, Sabine; Bochud, François O.
2017-03-01
Image quality assessment is crucial for the optimization of computed tomography (CT) protocols. Human and mathematical model observers are increasingly used for the detection of low contrast signals in abdominal CT, but are frequently limited to the use of a single image slice. Another limitation is that most of them only consider the detection of a signal embedded in a uniform background phantom. The purpose of this paper was to test whether human observer performance differs significantly between CT images read in single-slice and multi-slice modes, and whether these differences are the same for anatomical and uniform clinical images. We investigated the detection performance and scrolling trends of human observers for a simulated liver lesion embedded in anatomical and uniform CT backgrounds. Results show that observers do not benefit significantly from the additional information provided in the multi-slice reading mode. Regarding the background, performance is moderately higher for uniform than for anatomical images. Our results suggest that for low contrast detection in abdominal CT, the use of multi-slice model observers would probably add only a marginal benefit. On the other hand, the quality of a CT image is more accurately estimated with clinical anatomical backgrounds.
Maximized exoEarth candidate yields for starshades
NASA Astrophysics Data System (ADS)
Stark, Christopher C.; Shaklan, Stuart; Lisman, Doug; Cady, Eric; Savransky, Dmitry; Roberge, Aki; Mandell, Avi M.
2016-10-01
The design and scale of a future mission to directly image and characterize potentially Earth-like planets will be impacted, to some degree, by the expected yield of such planets. Recent efforts to increase the estimated yields, by creating observation plans optimized for the detection and characterization of Earth-twins, have focused solely on coronagraphic instruments; starshade-based missions could benefit from a similar analysis. Here we explore how to prioritize observations for a starshade given the limiting resources of both fuel and time, present analytic expressions to estimate fuel use, and provide efficient numerical techniques for maximizing the yield of starshades. We implemented these techniques to create an approximate design reference mission code for starshades and used this code to investigate how exoEarth candidate yield responds to changes in mission, instrument, and astrophysical parameters for missions with a single starshade. We find that a starshade mission operates most efficiently somewhere between the fuel- and exposure-time-limited regimes and, as a result, is less sensitive to photometric noise sources as well as parameters controlling the photon collection rate in comparison to a coronagraph. We produced optimistic yield curves for starshades, assuming our optimized observation plans are schedulable and future starshades are not thrust-limited. Given these yield curves, detecting and characterizing several dozen exoEarth candidates requires either multiple starshades or η ≳ 0.3.
Simon, Steven L.; Bouville, André; Kleinerman, Ruth
2009-01-01
Biodosimetry measurements can potentially be an important and integral part of the dosimetric methods used in long-term studies of health risk following radiation exposure. Such studies rely on accurate estimation of doses to the whole body or to specific organs of individuals in order to derive reliable estimates of cancer risk. However, dose estimates based on analytical dose reconstruction (i.e., models) or personnel monitoring measurements, e.g., film-badges, can have substantial uncertainty. Biodosimetry can potentially reduce uncertainty in health risk studies by corroboration of model-based dose estimates or by using them to assess bias in dose models. While biodosimetry has begun to play a more significant role in long-term health risk studies, its use is still generally limited in that context due to one or more factors including inadequate limits of detection, large inter-individual variability of the signal measured, high per-sample cost, and invasiveness. Presently, the most suitable biodosimetry methods for epidemiologic studies are chromosome aberration frequencies from fluorescence in situ hybridization (FISH) of peripheral blood lymphocytes and electron paramagnetic resonance (EPR) measurements made on tooth enamel. Both types of measurements, however, are usually invasive and require difficult-to-obtain biological samples. Moreover, doses derived from these methods are not always directly relevant to the tissues of interest. To increase the value of biodosimetry to epidemiologic studies, a number of issues need to be considered, including limits of detection, effects of inhomogeneous exposure of the body, how to extrapolate from the tissue sampled to the tissues of interest, and how to adjust dosimetry models applied to large populations based on sparse biodosimetry measurements.
The requirements of health risk studies suggest a set of characteristics that, if satisfied by new biodosimetry methods, would increase the overall usefulness of biodosimetry for determining radiation health risks. PMID:20065672
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
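For reference, the CCC itself is Lin's estimator, computed from the means, variances, and covariance of the two assays: CCC = 2s_xy / (s_x² + s_y² + (x̄ − ȳ)²). A minimal sketch of the complete-data statistic that the MICE procedure is ultimately estimating (the left-censoring handling itself is not reproduced here):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient from complete data:
    2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Perfect agreement gives CCC = 1; a constant offset between assays penalizes the coefficient even when the correlation is perfect, which is what distinguishes the CCC from Pearson's r.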
Meador, M.R.; McIntyre, J.P.; Pollock, K.H.
2003-01-01
Two-pass backpack electrofishing data collected as part of the U.S. Geological Survey's National Water-Quality Assessment Program were analyzed to assess the efficacy of single-pass backpack electrofishing. A two-capture removal model was used to estimate, within 10 river basins across the United States, proportional fish species richness from one-pass electrofishing and probabilities of detection for individual fish species. Mean estimated species richness from first-pass sampling (ps1) ranged from 80.7% to 100% of estimated total species richness for each river basin, based on at least seven samples per basin. However, ps1 values for individual sites ranged from 40% to 100% of estimated total species richness. Additional species unique to the second pass were collected in 50.3% of the samples. Of these, cyprinids and centrarchids were collected most frequently. Proportional fish species richness estimated for the first pass increased significantly with decreasing stream width for 1 of the 10 river basins. When used to calculate probabilities of detection of individual fish species, the removal model failed 48% of the time because the number of individuals of a species was greater in the second pass than in the first pass. Single-pass backpack electrofishing data alone may make it difficult to determine whether characterized fish community structure data are real or spurious. The two-pass removal model can be used to assess the effectiveness of sampling species richness with a single electrofishing pass. However, the two-pass removal model may have limited utility to determine probabilities of detection of individual species and, thus, limit the ability to assess the effectiveness of single-pass sampling to characterize species relative abundances. Multiple-pass (at least three passes) backpack electrofishing at a large number of sites may not be cost-effective as part of a standardized sampling protocol for large-geographic-scale studies. 
However, multiple-pass electrofishing at some sites may be necessary to better evaluate the adequacy of single-pass electrofishing and to help make meaningful interpretations of fish community structure.
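The two-capture removal model used above has closed-form estimators: with first- and second-pass catches c1 and c2, the capture probability is p̂ = (c1 − c2)/c1 and the abundance is N̂ = c1²/(c1 − c2), which is undefined when c2 ≥ c1 — the failure mode reported for 48% of species. A minimal sketch:

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimates of abundance and capture probability.
    Raises ValueError when c2 >= c1, the failure mode noted in the
    abstract (second-pass catch at least as large as the first)."""
    if c2 >= c1:
        raise ValueError("removal model fails: second-pass catch >= first-pass catch")
    p_hat = (c1 - c2) / c1        # per-pass capture probability
    n_hat = c1 ** 2 / (c1 - c2)   # estimated abundance
    return n_hat, p_hat
```

For example, catches of 80 then 20 give p̂ = 0.75 and N̂ ≈ 106.7 fish.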
McGann, Patrick T.; Tyburski, Erika A.; de Oliveira, Vysolela; Santos, Brigida; Ware, Russell E.; Lam, Wilbur A.
2016-01-01
Severe anemia is an important cause of morbidity and mortality among children in resource-poor settings, but laboratory diagnostics are often limited in these locations. To address this need, we developed a simple, inexpensive, and color-based point-of-care (POC) assay to detect severe anemia. The purpose of this study was to evaluate the accuracy of this novel POC assay to detect moderate and severe anemia in a limited-resource setting. The study was a cross-sectional study conducted on children with sickle cell anemia in Luanda, Angola. The hemoglobin concentrations obtained by the POC assay were compared to reference values measured by a calibrated automated hematology analyzer. A total of 86 samples were analyzed (mean hemoglobin concentration 6.6 g/dL). There was a strong correlation between the hemoglobin concentrations obtained by the POC assay and reference values obtained from an automated hematology analyzer (r=0.88, P<0.0001). The POC assay demonstrated excellent reproducibility (r=0.93, P<0.0001) and the reagents appeared to be durable in a tropical setting (r=0.93, P<0.0001). For the detection of severe anemia that may require blood transfusion (hemoglobin <5 g/dL), the POC assay had sensitivity of 88.9% and specificity of 98.7%. These data demonstrate that an inexpensive (<$0.25 USD) POC assay accurately estimates low hemoglobin concentrations and has the potential to become a transformational diagnostic tool for severe anemia in limited-resource settings. PMID:26317494
Uemoto, Michihisa; Makino, Masanori; Ota, Yuji; Sakaguchi, Hiromi; Shimizu, Yukari; Sato, Kazuhiro
2018-01-01
Minor and trace metals in aluminum and aluminum alloys have been determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) as an interlaboratory test toward standardization. The trueness of the measured data was investigated using certified reference materials of aluminum, and the analytical protocols were improved accordingly. Their precision could also be evaluated, making it feasible to estimate the uncertainties separately. The accuracy (trueness and precision) of the data was finally in good agreement with the certified values and assigned uncertainties. Repeated measurements of aluminum solutions with different concentrations of the analytes revealed how the relative standard deviations of the measurements vary with concentration, thus enabling estimation of the limits of quantitation. These differed from analyte to analyte and were slightly higher with an aluminum matrix than without one. In addition, the upper limit of the detectable concentration of silicon with simple acid digestion was estimated to be 0.03 % in mass fraction.
Assessment of target detection limits in hyperspectral data
NASA Astrophysics Data System (ADS)
Gross, W.; Boehler, J.; Schilling, H.; Middelmann, W.; Weyermann, J.; Wellig, P.; Oechslin, R.; Kneubuehler, M.
2015-10-01
Hyperspectral remote sensing data can be used for civil and military applications to detect and classify target objects that cannot be reliably separated using broadband sensors. The comparably low spatial resolution is compensated by the fact that small targets, even below image resolution, can still be classified. The goal of this paper is to determine the target-size-to-spatial-resolution ratio required for successful classification of different target and background materials. Airborne hyperspectral data is used to simulate data with known mixture ratios and to estimate the detection threshold for given false alarm rates. The data was collected in July 2014 over Greding, Germany, using airborne aisaEAGLE and aisaHAWK hyperspectral sensors. On the ground, various target materials were placed on natural background. The targets were four quadratic molton patches with an edge length of 7 meters in the colors black, white, grey and green. Also, two different types of polyethylene (camouflage nets) with an edge length of approximately 5.5 meters were deployed. Synthetic data is generated from the original data using spectral mixtures. Target signatures are linearly combined with different background materials in specific ratios. The simulated mixtures are appended to the original data and the target areas are removed for evaluation. Commonly used classification algorithms, e.g. Matched Filtering and the Adaptive Cosine Estimator, are used to determine the detection limit. Fixed false alarm rates are employed to find and analyze certain regions where false alarms usually occur first. A combination of 18 targets and 12 backgrounds is analyzed for three VNIR and two SWIR data sets of the same area.
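The synthetic-mixture step can be sketched as a per-band linear combination of target and background spectra; a minimal illustration (the function name, band count, and values are arbitrary, not from the paper):

```python
def mix_spectra(target, background, ratio):
    """Per-band linear mixture: a fraction `ratio` of the pixel is
    filled by the target spectrum, the remainder by the background."""
    if not (0.0 <= ratio <= 1.0) or len(target) != len(background):
        raise ValueError("ratio must be in [0, 1] and spectra same length")
    return [ratio * t + (1.0 - ratio) * b for t, b in zip(target, background)]
```

Sweeping `ratio` from 0 to 1 and scoring each mixture with a detector such as the Matched Filter is how a detection threshold at a fixed false alarm rate can be located.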
Novel approaches in diagnosing tuberculosis
NASA Astrophysics Data System (ADS)
Kolk, Arend H. J.; Dang, Ngoc A.; Kuijper, Sjoukje; Gibson, Tim; Anthony, Richard; Claassens, Mareli M.; Kaal, Erwin; Janssen, Hans-Gerd
2011-06-01
The WHO declared tuberculosis (TB) a global emergency. An estimated 8-9 million new cases occur each year with 2-3 million deaths. Currently, TB is diagnosed mostly by chest X-ray and staining of the mycobacteria in sputum, with a detection limit of 1×10^4 bacteria/ml. There is an urgent need for better diagnostic tools for TB, especially for developing countries. We have validated the electronic nose from TD Technology for the detection of Mycobacterium tuberculosis by headspace analysis of 284 sputum samples from TB patients. We used linear discriminant function analysis, resulting in a sensitivity of 75%, a specificity of 67% and an accuracy of 69%. Further research is still required to improve the results by choosing more selective sensors and sampling techniques. We also used a fast gas chromatography-mass spectrometry (GC-MS) method. The automated procedure is based on the injection of sputum samples, which are methylated inside the GC injector using thermally assisted hydrolysis and methylation (THM-GC-MS). Hexacosanoic acid in combination with tuberculostearic acid was found to be specific for the presence of M. tuberculosis. The detection limit was similar to microscopy. We found no false positives: all microscopy- and culture-positive samples were also found positive with the THM-GC-MS method. The detection of ribosomal RNA from the infecting organism offers great potential, since rRNA molecules outnumber chromosomal DNA by a factor of 1000. It thus may be possible to detect the organism without amplification of the nucleic acids (NA). We used a capture probe and a tagged detector probe for the direct detection of M. tuberculosis in sputum. So far the detection limit is 1×10^6 bacteria/ml. Currently we are testing a Lab-On-A-Chip interferometer detection system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Protat, Alain; Young, Stuart; McFarlane, Sally A.
2014-02-01
The objective of this paper is to investigate whether estimates of the cloud frequency of occurrence and associated cloud radiative forcing as derived from ground-based and satellite active remote sensing and radiative transfer calculations can be reconciled over a well instrumented active remote sensing site located in Darwin, Australia, despite the very different viewing geometry and instrument characteristics. It is found that the ground-based radar-lidar combination at Darwin does not detect most of the cirrus clouds above 10 km (due to limited lidar detection capability and signal obscuration by low-level clouds) and that the CloudSat radar - Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) combination underreports the hydrometeor frequency of occurrence below 2 km height, due to instrument limitations at these heights. The radiative impact associated with these differences in cloud frequency of occurrence is large on the surface downwelling shortwave fluxes (ground and satellite) and the top-of-atmosphere upwelling shortwave and longwave fluxes (ground). Good agreement is found for other radiative fluxes. Large differences in radiative heating rate as derived from ground and satellite radar-lidar instruments and radiative transfer calculations are also found above 10 km (up to 0.35 K day-1 for the shortwave and 0.8 K day-1 for the longwave). Given that the ground-based and satellite estimates of cloud frequency of occurrence and radiative impact cannot be fully reconciled over Darwin, caution should be exercised when evaluating the representation of clouds and cloud-radiation interactions in large-scale models, and the limitations of each set of instrumentation should be considered when interpreting model-observation differences.
VizieR Online Data Catalog: CANDID code for interferometric observations (Gallenne+, 2015)
NASA Astrophysics Data System (ADS)
Gallenne, A.; Merand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; Ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.
2015-07-01
This is a suite of Python 2.7 tools to find faint companions around stars in interferometric data in the OIFITS format. The tool allows one to systematically search for faint companions in OIFITS data and, if none is found, to estimate the detection limit. All files are also available at https://github.com/amerand/CANDID . (3 data files).
Dai, Haichao; Shi, Yan; Wang, Yilin; Sun, Yujing; Hu, Jingting; Ni, Pengjuan; Li, Zhuang
2014-03-15
In this work, we proposed a facile, environmentally friendly and cost-effective assay for melamine with BSA-stabilized gold nanoclusters (AuNCs) as a fluorescence reader. Melamine, which has a multi-nitrogen heterocyclic ring, is prone to coordinate with Hg(2+). This coordination counteracts the quenching of the AuNCs by Hg(2+) by decreasing the metallophilic interaction between Hg(2+) and Au(+). By this method, a detection limit down to 0.15 µM is obtained, which is approximately 130 times lower than the US Food and Drug Administration's estimated melamine safety limit of 20 µM. Furthermore, several real samples spiked with melamine, including raw milk and milk powder, are analyzed using the sensing system with excellent recoveries. This gold-nanocluster-based fluorescent method could find applications in highly sensitive detection of melamine in real samples. © 2013 Elsevier B.V. All rights reserved.
Amperometric Sensor for Detection of Chloride Ions.
Trnkova, Libuse; Adam, Vojtech; Hubalek, Jaromir; Babula, Petr; Kizek, Rene
2008-09-15
Chloride ion sensing is important in many fields such as clinical diagnosis, environmental monitoring and industrial applications. We have measured chloride ions at a carbon paste electrode (CPE) and at a CPE modified with solid AgNO₃, a solution of AgNO₃ and/or solid silver particles. Detection limits (3 S/N) for chloride ions were 100 μM, 100 μM and 10 μM for the solid AgNO₃, the solution of AgNO₃ and the solid silver particles, respectively. The CPE modified with silver particles is thus the most sensitive to the presence of chloride ions. We then proceeded to miniaturize the whole electrochemical instrument. Measurements were carried out on a miniaturized instrument consisting of a potentiostat with dimensions 35 × 166 × 125 mm, screen-printed electrodes, a peristaltic pump and a PC with control software. Under the most suitable experimental conditions (Britton-Robinson buffer, pH 1.8 and working electrode potential 550 mV) we estimated the limit of detection (3 S/N) as 500 nM.
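The 3 S/N detection limits quoted here follow the usual convention: the lowest concentration whose signal exceeds three times the blank noise, given the calibration slope. A minimal sketch (the values in the usage line are arbitrary illustrations, not from the study):

```python
def detection_limit(blank_sd, slope, k=3.0):
    """k*S/N detection limit: the lowest concentration whose signal
    exceeds k times the standard deviation of the blank, given the
    calibration slope (signal units per concentration unit)."""
    return k * blank_sd / slope
```

For example, a blank noise of 0.5 signal units with a slope of 10 units per µM gives `detection_limit(0.5, 10.0)` = 0.15 µM.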
Artificial neural networks in mammography interpretation and diagnostic decision making.
Ayer, Turgay; Chen, Qiushi; Burnside, Elizabeth S
2013-01-01
Screening mammography is the most effective means for early detection of breast cancer. Although general rules for discriminating malignant and benign lesions exist, radiologists are unable to perfectly detect and classify all lesions as malignant and benign, for many reasons which include, but are not limited to, overlap of features that distinguish malignancy, difficulty in estimating disease risk, and variability in recommended management. When predictive variables are numerous and interact, ad hoc decision making strategies based on experience and memory may lead to systematic errors and variability in practice. The integration of computer models to help radiologists increase the accuracy of mammography examinations in diagnostic decision making has gained increasing attention in the last two decades. In this study, we provide an overview of one of the most commonly used models, artificial neural networks (ANNs), in mammography interpretation and diagnostic decision making and discuss important features in mammography interpretation. We conclude by discussing several common limitations of existing research on ANN-based detection and diagnostic models and provide possible future research directions.
Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission
NASA Technical Reports Server (NTRS)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.
2015-01-01
The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
Study of gamma detection capabilities of the REWARD mobile spectroscopic system
NASA Astrophysics Data System (ADS)
Balbuena, J. P.; Baptista, M.; Barros, S.; Dambacher, M.; Disch, C.; Fiederle, M.; Kuehn, S.; Parzefall, U.
2017-07-01
REWARD is a novel mobile spectroscopic radiation detector system for Homeland Security applications. The system integrates gamma and neutron detection and is equipped with wireless communication. A comprehensive simulation study of its gamma detection capabilities in different radioactive scenarios is presented in this work. The gamma detection unit is a high-energy-resolution system based on two stacked (Cd,Zn)Te (CZT) sensors working in coincidence sum mode; the volume of each CZT sensor is 1 cm³. The investigated energy windows used to determine the detection capabilities of the detector correspond to the gamma emissions from ¹³⁷Cs and ⁶⁰Co radioactive sources (662 keV and 1173/1333 keV, respectively). Monte Carlo and Technology Computer-Aided Design (TCAD) simulations are combined to determine its sensing capabilities for different radiation sources and to estimate the limits of detection of the sensing unit as a function of source activity for several shielding materials.
Wang, Yanfang; Yang, Na; Liu, Yi
2018-04-05
A novel organic small molecule with a D-π-A structure was prepared and found to be a promising colorimetric and ratiometric UV-vis spectral probe for the detection of phosphorylated proteins with the help of the tetravalent zirconium ion. This optical probe, based on the chromophore WYF-1, shows a rapid response (within 10 s) and high selectivity and sensitivity for phosphorylated proteins, giving distinct colorimetric and ratiometric UV-vis changes at 720 and 560 nm. The detection limit for phosphorylated proteins was estimated to be 100 nM. In addition, the probe was successfully applied to the detection of phosphorylated proteins in placental tissue samples, indicating that it holds great potential for phosphorylated protein detection. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schmidt, K. B.; Treu, T.; Bradač, M.; Vulcani, B.; Huang, K.-H.; Hoag, A.; Maseda, M.; Guaita, L.; Pentericci, L.; Brammer, G. B.; Dijkstra, M.; Dressler, A.; Fontana, A.; Henry, A. L.; Jones, T. A.; Mason, C.; Trenti, M.; Wang, X.
2016-02-01
We present a census of Lyα emission at z ≳ 7, utilizing deep near-infrared Hubble Space Telescope grism spectroscopy from the first six completed clusters of the Grism Lens-Amplified Survey from Space (GLASS). In 24/159 photometrically selected galaxies we detect emission lines consistent with Lyα in the GLASS spectra. Based on the distribution of signal-to-noise ratios and on simulations, we expect the completeness and the purity of the sample to be 40%-100% and 60%-90%, respectively. For the objects without detected emission lines we show that the observed (not corrected for lensing magnification) 1σ flux limits reach 5 × 10⁻¹⁸ erg s⁻¹ cm⁻² per position angle over the full wavelength range of GLASS (0.8-1.7 μm). Based on the conditional probability of Lyα emission measured from the ground at z ∼ 7, we would have expected 12-18 Lyα emitters. This is consistent with the number of detections, within the uncertainties, confirming the drop in Lyα emission with respect to z ∼ 6. Deeper follow-up spectroscopy, here exemplified by Keck spectroscopy, is necessary to improve our estimates of completeness and purity and to confirm individual candidates as true Lyα emitters. These candidates include a promising source at z = 8.1. The spatial extent of Lyα in a deep stack of the most convincing Lyα emitters with ⟨z⟩ = 7.2 is consistent with that of the rest-frame UV continuum. Extended Lyα emission, if present, has a surface brightness below our detection limit, consistent with the properties of lower-redshift comparison samples. From the stack we estimate upper limits on rest-frame UV emission line ratios and find f(C IV)/f(Lyα) ≲ 0.32 and f(C III])/f(Lyα) ≲ 0.23, in good agreement with other values published in the literature.
A Validation of Remotely Sensed Fires Using Ground Reports
NASA Astrophysics Data System (ADS)
Ruminski, M. G.; Hanna, J.
2007-12-01
A satellite-based analysis of fire detections and smoke emissions for North America is produced daily by NOAA/NESDIS. The analysis incorporates data from the MODIS (Terra and Aqua) and AVHRR (NOAA-15/16/17) polar orbiting instruments and the GOES East and West geostationary spacecraft, with nominal resolutions of 1 km and 4 km for the polar and geostationary platforms, respectively. Automated fire detection algorithms are utilized for each of the sensors. Analysts perform a quality control procedure on the automated detections by deleting points that are deemed to be false detections and adding points that the algorithms did not detect. A limited validation of the final quality-controlled product was performed using high-resolution (30 m) ASTER data in the summer of 2006. Limitations of the ASTER data are that each scene covers only approximately 3600 km², the data acquisition time is relatively constant at around 1030 local solar time, and ASTER is itself another remotely sensed data source. This study expands on the ASTER validation by using ground reports of prescribed burns in Montana and Idaho for 2003 and 2004, providing a non-remote-sensing data source for comparison. While the ground data do not have the limitations noted above for ASTER, they have limitations of their own. For example, even though the data set covers a much larger area (nearly 600,000 km²) than even several ASTER scenes, it still represents a single region of North America. And while the ground data are not restricted to a narrow time window, only a date is provided with each report, limiting the ability to draw detailed conclusions about the detection capabilities of specific instruments, especially the less temporally frequent polar orbiting MODIS and AVHRR sensors. Comparison of the ground data reports to the quality-controlled fire analysis revealed a low overall detection rate of 23% over the entire study period.
Examination of the daily detection rates revealed wide variation, with some days yielding as few as 5 detections out of 107 reported fires while other days had as many as 84 detections out of 160 reports. Inspection of the satellite imagery from the days with very low detection rates revealed that extensive cloud cover prevented satellite fire detection. On days when cloud cover was at a minimum, detection rates were substantially higher. An estimate of fire size was also provided with the ground data set. Statistics will be presented for days with minimal cloud cover, indicating the probability of detection for fires of various sizes.
NASA Astrophysics Data System (ADS)
Lin, Tzu-Yung; Green, Roger J.; O'Connor, Peter B.
2011-12-01
The nature of the ion signal from a 12-T Fourier-transform ion cyclotron resonance mass spectrometer and the electronic noise were studied to further understand the electronic detection limit. At minimal cost, a new transimpedance preamplifier was designed, computer simulated, built, and tested. The preamplifier design pushes the electronic signal-to-noise performance at room temperature to the limit, because of its enhanced tolerance of the capacitance of the detection device, lower intrinsic noise, and larger flat mid-band gain (input current noise spectral density of around 1 pA/√Hz when the transimpedance is about 85 dBΩ). The designed preamplifier has a bandwidth of ∼3 kHz to 10 MHz, which corresponds to a mass-to-charge ratio, m/z, of approximately 18 to 61 k at 12 T. The transimpedance and the bandwidth can be easily adjusted by changing the values of passive components. The feedback limitation of the circuit is discussed. With the maximum possible transimpedance of 5.3 MΩ when using a 0402 surface-mount resistor, the preamplifier was estimated to be able to detect ∼110 charges in a single scan.
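As a quick consistency check of the quoted gains, transimpedance in dBΩ is 20·log10(R/1 Ω): the 85 dBΩ mid-band figure corresponds to roughly 17.8 kΩ, and the 5.3 MΩ maximum feedback resistance to about 134.5 dBΩ. A minimal sketch of the conversion:

```python
import math

def ohms_to_dbohm(r_ohms):
    """Transimpedance gain expressed in dB-ohms: 20 * log10(R / 1 ohm)."""
    return 20.0 * math.log10(r_ohms)

def dbohm_to_ohms(db):
    """Inverse conversion: dB-ohms back to ohms."""
    return 10.0 ** (db / 20.0)

mid_band = dbohm_to_ohms(85.0)      # ~17.8 kOhm for the quoted 85 dB-ohm gain
max_gain_db = ohms_to_dbohm(5.3e6)  # ~134.5 dB-ohm for the 5.3 MOhm resistor
```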
Limits to the Fraction of High-energy Photon Emitting Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Akerlof, Carl W.; Zheng, WeiKang
2013-02-01
After almost four years of operation, the two instruments on board the Fermi Gamma-ray Space Telescope have shown that the number of gamma-ray bursts (GRBs) with high-energy photon emission above 100 MeV cannot exceed roughly 9% of the total number of all such events, at least at the present detection limits. In a recent paper, we found that GRBs with photons detected in the Large Area Telescope have a surprisingly broad distribution with respect to the observed event photon number. Extrapolation of our empirical fit to numbers of photons below our previous detection limit suggests that the overall rate of such low flux events could be estimated by standard image co-adding techniques. In this case, we have taken advantage of the excellent angular resolution of the Swift mission to provide accurate reference points for 79 GRB events which have eluded any previous correlations with high-energy photons. We find a small but significant signal in the co-added field. Guided by the extrapolated power-law fit previously obtained for the number distribution of GRBs with higher fluxes, the data suggest that only a small fraction of GRBs are sources of high-energy photons.
NASA Astrophysics Data System (ADS)
Haneberg, W. C.
2017-12-01
Remote characterization of new landslides or areas of ongoing movement using differences in high resolution digital elevation models (DEMs) created through time, for example before and after major rains or earthquakes, is an attractive proposition. In the case of large catastrophic landslides, changes may be apparent enough that simple subtraction suffices. In other cases, statistical noise can obscure landslide signatures and place practical limits on detection. In ideal cases on land, GPS surveys of representative areas at the time of DEM creation can quantify the inherent errors. In less-than-ideal terrestrial cases and virtually all submarine cases, it may be impractical or impossible to independently estimate the DEM errors. Examining DEM difference statistics for areas reasonably inferred to have no change, however, can provide insight into the limits of detectability. Data from inferred no-change areas of airborne LiDAR DEM difference maps of the 2014 Oso, Washington landslide and landslide-prone colluvium slopes along the Ohio River valley in northern Kentucky, show that DEM difference maps can have non-zero mean and slope dependent error components consistent with published studies of DEM errors. Statistical thresholds derived from DEM difference error and slope data can help to distinguish between DEM differences that are likely real—and which may indicate landsliding—from those that are likely spurious or irrelevant. This presentation describes and compares two different approaches, one based upon a heuristic assumption about the proportion of the study area likely covered by new landslides and another based upon the amount of change necessary to ensure difference at a specified level of probability.
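One way to read the threshold idea above: treat the DEM differences over areas inferred to have no change as a sample of the combined systematic (non-zero mean) and random error, and flag only differences outside a probability-based interval. A minimal sketch with hypothetical values, assuming normality and a two-sided 95% level:

```python
import statistics

def change_thresholds(no_change_diffs, z=1.96):
    """Two-sided detectability thresholds from DEM differences over
    ground inferred to be stable. Differences outside
    [mu - z*sigma, mu + z*sigma] are treated as likely real change
    at roughly the 95% level for z = 1.96 (normality assumed).
    """
    mu = statistics.mean(no_change_diffs)      # systematic error component
    sigma = statistics.stdev(no_change_diffs)  # random error component
    return mu - z * sigma, mu + z * sigma

# Hypothetical elevation differences (m) sampled over stable ground
stable = [0.05, -0.02, 0.11, 0.03, -0.07, 0.08, 0.02, -0.01]
lo, hi = change_thresholds(stable)
```

A slope-dependent version, as the abstract suggests, would simply compute these statistics within slope bins.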
Rizzo, Austin A.; Brown, Donald J.; Welsh, Stuart A.; Thompson, Patricia A.
2017-01-01
Population monitoring is an essential component of endangered species recovery programs. The federally endangered Diamond Darter Crystallaria cincotta is in need of an effective monitoring design to improve our understanding of its distribution and to track population trends. Because of their small size, cryptic coloration, and nocturnal behavior, along with limitations of current sampling methods, individuals are difficult to detect at known occupied sites. Therefore, research is needed to determine whether survey efforts can be improved by increasing the probability of individual detection. The primary objective of this study was to determine whether there are seasonal and diel patterns in Diamond Darter detectability during population surveys. In addition to temporal factors, we also assessed five habitat variables that might influence individual detection. We used N-mixture models to estimate site abundances and relationships between covariates and individual detectability, and ranked models using Akaike's information criterion. During 2015, three known occupied sites were each sampled 15 times between May and October. The best-supported model included water temperature as a quadratic function influencing individual detectability, with temperatures around 22 °C resulting in the highest detection probability. Detection probability when surveying at the optimal temperature was approximately 6% and 7.5% greater than when surveying at 16 °C and 29 °C, respectively. Time of night and day of year were not strong predictors of Diamond Darter detectability. The results of this study will allow researchers and agencies to maximize detection probability when surveying populations, resulting in greater monitoring efficiency and likely more precise abundance estimates.
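The quadratic temperature effect can be sketched as a logistic (logit-scale) function peaking near 22 °C. The coefficients below are hypothetical, chosen only to illustrate the model form, not the study's fitted values:

```python
import math

def detection_prob(temp_c, b0=-2.5, b1=0.0, b2=-0.01, peak=22.0):
    """Individual detection probability with a quadratic temperature
    effect on the logit scale, centered so the maximum falls at `peak`
    deg C. Coefficients are hypothetical, for illustration only."""
    x = temp_c - peak
    logit = b0 + b1 * x + b2 * x * x
    return 1.0 / (1.0 + math.exp(-logit))

p22 = detection_prob(22.0)   # highest detectability, at the peak
p16 = detection_prob(16.0)   # cooler water: lower
p29 = detection_prob(29.0)   # warmer water: lower
```

In an N-mixture model this per-survey detection probability is combined with a latent site abundance to explain the repeated counts.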
Bowden, Vanessa K; Loft, Shayne; Tatasciore, Monica; Visser, Troy A W
2017-01-01
Speed enforcement reduces the incidence of speeding, thereby reducing traffic accidents. Accordingly, it has been argued that stricter speed enforcement thresholds could further improve road safety. Effective speed monitoring, however, requires driver attention and effort, and human information-processing capacity is limited. Emphasizing speed monitoring may therefore reduce resource availability for other aspects of safe vehicle operation. We investigated whether lowering enforcement thresholds in a simulator setting would introduce further competition for limited cognitive and visual resources. Eighty-four young adult participants drove under conditions where they could be fined for travelling 1, 6, or 11 km/h over a 50 km/h speed limit. Stricter speed enforcement led to greater subjective workload and significant decrements in peripheral object detection. These data indicate that the benefits of reduced speeding with stricter enforcement may be at least partially offset by greater mental demands on drivers, reducing their responses to safety-critical stimuli on the road. These results likely underestimate the impact of stricter speed enforcement on real-world drivers, who experience significantly greater pressures to drive at or above the speed limit. Copyright © 2016 Elsevier Ltd. All rights reserved.
What is the limit to case detection under the DOTS strategy for tuberculosis control?
Dye, Christopher; Watt, Catherine J; Bleed, Daniel M; Williams, Brian G
2003-01-01
In 2000, the WHO DOTS strategy for tuberculosis (TB) control had been adopted by 148 of 212 countries, but only 27% of all estimated sputum smear-positive patients were notified under DOTS in that year. Here we investigate how gains in case detection under DOTS were made up to 2000, in an attempt to anticipate future progress towards the global target of 70% case detection. The analysis draws on annual reports of DOTS geographical coverage and case notifications, and focuses on the 22 high-burden countries (HBCs) that account for about 80% of the new TB cases arising globally each year. Our principal observation is that, as TB programmes in the 22 HBCs have expanded geographically, the fraction of the estimated number of sputum smear-positive cases detected within designated DOTS areas has remained constant at 40-50%, although there are significant differences between countries. This fraction is about the same as the percentage of all smear-positive cases notified annually to WHO via public health systems worldwide. The implication is that, unless the DOTS strategy can reach beyond traditional public health reporting systems, or unless these systems can be improved, case detection will not rise much above 40% in the 22 HBCs, or in the world as a whole, even when the geographical coverage of DOTS is nominally 100%. We estimate that, under full DOTS coverage, three-quarters of the undetected smear-positive cases will be living in India, China, Indonesia, Nigeria, Bangladesh and Pakistan. But case detection could also remain low in countries with smaller populations: in 2000, over half of all smear-positive TB cases were living in 49 countries that detected less than 40% of cases within DOTS areas.
Substantial efforts are therefore needed (a) to develop new case finding and management methods to bridge the gap between current and target case detection, and (b) to improve the accuracy of national estimates of TB incidence, above all by reinforcing and expanding routine surveillance.
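The ceiling argument above reduces to simple arithmetic: overall case detection is bounded by the product of DOTS geographic coverage and the within-DOTS detection fraction. A minimal sketch:

```python
def overall_case_detection(coverage, detection_within_dots):
    """Overall smear-positive case detection approximated as the product
    of DOTS geographic coverage and the detection fraction achieved
    inside DOTS areas."""
    return coverage * detection_within_dots

# Even at a nominal 100% coverage, a constant 40-50% within-DOTS
# fraction caps overall detection well below the 70% global target.
ceiling_low = overall_case_detection(1.0, 0.40)
ceiling_high = overall_case_detection(1.0, 0.50)
```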
Early detection of emerging forest disease using dispersal estimation and ecological niche modeling.
Meentemeyer, Ross K; Anacker, Brian L; Mark, Walter; Rizzo, David M
2008-03-01
Distinguishing the manner in which dispersal limitation and niche requirements control the spread of invasive pathogens is important for prediction and early detection of disease outbreaks. Here, we use niche modeling augmented by dispersal estimation to examine the degree to which local habitat conditions vs. force of infection predict invasion of Phytophthora ramorum, the causal agent of the emerging infectious tree disease sudden oak death. We sampled 890 field plots for the presence of P. ramorum over a three-year period (2003-2005) across a range of host and abiotic conditions with variable proximities to known infections in California, USA. We developed and validated generalized linear models of invasion probability to analyze the relative predictive power of 12 niche variables and a negative exponential dispersal kernel estimated by likelihood profiling. Models were developed incrementally each year (2003, 2003-2004, 2003-2005) to examine annual variability in model parameters and to create realistic scenarios for using models to predict future infections and to guide early-detection sampling. Overall, 78 new infections were observed up to 33.5 km from the nearest known site of infection, with slightly increasing rates of prevalence across time windows (2003, 6.5%; 2003-2004, 7.1%; 2003-2005, 9.6%). The pathogen was not detected in many field plots that contained susceptible host vegetation. The generalized linear modeling indicated that the probability of invasion is limited by both dispersal and niche constraints. Probability of invasion was positively related to precipitation and temperature in the wet season and the presence of the inoculum-producing foliar host Umbellularia californica and decreased exponentially with distance to inoculum sources. 
Models that incorporated niche and dispersal parameters best predicted the locations of new infections, with accuracies ranging from 0.86 to 0.90, suggesting that the modeling approach can be used to forecast locations of disease spread. Application of the combined niche-plus-dispersal models in a geographic information system predicted the presence of P. ramorum across approximately 8,228 km² of California's 84,785 km² (9.7%) of land area with susceptible host species. This research illustrates how probabilistic modeling can be used to analyze the relative roles of niche and dispersal limitation in controlling the distribution of invasive pathogens.
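The combined niche-plus-dispersal model form can be sketched as a logistic regression whose linear predictor includes a force-of-infection term decaying as a negative-exponential dispersal kernel. All coefficients and covariate values below are hypothetical, for illustration only, not the fitted parameters:

```python
import math

def invasion_probability(dist_km, host_present, wet_precip_mm,
                         b0=-4.0, b_host=1.5, b_precip=0.002,
                         b_disp=3.0, alpha_km=5.0):
    """Sketch of invasion probability combining niche covariates
    (host presence, wet-season precipitation) with a negative-exponential
    dispersal kernel exp(-d / alpha). Hypothetical coefficients."""
    force = math.exp(-dist_km / alpha_km)  # declines with distance to inoculum
    logit = (b0 + b_host * host_present
             + b_precip * wet_precip_mm + b_disp * force)
    return 1.0 / (1.0 + math.exp(-logit))

near = invasion_probability(1.0, 1, 800.0)   # close to a known infection
far = invasion_probability(30.0, 1, 800.0)   # ~30 km away: much lower
```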
Detection of the lunar body tide by the Lunar Orbiter Laser Altimeter
Mazarico, Erwan; Barker, Michael K; Neumann, Gregory A; Zuber, Maria T; Smith, David E
2014-01-01
The Lunar Orbiter Laser Altimeter instrument onboard the Lunar Reconnaissance Orbiter spacecraft collected more than 5 billion measurements in the nominal 50 km orbit over ∼10,000 orbits. The data precision, geodetic accuracy, and spatial distribution enable two-dimensional crossovers to be used to infer relative radial position corrections between tracks to better than ∼1 m. We use nearly 500,000 altimetric crossovers to separate remaining high-frequency spacecraft trajectory errors from the periodic radial surface tidal deformation. The unusual sampling of the lunar body tide from polar lunar orbit limits the size of the typical differential signal expected at ground track intersections to ∼10 cm. Nevertheless, we reliably detect the topographic tidal signal and estimate the associated Love number h2 to be 0.0371 ± 0.0033, which is consistent with but lower than recent results from lunar laser ranging. Key points: altimetric data are used to create radial constraints on the tidal deformation; the body tide amplitude is estimated from the crossover data; the estimated Love number is consistent with previous estimates but more precise. PMID:26074646
Kernel canonical-correlation Granger causality for multiple time series
NASA Astrophysics Data System (ADS)
Wu, Guorong; Duan, Xujun; Liao, Wei; Gao, Qing; Chen, Huafu
2011-04-01
Canonical-correlation analysis as a multivariate statistical technique has been applied to multivariate Granger causality analysis to infer information flow in complex systems. It shows unique appeal and great superiority over the traditional vector autoregressive method, due to the simplified procedure that detects causal interaction between multiple time series, and the avoidance of potential model estimation problems. However, it is limited to the linear case. Here, we extend the framework of canonical correlation to include the estimation of multivariate nonlinear Granger causality for drawing inference about directed interaction. Its feasibility and effectiveness are verified on simulated data.
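For contrast with the kernel CCA framework, the traditional vector autoregressive baseline that the abstract compares against can be sketched on synthetic data: Granger causality from x to y is the log ratio of the residual variance of the restricted model (y's own past only) to that of the full model (y's and x's past). The AR(1) processes and coefficients below are synthetic, for illustration only:

```python
import math
import random

def ar1_residual_var(y, x=None):
    """In-sample residual variance of y[t] ~ 1 + y[t-1] (+ x[t-1])."""
    rows, targets = [], []
    for t in range(1, len(y)):
        row = [1.0, y[t - 1]]
        if x is not None:
            row.append(x[t - 1])
        rows.append(row)
        targets.append(y[t])
    k = len(rows[0])
    # Normal equations A beta = b, solved by naive Gaussian elimination
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yt for r, yt in zip(rows, targets)) for i in range(k)]
    for i in range(k):
        for j in range(i + 1, k):
            f = A[j][i] / A[i][i]
            for c in range(i, k):
                A[j][c] -= f * A[i][c]
            b[j] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    resid = [yt - sum(c * v for c, v in zip(beta, r)) for r, yt in zip(rows, targets)]
    return sum(e * e for e in resid) / len(resid)

random.seed(0)
x = [random.gauss(0.0, 1.0)]
y = [random.gauss(0.0, 1.0)]
for _ in range(499):
    x.append(0.5 * x[-1] + random.gauss(0.0, 1.0))                 # x is AR(1)
    y.append(0.5 * y[-1] + 0.8 * x[-2] + random.gauss(0.0, 1.0))   # x drives y

# Granger causality: log ratio of restricted to full residual variance
gc_xy = math.log(ar1_residual_var(y) / ar1_residual_var(y, x))  # large: x -> y
gc_yx = math.log(ar1_residual_var(x) / ar1_residual_var(x, y))  # near zero
```

The kernel canonical-correlation extension replaces this linear regression step with correlations computed in a nonlinear feature space, which is what allows nonlinear directed interactions to be detected.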
Studying the Pc(4450) resonance in J/ψ photoproduction off protons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blin, A. N. Hiller; Fernandez-Ramirez, C.; Jackura, A.
2016-08-01
A resonance-like structure, the Pc(4450), has recently been observed in the J/ψ p spectrum by the LHCb collaboration. We discuss the feasibility of detecting this structure in J/ψ photoproduction in the CLAS12 experiment at JLab. We present a first estimate of the upper limit for the branching ratio of the Pc(4450) to J/ψ p. Our estimates, which take into account experimental resolution effects, lead to a sizable cross section close to the J/ψ production threshold, which makes future experiments covering this region very promising.
Gravitational Waves from Binary Mergers of Subsolar Mass Dark Black Holes
NASA Astrophysics Data System (ADS)
Shandera, Sarah; Jeong, Donghui; Gebhardt, Henry S. Grasshorn
2018-06-01
We explore the possible spectrum of binary mergers of subsolar mass black holes formed out of dark matter particles interacting via a dark electromagnetism. We estimate the properties of these dark black holes by assuming that their formation process is parallel to Population-III star formation, except that dark molecular cooling can yield a smaller opacity limit. We estimate the binary coalescence rates for the Advanced LIGO and Einstein telescope, and find that scenarios compatible with all current constraints could produce dark black holes at rates high enough for detection by Advanced LIGO.
Fetterly, Kenneth A; Favazza, Christopher P
2016-08-07
Channelized Hotelling model observer (CHO) methods were developed to assess the performance of an x-ray angiography system. The analytical methods included correction for known bias error due to finite sampling. Detectability indices (d′) corresponding to disk-shaped objects with diameters in the range 0.5-4 mm were calculated. Application of the CHO for variable detector target dose (DTD) in the range 6-240 nGy frame⁻¹ resulted in d′ estimates which were as much as 2.9× greater than expected of a quantum-limited system. Overestimation of d′ was presumed to be a result of bias error due to temporally variable non-stationary noise. Statistical theory which allows for independent contributions of 'signal' from a test object (o) and temporally variable non-stationary noise (ns) was developed. The theory demonstrates that the biased detectability index is the sum of the detectability indices associated with the test object (d′_o) and the non-stationary noise (d′_ns). Given the nature of the imaging system and the experimental methods, d′_o cannot be directly determined independent of d′_ns. However, methods to estimate d′_ns independent of d′_o were developed. In accordance with the theory, d′_ns was subtracted from experimental estimates of the biased index, providing an unbiased estimate of d′_o. Estimates of d′_o exhibited trends consistent with expectations of an angiography system that is quantum limited at high DTD and compromised by detector electronic readout noise at low DTD. Results suggest that these methods provide accurate and precise estimates of d′_o. Further, the results demonstrated that the source of bias was detector electronic readout noise. In summary, this work presents theory and methods to test for the presence of bias in Hotelling model observers due to temporally variable non-stationary noise, and to correct this bias when the temporally variable non-stationary noise is independent and additive with respect to the test-object signal.
Measurements of electron detection efficiencies in solid state detectors.
NASA Technical Reports Server (NTRS)
Lupton, J. E.; Stone, E. C.
1972-01-01
Detailed laboratory measurements were made of the electron response of solid-state detectors as a function of incident electron energy, detector depletion depth, and energy-loss discriminator threshold. These response functions were determined by exposing totally depleted silicon surface-barrier detectors with depletion depths between 50 and 1000 microns to the beam from a magnetic beta-ray spectrometer. The data were extended to 5000 microns depletion depth using the results of previously published Monte Carlo electron calculations. When the electron counting efficiency of a given detector is plotted as a function of energy-loss threshold for various incident energies, the efficiency curves are bounded by a smooth envelope which represents the upper limit to the detection efficiency. These upper-limit curves, which scale in a simple way, make it possible to estimate easily the electron sensitivity of solid-state detector systems.
Density of American black bears in New Mexico
Gould, Matthew J.; Cain, James W.; Roemer, Gary W.; Gould, William R.; Liley, Stewart
2018-01-01
Considering advances in noninvasive genetic sampling and spatially explicit capture–recapture (SECR) models, the New Mexico Department of Game and Fish sought to update their density estimates for American black bear (Ursus americanus) populations in New Mexico, USA, to aid in setting sustainable harvest limits. We estimated black bear density in the Sangre de Cristo, Sandia, and Sacramento Mountains, New Mexico, 2012–2014. We collected hair samples from black bears using hair traps and bear rubs and used a sex marker and a suite of microsatellite loci to individually genotype hair samples. We then estimated density in a SECR framework using sex, elevation, land cover type, and time to model heterogeneity in detection probability and the spatial scale over which detection probability declines. We sampled the populations using 554 hair traps and 117 bear rubs and collected 4,083 hair samples. We identified 725 (367 male, 358 female) individuals. Our density estimates varied from 16.5 bears/100 km² (95% CI = 11.6–23.5) in the southern Sacramento Mountains to 25.7 bears/100 km² (95% CI = 13.2–50.1) in the Sandia Mountains. Overall, detection probability at the activity center (g0) was low across all study areas and ranged from 0.00001 to 0.02. The low values of g0 arose primarily because half of all hair samples for which genotyping was attempted failed to produce a complete genotype. We speculate that the low genotyping success was due to exceedingly high levels of ultraviolet (UV) radiation that degraded the DNA in the hair. Despite sampling difficulties, we were able to produce density estimates with levels of precision comparable to those estimated for black bears elsewhere in the United States.
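In SECR models like the one above, detection probability declines with distance from an animal's activity center, commonly via a half-normal function p(d) = g0 · exp(-d²/(2σ²)). A sketch with hypothetical parameters (the study reports g0 between 0.00001 and 0.02; the spatial scale σ below is invented for illustration):

```python
import math

def halfnormal_detection(d_km, g0=0.01, sigma_km=2.0):
    """SECR half-normal detection function: the probability that a
    detector at distance d from an activity center records the animal.
    g0 is detection at the center; sigma sets the spatial scale over
    which detection declines. Parameter values are hypothetical."""
    return g0 * math.exp(-(d_km ** 2) / (2.0 * sigma_km ** 2))

p_center = halfnormal_detection(0.0)  # equals g0 at the activity center
p_far = halfnormal_detection(6.0)     # three sigma away: near zero
```

Density estimation then integrates this detection function over hypothetical activity-center locations across the state space.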
Quantifying cannabis: A field study of marijuana quantity estimation.
Prince, Mark A; Conner, Bradley T; Pearson, Matthew R
2018-06-01
The assessment of marijuana use quantity poses unique challenges, which have limited research efforts on quantity assessment. However, quantity estimates are critical to detecting associations between marijuana use and outcomes. We examined the accuracy of marijuana users' estimates of the quantities of marijuana they prepared to ingest, along with predictors of both how much was prepared for a single dose and the degree of (in)accuracy of participants' estimates. We recruited a sample of 128 regular-to-heavy marijuana users for a field study wherein they prepared and estimated quantities of marijuana flower in a joint or a bowl as well as marijuana concentrate using a dab tool. The vast majority of participants overestimated the quantity of marijuana that they used in their preparations. We failed to find robust predictors of estimation accuracy. Self-reported quantity estimates are inaccurate, which has implications for studying the link between quantity and marijuana use outcomes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Cohn, Timothy A.
2005-01-01
This paper presents an adjusted maximum likelihood estimator (AMLE) that can be used to estimate fluvial transport of contaminants, like phosphorus, that are subject to censoring because of analytical detection limits. The AMLE is a generalization of the widely accepted minimum variance unbiased estimator (MVUE), and Monte Carlo experiments confirm that it shares essentially all of the MVUE's desirable properties, including high efficiency and negligible bias. In particular, the AMLE exhibits substantially less bias than alternative censored‐data estimators such as the MLE (Tobit) or the MLE followed by a jackknife. As with the MLE and the MVUE the AMLE comes close to achieving the theoretical Frechet‐Cramér‐Rao bounds on its variance. This paper also presents a statistical framework, applicable to both censored and complete data, for understanding and estimating the components of uncertainty associated with load estimates. This can serve to lower the cost and improve the efficiency of both traditional and real‐time water quality monitoring.
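The censoring mechanism described above is easy to reproduce in a few lines. The sketch below, under the assumption of lognormally distributed concentrations and a single detection limit, fits the plain censored-data MLE (the Tobit-style baseline that the AMLE refines; the adjustment itself is not reproduced here): detected values contribute density terms to the likelihood, and nondetects contribute a P(value < limit) term.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(obs, dl_censored):
    """MLE of (mu, sigma) of the log-concentration, treating values below
    the detection limit as left-censored at that limit (Tobit-style)."""
    y = np.log(obs)          # fully observed log-values
    c = np.log(dl_censored)  # detection limits of the nondetects

    def nll(theta):
        mu, log_sig = theta
        sig = np.exp(log_sig)                    # keep sigma positive
        ll = norm.logpdf(y, mu, sig).sum()       # detected samples
        ll += norm.logcdf((c - mu) / sig).sum()  # nondetects: P(Y < limit)
        return -ll

    res = minimize(nll, x0=[y.mean(), np.log(y.std() + 1e-6)])
    return res.x[0], np.exp(res.x[1])

# Synthetic data: true mu = 1.0, sigma = 0.5, detection limit at exp(1),
# so roughly half the sample is censored.
rng = np.random.default_rng(0)
limit = np.e
sample = rng.lognormal(1.0, 0.5, 2000)
detected = sample[sample >= limit]
censored_limits = np.full(int((sample < limit).sum()), limit)
mu_hat, sig_hat = censored_lognormal_mle(detected, censored_limits)
```

Even with about half the observations censored, the fit recovers the true parameters closely, whereas naive substitution of the detection limit for nondetects would bias the mean upward.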
A subagging regression method for estimating the qualitative and quantitative state of groundwater
NASA Astrophysics Data System (ADS)
Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
A subagging regression (SBR) method for the analysis of groundwater data, pertaining to the estimation of trends and the associated uncertainty, is proposed. The SBR method is validated against synthetic data in comparison with other conventional robust and non-robust methods. The results verify that the estimation accuracy of the SBR method is consistently superior to that of the other methods, and that it provides reasonable uncertainty estimates where the others offer no uncertainty analysis option. For further validation, real quantitative and qualitative data are employed and analyzed in comparison with Gaussian process regression (GPR). In all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas GPR has limitations in representing the variability of non-Gaussian skewed data. From these implementations, it is concluded that the SBR method has potential for further development as an effective tool for anomaly detection or outlier identification in groundwater state data.
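Subagging (subsample aggregating) itself is simple to sketch. The toy example below, with invented synthetic data and a straight-line base learner standing in for whatever base regressor the authors use, fits many random subsamples drawn without replacement and aggregates the slopes; the spread across subsamples supplies the uncertainty estimate that single-fit methods lack.

```python
import numpy as np

def subagging_trend(t, y, n_sub=200, frac=0.5, rng=None):
    """Subagging sketch: fit a straight-line trend to many random
    subsamples and aggregate. The mean of the per-subsample slopes is
    the trend estimate; their spread is a simple uncertainty measure."""
    rng = rng or np.random.default_rng()
    n = len(t)
    m = max(2, int(frac * n))
    slopes = np.empty(n_sub)
    for i in range(n_sub):
        idx = rng.choice(n, size=m, replace=False)  # subsample w/o replacement
        slopes[i] = np.polyfit(t[idx], y[idx], 1)[0]
    return slopes.mean(), slopes.std()

# Synthetic "groundwater level" series with a true trend of 0.3 per step.
rng = np.random.default_rng(1)
t = np.arange(100.0)
y = 0.3 * t + rng.normal(0, 1.0, 100)
slope, slope_sd = subagging_trend(t, y, rng=rng)
```

Replacing `np.polyfit` with a nonlinear or robust base learner gives the same aggregation scheme; the uncertainty falls out of the ensemble either way.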
3D change detection in staggered voxels model for robotic sensing and navigation
NASA Astrophysics Data System (ADS)
Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.
2016-05-01
3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. A change detection method that can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information, and change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes in the depth image while preserving edge details. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preceding stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that labels each voxel as surface or free space, together with a color model that assigns each occupied voxel the mean color value of all points it contains. Preliminary change detection is performed by an XOR comparison on the point voxel model. Next, the eight neighbors of each center voxel are examined; if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. Experimental evaluations of the algorithm show promising results for change detection, indicating all the changing objects with a very limited false-alarm rate.
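The XOR step on the point voxel model, followed by the neighbor test, can be illustrated on boolean occupancy grids. This is a hedged sketch, not the authors' implementation: it uses a 26-neighbor window rather than the eight neighbors described above, and it only demonstrates how isolated single-voxel changes get rejected.

```python
import numpy as np

def changed_voxels(ref, cur):
    """Preliminary change mask by XOR of two boolean occupancy grids,
    then rejection of isolated voxels: a changed voxel is kept only if
    at least one voxel in its 3x3x3 neighborhood also changed."""
    diff = ref ^ cur                      # preliminary change mask (XOR)
    keep = np.zeros_like(diff)
    for x, y, z in np.argwhere(diff):
        nb = diff[max(x-1, 0):x+2, max(y-1, 0):y+2, max(z-1, 0):z+2]
        if nb.sum() > 1:                  # itself plus >= 1 changed neighbor
            keep[x, y, z] = True
    return keep

ref = np.zeros((10, 10, 10), dtype=bool)  # reference scene: empty
cur = ref.copy()
cur[4:7, 4:7, 4] = True                   # a new object: 3x3 slab of voxels
cur[0, 0, 0] = True                       # an isolated noise voxel
mask = changed_voxels(ref, cur)
```

The slab survives the neighbor test while the lone noise voxel is filtered out, which is the intent of the neighborhood check described in the abstract.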
Yoshii, K; Kaihara, A; Tsumura, Y; Ishimitsu, S; Tonogai, Y
2001-01-01
A liquid chromatographic (LC) method was developed for the determination of emamectin and its metabolites (8,9-Z-isomer, N-demethylated, N-formylated, and N-methylformylated emamectin) in various crops. The analytes were extracted with acetone, cleaned up on cartridge columns (C18 and NH2), derivatized with trifluoroacetic anhydride and 1-methylimidazole, and determined by LC with fluorescence detection. Because radish inhibited the formation of the fluorescent derivatives, an additional Bond Elut PRS cartridge was used in the cleanup of Japanese radish samples. During sample preparation, N-formylated emamectin partially degraded to emamectin B1b and emamectin B1a, and the 8,9-Z-isomer partially degraded to N-demethylated emamectin. Therefore, emamectin and its metabolites were determined as total emamectin, i.e., their sum was estimated as emamectin benzoate. Their recoveries from most crops were approximately 80-110% with the developed method. The detection limits for the analytes in vegetables were 0.1-0.3 parts per trillion (ppt). The results for these compounds were confirmed by LC/mass spectrometry (LC/MS; electrospray ionization mode). Because the fluorescent derivative of emamectin was undetectable by LC/MS, the results for the analyte were confirmed by using a sample solution without derivatization. Limits of detection by LC/MS were similar to the fluorescence detection limits, 0.1-0.3 ppt in vegetables. In addition to the emamectins, milbemectin, ivermectin, and abamectin were also determined by the developed method.
Analyzing Responses of Chemical Sensor Arrays
NASA Technical Reports Server (NTRS)
Zhou, Hanying
2007-01-01
NASA is developing a third-generation electronic nose (ENose) capable of continuous monitoring of the International Space Station's cabin atmosphere for specific, harmful airborne contaminants. Previous generations of the ENose have been described in prior NASA Tech Briefs issues. Sensor selection is critical in both (prefabrication) sensor material selection and (post-fabrication) data analysis of the ENose, which detects several analytes that are difficult to detect, or that are at very low concentration ranges. Existing sensor selection approaches usually include limited statistical measures, where selectivity is more important but reliability and sensitivity are not of concern. When reliability and sensitivity can be major limiting factors in detecting target compounds reliably, the existing approach is not able to provide meaningful selection that will actually improve data analysis results. The approach and software reported here consider more statistical measures (factors) than existing approaches for a similar purpose. The result is a more balanced and robust sensor selection from a less-than-ideal sensor array. The software offers quick, flexible, optimal sensor selection and weighting for a variety of purposes without a time-consuming, iterative search. It does so by performing sensor calibrations to a known linear or nonlinear model, evaluating each individual sensor's statistics, scoring each individual sensor's overall performance, finding the best sensor array size to maximize class separation, finding optimal weights for the remaining sensor array, estimating limits of detection for the target compounds, evaluating fingerprint distance between group pairs, and finding the best event-detecting sensors.
Trap configuration and spacing influences parameter estimates in spatial capture-recapture models
Sun, Catherine C.; Fuller, Angela K.; Royle, J. Andrew
2014-01-01
An increasing number of studies employ spatial capture-recapture models to estimate population size, but there has been limited research on how different spatial sampling designs and trap configurations influence parameter estimators. Spatial capture-recapture models provide an advantage over non-spatial models by explicitly accounting for heterogeneous detection probabilities among individuals that arise due to the spatial organization of individuals relative to sampling devices. We simulated black bear (Ursus americanus) populations and spatial capture-recapture data to evaluate the influence of trap configuration and trap spacing on estimates of population size and a spatial scale parameter, sigma, that relates to home range size. We varied detection probability and home range size, and considered three trap configurations common to large-mammal mark-recapture studies: regular spacing, clustered, and a temporal sequence of different cluster configurations (i.e., trap relocation). We explored trap spacing and number of traps per cluster by varying the number of traps. The clustered arrangement performed well when detection rates were low, and provides for easier field implementation than the sequential trap arrangement. However, performance differences between trap configurations diminished as home range size increased. Our simulations suggest it is important to consider trap spacing relative to home range sizes, with traps ideally spaced no more than twice the spatial scale parameter. While spatial capture-recapture models can accommodate different sampling designs and still estimate parameters with accuracy and precision, our simulations demonstrate that aspects of sampling design, namely trap configuration and spacing, must consider study area size, ranges of individual movement, and home range sizes in the study population.
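The half-normal detection function at the heart of such spatial capture-recapture models can be written down directly. The sketch below (illustrative parameter values, not the study's estimates) computes per-trap detection probabilities for one activity center on a grid with traps spaced at 2σ, the upper spacing the simulations recommend.

```python
import numpy as np

def p_detect(center, traps, g0, sigma):
    """Half-normal SECR detection function: probability of detection at
    a trap declines with distance d from the activity center as
    g0 * exp(-d^2 / (2 * sigma^2))."""
    d = np.linalg.norm(traps - center, axis=1)
    return g0 * np.exp(-d**2 / (2 * sigma**2))

sigma = 1.0                               # spatial scale (home-range related)
# 5x5 trap grid with spacing 2*sigma, as the spacing guidance suggests.
traps = np.array([[x, y] for x in range(5) for y in range(5)], float) * 2 * sigma
center = np.array([4.0, 4.0])             # one simulated activity center
p = p_detect(center, traps, g0=0.1, sigma=sigma)
p_any = 1 - np.prod(1 - p)                # P(at least one detection per occasion)
```

With wider spacing, fewer traps fall inside the informative part of the curve and `p_any` drops, which is exactly the trap-spacing effect the simulations explore.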
On the Detectability of CO Molecules in the Interstellar Medium via X-Ray Spectroscopy
NASA Technical Reports Server (NTRS)
Joachimi, Katerine; Gatuzz, Efrain; Garcia, Javier; Kallman, Timothy R.
2016-01-01
We present a study of the detectability of CO molecules in the Galactic interstellar medium using high-resolution X-ray spectra obtained with the XMM-Newton Reflection Grating Spectrometer. We analysed 10 bright low-mass X-ray binaries (LMXBs) to study the CO contribution in their lines of sight. A total of 25 observations were fitted with the ISMabs X-ray absorption model, which includes photoabsorption cross-sections for O I, O II, O III and CO. We performed a Monte Carlo (MC) simulation analysis of the goodness of fit in order to estimate the significance of the CO detection. We determine that the statistical analysis prevents a significant detection of CO molecular X-ray absorption features, except for the lines of sight towards XTE J1817-330 and 4U 1636-53. In the case of XTE J1817-330, this is the first report of the presence of CO along its line of sight. Our results reinforce the conclusion that molecules make a minor contribution to the absorption features in the O K-edge spectral region. We estimate that a CO column density of N(CO) > 6 × 10^16 cm^-2 is required to perform a significant detection with XMM-Newton for typical exposure times.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snell, N.S.
1976-09-24
NETWORTH is a computer program which calculates the detection and location capability of seismic networks. A modified version of NETWORTH has been developed. This program has been used to evaluate the effect of station 'downtime', the signal amplitude variance, and the station detection threshold upon network detection capability. In this version all parameters may be changed separately for individual stations. The capability of using signal amplitude corrections has been added. The function of amplitude corrections is to remove possible bias in the magnitude estimate due to inhomogeneous signal attenuation. These corrections may be applied to individual stations, individual epicenters, or individual station/epicenter combinations. An option has been added to calculate the effect of station 'downtime' upon network capability. This study indicates that, if capability loss due to detection errors can be minimized, then station detection threshold and station reliability will be the fundamental limits to network performance. A baseline network of thirteen stations has been evaluated. These stations are as follows: Alaskan Long Period Array, (ALPA); Ankara, (ANK); Chiang Mai, (CHG); Korean Seismic Research Station, (KSRS); Large Aperture Seismic Array, (LASA); Mashhad, (MSH); Mundaring, (MUN); Norwegian Seismic Array, (NORSAR); New Delhi, (NWDEL); Red Knife, Ontario, (RK-ON); Shillong, (SHL); Taipei, (TAP); and White Horse, Yukon, (WH-YK).
Yang, Tingzhen; Vdovenko, Marina; Jin, Xue; Sakharov, Ivan Yu; Zhao, Shulin
2014-07-01
A microfluidic competitive enzyme immunoassay based on chemiluminescence resonance energy transfer (CRET) was developed for highly sensitive detection of neuron-specific enolase (NSE). The CRET system consisted of horseradish peroxidase (HRP)/luminol as a light donor and fluorescein isothiocyanate as an acceptor. When the fluorescein isothiocyanate-labeled antibody binds with the HRP-labeled antigen to form an immunocomplex, the donor and acceptor are brought close to each other and CRET occurs in the immunocomplex. In the MCE, the immunocomplex and excess HRP-NSE were separated, and the chemiluminescence intensity of the immunocomplex was used to estimate the NSE concentration. The calibration curve showed linearity over the range of NSE concentrations from 9.0 to 950 pM with a correlation coefficient of 0.9964. Based on an S/N of 3, the detection limit for NSE determination was estimated to be 4.5 pM, which is two orders of magnitude lower than that obtained without CRET detection. This assay was applied to NSE quantification in human serum. The results demonstrated that the proposed immunoassay may serve as an alternative tool for clinical analysis of NSE. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
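The S/N = 3 detection-limit convention behind the 4.5 pM figure reduces to LOD = 3 × sd(blank) / calibration slope. The numbers in the sketch below are invented for illustration, not taken from the assay.

```python
import numpy as np

# Hypothetical calibration standards (pM) and instrument responses with
# a small amount of noise; none of these values come from the paper.
conc = np.array([9.0, 50.0, 150.0, 400.0, 950.0])
signal = 2.0 * conc + 5.0 + np.array([0.3, -0.2, 0.4, -0.1, 0.2])

# Linear calibration: signal = slope * conc + intercept.
slope, intercept = np.polyfit(conc, signal, 1)

# Detection limit at S/N = 3: the concentration whose expected signal
# rises three blank standard deviations above the blank level.
blank_sd = 1.5                 # assumed sd of repeated blank readings
lod = 3 * blank_sd / slope     # concentration at S/N = 3
```

With these made-up numbers the LOD comes out near 2.25 pM; substituting the assay's actual slope and blank noise would reproduce its reported limit.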
The "EyeCane", a new electronic travel aid for the blind: Technology, behavior & swift learning.
Maidenbaum, Shachar; Hanassy, Shlomi; Abboud, Sami; Buchs, Galit; Chebat, Daniel-Robert; Levy-Tzedek, Shelly; Amedi, Amir
2014-01-01
Independent mobility is one of the most pressing problems facing people who are blind. We present the EyeCane, a new mobility aid aimed at increasing perception of the environment beyond what is provided by the traditional White Cane for tasks such as distance estimation, navigation and obstacle detection. The EyeCane enhances the traditional White Cane by using tactile and auditory output to increase detectable distance and angles. It circumvents the technical pitfalls of other devices, such as weight, short battery life, complex interface schemes, and slow learning curve. It implements multiple beams to enable detection of obstacles at different heights, and narrow beams to provide active sensing that can potentially increase the user's spatial perception of the environment. Participants were tasked with using the EyeCane for several basic tasks with minimal training. Blind and blindfolded-sighted participants were able to use the EyeCane successfully for distance estimation, simple navigation and simple obstacle detection after only several minutes of training. These results demonstrate the EyeCane's potential for mobility rehabilitation. The short training time is especially important since available mobility training resources are limited, not always available, and can be quite expensive and/or entail long waiting periods.
NASA Astrophysics Data System (ADS)
Yamada, T.; Ide, S.
2007-12-01
Earthquake early warning is an important and challenging issue for the reduction of seismic damage, especially for the mitigation of human suffering. One of the most important problems in earthquake early warning systems is how soon after observing the ground motion the final size of an earthquake can be estimated. This is related to the question of whether the initial rupture of an earthquake carries information about its final size. Nakamura (1988) developed the Urgent Earthquake Detection and Alarm System (UrEDAS), which calculates the predominant period of the P wave (τp) and estimates the magnitude of an earthquake immediately after the P-wave arrival from the value of τpmax, the maximum value of τp. A similar approach has been adopted by other earthquake alarm systems (e.g., Allen and Kanamori (2003)). To investigate the characteristics of the parameter τp and the effect of the length of the time window (TW) in the τpmax calculation, we analyze high-frequency recordings of earthquakes at very close distances in the Mponeng mine in South Africa. We find that values of τpmax have upper and lower limits. For larger earthquakes whose source durations are longer than TW, the values of τpmax have an upper limit which depends on TW. On the other hand, the values for smaller earthquakes have a lower limit which is proportional to the sampling interval. For intermediate earthquakes, the values of τpmax are close to their typical source durations. These two limits and the slope for intermediate earthquakes yield an artificial final-size dependence of τpmax over a wide size range. The parameter τpmax is useful for detecting large earthquakes and broadcasting earthquake early warnings. However, its dependence on the final size of earthquakes does not imply that the earthquake rupture is deterministic, because τpmax does not always have a direct relation to the physical quantities of an earthquake.
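The τp recursion used in UrEDAS-style systems can be sketched as exponentially smoothed ratios of signal power to derivative power, τp = 2π√(X/D). The smoothing constant and test signal below are assumptions for illustration, not the deployed settings; on a pure 2 Hz wave the estimate settles near the 0.5 s period, as expected.

```python
import numpy as np

def tau_p(x, dt, alpha=0.99):
    """Recursive predominant-period estimate: tau_p = 2*pi*sqrt(X/D),
    where X and D are exponentially smoothed powers of the signal and of
    its time derivative. alpha is an assumed smoothing constant."""
    X = D = 1e-12                    # avoid division by zero at start-up
    v = np.gradient(x, dt)           # numerical time derivative
    tp = np.empty_like(x)
    for i in range(len(x)):
        X = alpha * X + x[i] ** 2
        D = alpha * D + v[i] ** 2
        tp[i] = 2 * np.pi * np.sqrt(X / D)
    return tp

dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 2.0 * t)      # 2 Hz test wave -> expected tau_p ~ 0.5 s
tp = tau_p(x, dt)
tp_max = tp[200:].max()              # tau_p-max, skipping the start-up transient
```

On real seismograms the window length over which the maximum is taken plays the role of TW in the abstract, which is exactly where the upper limit on τpmax comes from.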
NASA Astrophysics Data System (ADS)
Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh
2010-11-01
A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. The device relies on diode laser excitation and a charge-coupled device spectrometer and operates on a sandwich immunoassay technique. As a proof of principle, the technique was applied to a quantitative estimation of interleukin IL-6. The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 +/- 0.6 pM; in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology, which showed a similar trend. The specificity of the CTFOB dip probes was assessed by analysis with receiver operating characteristics. This analysis suggests that the dip probes can detect 5-pM or higher concentrations of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies on the use of these probes to quantify other analytes in body fluids with high specificity and sensitivity.
Caldwell, John T.; Kunz, Walter E.; Cates, Michael R.; Franks, Larry A.
1985-01-01
Simultaneous photon and neutron interrogation of samples for the quantitative determination of total fissile nuclide and total fertile nuclide material present is made possible by the use of an electron accelerator. Prompt and delayed neutrons produced from resulting induced fissions are counted using a single detection system and allow the resolution of the contributions from each interrogating flux leading in turn to the quantitative determination sought. Detection limits for .sup.239 Pu are estimated to be about 3 mg using prompt fission neutrons and about 6 mg using delayed neutrons.
Caldwell, J.T.; Kunz, W.E.; Cates, M.R.; Franks, L.A.
1982-07-07
Simultaneous photon and neutron interrogation of samples for the quantitative determination of total fissile nuclide and total fertile nuclide material present is made possible by the use of an electron accelerator. Prompt and delayed neutrons produced from resulting induced fission are counted using a single detection system and allow the resolution of the contributions from each interrogating flux leading in turn to the quantitative determination sought. Detection limits for /sup 239/Pu are estimated to be about 3 mg using prompt fission neutrons and about 6 mg using delayed neutrons.
Voltammetry as a Tool for Characterization of CdTe Quantum Dots
Sobrova, Pavlina; Ryvolova, Marketa; Hubalek, Jaromir; Adam, Vojtech; Kizek, Rene
2013-01-01
Electrochemical detection of quantum dots (QDs) has already been used in numerous applications. However, QDs themselves have not been well characterized or quantified using voltammetry. Therefore, the main aim was to characterize CdTe QDs using cyclic and differential pulse voltammetry. The obtained peaks were identified and the detection limit (3 S/N) was estimated to be as low as 100 fg/mL. Based on these convincing results, a new method to study the stability of the dots and to quantify them was suggested, and the approach was further utilized for testing QD stability. PMID:23807507
Extreme Brightness Temperatures and Refractive Substructure in 3C273 with RadioAstron
NASA Astrophysics Data System (ADS)
Johnson, Michael D.; Kovalev, Yuri Y.; Gwinn, Carl R.; Gurvits, Leonid I.; Narayan, Ramesh; Macquart, Jean-Pierre; Jauncey, David L.; Voitsik, Peter A.; Anderson, James M.; Sokolovsky, Kirill V.; Lisakov, Mikhail M.
2016-03-01
Earth-space interferometry with RadioAstron provides the highest direct angular resolution ever achieved in astronomy at any wavelength. RadioAstron detections of the classic quasar 3C 273 on interferometric baselines up to 171,000 km suggest brightness temperatures exceeding expected limits from the “inverse-Compton catastrophe” by two orders of magnitude. We show that at 18 cm, these estimates most likely arise from refractive substructure introduced by scattering in the interstellar medium. We use the scattering properties to estimate an intrinsic brightness temperature of 7 × 10^12 K, which is consistent with expected theoretical limits, but which is ~15 times lower than estimates that neglect substructure. At 6.2 cm, the substructure influences the measured values appreciably but gives an estimated brightness temperature that is comparable to models that do not account for the substructure. At 1.35 cm, the substructure does not affect the extremely high inferred brightness temperatures, in excess of 10^13 K. We also demonstrate that for a source having a Gaussian surface brightness profile, a single long-baseline estimate of refractive substructure determines an absolute minimum brightness temperature, if the scattering properties along a given line of sight are known, and that this minimum accurately approximates the apparent brightness temperature over a wide range of total flux densities.
Natal dispersal in the cooperatively breeding Acorn Woodpecker
Koenig, Walter D.; Hooge, P.N.; Stanback, M.T.; Haydock, J.
2000-01-01
Dispersal data are inevitably biased toward short-distance events, often highly so. We illustrate this problem using our long-term study of Acorn Woodpeckers (Melanerpes formicivorus) in central coastal California. Estimating the proportion of birds disappearing from the study area and correcting for detectability within the maximum observable distance are the first steps toward achieving a realistic estimate of dispersal distributions. Unfortunately, there is generally no objective way to determine the fates of birds not accounted for by these procedures, much less estimate the distances they may have moved. Estimated mean and root-mean-square dispersal distances range from 0.22-2.90 km for males and 0.53-9.57 km for females depending on what assumptions and corrections are made. Three field methods used to help correct for bias beyond the limits of normal study areas include surveying alternative study sites, expanding the study site (super study sites), and radio-tracking dispersers within a population. All of these methods have their limitations or can only be used in special cases. New technologies may help alleviate this problem in the near future. Until then, we urge caution in interpreting observed dispersal data from all but the most isolated of avian populations.
Development of gait segmentation methods for wearable foot pressure sensors.
Crea, S; De Rossi, S M M; Donati, M; Reberšek, P; Novak, D; Vitiello, N; Lenzi, T; Podobnik, J; Munih, M; Carrozza, M C
2012-01-01
We present an automated segmentation method based on the analysis of plantar pressure signals recorded from two synchronized wireless foot insoles. Given the strict limits on computational power and power consumption typical of wearable electronic components, our aim is to investigate the capability of a Hidden Markov Model machine-learning method to detect gait phases at different levels of complexity in the processing of the wearable pressure sensor signals. Three different datasets are therefore developed: raw voltage values, calibrated sensor signals, and a calibrated estimation of the total ground reaction force and the position of the plantar center of pressure. The method is tested on a pool of 5 healthy subjects through a leave-one-out cross validation. The results show high classification performance when using the estimated biomechanical variables, averaging 96%. Calibrated signals and raw voltage values show higher delays and dispersions in phase-transition detection, suggesting lower reliability for online applications.
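The Hidden Markov Model idea can be illustrated with a toy two-phase (stance/swing) model decoded by the Viterbi algorithm over a binarized load signal. All transition and emission probabilities below are invented for the sketch and are not the paper's trained parameters.

```python
import numpy as np

def viterbi(obs, logA, logB, logpi):
    """Most-likely state sequence for a discrete-emission HMM.
    logA[i, j]: log P(state j | state i); logB[s, o]: log P(obs o | state s)."""
    n, S = len(obs), logA.shape[0]
    V = np.full((n, S), -np.inf)
    ptr = np.zeros((n, S), int)
    V[0] = logpi + logB[:, obs[0]]
    for t in range(1, n):
        for s in range(S):
            scores = V[t - 1] + logA[:, s]
            ptr[t, s] = scores.argmax()
            V[t, s] = scores.max() + logB[s, obs[t]]
    path = [V[-1].argmax()]
    for t in range(n - 1, 0, -1):      # backtrack
        path.append(ptr[t, path[-1]])
    return path[::-1]

# Invented parameters: states 0 = stance, 1 = swing; symbols 0 = high
# ground-reaction force, 1 = low force.  Transitions are "sticky" so the
# decoder prefers sustained phases over chatter.
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = np.array([0] * 10 + [1] * 10 + [0] * 10)   # stance-swing-stance pattern
phases = viterbi(obs, np.log(A), np.log(B), np.log(pi))
```

A real system would train these matrices per dataset (raw, calibrated, or biomechanical features) and run the decoding online, but the segmentation principle is the same.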
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Yan; Mohanty, Soumya D.; Center for Gravitational Wave Astronomy, Department of Physics and Astronomy, University of Texas at Brownsville, 80 Fort Brown, Brownsville, Texas 78520
2010-03-15
The detection and estimation of gravitational wave signals belonging to a parameterized family of waveforms requires, in general, the numerical maximization of a data-dependent function of the signal parameters. Because of noise in the data, the function to be maximized is often highly multimodal with numerous local maxima. Searching for the global maximum then becomes computationally expensive, which in turn can limit the scientific scope of the search. Stochastic optimization is one possible approach to reducing computational costs in such applications. We report results from a first investigation of the particle swarm optimization method in this context. The method is applied to a test bed motivated by the problem of detection and estimation of a binary inspiral signal. Our results show that particle swarm optimization works well in the presence of high multimodality, making it a viable candidate method for further applications in gravitational wave data analysis.
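A minimal particle swarm optimizer conveys the idea: each particle tracks its personal best position and the swarm's global best, and is drawn stochastically toward both, which copes well with multimodal objectives. The sketch below minimizes the Rastrigin test function as a stand-in for the (far more expensive) data-dependent detection statistic; the inertia and acceleration constants are standard textbook values, not the paper's tuning.

```python
import numpy as np

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(f, dim=2, n=30, iters=300, lo=-5.12, hi=5.12, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))        # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    pbest, pval = x.copy(), f(x)             # personal bests
    g = pbest[pval.argmin()].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = f(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best_x, best_val = pso(rastrigin)
```

In the gravitational-wave setting the objective would be the (negative) detection statistic over the waveform parameters, and each function evaluation is costly, which is precisely why trimming the number of evaluations matters.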
Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis
NASA Astrophysics Data System (ADS)
Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.
2015-01-01
Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
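The pixel-by-pixel noise model described above amounts to combining the main variance contributors in quadrature. The sketch below is a simplified version in electron units (the gain and read-noise handling are assumptions, with read noise standing in for part of the detector's "dark noise" bundle), showing how a bright star raises the local noise estimate relative to the sky-limited background.

```python
import numpy as np

def noise_map(stellar, sky, dark_current, exptime, read_noise):
    """Per-pixel noise estimate: photon noise of stellar flux, sky noise,
    and dark/detector noise added in quadrature (all in electrons)."""
    var = stellar + sky + dark_current * exptime + read_noise**2
    return np.sqrt(var)

# Tiny illustrative frame: flat sky background with one bright star.
stellar = np.zeros((5, 5))
stellar[2, 2] = 10000.0                  # star flux in the central pixel
sky = np.full((5, 5), 100.0)
sigma = noise_map(stellar, sky, dark_current=0.1, exptime=60.0, read_noise=5.0)
```

A constant-noise assumption would use something like the background value everywhere; the map instead inflates the uncertainty under bright sources, which is what suppresses fake detections near them while preserving sensitivity to faint stars on clean background.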
Assessment of dermal exposure of greenhouse workers to the pesticide bupirimate.
Jongen, M J; Engel, R; Leenheers, L H
1992-01-01
An HPLC method was developed for estimation of dermal exposure of greenhouse workers to the pesticide bupirimate. Chromatography was performed on a cyano-modified silica column with methanol-water (6:4 by volume) containing 5 g/L ammonium sulfate as eluent. UV detection at 310 nm was used for quantitation. Dermal exposure was assessed by letting the workers wear cotton gloves and by measuring foliar dislodgable residues in the greenhouses as potential exposure. The analytical procedure was validated for measurement of bupirimate on cotton gloves and in solutions used for the estimation of foliar dislodgable residues. Gloves were extracted with methanol. Recovery of bupirimate from fortified gloves was complete. Methanol extracts with one volume of water added and solutions containing dislodgable residues were injected directly onto the HPLC system. The limit of detection was 30 micrograms/L. Between-day coefficients of variation were 7 and 4% at concentrations of 0.6 and 28 mg/L, respectively.
Bojanić, Krunoslav; Midwinter, Anne Camilla; Marshall, Jonathan Craig; Rogers, Lynn Elizabeth; Biggs, Patrick Jon; Acke, Els
2016-08-01
Campylobacter enteritis in humans is primarily associated with C. jejuni/coli infection. The impact of other Campylobacter spp. is likely to be underestimated due to the bias of culture methods towards Campylobacter jejuni/coli diagnosis. Stool antigen tests are becoming increasingly popular and appear generally less species-specific. A review of independent studies of the ProSpecT® Campylobacter Microplate enzyme immunoassay (EIA) developed for C. jejuni/coli showed comparable diagnostic results to culture methods but the examination of non-jejuni/coli Campylobacter spp. was limited and the limit-of-detection (LOD), where reported, varied between studies. This study investigated LOD of EIA for Campylobacter upsaliensis, Campylobacter hyointestinalis and Campylobacter helveticus spiked in human stools. Multiple stools and Campylobacter isolates were used in three different concentrations (10(4)-10(9)CFU/ml) to reflect sample heterogeneity. All Campylobacter species evaluated were detectable by EIA. Multivariate analysis showed LOD varied between Campylobacter spp. and faecal consistency as fixed effects and individual faecal samples as random effects. EIA showed excellent performance in replicate testing for both within and between batches of reagents, in agreement between visual and spectrophotometric reading of results, and returned no discordance between the bacterial concentrations within independent dilution test runs (positive results with lower but not higher concentrations). This study shows how limitations in experimental procedures lead to an overestimation of consistency and uniformity of LOD for EIA that may not hold under routine use in diagnostic laboratories. Benefits and limitations for clinical practice and the influence on estimates of performance characteristics from detection of multiple Campylobacter spp. by EIA are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D
2008-09-01
Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.
Scent Lure Effect on Camera-Trap Based Leopard Density Estimates
Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke
2016-01-01
Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and a ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40–3.65 leopards/100 km²). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816
Field ion spectrometry: a new technology for cocaine and heroin detection
NASA Astrophysics Data System (ADS)
Carnahan, Byron L.; Day, Stephen; Kouznetsov, Viktor; Tarassov, Alexandre
1997-02-01
Field ion spectrometry, also known as transverse field compensation ion mobility spectrometry, is a new technique for trace gas analysis that can be applied to the detection of cocaine and heroin. Its principle is based on filtering ion species according to the functional dependence of their mobilities on electric field strength. Field ion spectrometry eliminates the gating electrodes needed in conventional IMS to pulse ions into the spectrometer; instead, ions are injected into the spectrometer and reach the detector continuously, resulting in improved sensitivity. The technique enables analyses that are difficult with conventional constant-field-strength ion mobility spectrometers. We have shown that a field ion spectrometer can selectively detect the vapors from cocaine and heroin emitted from both their base and hydrochloride forms. The estimated volumetric limits of detection are in the low pptv range, based on testing with standardized drug vapor generation systems. The spectrometer can detect cocaine base in the vapor phase at concentrations well below its estimated 100 pptv vapor pressure equivalent at 20 °C. This paper describes the underlying principles of field ion spectrometry in relation to narcotic drug detection, and recent results obtained for cocaine and heroin. The work has been sponsored in part by the United States Advanced Research Projects Agency under contract DAAB10-95C-0004, for the DOD Counterdrug Technology Development Program.
Development of neuraminidase detection using gold nanoparticles boron-doped diamond electrodes.
Wahyuni, Wulan T; Ivandini, Tribidasari A; Saepudin, Endang; Einaga, Yasuaki
2016-03-15
Gold nanoparticle-modified boron-doped diamond (AuNPs-BDD) electrodes, which were prepared by self-assembly deposition of AuNPs on amine-terminated boron-doped diamond, were examined for voltammetric detection of neuraminidase (NA). The detection method was based on the difference in the electrochemical responses of zanamivir at the gold surface before and after the reaction with NA in phosphate buffer solution (PBS, pH 5.5). A linear calibration curve for zanamivir in 0.1 M PBS in the absence of NA was achieved in the concentration range of 1 × 10^-6 to 1 × 10^-5 M (R² = 0.99) with an estimated limit of detection (LOD) of 2.29 × 10^-6 M. Furthermore, using its reaction with 1.00 × 10^-5 M zanamivir, a linear calibration curve for NA was obtained in the concentration range of 0-12 mU (R² = 0.99) with an estimated LOD of 0.12 mU. High reproducibility was shown, with a relative standard deviation (RSD) of 1.14% (n = 30). These performances could be maintained when the detection was performed in a mucin matrix. A comparison using gold-modified BDD (Au-BDD) electrodes suggested that the good performance of the detection method is due to the positional stability of the gold particles at the BDD surface. Copyright © 2016 Elsevier Inc. All rights reserved.
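The abstract does not state how the LOD was derived from the calibration curve; a common convention (e.g., ICH-style) estimates it as 3.3 times the residual standard deviation of the calibration line divided by its slope. A sketch with hypothetical current-versus-concentration data on the reported 1 × 10^-6 to 1 × 10^-5 M range (slope, intercept, and noise level are invented for illustration):

```python
import numpy as np

# Hypothetical calibration points (concentration in M, peak current in µA)
conc = np.linspace(1e-6, 1e-5, 6)
rng = np.random.default_rng(42)
current = 2.0e6 * conc + 0.5 + rng.normal(0, 0.2, conc.size)  # linear response + noise

slope, intercept = np.polyfit(conc, current, 1)
residual_sd = np.std(current - (slope * conc + intercept), ddof=2)

lod = 3.3 * residual_sd / slope   # ICH-style limit of detection
print(f"LOD ≈ {lod:.2e} M")
```

With real data the LOD would be reported alongside the calibration R², exactly as in the abstract.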
Peraman, Ramalingam; Mallikarjuna, Sasikala; Ammineni, Pravalika; Kondreddy, Vinod kumar
2014-10-01
A simple, selective, rapid, precise and economical reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for simultaneous estimation of atorvastatin calcium (ATV) and pioglitazone hydrochloride (PIO) in pharmaceutical formulations. The method was carried out on a C8 (25 cm × 4.6 mm i.d., 5 μm) column with a mobile phase consisting of acetonitrile (ACN):water (pH adjusted to 6.2 using o-phosphoric acid) in the ratio of 45:55 (v/v). The retention times of ATV and PIO were 4.1 and 8.1 min, respectively, at a flow rate of 1 mL/min with diode array detection at 232 nm. Linear regression analysis of the linearity plot showed a good linear relationship, with correlation coefficients (R²) of 0.9998 for ATV and 0.9997 for PIO in the concentration range of 10-80 µg mL^-1. The relative standard deviation for intraday precision was found to be <2.0%. The method was validated according to the ICH guidelines in terms of specificity, selectivity, accuracy, precision, linearity, limit of detection, limit of quantitation and solution stability. The proposed method can be used for simultaneous estimation of these drugs in marketed dosage forms. © The Author [2013]. Published by Oxford University Press. All rights reserved.
The Rotation of M Dwarfs Observed by the Apache Point Galactic Evolution Experiment
NASA Astrophysics Data System (ADS)
Gilhool, Steven H.; Blake, Cullen H.; Terrien, Ryan C.; Bender, Chad; Mahadevan, Suvrath; Deshpande, Rohit
2018-01-01
We present the results of a spectroscopic analysis of rotational velocities in 714 M-dwarf stars observed by the SDSS-III Apache Point Galactic Evolution Experiment (APOGEE) survey. We use a template-fitting technique to estimate v sin i while simultaneously estimating log g, [M/H], and T_eff. We conservatively estimate that our detection limit is 8 km s^-1. We compare our results to M-dwarf rotation studies in the literature based on both spectroscopic and photometric measurements. Like other authors, we find an increase in the fraction of rapid rotators with decreasing stellar temperature, exemplified by a sharp increase in rotation near the M4 transition to fully convective stellar interiors, which is consistent with the hypothesis that fully convective stars are unable to shed angular momentum as efficiently as those with radiative cores. We compare a sample of targets observed both by APOGEE and the MEarth transiting planet survey and find no cases where the measured v sin i and rotation period are physically inconsistent, i.e., requiring sin i > 1. We compare our spectroscopic results to the fraction of rotators inferred from photometric surveys and find that while the results are broadly consistent, the photometric surveys exhibit a smaller fraction of rotators beyond the M4 transition by a factor of ∼2. We discuss possible reasons for this discrepancy. Given our detection limit, our results are consistent with the bimodal distribution in rotation that is seen in photometric surveys.
Uchiho, Yuichi; Goto, Yusuke; Kamahori, Masao; Aota, Toshimichi; Morisaki, Atsuki; Hosen, Yusuke; Koda, Kimiyoshi
2015-12-11
A far-ultraviolet (FUV) absorbance detector with a transmission flow cell was developed and applied to the absorbance detection of sugars and peptides in HPLC. The main inherent limitation of FUV absorbance detection is the strong absorption of solvents and of atmospheric oxygen in the optical system, as well as of dissolved oxygen in the solvent. High absorptivity of the solvent and oxygen decreases the transmission-light intensity in the flow cell and hinders the absorbance measurement. To overcome these drawbacks, the transmission-light intensity in the flow cell was increased by introducing a new optical system and a nitrogen-purging unit to remove the atmospheric oxygen. The optical system has a photodiode for detecting the reference light at the position of the minus-first-order diffracted light. In addition, acetonitrile and water were selected as usable solvents because of their low absorptivity in the FUV region. As a result of these implementations, the detectable wavelength of the FUV absorbance detector (with a flow cell having an effective optical path length of 0.5 mm) can be extended down to 175 nm. Three sugars (glucose, fructose, and sucrose) were successfully detected with the FUV absorbance detector. These detection results reveal that the absorption peak of sugar in the liquid phase lies at around 178 nm. The detection limit (S/N = 3) in absorbance with a 0.5-mm flow cell at 180 nm was 21 μAU, which corresponds to 33, 60, and 60 μM (198, 360, and 360 pmol) for fructose, glucose, and sucrose, respectively. The peptide Met-enkephalin could also be detected with high sensitivity at 190 nm. The estimated detection limit (S/N = 3) for Met-enkephalin is 29 nM (0.29 pmol), which is eight times lower than that at 220 nm. Copyright © 2015 Elsevier B.V. All rights reserved.
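The reported figures are mutually consistent under the Beer-Lambert law A = εlc: at S/N = 3, the concentration detection limit is c = 3σ/(εl). The sketch below back-derives an effective molar absorptivity for fructose from the reported 21 μAU and 33 μM values, so ε here is illustrative, not a literature value:

```python
# Beer-Lambert: A = epsilon * l * c, so c_LOD = 3*sigma / (epsilon * l)
sigma = 7e-6          # noise in AU, chosen so 3*sigma = 21 uAU as reported
path_cm = 0.05        # 0.5 mm effective path length, expressed in cm
epsilon = 21e-6 / (33e-6 * path_cm)   # effective absorptivity, L mol^-1 cm^-1

c_lod = 3 * sigma / (epsilon * path_cm)
print(f"{c_lod * 1e6:.1f} uM")        # ~33 uM, matching the reported fructose LOD
```

The round trip simply confirms that the absorbance LOD and the molar LODs in the abstract are two statements of the same detection limit.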
Quantum limits to gravity estimation with optomechanics
NASA Astrophysics Data System (ADS)
Armata, F.; Latmiral, L.; Plato, A. D. K.; Kim, M. S.
2017-10-01
We present a table-top quantum estimation protocol to measure the gravitational acceleration g by using an optomechanical cavity. In particular, we exploit the nonlinear quantum light-matter interaction between an optical field and a massive mirror acting as a mechanical oscillator. The gravitational field influences the system dynamics, affecting the phase of the cavity field during the interaction. By reading out the phase carried by the radiation leaking from the cavity, we provide an estimate of the gravitational acceleration through interference measurements. In contrast to previous studies, having adopted a fully quantum description, we are able to derive the ultimate bound on the estimability of the gravitational acceleration and to verify the optimality of homodyne detection. Notably, thanks to the light-matter decoupling at the measurement time, no initial cooling of the mechanical oscillator is required in principle.
Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.
Allen, Jeff; Ghattas, Andrew
2016-06-01
Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
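The authors' estimator itself is not reproduced in the abstract, but the Bayes'-theorem skeleton it builds on can be sketched directly: with a population copying probability (prior), the probability of exceeding the observed statistic under copying (power), and the p value under no copying, the posterior follows immediately. All numbers below are hypothetical:

```python
def posterior_copying(prior, power, p_value):
    """P(copying | statistic >= observed) via Bayes' theorem."""
    numerator = prior * power
    return numerator / (numerator + (1 - prior) * p_value)

# A small p value alone does not settle the question when the prior is low:
high = posterior_copying(prior=0.01, power=0.8, p_value=0.001)
low = posterior_copying(prior=0.01, power=0.8, p_value=0.01)
print(high, low)   # ~0.89 vs ~0.45 for the same prior and power
```

This illustrates the abstract's point that the posterior probability of copying, not the p value, is the decision-relevant quantity, and that it cannot be computed without the population probability of copying and the statistic's distribution under copying.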
Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas
NASA Astrophysics Data System (ADS)
Jackson, T.; Halford, K. J.; Gardner, P.
2017-12-01
Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. This assumption persists because desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. The best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are below the detection limits of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are below the detection limits of the EC (0.1 mm/d) and BREB (0.3 mm/d) approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater-density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the upper detection limits provided by the BREB- and EC-based micrometeorological measurements. Discharge from desert playas contributes less than 2 percent of total groundwater discharge from Dixie and Death Valleys, which suggests that discharge from desert playas is negligible in other basins as well. Numerical simulation results also show that ETg from desert playas is limited primarily by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater-density contrasts.
Frederick, R I
2000-01-01
Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
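The core algebra of MGV can be sketched: if two groups have known but different mixture proportions of pathology p1, p2, and observed sign-positive rates r1, r2, then r_i = p_i·TPR + (1 − p_i)·FPR is a linear system solvable for the true and false positive rates without knowing any individual's status. A minimal sketch with hypothetical numbers (MGV in practice estimates the mixing proportions with a validating indicator rather than assuming them known):

```python
import numpy as np

# Two mixed groups with known proportions of pathology (hypothetical)
prev = np.array([0.7, 0.2])
# Observed positive rates of the diagnostic sign in each group (hypothetical)
obs = np.array([0.66, 0.26])

# r_i = prev_i * TPR + (1 - prev_i) * FPR  ->  solve the 2x2 linear system
A = np.column_stack([prev, 1 - prev])
tpr, fpr = np.linalg.solve(A, obs)
print(tpr, fpr)   # recovers TPR = 0.9, FPR = 0.1
```

The system is well conditioned only when the two groups differ substantially in prevalence, which is why MGV needs groups with distinct base rates.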
Maximum likelihood sequence estimation for optical complex direct modulation.
Che, Di; Yuan, Feng; Shieh, William
2017-04-17
Semiconductor lasers are inherently versatile optical transmitters. Through direct modulation (DM), intensity modulation is realized by the linear mapping between the injection current and the light power, while various angle modulations are enabled by the frequency chirp. Limited by direct detection, DM lasers used to be exploited only as 1-D (intensity or angle) transmitters, with the other modulation suppressed or simply ignored. Nevertheless, through digital coherent detection, simultaneous intensity and angle modulation (namely, 2-D complex DM, CDM) can be realized by a single laser diode. The crucial technique of CDM is the joint demodulation of intensity and differential phase with maximum likelihood sequence estimation (MLSE), supported by a closed-form discrete signal approximation of the frequency chirp to characterize the MLSE transition probability. This paper proposes a statistical method for the transition probability that significantly enhances the accuracy of the chirp model. Using the statistical estimation, we demonstrate the first single-channel 100-Gb/s PAM-4 transmission over 1600-km fiber with only 10G-class DM lasers.
Non-thermal emission in the core of Perseus: results from a long XMM-Newton observation
NASA Astrophysics Data System (ADS)
Molendi, S.; Gastaldello, F.
2009-01-01
We employ a long XMM-Newton observation of the core of the Perseus cluster to validate claims of a non-thermal component discovered with Chandra. From a meticulous analysis of our dataset, which includes a detailed treatment of systematic errors, we find the 2-10 keV surface brightness of the non-thermal component to be less than about 5 × 10^-16 erg cm^-2 s^-1 arcsec^-2. The most likely explanation for the discrepancy between the XMM-Newton and Chandra estimates is a problem in the effective area calibration of the latter. Our EPIC-based magnetic field lower limits do not disagree with Faraday rotation measure estimates on a few cool cores, nor with a minimum energy estimate on Perseus. In the not too distant future, Simbol-X may allow detection of non-thermal components with intensities more than 10 times lower than those that can be measured with EPIC; nonetheless, even the exquisite sensitivity within reach for Simbol-X might be insufficient to detect the IC emission from Perseus.
A reduced estimate of the number of kilometre-sized near-Earth asteroids.
Rabinowitz, D; Helin, E; Lawrence, K; Pravdo, S
2000-01-13
Near-Earth asteroids are small (diameters < 10 km), rocky bodies with orbits that approach that of the Earth (they come within 1.3 AU of the Sun). Most have a chance of approximately 0.5% of colliding with the Earth in the next million years. The total number of such bodies with diameters > 1 km has been estimated to be in the range 1,000-2,000, which translates to an approximately 1% chance of a catastrophic collision with the Earth in the next millennium. These numbers are, however, poorly constrained because of the limitations of previous searches using photographic plates. (One kilometre is below the size of a body whose impact on the Earth would produce global effects.) Here we report an analysis of our survey for near-Earth asteroids that uses improved detection technologies. We find that the total number of asteroids with diameters > 1 km is about half the earlier estimates. At the current rate of discovery of near-Earth asteroids, 90% will probably have been detected within the next 20 years.
Brudecki, K; Kowalska, A; Zagrodzki, P; Szczodry, A; Mroz, T; Janowski, P; Mietelski, J W
2017-03-01
This paper presents the results of ¹³¹I thyroid activity measurements in 30 members of the nuclear medicine personnel of the Department of Endocrinology and Nuclear Medicine, Holy Cross Cancer Centre in Kielce, Poland. A whole-body spectrometer equipped with two semiconductor gamma radiation detectors served as the basic research instrument. In ten of the 30 examined staff members, the determined ¹³¹I activity was found to be above the detection limit (DL = 5 Bq of ¹³¹I in the thyroid). The measured activities ranged from (5 ± 2) Bq to (217 ± 56) Bq. The highest activities in thyroids were detected for technical and cleaning personnel, whereas the lowest values were recorded for medical doctors. Based on the measured activities, the corresponding annual effective doses were estimated, and were found to range from 0.02 to 0.8 mSv. The highest annual equivalent doses were found for the thyroid, ranging from 0.4 mSv (for a cleaner) to 15.4 mSv (for a technician). The maximum estimated effective dose corresponds to 32% of the annual background dose in Poland, and to circa 4% of the annual limit for the effective dose due to occupational exposure of 20 mSv per year, in compliance with the value recommended by the International Commission on Radiological Protection.
Ramírez, Juan Carlos; Cura, Carolina Inés; Moreira, Otacilio da Cruz; Lages-Silva, Eliane; Juiz, Natalia; Velázquez, Elsa; Ramírez, Juan David; Alberti, Anahí; Pavia, Paula; Flores-Chávez, María Delmans; Muñoz-Calderón, Arturo; Pérez-Morales, Deyanira; Santalla, José; Guedes, Paulo Marcos da Matta; Peneau, Julie; Marcet, Paula; Padilla, Carlos; Cruz-Robles, David; Valencia, Edward; Crisante, Gladys Elena; Greif, Gonzalo; Zulantay, Inés; Costales, Jaime Alfredo; Alvarez-Martínez, Miriam; Martínez, Norma Edith; Villarroel, Rodrigo; Villarroel, Sandro; Sánchez, Zunilda; Bisio, Margarita; Parrado, Rudy; Galvão, Lúcia Maria da Cunha; da Câmara, Antonia Cláudia Jácome; Espinoza, Bertha; de Noya, Belkisyole Alarcón; Puerta, Concepción; Riarte, Adelina; Diosque, Patricio; Sosa-Estani, Sergio; Guhl, Felipe; Ribeiro, Isabela; Aznar, Christine; Britto, Constança; Yadón, Zaida Estela; Schijman, Alejandro G.
2015-01-01
An international study was performed by 26 experienced PCR laboratories from 14 countries to assess the performance of duplex quantitative real-time PCR (qPCR) strategies on the basis of TaqMan probes for detection and quantification of parasitic loads in peripheral blood samples from Chagas disease patients. Two methods were studied: Satellite DNA (SatDNA) qPCR and kinetoplastid DNA (kDNA) qPCR. Both methods included an internal amplification control. Reportable range, analytical sensitivity, limits of detection and quantification, and precision were estimated according to international guidelines. In addition, inclusivity and exclusivity were estimated with DNA from stocks representing the different Trypanosoma cruzi discrete typing units and Trypanosoma rangeli and Leishmania spp. Both methods were challenged against 156 blood samples provided by the participant laboratories, including samples from acute and chronic patients with varied clinical findings, infected by oral route or vectorial transmission. kDNA qPCR showed better analytical sensitivity than SatDNA qPCR with limits of detection of 0.23 and 0.70 parasite equivalents/mL, respectively. Analyses of clinical samples revealed a high concordance in terms of sensitivity and parasitic loads determined by both SatDNA and kDNA qPCRs. This effort is a major step toward international validation of qPCR methods for the quantification of T. cruzi DNA in human blood samples, aiming to provide an accurate surrogate biomarker for diagnosis and treatment monitoring for patients with Chagas disease. PMID:26320872
Rehan, I; Gondal, M A; Rehan, K
2018-05-15
A detection system based on laser-induced breakdown spectroscopy (LIBS) was designed, optimized, and successfully employed for the estimation of lead (Pb) content in drilling fueled soil (DFS) collected from oil field drilling areas in Pakistan. The concentration of Pb was evaluated by the standard calibration curve method as well as by an approach based on the integrated intensity of the strongest emission line of the element of interest. Remarkably, our investigation clearly demonstrated that the concentration of Pb in drilling fueled soil collected at the exact drilling site was greater than the safe permissible limits. Furthermore, the Pb concentration was observed to decline with increasing distance from the specific drilling point. Analytical determinations were carried out under the assumptions that the laser-generated plasma was optically thin and in local thermodynamic equilibrium (LTE). In order to improve the sensitivity of our LIBS detection system, various parametric dependence studies were performed. To further validate the precision of our LIBS results, the concentrations of Pb present in the acquired samples were also quantified via a standard analytical tool, inductively coupled plasma/optical emission spectroscopy (ICP/OES). Both results were in excellent agreement, implying remarkable reliability of the LIBS data. Furthermore, the limit of detection (LOD) of our LIBS system for Pb was estimated to be 125.14 mg L^-1. Copyright © 2018 Elsevier B.V. All rights reserved.
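The standard calibration curve method mentioned above amounts to regressing line intensity on known concentrations, inverting the fit for an unknown sample, and deriving the LOD from the blank noise and the slope. A sketch with hypothetical intensities (none of these values are from this study):

```python
import numpy as np

# Known Pb standards (mg/L) and measured line intensities (arbitrary units)
conc = np.array([0.0, 100.0, 200.0, 400.0, 800.0])
intensity = np.array([12.0, 61.0, 112.0, 210.0, 409.0])

slope, intercept = np.polyfit(conc, intensity, 1)

# Invert the calibration line for an unknown sample
unknown_intensity = 150.0
c_unknown = (unknown_intensity - intercept) / slope

# Detection limit from blank noise: LOD = 3 * sigma_blank / slope
sigma_blank = 3.0
lod = 3 * sigma_blank / slope
print(c_unknown, lod)
```

The same machinery applies whether the response is an electrochemical current, a chromatographic peak area, or, as here, an emission line intensity.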
Multiple symbol partially coherent detection of MPSK
NASA Technical Reports Server (NTRS)
Simon, M. K.; Divsalar, D.
1992-01-01
It is shown that by using the known (or estimated) value of the carrier tracking loop signal-to-noise ratio (SNR) in the decision metric, it is possible to improve the error probability performance of a partially coherent multiple phase-shift-keying (MPSK) system relative to that of the commonly used ideal coherent decision rule. Using a maximum-likelihood approach, an optimum decision metric is derived and shown to take the form of a weighted sum of the ideal coherent decision metric (i.e., correlation) and the noncoherent decision metric that is optimum for differential detection of MPSK. The performance of a receiver based on this optimum decision rule is derived and shown to provide continued improvement with increasing length of the observation interval (data symbol sequence length). Unfortunately, increasing the observation length does not eliminate the error floor associated with the finite loop SNR. Nevertheless, in the limit of infinite observation length, the average error probability performance approaches the algebraic sum of the error floor and the performance of ideal coherent detection; i.e., at any error probability above the error floor, there is no degradation due to the partial coherence. It is shown that this limiting behavior is virtually achievable with practical-size observation lengths. Furthermore, the performance is quite insensitive to mismatch between the estimate of loop SNR (e.g., obtained from measurement) fed to the decision metric and its true value. These results may be of use in low-cost Earth-orbiting or deep-space missions employing coded modulations.
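The weighted-sum structure described above can be illustrated schematically: a block correlation z is scored by w·Re{z} + (1 − w)·|z|, with the weight w standing in for the dependence on loop SNR. The exact weighting in the paper's ML metric is not reproduced here, and the candidate search and numbers below are hypothetical:

```python
import numpy as np
from itertools import product

QPSK = np.exp(1j * np.pi / 2 * np.arange(4))   # QPSK constellation points

def metric(r, a, w):
    """Weighted sum of the coherent (Re) and noncoherent (abs) block metrics."""
    z = np.sum(r * np.conj(a))
    return w * z.real + (1 - w) * abs(z)

# Transmitted 3-symbol sequence, received with a residual carrier phase error
tx = QPSK[[0, 1, 3]]
rx = tx * np.exp(1j * 0.3)                     # 0.3 rad phase offset, no noise

# Exhaustive search over candidate sequences, maximizing the weighted metric
best = max(product(QPSK, repeat=3), key=lambda a: metric(rx, np.array(a), w=0.7))
print(np.allclose(best, tx))                   # True: correct sequence chosen
```

As the abstract notes, the coherent term dominates for high loop SNR (w → 1) and the noncoherent term dominates as the phase reference degrades (w → 0); the sketch fixes w by hand rather than deriving it.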
NASA Astrophysics Data System (ADS)
Malmberg, Filip; Sandberg-Melin, Camilla; Söderberg, Per G.
2016-03-01
The aim of this project was to investigate the possibility of using OCT optic nerve head 3D information, captured with a Topcon OCT 2000 device, to measure the shortest distance between the inner limit of the retina and the central limit of the pigment epithelium around the circumference of the optic nerve head. The shortest distance between these boundaries reflects the nerve fiber layer thickness, and its measurement is of interest for the follow-up of glaucoma.
NASA Astrophysics Data System (ADS)
Chatzidakis, S.; Choi, C. K.; Tsoukalas, L. H.
2016-08-01
The potential for non-proliferation monitoring of spent nuclear fuel sealed in dry casks using the naturally generated cosmic-ray muons that continuously interact with the casks is investigated. Treatments of the muon RMS scattering angle by Molière, Rossi-Greisen, Highland, and Lynch-Dahl were analyzed and compared with simplified Monte Carlo simulations. The Lynch-Dahl expression has the lowest error and appears to be appropriate for conceptual calculations on high-Z, thick targets such as dry casks. The GEANT4 Monte Carlo code was used to simulate dry casks with various fuel loadings, and scattering variance estimates for each case were obtained. The scattering variance estimation was shown to be unbiased and, using Chebyshev's inequality, it was found that 10^6 muons will provide estimates of the scattering variances that are within 1% of the true value at a 99% confidence level. These estimates were used as reference values to calculate scattering distributions and evaluate the asymptotic behavior for small variations in fuel loading. It is shown that the scattering distributions for a fully loaded dry cask and one with a fuel assembly missing initially overlap significantly, but their distance eventually increases with increasing number of muons. One missing fuel assembly can be distinguished from a fully loaded cask with only small overlap between the distributions, which is the case at 100,000 muons. This indicates that the removal of a standard fuel assembly can be identified using muons, provided that enough muons are collected. A Bayesian algorithm was developed to classify dry casks and provide a decision rule that minimizes the risk of making an incorrect decision. The algorithm performance was evaluated and the lower detection limit was determined.
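The Chebyshev argument above can be reproduced generically. For a Gaussian-distributed scattering angle, the sample variance s² has Var(s²) = 2σ⁴/(n − 1), and Chebyshev's inequality P(|s² − σ²| ≥ εσ²) ≤ Var(s²)/(εσ²)² gives the muon count needed for relative error ε at risk δ. The Gaussian assumption is mine; the paper's actual scattering distribution yields its quoted 10⁶ figure, and this sketch lands at the same order of magnitude:

```python
from math import ceil

def muons_needed(eps, delta):
    """Smallest n with Chebyshev bound 2 / ((n - 1) * eps^2) <= delta
    for the relative error of a Gaussian sample variance."""
    # 2*sigma^4/(n-1) / (eps*sigma^2)^2 <= delta  ->  n >= 1 + 2/(eps^2 * delta)
    return ceil(1 + 2 / (eps**2 * delta))

n = muons_needed(eps=0.01, delta=0.01)   # within 1% at 99% confidence
print(n)                                  # ~2e6: same order as the paper's 10^6
```

Chebyshev is distribution-free in the risk bound, so this kind of estimate is conservative; a distribution-specific analysis like the paper's can only reduce the required count.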
Bondu, Joseph Dian; Selvakumar, R; Fleming, Jude Joseph
2018-01-01
A variety of methods, including the ion selective electrode (ISE), have been used for estimation of fluoride levels in drinking water. As these methods suffer from many drawbacks, the newer method of ion chromatography (IC) has replaced many of them. The study aimed at (1) validating IC for estimation of fluoride levels in drinking water and (2) assessing the drinking water fluoride levels of villages in and around Vellore district using IC. Forty-nine paired drinking water samples were measured using the ISE and IC methods (Metrohm). Water samples from 165 randomly selected villages in and around Vellore district were collected for fluoride estimation over 1 year. Standardization of the IC method showed good within-run precision, linearity and coefficient of variation, with a correlation coefficient R² = 0.998. The limit of detection was 0.027 ppm and the limit of quantification was 0.083 ppm. Among the 165 villages, 46.1% recorded water fluoride levels >1.00 ppm, of which 19.4% had levels ranging from 1 to 1.5 ppm, 10.9% had levels of 1.5-2 ppm and about 12.7% had levels of 2.0-3.0 ppm. Three percent of the villages had more than 3.0 ppm fluoride in the water tested. Most (44.42%) of these villages belonged to Jolarpet taluk, with moderate to high (0.86-3.56 ppm) water fluoride levels. The IC method has been validated and is therefore a reliable method for assessment of fluoride levels in drinking water. The residents of Jolarpet taluk (Vellore district) are found to be at high risk of developing dental and skeletal fluorosis.
A comparison of two ELISAs for the detection of antibodies to bovine leucosis virus in bulk-milk.
Ridge, S E; Galvin, J W
2005-07-01
To estimate the sensitivity, specificity and detection limits for two bulk-milk enzyme-linked immunosorbent assays, the Svanovir BLV-gp51-Ab and the Lactelisa BLV Ab Bi indirect tank 250, for the detection of antibody to bovine leucosis virus in milk. Milk samples from 27 cows known to have enzootic bovine leucosis (EBL) were serially diluted with milk from a herd known to be free from the disease. The dilution at which antibodies could no longer be detected by each test was determined. A total of 1959 bulk-milk samples submitted to a laboratory for the Victorian (EBL) eradication program were tested with both the Svanovir and the Lactelisa assays. A Bayesian approach was used to calculate maximum-likelihood estimates of test sensitivity and specificity. An additional 660 bulk-milk samples were tested with both the Svanovir and the Lactelisa assays. Herds that had positive results on either or both of the assays were subjected to blood or milk testing of individual cattle. The dilution of milk at which the Svanovir assay failed to detect enzootic bovine leucosis antibody in half of the samples was 1 in 40, whereas the comparable value for the Lactelisa was 1 in 200. Computer modeling of the operating characteristics of the Svanovir assay indicated that the sensitivity of that assay would be considerably lower than that for the Lactelisa, and the specificity was estimated to be higher. Evaluation of the assays using 660 bulk-milk samples showed that the Lactelisa assay detected four infected herds that were not detected by the Svanovir test. No false positive results were recorded for either assay. Use of the Lactelisa assay in the Victorian EBL eradication program will enhance disease detection and eradication, but may also result in an increased frequency of false positive bulk-milk test results.
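Point estimates of sensitivity and specificity from herd-level confirmation data carry binomial uncertainty, and a Wilson score interval is a standard way to quantify it. The counts below are hypothetical, not the study's data; they merely illustrate why a zero-false-positive result over a few hundred herds still leaves the specificity lower bound below 100%:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical counts: 26/27 infected herds detected, 0/633 false positives
sens_lo, sens_hi = wilson_ci(26, 27)
spec_lo, spec_hi = wilson_ci(633, 633)
print(sens_lo, sens_hi)   # sensitivity interval straddles 26/27
print(spec_lo, spec_hi)   # specificity lower bound is below 1.0
```

The Wilson interval is preferred over the naive normal approximation precisely at the extremes (proportions near 0 or 1) that test evaluations like this one produce.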
1.25-mm observations of luminous infrared galaxies
NASA Technical Reports Server (NTRS)
Carico, David P.; Keene, Jocelyn; Soifer, B. T.; Neugebauer, G.
1992-01-01
Measurements at a wavelength of 1.25 mm have been obtained for 17 IRAS galaxies selected on the basis of high far-infrared luminosity. These measurements are used to estimate the lower and upper limits to the mass of cold dust in infrared galaxies. As a lower limit on dust mass, all of the galaxies can be successfully modeled without invoking any dust colder than the dust responsible for the 60 and 100 micron emission that was detected by IRAS. As an upper limit, it is possible that the dust mass in a number of the galaxies may actually be dominated by cold dust. This large difference between the lower and upper limits is due primarily to uncertainty in the long-wavelength absorption efficiency of the astrophysical dust grains.
Meng, Bowen; Lee, Ho; Xing, Lei; Fahimian, Benjamin P.
2013-01-01
Purpose: X-ray scatter results in a significant degradation of image quality in computed tomography (CT), representing a major limitation in cone-beam CT (CBCT) and large field-of-view diagnostic scanners. In this work, a novel scatter estimation and correction technique is proposed that utilizes peripheral detection of scatter during the patient scan to acquire image and patient-specific scatter information simultaneously in a single scan, in conjunction with a proposed compressed sensing scatter recovery technique to reconstruct and correct for the patient-specific scatter in the projection space. Methods: The method consists of the detection of patient scatter at the edges of the field of view (FOV) followed by measurement-based compressed sensing recovery of the scatter throughout the projection space. In the prototype implementation, the kV x-ray source of the Varian TrueBeam OBI system was blocked at the edges of the projection FOV, and the image detector in the corresponding blocked region was used for scatter detection. The design enables acquisition of projection data on the unblocked central region of the FOV and of scatter data at the blocked boundary regions. For the initial scatter estimation on the central FOV, a prior consisting of a hybrid scatter model that combines the scatter interpolation method and the scatter convolution model is estimated using the acquired scatter distribution in the boundary region. With the hybrid scatter estimation model, compressed sensing optimization is performed to generate the scatter map by penalizing the L1 norm of the discrete cosine transform of the scatter signal. The estimated scatter is subtracted from the projection data by soft-tuning, and the scatter-corrected CBCT volume is obtained by the conventional Feldkamp-Davis-Kress algorithm. Experimental studies using image quality and anthropomorphic phantoms on a Varian TrueBeam system were carried out to evaluate the performance of the proposed scheme.
Results: The scatter shading artifacts were markedly suppressed in the reconstructed images using the proposed method. On the Catphan 504 phantom, the proposed method reduced the error of CT number to 13 Hounsfield units, 10% of that without scatter correction, and increased the image contrast by a factor of 2 in high-contrast regions. On the anthropomorphic phantom, the spatial nonuniformity decreased from 10.8% to 6.8% after correction. Conclusions: A novel scatter correction method, enabling unobstructed acquisition of the high-frequency image data and concurrent detection of the patient-specific low-frequency scatter data at the edges of the FOV, is proposed and validated in this work. Rather than obstructing the central portion of the FOV, which degrades and limits the image reconstruction as in blocker-based techniques, compressed sensing is used to solve for the scatter from its detection at the periphery of the FOV, enabling the highest-quality reconstruction in the central region and robust patient-specific scatter correction. PMID:23298098
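The compressed-sensing step described above can be illustrated in one dimension: a smooth scatter profile is sampled only at the blocked FOV edges and recovered by iterative soft-thresholding (ISTA) of its DCT coefficients. All signals and parameters below are synthetic; this is a sketch of the idea, not the authors' implementation:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 1, n)

# Synthetic low-frequency "scatter" profile across one detector row
true_scatter = 100 * np.exp(-((x - 0.5) ** 2) / 0.08) + 20

# Scatter is measured only at the blocked edges of the FOV
mask = np.zeros(n, dtype=bool)
mask[:20] = mask[-20:] = True
measured = true_scatter + rng.normal(0, 1.0, n)  # noisy detector reading

# ISTA: minimize ||s[mask] - measured[mask]||^2 + lam * ||DCT(s)||_1
lam, step = 0.5, 1.0
s = np.zeros(n)
for _ in range(500):
    grad = np.zeros(n)
    grad[mask] = s[mask] - measured[mask]        # data-fit gradient
    c = dct(s - step * grad, norm='ortho')
    c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0)  # soft-threshold
    s = idct(c, norm='ortho')

err = np.mean(np.abs(s[mask] - true_scatter[mask]))
print(f"mean abs error on measured region: {err:.2f}")
```

The L1 penalty on DCT coefficients encodes the assumption, also used in the paper, that scatter is a low-frequency signal; the data-fit term acts only where scatter was actually measured.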
NASA Astrophysics Data System (ADS)
Behringer, Reinhold
1995-12-01
A system for visual road recognition at far look-ahead distances, implemented in the autonomous road vehicle VaMP (a passenger car), is described. Visual cues of a road in a video image are the bright lane markings and the edges formed at the road borders. At distances of more than 100 m, the most relevant road cue is the homogeneous road area, bounded by the two border edges. These cues can be detected by the image processing module KRONOS, applying edge detection techniques and areal 2D segmentation based on resolution triangles (analogous to a resolution pyramid). An estimation process performs an update of a state vector, which describes the spatial road shape and the vehicle orientation relative to the road. This state vector is estimated every 40 ms by exploiting knowledge about the vehicle movement (spatio-temporal model of vehicle dynamics) and the road design rules (clothoidal segments). Kalman filter techniques are applied to obtain an optimal estimate of the state vector by evaluating the measurements of the road border positions in the image sequence taken by a set of CCD cameras. The road consists of segments with piecewise constant curvature parameters. The borders between these segments can be detected by applying methods developed for detecting discontinuities in time-discrete measurements. The road recognition system has been tested in autonomous rides with VaMP on public Autobahnen in real traffic at speeds up to 130 km/h.
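The Kalman filter update described above can be sketched with a toy clothoid model: the state holds curvature and curvature rate, propagated with the vehicle speed each 40 ms video cycle, and the measurement is the lateral border offset at the look-ahead distance. All parameter values are illustrative, not the VaMP system's:

```python
import numpy as np

# State: [c0, c1] = curvature and curvature rate (clothoid parameters).
# Measurement: lateral offset of the lane border at look-ahead distance L,
#   y = c0*L^2/2 + c1*L^3/6 + noise (small-angle approximation).
v, dt = 36.0, 0.04          # vehicle speed (m/s), video cycle (40 ms)
L = 100.0                   # look-ahead distance (m)
F = np.array([[1.0, v * dt], [0.0, 1.0]])   # clothoid propagation
H = np.array([[L**2 / 2.0, L**3 / 6.0]])    # offset measurement model
Q = np.diag([1e-10, 1e-12])                 # process noise
R = np.array([[0.25]])                      # measurement noise (m^2)

rng = np.random.default_rng(1)
true_c = np.array([1e-3, 0.0])              # 1000 m radius curve
x = np.zeros(2)                             # initial estimate
P = np.diag([1e-4, 1e-6])

for _ in range(200):                        # 8 s of driving
    y = H @ true_c + rng.normal(0, 0.5, 1)  # simulated border offset
    x, P = F @ x, F @ P @ F.T + Q           # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ (y - H @ x)).ravel()       # update
    P = (np.eye(2) - K @ H) @ P

print(f"estimated curvature: {x[0]:.2e} 1/m (true 1.00e-03)")
```

Even though each frame measures only one linear combination of the two state components, the clothoid dynamics couple them over time, which is what makes the pair observable.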
Nocturnal field use by fall migrating American woodcock in the Delta of Arkansas
Krementz, David G.; Crossett, Richard; Lehnen, Sarah E.
2014-01-01
The American woodcock (Scolopax minor) population has declined since the late 1960s across its range and is now considered a species of special concern. Research on woodcock habitat use during migration and migratory routes through the Central Flyway has been limited. We assessed woodcock phenology, estimated density, and nocturnal habitat use in fields on public lands in the lower Mississippi Alluvial Valley portion of Arkansas during November and December of 2010 and 2011. We used all-terrain vehicles to survey woodcock along transects in 67 fields of 8 field types. We analyzed data using hierarchical distance sampling. We detected woodcock from the first week in November through the third week in December but in low numbers. We did not detect woodcock in millet or rice fields, whereas woodcock had the highest estimated densities in unharvested soybeans. All other crop type-post-harvest management combinations had low woodcock densities. We did not detect woodcock in fields <8 ha or >40 ha. Woodcock in the lower Mississippi Alluvial Valley may benefit from management for unharvested soybean fields of moderate size (approximately 8-40 ha).
Miladinovic, Branko; Kumar, Ambuj; Mhaskar, Rahul; Djulbegovic, Benjamin
2014-10-21
To understand how often 'breakthroughs,' that is, treatments that significantly improve health outcomes, can be developed, we applied weighted adaptive kernel density estimation to construct the probability density function for observed treatment effects from five publicly funded cohorts and one privately funded group. 820 trials involving 1064 comparisons and enrolling 331,004 patients were conducted by five publicly funded cooperative groups. 40 cancer trials involving 50 comparisons and enrolling a total of 19,889 patients were conducted by GlaxoSmithKline. We calculated that the probability of detecting a treatment with large effects is 10% (5-25%), and that the probability of detecting a treatment with very large treatment effects is 2% (0.3-10%). Researchers themselves judged that they discovered a new, breakthrough intervention in 16% of trials. We propose these figures as the benchmarks against which future development of 'breakthrough' treatments should be measured. Published by the BMJ Publishing Group Limited.
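The weighted kernel density estimation step can be sketched as follows; the effects, weights, and "large effect" threshold here are synthetic illustrations, not the paper's data or definitions:

```python
import numpy as np
from scipy.special import erf

# Weighted Gaussian KDE over trial treatment effects (e.g. log hazard
# ratios), then the estimated probability of a "large" effect.
rng = np.random.default_rng(2)
effects = rng.normal(loc=-0.05, scale=0.25, size=500)  # log(HR) per trial
weights = rng.uniform(0.5, 2.0, size=500)              # e.g. trial size
weights = weights / weights.sum()

# Weighted Silverman-style bandwidth (simple plug-in choice)
mu = np.sum(weights * effects)
sd = np.sqrt(np.sum(weights * (effects - mu) ** 2))
h = 1.06 * sd * len(effects) ** (-1 / 5)

def kde_cdf(t, x, w, h):
    """P(effect <= t) under the weighted KDE with Gaussian kernels."""
    z = (t - x) / (h * np.sqrt(2))
    return float(np.sum(w * 0.5 * (1.0 + erf(z))))

# Illustrative "large effect" cutoff: hazard ratio at most 0.5
p_large = kde_cdf(np.log(0.5), effects, weights, h)
print(f"P(HR <= 0.5) = {p_large:.3f}")
```

Each Gaussian kernel integrates to a closed-form normal CDF, so tail probabilities come directly from the weighted sum rather than from numerical integration.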
Automated face detection for occurrence and occupancy estimation in chimpanzees.
Crunchant, Anne-Sophie; Egerer, Monika; Loos, Alexander; Burghardt, Tilo; Zuberbühler, Klaus; Corogenes, Katherine; Leinert, Vera; Kulik, Lars; Kühl, Hjalmar S
2017-03-01
Surveying endangered species is necessary to evaluate conservation effectiveness. Recent technological advances such as camera trapping and biometric computer vision have reshaped the methods applicable to field surveys, and these methods have gained significant momentum over the last decade. Yet most researchers inspect footage manually, and few studies have used automated semantic processing of video trap data from the field. The particular aim of this study is to evaluate methods that incorporate automated face detection technology as an aid to estimate site use of two chimpanzee communities based on camera trapping. As a comparative baseline we employ traditional manual inspection of footage. Our analysis focuses specifically on the basic parameter of occurrence, where we assess the performance and practical value of chimpanzee face detection software. We found that the semi-automated data processing required only 2-4% of the time needed for the purely manual analysis. This is a substantial increase in efficiency that is critical when assessing the feasibility of camera trap occupancy surveys. Our evaluations suggest that our methodology estimates the proportion of sites used relatively reliably. Chimpanzees are mostly detected when they are present and when videos are filmed in high resolution: the highest recall rate was 77%, at a false alarm rate of 2.8%, for videos containing only chimpanzee frontal face views. Certainly, our study is only a first step toward transferring face detection software from the lab into field application. Our results are promising and indicate that the current limitation of detecting chimpanzees in camera trap footage due to lack of suitable face views can easily be overcome at the level of field data collection, that is, by the combined placement of multiple high-resolution cameras facing opposite directions.
This will make it possible to routinely conduct chimpanzee occupancy surveys based on camera trapping and semi-automated processing of footage. Using semi-automated ape face detection technology for processing camera trap footage requires only 2-4% of the time needed for manual analysis and allows site use by chimpanzees to be estimated relatively reliably. © 2017 Wiley Periodicals, Inc.
Photoacoustic spectroscopy of CO2 laser in the detection of gaseous molecules
NASA Astrophysics Data System (ADS)
Lima, G. R.; Sthel, M. S.; da Silva, M. G.; Schramm, D. U. S.; de Castro, M. P. P.; Vargas, H.
2011-01-01
The detection of trace gases is very important for a variety of applications, including the monitoring of atmospheric pollutants, industrial process control, measuring air quality in workplaces, research into the physiological processes of fruits, and medical diagnosis of diseases through the analysis of exhaled gases. The implementation of these and many other applications requires gas sensors that combine high sensitivity and selectivity. In this work, a photoacoustic laser spectrometer with CO2 emission in the infrared range and a resonant photoacoustic cell was used. We obtained a resonance frequency of 2.4 kHz for the photoacoustic cell, and the detection limits of the spectrometer were estimated to be 16 ppbV for ethylene (C2H4) and 42 ppbV for ammonia (NH3).
Detection of Single Molecules Illuminated by a Light-Emitting Diode
Gerhardt, Ilja; Mai, Lijian; Lamas-Linares, Antía; Kurtsiefer, Christian
2011-01-01
Optical detection and spectroscopy of single molecules has become an indispensable tool in biological imaging and sensing. Its success is based on fluorescence of organic dye molecules under carefully engineered laser illumination. In this paper we demonstrate optical detection of single molecules on a wide-field microscope with an illumination based on a commercially available, green light-emitting diode. The results are directly compared with laser illumination in the same experimental configuration. The setup and the limiting factors, such as light transfer to the sample, spectral filtering and the resulting signal-to-noise ratio are discussed. A theoretical and an experimental approach to estimate these parameters are presented. The results can be adapted to other single emitter and illumination schemes. PMID:22346610
A Search for Neutrinos from Fast Radio Bursts with IceCube
NASA Astrophysics Data System (ADS)
Fahey, Samuel; Kheirandish, Ali; Vandenbroucke, Justin; Xu, Donglian
2017-08-01
We present a search for neutrinos in coincidence in time and direction with four fast radio bursts (FRBs) detected by the Parkes and Green Bank radio telescopes during the first year of operation of the complete IceCube Neutrino Observatory (2011 May through 2012 May). The neutrino sample consists of 138,322 muon neutrino candidate events, which are dominated by atmospheric neutrinos and atmospheric muons but also contain an astrophysical neutrino component. Considering only neutrinos detected on the same day as each FRB, zero IceCube events were found to be compatible with the FRB directions within the estimated 99% error radius of the neutrino directions. Based on the non-detection, we present the first upper limits on the neutrino fluence from FRBs.
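Upper limits from a null result of this kind rest on Poisson counting statistics: with zero observed events and negligible expected background, the classical 90% confidence upper limit on the mean event count is -ln(0.1) ≈ 2.3 events, which is then converted to a fluence limit through the detector response. A small sketch of that standard counting calculation (a generic illustration, not IceCube's exact limit-setting procedure):

```python
import math

def poisson_upper_limit(n_obs: int, cl: float = 0.9) -> float:
    """Classical upper limit on the Poisson mean for n_obs observed events
    (zero assumed background): solve P(N <= n_obs | mu) = 1 - CL.
    For n_obs = 0 this reduces to mu = -ln(1 - CL)."""
    lo, hi = 0.0, 100.0
    for _ in range(200):  # bisection; P(N <= n_obs | mu) decreases in mu
        mid = 0.5 * (lo + hi)
        p = math.exp(-mid) * sum(mid**k / math.factorial(k)
                                 for k in range(n_obs + 1))
        lo, hi = (mid, hi) if p > 1 - cl else (lo, mid)
    return 0.5 * (lo + hi)

mu90 = poisson_upper_limit(0, 0.9)
print(f"90% upper limit for zero observed events: {mu90:.2f} events")
# The fluence limit then follows by dividing by the detector response
# (effective area integrated over the assumed spectrum), not shown here.
```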
Ultimate limits for quantum magnetometry via time-continuous measurements
NASA Astrophysics Data System (ADS)
Albarelli, Francesco; Rossi, Matteo A. C.; Paris, Matteo G. A.; Genoni, Marco G.
2017-12-01
We address the estimation of the magnetic field B acting on an ensemble of atoms with total spin J subjected to collective transverse noise. By preparing an initial spin coherent state, for any measurement performed after the evolution, the mean-square error of the estimate is known to scale as 1/J, i.e. no quantum enhancement is obtained. Here, we consider the possibility of continuously monitoring the atomic environment, and conclusively show that strategies based on time-continuous non-demolition measurements followed by a final strong measurement may achieve Heisenberg-limited scaling 1/J² and also a monitoring-enhanced scaling in terms of the interrogation time. We also find that time-continuous schemes are robust against detection losses, as we prove that the quantum enhancement can be recovered also for finite measurement efficiency. Finally, we analytically prove the optimality of our strategy.
Overview of MPLNET Version 3 Cloud Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip
2016-01-01
The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occurs for high clouds (above 5 km above mean sea level), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.
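The first (derivative-based) detection method can be illustrated on a synthetic profile: flag altitudes where the vertical derivative of the signal rises well above a robust noise estimate. The profile, thresholds, and cloud layer below are invented for illustration, not MPLNET data or the algorithm's actual criteria:

```python
import numpy as np

rng = np.random.default_rng(3)
z = np.arange(0.0, 15.0, 0.075)              # altitude bins, km
signal = np.exp(-z / 7.0)                    # smooth background profile
cloud = (z > 2.0) & (z < 2.5)                # synthetic cloud layer
signal[cloud] += 5.0 * np.exp(-((z[cloud] - 2.2) ** 2) / 0.005)
signal += rng.normal(0, 0.01, z.size)        # detector noise

dsdz = np.gradient(signal, z)                # vertical derivative
# Robust noise scale of the derivative via the median absolute deviation
noise = 1.4826 * np.median(np.abs(dsdz - np.median(dsdz)))
candidates = z[dsdz > 5 * noise]             # strong positive gradient

cloud_base = float(candidates.min()) if candidates.size else None
print("detected cloud base near",
      f"{cloud_base:.2f} km" if cloud_base is not None else "none")
```

The MAD-based noise estimate keeps the threshold insensitive to the cloud spike itself, which would inflate an ordinary standard deviation.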
Carpenter, Tim E; O'Brien, Joshua M; Hagerman, Amy D; McCarl, Bruce A
2011-01-01
The epidemic and economic impacts of Foot-and-mouth disease virus (FMDV) spread and control were examined by using epidemic simulation and economic (epinomic) optimization models. The simulated index herd was a ≥2,000-cow dairy located in California. Simulated disease spread was limited to California; however, economic impact was assessed throughout the United States and included international trade effects. Five index case detection delays were examined, ranging from 7 to 22 days. The simulated median number of infected premises (IP) ranged from approximately 15 to 745, increasing as the detection delay increased from 7 to 22 days. Similarly, the median number of herds under quarantine increased from approximately 680 to 6,200, whereas animals slaughtered went from approximately 8,700 to 260,400 for detection delays of 7-22 days, respectively. The median economic impact of an FMD outbreak in California was estimated to result in national agriculture welfare losses of $2.3-$69.0 billion as detection delay increased from 7 to 22 days, respectively. Assuming a detection delay of 21 days, it was estimated that every additional hour of delay would result in approximately 2,000 additional animals slaughtered and an additional economic loss of $565 million. These findings underline the critical importance of the United States having an effective early detection system in place before an introduction of FMDV if it hopes to avoid dramatic losses to both livestock and the economy.
Detecting trihalomethanes using nanoporous-carbon coated surface-acoustic-wave sensors
Siegal, Michael P.; Mowry, Curtis D.; Pfeifer, Kent B.; ...
2015-03-07
We study nanoporous-carbon (NPC) grown via pulsed laser deposition (PLD) as a sorbent coating on 96.5-MHz surface-acoustic-wave (SAW) devices to detect trihalomethanes (THMs), regulated byproducts from the chemical treatment of drinking water. Using both insertion-loss and isothermal-response measurements from known quantities of chloroform, the highest vapor pressure THM, we optimize the NPC mass-density at 1.05 ± 0.08 g/cm3 by controlling the background argon pressure during PLD. Precise THM quantities in a chlorobenzene solvent are directly injected into a separation column and detected as the phase-angle shift of the SAW device output compared to the drive signal. Using optimized NPC-coated SAWs, we study the chloroform response as a function of operating temperatures ranging from 10-50°C. Finally, we demonstrate individual responses from complex mixtures of all four THMs, with masses ranging from 10-2000 ng, after gas chromatography separation. As a result, estimates for each THM detection limit using a simple peak-height response evaluation are 4.4 ng for chloroform and 1 ng for bromoform; using an integrated-peak-area response analysis improves the detection limits to 0.73 ng for chloroform and 0.003 ng for bromoform.
OH detection by Ford Motor Company
NASA Technical Reports Server (NTRS)
Wang, Charles C.
1986-01-01
Two different methods for detection of OH are presented: a low-pressure flow cell system and a frequency-modulation absorption measurement. Using conventional absorption spectroscopy, detection limits were quoted of 1,000,000 OH molecules per cu cm using a 30-minute averaging time on the ground, and a 3-hour averaging time in the air, for the present apparatus in use. With the addition of FM spectroscopy at 1 GHz, a double-beam machine should permit detectable absorption of and an OH limit of 100,000 per cu cm in a 30-minute averaging time. In the low-pressure system, on which experiments are ongoing, nonexponential time behavior was observed after the decay had progressed to about 0.3 of its original level; this was attributed to ion emission in the photomultiplier. A flame source with OH present at high concentration levels was used as a calibration. It was estimated that, within the sampling chamber, 400,000 OH could be measured. With a factor-of-2 loss at the sampling orifice, this means detectability of 5 to 8 x 100,000 per cu cm at the present time. This could be reduced by a factor of 2 in a one-hour averaging time; improvements in laser bandwidth and energy should provide another factor of 2 in sensitivity.
Granja, Rodrigo H M M; Niño, Alfredo M Montes; Zucchetti, Roberto A M; Niño, Rosario E Montes; Salerno, Alessandro G
2008-01-01
Ethopabate is frequently used in the prophylaxis and treatment of coccidiosis in poultry. Residues of this drug in food present a potential risk to consumers. A simple, rapid, and sensitive high-performance liquid chromatographic (HPLC) method with UV detection for determination of ethopabate in poultry liver is presented. The drug is extracted with acetonitrile. After evaporation, the residue is dissolved with an acetone-hexane mixture and cleaned up by solid-phase extraction using Florisil columns. The analyte is then eluted with methanol. LC analysis is carried out on a C18 5 microm Gemini column, 15 cm x 4.6 mm. Ethopabate is quantified by means of UV detection at 270 nm. Parameters such as decision limit, detection capability, precision, recovery, ruggedness, and measurement uncertainty were calculated according to method validation guidelines provided in 2002/657/EC and ISO/IEC 17025:2005. Decision limit and detection capability were determined to be 2 and 3 microg/kg, respectively. Average recoveries from poultry samples fortified with 10, 15, and 20 microg/kg levels of ethopabate were 100-105%. A complete statistical analysis was performed on the results obtained, including an estimation of the method uncertainty. The method is to be implemented in Brazil's residue monitoring and control program for ethopabate.
Li, P; Jia, J W; Jiang, L X; Zhu, H; Bai, L; Wang, J B; Tang, X M; Pan, A H
2012-04-27
To ensure the implementation of genetically modified organism (GMO)-labeling regulations, an event-specific detection method was developed based on the junction sequence of an exogenous integrant in the transgenic carnation variety Moonlite. The 5'-transgene integration sequence was isolated by thermal asymmetric interlaced PCR. Based upon the 5'-transgene integration sequence, event-specific primers and a TaqMan probe were designed to amplify fragments spanning the exogenous DNA and carnation genomic DNA. Qualitative and quantitative PCR assays were developed employing the designed primers and probe. The detection limit of the qualitative PCR assay was 0.05% for Moonlite in 100 ng total carnation genomic DNA, corresponding to about 79 copies of the carnation haploid genome; the limits of detection and quantification of the quantitative PCR assay were estimated to be 38 and 190 copies of haploid carnation genomic DNA, respectively. Carnation samples with different contents of genetically modified components were quantified, and the bias between the observed and true values of three samples was lower than the acceptance criterion (<25%) of the GMO detection method. These results indicated that these event-specific methods would be useful for the identification and quantification of the GMO carnation Moonlite.
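The copy-number figures can be sanity-checked from the DNA mass, assuming a carnation haploid genome of roughly 613 Mbp and an average of 660 g/mol per base pair (both values are assumptions for illustration; the authors' conversion constants may differ):

```python
# Back-of-envelope check: how many haploid genome copies does 0.05% of
# 100 ng of carnation DNA correspond to?
AVOGADRO = 6.022e23
genome_bp = 613e6                            # assumed haploid genome, bp
genome_mass_g = genome_bp * 660 / AVOGADRO   # mass of one haploid genome

total_dna_g = 100e-9          # 100 ng template per reaction
fraction_gm = 0.0005          # 0.05% GM content (qualitative LOD)

copies = total_dna_g * fraction_gm / genome_mass_g
print(f"~{copies:.0f} haploid genome copies at 0.05% of 100 ng")
```

With these assumed constants the result lands in the same range as the quoted value of about 79 copies.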
Carbon dots based fluorescent sensor for sensitive determination of hydroquinone.
Ni, Pengjuan; Dai, Haichao; Li, Zhen; Sun, Yujing; Hu, Jingting; Jiang, Shu; Wang, Yilin; Li, Zhuang
2015-11-01
In this paper, a novel biosensor based on carbon dots (C-dots) for sensitive detection of hydroquinone (H2Q) is reported. Interestingly, the fluorescence of the C-dots can be quenched by H2Q directly. A possible quenching mechanism is proposed, in which the quenching effect may be caused by electron transfer from the C-dots to oxidized H2Q-quinone. Based on this principle, a novel C-dots-based fluorescent probe has been successfully applied to detect H2Q. Under optimal conditions, a detection limit of 0.1 μM is obtained, which is far below the U.S. Environmental Protection Agency estimated wastewater discharge limit of 0.5 mg/L. Moreover, the proposed method shows high selectivity for H2Q over a number of potential interfering species. Finally, several water samples spiked with H2Q were analyzed using the sensing method, with satisfactory recovery. The proposed method is simple, with high sensitivity and excellent selectivity, and provides a new approach for the detection of various analytes that can be transformed into quinones. Copyright © 2015 Elsevier B.V. All rights reserved.
van Mourik, Maaike S M; van Duijn, Pleun Joppe; Moons, Karel G M; Bonten, Marc J M; Lee, Grace M
2015-01-01
Objective: Measuring the incidence of healthcare-associated infections (HAI) is of increasing importance in current healthcare delivery systems. Administrative data algorithms, including (combinations of) diagnosis codes, are commonly used to determine the occurrence of HAI, either to support within-hospital surveillance programmes or as free-standing quality indicators. We conducted a systematic review evaluating the diagnostic accuracy of administrative data for the detection of HAI. Methods: Systematic search of Medline, Embase, CINAHL and Cochrane for relevant studies (1995–2013). Methodological quality assessment was performed using QUADAS-2 criteria; diagnostic accuracy estimates were stratified by HAI type and key study characteristics. Results: 57 studies were included, the majority aiming to detect surgical site or bloodstream infections. Study designs were very diverse regarding the specification of their administrative data algorithm (code selections, follow-up) and definitions of HAI presence. One-third of studies had important methodological limitations including differential or incomplete HAI ascertainment or lack of blinding of assessors. Observed sensitivity and positive predictive values of administrative data algorithms for HAI detection were very heterogeneous and generally modest at best, both for within-hospital algorithms and for formal quality indicators; accuracy was particularly poor for the identification of device-associated HAI such as central line associated bloodstream infections. The large heterogeneity in study designs across the included studies precluded formal calculation of summary diagnostic accuracy estimates in most instances. Conclusions: Administrative data had limited and highly variable accuracy for the detection of HAI, and their judicious use for internal surveillance efforts and external quality assessment is recommended.
If hospitals and policymakers choose to rely on administrative data for HAI surveillance, continued improvements to existing algorithms and their robust validation are imperative. PMID:26316651
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodman, B.W.; Begley, J.A.; Brown, S.D.
1995-12-01
The analysis of the issue of upper bundle axial ODSCC as it applies to steam generator tube structural integrity in Unit 1 at the Palo Verde Nuclear Generating Station is presented in this study. Based on past inspection results for Units 2 and 3 at Palo Verde, the detection of secondary side stress corrosion cracks in the upper bundle region of Unit 1 may occur at some future date. The following discussion provides a description and analysis of the probability of axial ODSCC in Unit 1 leading to the exceedance of Regulatory Guide 1.121 structural limits. The probabilities of structural limit exceedance are estimated as a function of run time using a conservative approach. The chosen approach models the historical development of cracks, crack growth, detection of cracks and subsequent removal from service, and the initiation and growth of new cracks during a given cycle of operation. Past performance of all Palo Verde Units as well as the historical performance of other steam generators was considered in the development of cracking statistics for application to Unit 1. Data in the literature and Unit 2 pulled tube examination results were used to construct probability of detection curves for the detection of axial IGSCC/IGA using an MRPC (multi-frequency rotating pancake coil) eddy current probe. Crack growth rates were estimated from Unit 2 eddy current inspection data combined with pulled tube examination results and data in the literature. A Monte Carlo probabilistic model is developed to provide an overall assessment of the risk of Regulatory Guide exceedance during plant operation.
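The Monte Carlo structure described (crack initiation, growth, inspection against a probability-of-detection curve, and counting exceedances of a structural limit) can be sketched as follows. Every distribution and parameter below is invented for illustration; none comes from the Palo Verde analysis:

```python
import numpy as np

rng = np.random.default_rng(4)

def pod(size_mm):
    """Logistic probability of detection vs. crack size (illustrative)."""
    return 1.0 / (1.0 + np.exp(-(size_mm - 5.0) / 1.0))

def one_trial(cycles=5, limit_mm=12.0):
    """Simulate several operating cycles; return True if any crack that
    survives inspection exceeds the structural limit."""
    cracks = np.empty(0)
    for _ in range(cycles):
        n_new = rng.poisson(2.0)                     # new cracks this cycle
        cracks = np.concatenate([cracks, rng.uniform(0.5, 2.0, n_new)])
        cracks = cracks + rng.lognormal(0.0, 0.5, cracks.size)  # growth, mm
        detected = rng.random(cracks.size) < pod(cracks)
        cracks = cracks[~detected]                   # detected -> plugged
        if np.any(cracks > limit_mm):
            return True                              # limit exceeded
    return False

trials = 2000
p_exceed = sum(one_trial() for _ in range(trials)) / trials
print(f"estimated probability of structural-limit exceedance: {p_exceed:.3f}")
```

The interplay captured here is the same one the study exploits: a crack can only reach the structural limit by repeatedly growing through the size range where the POD curve makes detection (and removal from service) increasingly likely.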
NASA Astrophysics Data System (ADS)
Males, Jared R.; Close, Laird M.; Morzinski, Katie M.; Wahhaj, Zahed; Liu, Michael C.; Skemer, Andrew J.; Kopon, Derek; Follette, Katherine B.; Puglisi, Alfio; Esposito, Simone; Riccardi, Armando; Pinna, Enrico; Xompero, Marco; Briguglio, Runa; Biller, Beth A.; Nielsen, Eric L.; Hinz, Philip M.; Rodigas, Timothy J.; Hayward, Thomas L.; Chun, Mark; Ftaclas, Christ; Toomey, Douglas W.; Wu, Ya-Lin
2014-05-01
We present the first ground-based CCD (λ < 1 μm) image of an extrasolar planet. Using the Magellan Adaptive Optics system's VisAO camera, we detected the extrasolar giant planet β Pictoris b in Y-short (Y_S, 0.985 μm), at a separation of 0.470″ ± 0.010″ and a contrast of (1.63 ± 0.49) × 10^-5. This detection has a signal-to-noise ratio of 4.1, with an empirically estimated upper limit on the false alarm probability of 1.0%. We also present new photometry from the Gemini Near-Infrared Coronagraphic Imager instrument on the Gemini South telescope, in CH4_S,1% (1.58 μm), K_S (2.18 μm), and K_cont (2.27 μm). A thorough analysis of our photometry combined with previous measurements yields an estimated near-IR spectral type of L2.5 ± 1.5, consistent with previous estimates. We estimate log(L_bol/L_⊙) = -3.86 ± 0.04, which is consistent with prior estimates for β Pic b and with field early-L brown dwarfs (BDs). This yields a hot-start mass estimate of 11.9 ± 0.7 M_Jup for an age of 21 ± 4 Myr, with an upper limit below the deuterium burning mass. Our L_bol-based hot-start estimate for temperature is T_eff = 1643 ± 32 K (not including model-dependent uncertainty). Due to the large corresponding model-derived radius of R = 1.43 ± 0.02 R_Jup, this T_eff is ~250 K cooler than would be expected for a field L2.5 BD. Other young, low-gravity (large-radius), ultracool dwarfs and directly imaged EGPs also have lower effective temperatures than are implied by their spectral types. However, such objects tend to be anomalously red in the near-IR compared to field BDs. In contrast, β Pic b has near-IR colors more typical of an early-L dwarf despite its lower inferred temperature.
Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P
2010-10-30
Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
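The efficacy-per-hardware-cost ranking described above can be sketched as follows; the feature names, efficacy values, and power figures are invented placeholders, not the paper's measurements:

```python
# Hypothetical sketch of the two-dimensional ranking: score each candidate
# seizure-detection feature by detection efficacy per unit hardware
# (power) cost. All names and numbers below are illustrative only.
features = {
    "line_length":    {"efficacy": 0.90, "power_uW": 40.0},
    "signal_energy":  {"efficacy": 0.85, "power_uW": 25.0},
    "spectral_power": {"efficacy": 0.95, "power_uW": 120.0},
}

def score(f):
    # detection efficacy per microwatt of estimated circuit power
    return f["efficacy"] / f["power_uW"]

# Pick the feature with maximal detection efficacy per unit hardware cost.
best = max(features, key=lambda name: score(features[name]))
print(best)
```

With these invented numbers, the moderately accurate but cheap feature wins over the most accurate but power-hungry one, which is the trade-off the design-space plot is meant to expose.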
Ogaya, Yuko; Nomura, Ryota; Watanabe, Yoshiyuki; Nakano, Kazuhiko
2015-01-01
The oral cavity has been implicated as a source of Helicobacter pylori infection in childhood. Various PCR methods have been used to detect H. pylori DNA in oral specimens with various detection rates reported. Such disparity in detection rates complicates the estimation of the true infection rate of H. pylori in the oral cavity. In the present study, we constructed a novel PCR system for H. pylori detection and used it to analyse oral specimens. Firstly, the nucleotide alignments of genes commonly used for H. pylori detection were compared using the complete genome information for 48 strains registered in the GenBank database. Candidate primer sets with an estimated amplification size of approximately 300-400 bp were selected, and the specificity and sensitivity of the detection system using each primer set were evaluated. Five sets of primers targeting ureA were considered appropriate, of which a single primer set was chosen for inclusion in the PCR system. The sensitivity of the system was considered appropriate and its detection limit established as one to ten cells per reaction. The novel PCR system was used to examine H. pylori distribution in oral specimens (40 inflamed pulp tissues, 40 saliva samples) collected from Japanese children, adolescents and young adults. PCR analysis revealed that the detection rate of H. pylori in inflamed pulp was 15 %, whereas no positive reaction was found in any of the saliva specimens. Taken together, our novel PCR system was found to be reliable for detecting H. pylori. The results obtained showed that H. pylori was detected in inflamed pulp but not saliva specimens, indicating that an infected root canal may be a reservoir for H. pylori. © 2015 The Authors.
PCA method for automated detection of mispronounced words
NASA Astrophysics Data System (ADS)
Ge, Zhenhao; Sharma, Sudhendu R.; Smith, Mark J. T.
2011-06-01
This paper presents a method for detecting mispronunciations with the aim of improving Computer Assisted Language Learning (CALL) tools used by foreign language learners. The algorithm is based on Principal Component Analysis (PCA). It is hierarchical, with each successive step refining the estimate to classify the test word as either mispronounced or correct. Preprocessing before detection, such as normalization and time-scale modification, is implemented to guarantee uniformity of the feature vectors input to the detection system. The performance using various features, including spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs), is compared and evaluated. Best results were obtained using MFCCs, achieving up to 99% accuracy in word verification and 93% in native/non-native classification. Compared with Hidden Markov Models (HMMs), which are used pervasively in recognition applications, this approach is computationally efficient and effective when training data are limited.
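A toy, two-dimensional sketch of PCA-based verification along these lines (real systems use high-dimensional MFCC vectors; the training data and threshold here are hypothetical):

```python
import math

# Fit the principal axis of "correctly pronounced" 2-D feature vectors,
# then flag a test vector whose reconstruction error (distance from that
# axis) exceeds a threshold. Data and threshold are invented.
train = [(-2.0, -2.0), (-1.0, -1.0), (0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]

mx = sum(x for x, _ in train) / len(train)
my = sum(y for _, y in train) / len(train)
cxx = sum((x - mx) ** 2 for x, _ in train) / len(train)
cyy = sum((y - my) ** 2 for _, y in train) / len(train)
cxy = sum((x - mx) * (y - my) for x, y in train) / len(train)

# Closed-form orientation of the first principal component in 2-D.
theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
minor = (-math.sin(theta), math.cos(theta))  # direction discarded by PCA

def reconstruction_error(p):
    # distance from the principal axis = |projection on the minor axis|
    return abs((p[0] - mx) * minor[0] + (p[1] - my) * minor[1])

def is_mispronounced(p, threshold=0.5):
    return reconstruction_error(p) > threshold

print(is_mispronounced((1.5, 1.5)))   # on the learned axis -> False
print(is_mispronounced((1.0, -1.0)))  # far off the axis -> True
```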
Accounting for imperfect detection of groups and individuals when estimating abundance.
Clement, Matthew J; Converse, Sarah J; Royle, J Andrew
2017-09-01
If animals are independently detected during surveys, many methods exist for estimating animal abundance despite detection probabilities <1. Common estimators include double-observer models, distance sampling models and combined double-observer and distance sampling models (known as mark-recapture-distance-sampling models; MRDS). When animals reside in groups, however, the assumption of independent detection is violated. In this case, the standard approach is to account for imperfect detection of groups, while assuming that individuals within groups are detected perfectly. However, this assumption is often unsupported. We introduce an abundance estimator for grouped animals when detection of groups is imperfect and group size may be under-counted, but not over-counted. The estimator combines an MRDS model with an N-mixture model to account for imperfect detection of individuals. The new MRDS-Nmix model requires the same data as an MRDS model (independent detection histories, an estimate of distance to transect, and an estimate of group size), plus a second estimate of group size provided by the second observer. We extend the model to situations in which detection of individuals within groups declines with distance. We simulated 12 data sets and used Bayesian methods to compare the performance of the new MRDS-Nmix model to an MRDS model. Abundance estimates generated by the MRDS-Nmix model exhibited minimal bias and nominal coverage levels. In contrast, MRDS abundance estimates were biased low and exhibited poor coverage. Many species of conservation interest reside in groups and could benefit from an estimator that better accounts for imperfect detection. Furthermore, the ability to relax the assumption of perfect detection of individuals within detected groups may allow surveyors to re-allocate resources toward detection of new groups instead of extensive surveys of known groups. 
We believe the proposed estimator is feasible because the only additional field data required are a second estimate of group size.
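A quick simulation illustrates why the second observer's group-size estimate helps when individuals can be missed but never over-counted; all numbers are hypothetical:

```python
import random

# Each observer detects each of N group members with probability p, so a
# single count under-estimates group size. Taking the larger of two
# independent counts is closer to the truth on average.
random.seed(42)
N, p, surveys = 10, 0.8, 20000

def count():
    return sum(1 for _ in range(N) if random.random() < p)

single, paired = [], []
for _ in range(surveys):
    c1, c2 = count(), count()
    single.append(c1)
    paired.append(max(c1, c2))

mean_single = sum(single) / surveys
mean_paired = sum(paired) / surveys
print(round(mean_single, 2), round(mean_paired, 2))
```

The single-observer mean sits near N * p, while the two-observer maximum recovers part of the shortfall; the MRDS-Nmix model formalizes this with an explicit binomial observation layer.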
Li, Peng; Jia, Junwei; Bai, Lan; Pan, Aihu; Tang, Xueming
2013-07-01
Genetically modified carnation (Dianthus caryophyllus L.) Moonshade has been approved for planting and commercialization in several countries since 2004. Developing methods for analyzing Moonshade is necessary for implementing genetically modified organism labeling regulations. In this study, the 5'-transgene integration sequence was isolated using thermal asymmetric interlaced (TAIL)-PCR. Based on the 5'-transgene integration sequence, conventional and TaqMan real-time PCR assays were established. The relative limit of detection of the conventional PCR assay was 0.05% for Moonshade using 100 ng total carnation genomic DNA, corresponding to approximately 79 copies of the carnation haploid genome, and the limits of detection and quantification of the TaqMan real-time PCR assay were estimated to be 51 and 254 copies of haploid carnation genomic DNA, respectively. These results are useful for identifying and quantifying Moonshade and its derivatives.
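The reported copy number can be sanity-checked by converting DNA mass to haploid genome equivalents; the carnation genome size (~613 Mbp) and the 650 g/mol per base pair figure are textbook assumptions, not values from the paper:

```python
# Back-of-envelope check of the reported detection limit: convert a DNA
# mass to haploid genome copies.
AVOGADRO = 6.022e23   # molecules per mole
BP_MASS = 650.0       # g/mol per base pair (double-stranded DNA, typical value)
GENOME_BP = 6.13e8    # assumed carnation haploid genome size, base pairs

def genome_copies(mass_ng):
    grams = mass_ng * 1e-9
    return grams * AVOGADRO / (GENOME_BP * BP_MASS)

# 0.05% of 100 ng total DNA is GM-derived at the relative LOD.
copies = genome_copies(100.0 * 0.0005)
print(round(copies))  # ~76, consistent with the ~79 copies reported
```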
NASA Technical Reports Server (NTRS)
Littenberg, T. B.; Larson, S. L.; Nelemans, G.; Cornish, N. J.
2012-01-01
Space-based gravitational wave interferometers are sensitive to the galactic population of ultracompact binaries. An important subset of the ultracompact binary population are those stars that can be individually resolved by both gravitational wave interferometers and electromagnetic telescopes. The aim of this paper is to quantify the multimessenger potential of space-based interferometers with arm-lengths between 1 and 5 Gm. The Fisher information matrix is used to estimate the number of binaries from a model of the Milky Way which are localized on the sky by the gravitational wave detector to within 1 and 10 deg^2 and are bright enough to be detected by a magnitude-limited survey. We find, depending on the choice of GW detector characteristics, limiting magnitude and observing strategy, that up to several hundred gravitational wave sources could be detected in electromagnetic follow-up observations.
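The Fisher-matrix machinery used for these forecasts can be illustrated on a deliberately simple signal model (a sinusoid in white noise, not the paper's galactic-binary waveform):

```python
import math

# For a signal h(t; A, f) in white noise of standard deviation sigma, the
# Fisher matrix is F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2, and
# parameter uncertainties are sqrt of the diagonal of F^{-1}.
A, f, sigma = 1.0, 0.1, 0.5
ts = [0.1 * k for k in range(200)]

def dh_dA(t):
    return math.sin(2 * math.pi * f * t)

def dh_df(t):
    return A * 2 * math.pi * t * math.cos(2 * math.pi * f * t)

F = [[sum(di(t) * dj(t) for t in ts) / sigma**2
      for dj in (dh_dA, dh_df)] for di in (dh_dA, dh_df)]

# Invert the 2x2 Fisher matrix to obtain the covariance matrix.
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
cov = [[F[1][1] / det, -F[0][1] / det], [-F[1][0] / det, F[0][0] / det]]
sigma_A, sigma_f = math.sqrt(cov[0][0]), math.sqrt(cov[1][1])
print(sigma_A, sigma_f)
```

The same construction, with waveform derivatives in sky-position parameters, yields the deg^2 localization estimates quoted above.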
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation where the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
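A bare-bones version of the surrogate idea, using a noise-free GP posterior mean in one dimension and Monte Carlo over a grid; the kernel, design points, and failure threshold are all invented for illustration:

```python
import math

# Fit a tiny Gaussian-process interpolant to a few "expensive" model
# evaluations, then estimate a failure probability using only the cheap
# surrogate instead of the expensive model.
def rbf(a, b, ell=0.5):
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(Kmat, y):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(Kmat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0]   # design points ("expensive" model inputs)
ys = [0.0, 1.0, 0.0]   # "expensive" model outputs at those points
K = [[rbf(a, b) for b in xs] for a in xs]
alpha = solve(K, ys)

def surrogate(x):
    # GP posterior mean (noise-free, so it interpolates the data)
    return sum(alpha[i] * rbf(x, xs[i]) for i in range(len(xs)))

# Estimate P(output > 0.5) over a uniform grid using only the surrogate.
grid = [i / 100.0 for i in range(201)]  # x in [0, 2]
p_fail = sum(1 for x in grid if surrogate(x) > 0.5) / len(grid)
print(round(surrogate(1.0), 6), p_fail)
```

The paper's contribution is in choosing the design points adaptively near the limit state; this sketch fixes them up front.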
Cooper, A D; Stubbings, G W; Kelly, M; Tarbin, J A; Farrington, W H; Shearer, G
1998-07-03
An improved on-line metal chelate affinity chromatography-high-performance liquid chromatography (MCAC-HPLC) method for the determination of tetracycline antibiotics in animal tissues and egg has been developed. Extraction was carried out with ethyl acetate. The extract was then evaporated to dryness and reconstituted in methanol prior to on-line MCAC clean-up and HPLC-UV determination. Recoveries of tetracycline, oxytetracycline, demeclocycline and chlortetracycline in the range 42% to 101% were obtained from egg, poultry, fish and venison tissues spiked at 25 micrograms kg-1. Limits of detection of less than 10 micrograms kg-1 were estimated for all four analytes. This method has higher throughput, higher recovery and lower limits of detection than a previously reported on-line MCAC-HPLC method which involved aqueous extraction and solid-phase extraction clean-up.
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location-scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, all of which are typically observed in the collection of longitudinal data in many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
Cancer imaging using Surface-Enhanced Resonance Raman Scattering (SERRS) nanoparticles
Harmsen, Stefan; Wall, Matthew A.; Huang, Ruimin
2017-01-01
The unique spectral signatures and biologically inert compositions of surface-enhanced (resonance) Raman scattering (SE(R)RS) nanoparticles make them promising contrast agents for in vivo cancer imaging. Subtle aspects of their preparation can shift their limit of detection by orders of magnitude. In this protocol, we present the optimized, step-by-step procedure for generating reproducible SERRS nanoparticles with femtomolar (10−15 M) limits of detection. We introduce several applications of these nanoprobes for biomedical research, with a focus on intraoperative cancer imaging via Raman imaging. A detailed account is provided for successful intravenous administration of SERRS nanoparticles such that delineation of cancerous lesions may be achieved without the need for specific biomarker targeting. The time estimate for this straightforward, yet comprehensive protocol from initial de novo gold nanoparticle synthesis to SE(R)RS nanoparticle contrast-enhanced preclinical Raman imaging in animal models is ~96 h. PMID:28686581
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
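The central quantity in E-steps of this kind is the conditional mean of a left-censored normal covariate, E[X | X < c] = μ - σφ(z)/Φ(z) with z = (c - μ)/σ; this is the standard truncated-normal identity, shown here outside the paper's full joint model:

```python
import math

# Conditional mean of a normal variable below a detection limit c,
# the expected value imputed for a "nondetect" in an E-step.
def phi(z):
    # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def censored_mean(mu, sigma, c):
    z = (c - mu) / sigma
    return mu - sigma * phi(z) / Phi(z)

# Standard normal censored at its mean: E[X | X < 0] = -sqrt(2/pi) ~ -0.798
print(round(censored_mean(0.0, 1.0, 0.0), 3))
```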
Matter, A.; Falke, Jeffrey A.; López, J. Andres; Savereide, James W.
2018-01-01
Identification and protection of water bodies used by anadromous species are critical in light of increasing threats to fish populations, yet often challenging given budgetary and logistical limitations. Noninvasive, rapid‐assessment, sampling techniques may reduce costs and effort while increasing species detection efficiencies. We used an intrinsic potential (IP) habitat model to identify high‐quality rearing habitats for Chinook Salmon Oncorhynchus tshawytscha and select sites to sample throughout the Chena River basin, Alaska, for juvenile occupancy using an environmental DNA (eDNA) approach. Water samples were collected from 75 tributary sites in 2014 and 2015. The presence of Chinook Salmon DNA in water samples was assessed using a species‐specific quantitative PCR (qPCR) assay. The IP model predicted over 900 stream kilometers in the basin to support high‐quality (IP ≥ 0.75) rearing habitat. Occupancy estimation based on eDNA samples indicated that 80% and 56% of previously unsampled sites classified as high or low IP (IP < 0.75), respectively, were occupied. The probability of detection (p) of Chinook Salmon DNA from three replicate water samples was high (p = 0.76) but varied with drainage area (km2). A power analysis indicated high power to detect proportional changes in occupancy based on parameter values estimated from eDNA occupancy models, although power curves were not symmetrical around zero, indicating greater power to detect positive than negative proportional changes in occupancy. Overall, the combination of IP habitat modeling and occupancy estimation provided a useful, rapid‐assessment method to predict and subsequently quantify the distribution of juvenile salmon in previously unsampled tributary habitats. Additionally, these methods are flexible and can be modified for application to other species and in other locations, which may contribute towards improved population monitoring and management.
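The replicate design implies a simple cumulative detection probability: with per-replicate p = 0.76 and K = 3 water samples, at least one detection occurs with probability 1 - (1 - p)^K:

```python
# Probability that at least one of K replicate water samples detects the
# target DNA, given the paper's per-replicate detection probability.
p, K = 0.76, 3
p_star = 1.0 - (1.0 - p) ** K
print(round(p_star, 4))  # -> 0.9862
```

This is why three replicates suffice despite imperfect per-sample detection (note the abstract's p varies with drainage area, which this constant-p sketch ignores).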
Detection of bacteria in suspension using a superconducting Quantum interference device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grossman, H.L.; Myers, W.R.; Vreeland, V.J.
2003-06-09
We demonstrate a technique for detecting magnetically-labeled Listeria monocytogenes and for measuring the binding rate between antibody-linked magnetic particles and bacteria. This assay, which is both sensitive and straightforward to perform, can quantify specific bacteria in a sample without the need to immobilize the bacteria or wash away unbound magnetic particles. In the measurement, we add 50 nm diameter superparamagnetic particles, coated with antibodies, to a liquid sample containing L. monocytogenes. We apply a pulsed magnetic field to align the magnetic dipole moments and use a high transition temperature Superconducting Quantum Interference Device (SQUID), an extremely sensitive detector of magnetic flux, to measure the magnetic relaxation signal when the field is turned off. Unbound particles randomize direction by Brownian rotation too quickly to be detected. In contrast, particles bound to L. monocytogenes are effectively immobilized and relax in about 1 s by rotation of the internal dipole moment. This Néel relaxation process is detected by the SQUID. The measurements indicate a detection limit of (5.6 ± 1.1) × 10^6 L. monocytogenes for a 20 µL sample volume. If the sample volume were reduced to 1 nL, we estimate that the detection limit could be improved to 230 ± 40 L. monocytogenes cells. Time-resolved measurements yield the binding rate between the particles and bacteria.
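The volume extrapolation can be checked to first order by assuming the detection limit scales linearly with sample volume; the paper's more detailed estimate (230 ± 40 cells) also accounts for sensor geometry:

```python
# Naive linear-in-volume scaling of the measured cell detection limit.
limit_20uL = 5.6e6            # cells, measured for a 20 uL sample
volume_ratio = 1e-9 / 20e-6   # 1 nL / 20 uL
naive_limit = limit_20uL * volume_ratio
print(round(naive_limit))  # -> 280 cells, same order as the reported 230 ± 40
```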
Integrated survival analysis using an event-time approach in a Bayesian framework
Walsh, Daniel P.; Dreitz, VJ; Heisey, Dennis M.
2015-01-01
Event-time or continuous-time statistical approaches have been applied throughout the biostatistical literature and have led to numerous scientific advances. However, these techniques have traditionally relied on knowing failure times. This has limited application of these analyses, particularly, within the ecological field where fates of marked animals may be unknown. To address these limitations, we developed an integrated approach within a Bayesian framework to estimate hazard rates in the face of unknown fates. We combine failure/survival times from individuals whose fates are known, including interval-censored times, with information from those whose fates are unknown, and model the process of detecting animals with unknown fates. This provides the foundation for our integrated model and permits necessary parameter estimation. We provide the Bayesian model, its derivation, and use simulation techniques to investigate the properties and performance of our approach under several scenarios. Lastly, we apply our estimation technique using a piece-wise constant hazard function to investigate the effects of year, age, chick size and sex, sex of the tending adult, and nesting habitat on mortality hazard rates of the endangered mountain plover (Charadrius montanus) chicks. Traditional models were inappropriate for this analysis because fates of some individual chicks were unknown due to failed radio transmitters. Simulations revealed biases of posterior mean estimates were minimal (≤ 4.95%), and posterior distributions behaved as expected with RMSE of the estimates decreasing as sample sizes, detection probability, and survival increased. We determined mortality hazard rates for plover chicks were highest at <5 days old and were lower for chicks with larger birth weights and/or whose nest was within agricultural habitats. 
Based on its performance, our approach greatly expands the range of problems for which event-time analyses can be used by eliminating the need for having completely known fate data.
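Under the piece-wise constant hazard used here, survival is S(t) = exp(-Σ_j λ_j Δt_j); a minimal sketch with hypothetical interval boundaries and rates:

```python
import math

# Survival under a piece-wise constant hazard: accumulate hazard * time
# spent in each interval, then exponentiate. Breaks and rates are
# hypothetical, chosen to mimic high early-life mortality.
breaks = [0.0, 5.0, 15.0]   # ages (days) where the hazard changes
rates = [0.10, 0.02, 0.01]  # hazard per day in each interval

def survival(t):
    total = 0.0
    for j, rate in enumerate(rates):
        lo = breaks[j]
        hi = breaks[j + 1] if j + 1 < len(breaks) else float("inf")
        total += rate * max(0.0, min(t, hi) - lo)
    return math.exp(-total)

# Hazard is highest before day 5, so survival drops fastest there.
print(round(survival(5.0), 4), round(survival(15.0), 4))
```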
2D Fast Vessel Visualization Using a Vessel Wall Mask Guiding Fine Vessel Detection
Raptis, Sotirios; Koutsouris, Dimitris
2010-01-01
The paper addresses the fine retinal-vessel detection issue faced in diagnostic applications and aims to assist in better recognizing fine vessel anomalies in 2D. Our innovation lies in separating the key visual features vessels exhibit in order to make the diagnosis of eventual retinopathologies easier. This allows focusing on vessel segments which present fine changes detectable at different sampling scales. We advocate that these changes can be addressed as subsequent stages of the same vessel detection procedure. We first carry out an initial estimate of the basic vessel-wall network, define the main wall body, and then try to approach the ridges and branches of the vasculature using fine detection. Fine vessel screening looks for local structural inconsistencies in vessel properties, noise, or unexpected intensity variations observed inside pre-known vessel-body areas. The vessels are first modelled sufficiently, but not precisely, by their walls with a tubular model-structure resulting from an initial segmentation. This provides a chart of likely Vessel Wall Pixels (VWPs), yielding a form of likelihood vessel map based mainly on gradient filter intensity and spatial arrangement parameters (e.g., linear consistency). Specific vessel parameters (centerline, width, location, fall-away rate, main orientation) are post-computed by convolving the image with a set of pre-tuned spatial filters called Matched Filters (MFs). These are easily computed as Gaussian-like 2D forms that use a limited range of sub-optimal parameters adjusted to the dominant vessel characteristics obtained by Spatial Grey Level Difference statistics, limiting the search to vessel widths of 16, 32, and 64 pixels. Sparse pixels are effectively eliminated by applying a limited-range Hough Transform (HT) or region growing. 
Major benefits are the limited range of parameters, the reduced post-convolution search space (only masked regions, representing almost 2% of the 2D volume), and a good speed-versus-accuracy trade-off. Results show the potential of our approach in terms of detection time, ROC analysis, and accuracy of vessel pixel (VP) detection. PMID:20706682
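The matched-filter step can be illustrated in one dimension: an inverted, zero-mean Gaussian kernel correlated with an intensity profile responds most strongly where a vessel-like dip is centred. The profile and widths below are synthetic, not the paper's 2D filter bank:

```python
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma))

# Matched-filter kernel: inverted Gaussian, shifted to zero mean so that
# flat background contributes nothing to the response.
half = 8
g = [gaussian(i, 2.0) for i in range((-half), half + 1)]
mean_g = sum(g) / len(g)
kernel = [mean_g - v for v in g]

# Synthetic profile: flat background with a Gaussian intensity dip
# (the "vessel") centred at index 40.
centre, width, depth, background = 40, 2.0, 50.0, 200.0
profile = [background - depth * gaussian(i - centre, width) for i in range(80)]

def response(signal, kern, pos):
    h = len(kern) // 2
    return sum(kern[k] * signal[pos + k - h]
               for k in range(len(kern))
               if 0 <= pos + k - h < len(signal))

responses = [response(profile, kernel, p) for p in range(80)]
best = max(range(80), key=lambda p: responses[p])
print(best)  # strongest matched-filter response at the vessel centre
```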
Quantum estimation of parameters of classical spacetimes
NASA Astrophysics Data System (ADS)
Downes, T. G.; van Meter, J. R.; Knill, E.; Milburn, G. J.; Caves, C. M.
2017-11-01
We describe a quantum limit to the measurement of classical spacetimes. Specifically, we formulate a quantum Cramér-Rao lower bound for estimating the single parameter in any one-parameter family of spacetime metrics. We employ the locally covariant formulation of quantum field theory in curved spacetime, which allows for a manifestly background-independent derivation. The result is an uncertainty relation that applies to all globally hyperbolic spacetimes. Among other examples, we apply our method to the detection of gravitational waves with the electromagnetic field as a probe, as in laser-interferometric gravitational-wave detectors. Other applications are discussed, from terrestrial gravimetry to cosmology.
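For a single parameter, the bound takes the standard quantum Cramér-Rao form (the generic textbook statement, not the paper's background-independent curved-spacetime derivation):

```latex
% Quantum Cramér-Rao bound for an unbiased estimator \hat{\theta} built
% from M independent repetitions, with quantum Fisher information F_Q:
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{M\,F_Q(\theta)},
\qquad
F_Q(\theta) = \operatorname{Tr}\!\left[\rho_\theta L_\theta^{2}\right],
\qquad
\partial_\theta \rho_\theta = \tfrac{1}{2}\left(L_\theta \rho_\theta + \rho_\theta L_\theta\right),
```

where ρ_θ is the parameter-dependent state and L_θ the symmetric logarithmic derivative.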
Estimation of arterial baroreflex sensitivity in relation to carotid artery stiffness.
Lipponen, Jukka A; Tarvainen, Mika P; Laitinen, Tomi; Karjalainen, Pasi A; Vanninen, Joonas; Koponen, Timo; Lyyra-Laitinen, Tiina
2012-01-01
Arterial baroreflex has a significant role in regulating blood pressure. It is known that increased stiffness of the carotid sinus affects the mechanotransduction of baroreceptors and therefore limits the baroreceptors' ability to detect changes in blood pressure. By using a high-resolution ultrasound video signal together with continuous measurement of the electrocardiogram (ECG) and blood pressure, it is possible to determine the elastic properties of the artery simultaneously with baroreflex sensitivity parameters. In this paper, a dataset consisting of 38 subjects, 11 diabetics and 27 healthy controls, was analyzed. The use of diabetic and healthy test subjects yields a wide range of arteries with different elasticity properties, providing an opportunity to validate the baroreflex and artery stiffness estimation methods.
Rodrigo J. Mercader; Deborah G. McCullough; Andrew J. Storer; John M. Bedford; Robert Heyd; Nathan W. Siegert; Steven Katovich; Therese M. Poland
2016-01-01
Information on the pattern and rate of spread for invasive wood- and phloem-feeding insects, including the emerald ash borer (EAB) (Agrilus planipennis Fairmaire), is relatively limited, largely because of the difficulty of detecting subcortical insects at low densities. From 2008 to 2011, grids of girdled and subsequently debarked ash (...
James E. Garabedian; Robert J. McGaughey; Stephen E. Reutebuch; Bernard R. Parresol; John C. Kilgo; Christopher E. Moorman; M. Nils. Peterson
2014-01-01
Light detection and ranging (LiDAR) technology has the potential to radically alter the way researchers and managers collect data on wildlife-habitat relationships. To date, the technology has fostered several novel approaches to characterizing avian habitat, but has been limited by the lack of detailed LiDAR-habitat attributes relevant to species across a continuum of...
Development of a Ballistic Impact Detection System
2004-09-01
The body surface remains the largest variable to overcome. The snug fit of the body armour stabilizes the sensors and their response.
Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles
NASA Technical Reports Server (NTRS)
Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick
2012-01-01
Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.
Estimated communication range of social sounds used by bottlenose dolphins (Tursiops truncatus).
Quintana-Rizzo, Ester; Mann, David A; Wells, Randall S
2006-09-01
Bottlenose dolphins, Tursiops truncatus, exhibit flexible associations in which the compositions of groups change frequently. We investigated the potential distances over which female dolphins and their dependent calves could remain in acoustic contact. We quantified the propagation of sounds in the frequency range of typical dolphin whistles in shallow water areas and channels of Sarasota Bay, Florida. Our results indicated that detection range was noise limited, as opposed to being limited by hearing sensitivity. Sounds were attenuated to a greater extent in areas with seagrass than in any other habitat. Estimates of the active space of whistles showed that in shallow seagrass areas, low-frequency whistles (7-13 kHz) with a 165 dB source level could be heard by dolphins at 487 m. In shallow areas with a mud bottom, all frequency components of the same whistle could be heard by dolphins up to 2 km away. In channels, high-frequency whistles (13-19 kHz) could potentially be detected over a much longer distance (>20 km). Our findings indicate that the communication range of social sounds likely exceeds the mean separation distances between females and their calves. Ecological pressures might play an important role in determining the separation distances within communication range.
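As a rough illustration of a noise-limited range estimate, assume pure spherical spreading and a hypothetical minimum detectable received level; only the 165 dB source level comes from the study (real shallow-water propagation is habitat-dependent, as the abstract stresses):

```python
# Toy passive-sonar range estimate: a whistle is audible while the
# source level minus 20*log10(r) transmission loss stays above the
# noise-limited detection level.
SL = 165.0      # dB re 1 uPa at 1 m (whistle source level, from the paper)
RL_min = 110.0  # dB, assumed minimum detectable received level (hypothetical)

max_range_m = 10 ** ((SL - RL_min) / 20.0)
print(round(max_range_m))  # ~562 m, same order as the 487 m seagrass estimate
```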
Aresta, Antonella; Cioffi, Nicola; Palmisano, Francesco; Zambonin, Carlo G
2003-08-27
A solid-phase microextraction (SPME) method, coupled to liquid chromatography with diode array UV detection (LC-UV/DAD), for the simultaneous determination of cyclopiazonic acid, mycophenolic acid, tenuazonic acid, and ochratoxin A is described. Chromatographic separation was achieved on a propylamino-bonded silica gel stationary phase using acetonitrile/methanol/ammonium acetate buffer mixture (78:2:20, v/v/v) as mobile phase. SPME adsorption and desorption conditions were optimized using a silica fiber coated with a 60 microm thick polydimethylsiloxane/divinylbenzene film. Estimated limits of detection and limits of quantitation ranged from 3 to 12 ng/mL and from 7 to 29 ng/mL, respectively. The method has been applied to cornflake samples. Samples were subjected to a preliminary short sonication in MeOH/2% KHCO(3) (70:30, v/v); the mixture was evaporated to near dryness and reconstituted in 1.5 mL of 5 mM phosphate buffer (pH 3) for SPME followed by LC-UV/DAD. The overall procedure had recoveries (evaluated on samples spiked at 200 ng/g level) ranging from 74 +/- 4 to 103 +/- 9%. Samples naturally contaminated with cyclopiazonic and tenuazonic acids were found; estimated concentrations were 72 +/- 9 and 25 +/- 6 ng/g, respectively.
Giuliani, N; Saugy, M; Augsburger, M; Varlet, V
2015-11-01
A headspace gas chromatography-tandem mass spectrometry (HS-GC-MS/MS) method for the trace measurement of perfluorocarbon compounds (PFCs) in blood was developed. Because of the oxygen-carrying capability of PFCs, their misuse for doping in sports is suspected. The study was therefore extended to validate the method for F-tert-butylcyclohexane (Oxycyte(®)), perfluoro(methyldecalin) (PFMD) and perfluorodecalin (PFD). The limit of detection of these compounds was established at 1.2 µg/mL blood for F-tert-butylcyclohexane, 4.9 µg/mL blood for PFMD and 9.6 µg/mL blood for PFD. The limit of quantification was estimated at 12 µg/mL blood (F-tert-butylcyclohexane), 48 µg/mL blood (PFMD) and 96 µg/mL blood (PFD). The HS-GC-MS/MS technique allows detection at levels 1000 to 10,000 times lower than the dose estimated to be required for a biological effect of the investigated PFCs. Thus, this technique could be used to identify PFC misuse several hours, perhaps days, after the injection or the sporting event. Clinical trials with these compounds are still required to evaluate the validation parameters against the calculated estimates. Copyright © 2015 Elsevier B.V. All rights reserved.
Lin, Tzu-Yung; Green, Roger J.; O'Connor, Peter B.
2011-01-01
The nature of the ion signal from a 12-T Fourier-transform ion cyclotron resonance mass spectrometer and the electronic noise were studied to further understand the electronic detection limit. At minimal cost, a new transimpedance preamplifier was designed, computer simulated, built, and tested. The preamplifier design pushes the electronic signal-to-noise performance at room temperature to the limit, because of its enhanced tolerance of the capacitance of the detection device, lower intrinsic noise, and larger flat mid-band gain (input current noise spectral density of around 1 pA/√Hz when the transimpedance is about 85 dBΩ). The designed preamplifier has a bandwidth of ∼3 kHz to 10 MHz, which corresponds to the mass-to-charge ratio, m/z, of approximately 18 to 61 k at 12 T. The transimpedance and the bandwidth can be easily adjusted by changing the value of passive components. The feedback limitation of the circuit is discussed. With the maximum possible transimpedance of 5.3 MΩ when using an 0402 surface mount resistor, the preamplifier was estimated to be able to detect ∼110 charges in a single scan. PMID:22225232
Ware, M W; Keely, S P; Villegas, E N
2013-07-01
This study developed and systematically evaluated the performance and limit of detection of an off-the-slide genotyping procedure for both Cryptosporidium oocysts and Giardia cysts. Slide standards containing flow-sorted (oo)cysts were used to evaluate the off-the-slide genotyping procedure by microscopy and PCR. Results show that approximately 20% of cysts and oocysts are lost during staining. Although transfer efficiency from the slide to the PCR tube could not be determined by microscopy, it was observed that the transfer process aided in the physical lysis of the (oo)cysts, likely releasing DNA. PCR detection rates for a single event on a slide were 44% for Giardia and 27% for Cryptosporidium, and a minimum of five cysts and 20 oocysts are required to achieve a 90% PCR detection rate. A Poisson distribution analysis estimated the relative PCR target densities and limits of detection; it showed that 18 Cryptosporidium and five Giardia replicates are required for a 95% probability of detecting a single (oo)cyst on a slide. This study successfully developed and evaluated recovery rates and limits of detection of an off-the-slide genotyping procedure for both Cryptosporidium and Giardia (oo)cysts from the same slide. This off-the-slide genotyping technique is a simple and low-cost tool that expands the applications of US EPA Method 1623 results by identifying the genotypes and assemblages of the enumerated Cryptosporidium and Giardia. This additional information will be useful for microbial risk assessment models and watershed management decisions. Journal of Applied Microbiology Published [2013]. This article is a U.S. Government work and is in the public domain in the USA.
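The replicate logic described above can be sketched with a simple binomial calculation: if each replicate independently detects with probability p, the number of replicates needed for a given overall confidence follows from 1 - (1 - p)^n. This is an illustrative simplification; the authors' Poisson analysis models target densities and therefore gives different counts (18 and five) than this naive calculation.

```python
import math

def replicates_for_confidence(p_single, confidence=0.95):
    """Smallest n with 1 - (1 - p_single)**n >= confidence, i.e. the
    number of independent replicates needed so that at least one is
    positive with the requested probability."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_single))

# Single-replicate PCR detection rates reported in the abstract
# (illustrative use only; not the paper's Poisson-based counts).
n_giardia = replicates_for_confidence(0.44)
n_crypto = replicates_for_confidence(0.27)
```

The gap between these binomial counts and the paper's reported replicate numbers illustrates why modelling the underlying target density matters for low-count samples.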
Estimation of risks by chemicals produced during laser pyrolysis of tissues
NASA Astrophysics Data System (ADS)
Weber, Lothar W.; Spleiss, Martin
1995-01-01
Use of laser systems in minimally invasive surgery results in the formation of a laser aerosol containing volatile organic compounds of possible health risk. Based on the currently identified chemical substances, an overview of the possibly associated risks to human health is given. The class of identified alkylnitriles seems to be a laser-specific toxicological problem. Other groups of chemicals belong to the Maillard reaction type, the fatty acid pyrolysis type, or thermally activated chemolysis. The possible exposure ranges of identified substances are discussed in relation to the available threshold limit values. A rough estimation yields an exposure range of less than 1/100 for almost all substances with established human threshold limit values, without regard to possible interactions. For most identified alkylnitriles, alkenes, and heterocycles, no threshold limit values have so far been established, for lack of practical need. Pyrolysis of organs anaesthetized with isoflurane gave no indication of additional pyrolysis products arising from fragment interactions with the resulting VOCs. Measurements of pyrolysis gases detected small amounts of NO, with additional NO2 formation under plasma conditions.
Kriščiukaitis, Algimantas; Šimoliūnienė, Renata; Macas, Andrius; Petrolis, Robertas; Drėgūnas, Kęstutis; Bakšytė, Giedrė; Pieteris, Linas; Bertašienė, Zita; Žaliūnas, Remigijus
2014-01-01
Beat-to-beat alternans in ventricular repolarization, reflected as alternans of the amplitude and/or shape of the ECG S-T/T segment (TWA), is a phenomenon related to the risk of severe arrhythmias leading to sudden cardiac death. Technical difficulties have limited its use in clinical diagnostics. The possibility to register and analyze multimodal signals reflecting heart activity inspired a search for new technical solutions. The first objective of this study was to test whether the thoracic impedance signal and beat-to-beat heart rate reflect repolarization alternans detected as TWA. The second objective was to reveal multimodal signal features that represent the phenomenon more comprehensively and increase its prognostic usefulness. ECG and thoracic impedance signal recordings made during 24-h follow-up of patients hospitalized in the acute phase of myocardial infarction were used for the investigation. Estimates reflecting signal morphology variations were obtained by a principal-component-analysis-based method. Clinical outcomes of patients (survival and/or rehospitalization at 6 and 12 months) were compared to repolarization alternans and heart rate variability estimates. Repolarization alternans detected as TWA was also reflected in estimates of thoracic impedance signal shape and in beat-to-beat heart rate variation. All these parameters correlated with the clinical outcomes of patients. The strongest significant correlation was shown by the magnitude of alternans in the estimates of thoracic impedance signal shape. The features of the ECG, the thoracic impedance signal and the beat-to-beat variability of heart rate give comprehensive estimates of repolarization alternans that correlate with the clinical outcomes of patients, and we recommend using them to improve diagnostic reliability. Copyright © 2014 Lithuanian University of Health Sciences. Production and hosting by Elsevier Urban & Partner Sp. z o.o. All rights reserved.
IUE observations of PG 1115 + 080 - The He I Gunn-Peterson test and a search for the lensing galaxy
NASA Technical Reports Server (NTRS)
Tripp, Todd M.; Green, Richard F.; Bechtold, Jill
1990-01-01
Five observations of PG 1115 + 080 taken with the IUE SWP camera have been combined in order to carry out the He I Gunn-Peterson test and to search for a Lyman limit which could determine the redshift of the lens candidate reported by Christian et al. (1987). No Lyman-limit discontinuities are found, implying that the lensing galaxy either does not intercept the line of sight or does not contain enough neutral hydrogen to be detected as a Lyman-limit edge. It is estimated that the lens column density of neutral hydrogen is 3 × 10^16 cm^-2 or less if it intercepts the line of sight.
Aircraft Flight Envelope Determination using Upset Detection and Physical Modeling Methods
NASA Technical Reports Server (NTRS)
Keller, Jeffrey D.; McKillip, Robert M. Jr.; Kim, Singwan
2009-01-01
The development of flight control systems to enhance aircraft safety during periods of vehicle impairment or degraded operations has been the focus of extensive work in recent years. Conditions adversely affecting aircraft flight operations and safety may result from a number of causes, including environmental disturbances, degraded flight operations, and aerodynamic upsets. To enhance the effectiveness of adaptive and envelope-limiting control systems, it is desirable to examine methods for identifying the occurrence of anomalous conditions and for assessing the impact of these conditions on the aircraft operational limits. This paper describes initial work performed toward this end, examining the use of fault detection methods applied to the aircraft for aerodynamic performance degradation identification and model-based methods for envelope prediction. Results are presented in which a model-based fault detection filter is applied to the identification of aircraft control surface and stall departure failures/upsets. This application is supported by a distributed loading aerodynamics formulation for the flight dynamics system reference model. Extensions for estimating the flight envelope due to generalized aerodynamic performance degradation are also described.
Amperometric Sensor for Detection of Chloride Ions†
Trnkova, Libuse; Adam, Vojtech; Hubalek, Jaromir; Babula, Petr; Kizek, Rene
2008-01-01
Chloride ion sensing is important in many fields such as clinical diagnosis, environmental monitoring and industrial applications. We have measured chloride ions at a carbon paste electrode (CPE) and at a CPE modified with solid AgNO3, a solution of AgNO3 and/or solid silver particles. Detection limits (3 S/N) for chloride ions were 100 μM, 100 μM and 10 μM for solid AgNO3, the solution of AgNO3 and the solid silver particles, respectively. The CPE modified with silver particles was the most sensitive to the presence of chloride ions. We then proceeded to miniaturize the whole electrochemical instrument. Measurements were carried out on a miniaturized instrument consisting of a potentiostat with dimensions 35 × 166 × 125 mm, screen-printed electrodes, a peristaltic pump and a PC with control software. Under the most suitable experimental conditions (Britton-Robinson buffer, pH 1.8, and a working electrode potential of 550 mV) we estimated the limit of detection (3 S/N) as 500 nM. PMID:27873832
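The 3 S/N criterion used above is a standard signal-to-noise detection limit: the analyte level whose signal equals three times the standard deviation of the baseline noise, divided by the calibration slope. A minimal sketch, with invented noise and slope values (not taken from the paper):

```python
def detection_limit(noise_sd, slope, k=3.0):
    """k*S/N detection limit: the analyte amount whose signal equals
    k times the standard deviation of the baseline noise, converted to
    concentration units via the calibration slope."""
    return k * noise_sd / slope

# Hypothetical numbers for illustration: 0.05 uA baseline noise and a
# calibration slope of 0.3 uA per uM give a 3 S/N limit of 0.5 uM.
lod = detection_limit(0.05, 0.3)
loq = detection_limit(0.05, 0.3, k=10.0)  # common 10 S/N quantitation limit
```

The same formula with k = 10 is the conventional limit of quantitation, which is why LOQ values in these abstracts are typically about three times the corresponding LOD.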
Upper limits to the detection of ammonia from protoplanetary disks around HL Tauri and L1551-IRS 5
NASA Technical Reports Server (NTRS)
Gomez, Jose F.; Torrelles, Jose M.; Ho, Paul T. P.; Rodriguez, Luis F.; Canto, Jorge
1993-01-01
We present NH3(1, 1) and (2, 2) observations of the young stellar sources HL Tau and L1551-IRS 5 using the VLA in its B-configuration, which provides an angular resolution of about 0.4 arcsec (about 50 AU at 140 pc) at 1.3 cm wavelength. Our goal was to detect and resolve circumstellar molecular disks with radii of the order of 100 AU around these two sources. No ammonia emission was detected toward either of them. The 3-sigma levels were 2.7 mJy/beam and 3.9 mJy/beam for HL Tau and L1551-IRS 5, respectively, with a velocity resolution of about 5 km/s. With this nondetection, we estimate upper limits to the mass of the proposed protoplanetary molecular disks (within a radius of 10 AU from the central stars) of the order of 0.02/(X(NH3)/10^-8) solar masses for HL Tau and 0.1/(X(NH3)/10^-8) solar masses for L1551-IRS 5.
Stellar Companions of Exoplanet Host Stars in K2
NASA Astrophysics Data System (ADS)
Matson, Rachel; Howell, Steve; Horch, Elliott; Everett, Mark
2018-01-01
Stellar multiplicity has significant implications for the detection and characterization of exoplanets. A stellar companion can mimic the signal of a transiting planet or distort the true planetary radius, leading to improper density estimates and over-prediction of the occurrence rates of Earth-sized planets. Determining the fraction of exoplanet host stars that are also binaries allows us to better determine planetary characteristics and to establish the relationship between binarity and planet formation. Using high-resolution speckle imaging to obtain diffraction-limited images of K2 planet candidate host stars, we detect stellar companions within one arcsec of, and up to six magnitudes fainter than, the host star. By comparing our observed companion fraction to TRILEGAL star count simulations, and using the known detection limits of speckle imaging, we find the binary fraction of K2 planet host stars to be similar to that of Kepler host stars and solar-type field stars. Accounting for stellar companions in exoplanet studies is therefore essential for deriving true stellar and planetary properties as well as maximizing the returns of TESS and future exoplanet missions.
NASA Astrophysics Data System (ADS)
Sonnabend, G.; Stupar, D.; Sornig, M.; Stangier, T.; Kostiuk, T.; Livengood, T. A.
2013-09-01
We report our search for methane in the atmosphere of Mars using high-spectral-resolution heterodyne spectroscopy in the 7.8 μm wavelength region. The technique's resolving power and frequency precision of >10^6 enable identification and full resolution of a targeted spectral line in the terrestrial-Mars spectrum observed from the ground. Observations were carried out on two occasions, in April 2010 and May 2012, at the McMath-Pierce Solar Telescope and the NASA Infrared Telescope Facility, respectively. A single line in the ν4 band of methane at 1282.62448 cm-1 was targeted in both cases. No absorption due to methane was detected, and only upper limits of ∼100 ppb for the martian atmospheric methane concentration were retrieved. Lack of observing time (due to weather) and telluric opacity greater than anticipated led to reduced signal-to-noise ratios (SNR). Based on current measurements and calculations, under proper viewing conditions, we estimate an achievable detection limit of ∼10 ppb using the infrared heterodyne technique - adequate for confirming reported detections of methane based on other techniques.
Kuster, William C; Harren, Frans J M; de Gouw, Joost A
2005-06-15
Laser photoacoustic spectroscopy (LPAS) is highly suitable for the detection of ethene in air due to the overlap between its strongest absorption lines and the wavelengths accessible by high-powered CO2 lasers. Here, we test the ability of LPAS to measure ethene in ambient air by comparing the measurements in urban air with those from a gas chromatography flame-ionization detection (GC-FID) instrument. Over the course of several days, we obtained quantitative agreement between the two measurements. Over this period, the LPAS instrument had a positive offset of 330 +/- 140 pptv (parts-per-trillion by volume) relative to the GC-FID instrument, possibly caused by interference from other species. The detection limit of the LPAS instrument is currently estimated around 1 ppbv and is limited by this offset and the statistical noise in the data. We conclude that LPAS has the potential to provide fast-response measurements of ethene in the atmosphere, with significant advantages over existing techniques when measuring from moving platforms and in the vicinity of emission sources.
Palanisamy, Selvakumar; Thangavelu, Kokulnathan; Chen, Shen-Ming; Gnanaprakasam, P; Velusamy, Vijayalakshmi; Liu, Xiao-Heng
2016-10-20
The accurate detection of dopamine (DA) levels in biological samples such as human serum and urine is essential in medical diagnostics. In this work, we describe the preparation of a chitosan (CS) biopolymer-grafted graphite (GR) composite for the sensitive, lower-potential detection of DA at sub-micromolar levels. The composite-modified electrode has been used for the detection of DA in biological samples such as human serum and urine. The GR-CS composite-modified electrode shows an enhanced oxidation peak current response and a lower oxidation potential for the detection of DA than bare electrodes or electrodes modified with GR or CS alone. Under optimum conditions, the fabricated GR-CS composite-modified electrode shows a linear DPV response to DA over the range 0.03 to 20.06 μM. The detection limit and sensitivity of the sensor were estimated as 0.0045 μM and 6.06 μA μM(-1)cm(-2), respectively. Copyright © 2016 Elsevier Ltd. All rights reserved.
Burgués, Javier; Marco, Santiago
2018-08-17
Metal oxide semiconductor (MOX) sensors are usually temperature-modulated and calibrated with multivariate models such as partial least squares (PLS) to increase the inherent low selectivity of this technology. The multivariate sensor response patterns exhibit heteroscedastic and correlated noise, which suggests that maximum likelihood methods should outperform PLS. One contribution of this paper is the comparison between PLS and maximum likelihood principal components regression (MLPCR) in MOX sensors. PLS is often criticized by the lack of interpretability when the model complexity increases beyond the chemical rank of the problem. This happens in MOX sensors due to cross-sensitivities to interferences, such as temperature or humidity and non-linearity. Additionally, the estimation of fundamental figures of merit, such as the limit of detection (LOD), is still not standardized in multivariate models. Orthogonalization methods, such as orthogonal projection to latent structures (O-PLS), have been successfully applied in other fields to reduce the complexity of PLS models. In this work, we propose a LOD estimation method based on applying the well-accepted univariate LOD formulas to the scores of the first component of an orthogonal PLS model. The resulting LOD is compared to the multivariate LOD range derived from error-propagation. The methodology is applied to data extracted from temperature-modulated MOX sensors (FIS SB-500-12 and Figaro TGS 3870-A04), aiming at the detection of low concentrations of carbon monoxide in the presence of uncontrolled humidity (chemical noise). We found that PLS models were simpler and more accurate than MLPCR models. Average LOD values of 0.79 ppm (FIS) and 1.06 ppm (Figaro) were found using the approach described in this paper. These values were contained within the LOD ranges obtained with the error-propagation approach. 
The mean LOD increased to 1.13 ppm (FIS) and 1.59 ppm (Figaro) when considering validation samples collected two weeks after calibration, which represents a 43% and 46% degradation, respectively. The orthogonal score-plot was a very convenient tool to visualize MOX sensor data and to validate the LOD estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
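The core of the proposed LOD method, applying a univariate formula along the scores of the first predictive component, can be sketched as follows. The concentrations, scores and blank replicates below are invented for illustration, the O-PLS decomposition itself is omitted, and the 3.3·s/slope form is one common univariate convention, not necessarily the authors' exact formula.

```python
import statistics

# Hypothetical scores of the first predictive (orthogonalized) PLS
# component versus CO concentration in ppm; values are invented.
conc = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
scores = [0.05, 1.02, 1.95, 4.10, 7.98, 16.02]

# Ordinary least-squares slope of score against concentration.
mx, my = statistics.fmean(conc), statistics.fmean(scores)
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, scores))
         / sum((x - mx) ** 2 for x in conc))

# Replicate scores of blank (zero-concentration) samples, also invented.
blank_scores = [-0.31, 0.12, 0.25, -0.05, 0.40, -0.22, 0.08, -0.18]
s_blank = statistics.stdev(blank_scores)

# Univariate LOD formula applied along the score axis.
lod_ppm = 3.3 * s_blank / slope
```

Because the orthogonal model concentrates the concentration-related variance in a single component, the familiar univariate blank-based formula becomes applicable to an otherwise multivariate calibration.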
BRIEF REPORT: Screening Items to Identify Patients with Limited Health Literacy Skills
Wallace, Lorraine S; Rogers, Edwin S; Roskos, Steven E; Holiday, David B; Weiss, Barry D
2006-01-01
BACKGROUND Patients with limited literacy skills are routinely encountered in clinical practice, but they are not always identified by clinicians. OBJECTIVE To evaluate 3 candidate questions to determine their accuracy in identifying patients with limited or marginal health literacy skills. METHODS We studied 305 English-speaking adults attending a university-based primary care clinic. Demographic items, health literacy screening questions, and the Rapid Estimate of Adult Literacy in Medicine (REALM) were administered to patients. To determine the accuracy of the candidate questions for identifying limited or marginal health literacy skills, we plotted area under the receiver operating characteristic (AUROC) curves for each item, using REALM scores as a reference standard. RESULTS The mean age of subjects was 49.5; 67.5% were female, 85.2% Caucasian, and 81.3% insured by TennCare and/or Medicare. Fifty-four (17.7%) had limited and 52 (17.0%) had marginal health literacy skills. One screening question, “How confident are you filling out medical forms by yourself?” was accurate in detecting limited (AUROC of 0.82; 95% confidence interval [CI]=0.77 to 0.86) and limited/marginal (AUROC of 0.79; 95% CI=0.74 to 0.83) health literacy skills. This question had significantly greater AUROC than either of the other questions (P<.01) and also a greater AUROC than questions based on demographic characteristics. CONCLUSIONS One screening question may be sufficient for detecting limited and marginal health literacy skills in clinic populations. PMID:16881950
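The AUROC statistic used to score each screening question equals the probability that a randomly chosen limited-literacy patient gives a higher (less confident) response than a randomly chosen adequate-literacy patient, which is the rank-sum (Mann-Whitney) identity. A minimal sketch with invented Likert-scale responses, not the study's data:

```python
def auroc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a random positive case scores higher than a
    random negative case, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical 1-5 responses to "How confident are you filling out
# medical forms?" (higher = less confident); labels per REALM score.
limited = [4, 5, 3, 5, 4]
adequate = [1, 2, 2, 3, 1]
area = auroc(limited, adequate)
```

An AUROC of 0.5 indicates a question with no discriminating power, while values near 0.8, as reported for the forms-confidence question, indicate good separation between the literacy groups.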
NASA Astrophysics Data System (ADS)
Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue
2015-09-01
Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.
Toward Automated Generation of Reservoir Water Elevation Changes From Satellite Radar Altimetry.
NASA Astrophysics Data System (ADS)
Okeowo, M. A.; Lee, H.; Hossain, F.
2015-12-01
Until now, processing satellite radar altimetry data over inland water bodies on a large scale has been a cumbersome task, primarily due to measurements contaminated by the surrounding topography. It becomes more challenging when the water body is small and the number of available high-rate measurements from the water surface is therefore limited. Manual removal of outliers is time consuming, which has limited the global generation of lake and reservoir elevation profiles for monitoring storage changes and hydrologic modeling. We propose a new method to automatically generate time-series information from raw satellite radar altimetry without user intervention. With this method, scientists with little knowledge of altimetry can independently process radar altimetry data for diverse purposes. The method is based on K-means clustering, the backscatter coefficient and statistical analysis of the dataset for outlier detection. The results of this method will be validated using in-situ gauges from reservoirs in the US, the Indus basin and Bangladesh. In addition, a sensitivity analysis will be done to ascertain the limitations of the algorithm with respect to the surrounding topography and the length of the altimetry track overlapping the lake or reservoir. Finally, reservoir storage change will be estimated for the study sites using MODIS and Landsat water classification to estimate reservoir area, with height estimated from the Jason-2 and SARAL/AltiKa satellites.
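The clustering step can be illustrated with a toy one-dimensional 2-means pass over altimeter heights: returns from the water surface form a tight, dense cluster, while topography-contaminated returns scatter at other elevations, so keeping the densest cluster rejects the outliers. This is a simplified sketch with invented heights, not the authors' algorithm, which also uses backscatter and further statistical screening.

```python
import statistics

def two_means(values, iters=100):
    """Plain 1-D 2-means clustering, initialised at the data extremes.
    Returns the final centres and the two clusters."""
    centres = [min(values), max(values)]
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # bool index: False -> cluster 0, True -> cluster 1
            groups[abs(v - centres[0]) > abs(v - centres[1])].append(v)
        new_centres = [statistics.fmean(g) if g else centres[i]
                       for i, g in enumerate(groups)]
        if new_centres == centres:
            break
        centres = new_centres
    return centres, groups

# Invented heights (m): water returns near 102 m, topography near 136-140 m.
heights = [102.1, 101.9, 102.0, 102.3, 101.8, 140.5, 138.2,
           102.2, 101.7, 135.9]
centres, groups = two_means(heights)
water = max(groups, key=len)          # densest cluster = water surface
elevation = statistics.fmean(water)   # one time-series point
```

Repeating this per satellite pass yields an elevation time series without manual outlier editing, which is the automation the abstract describes.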
Non-Enzymatic Glucose Biosensor Based on CuO-Decorated CeO2 Nanoparticles
Guan, Panpan; Li, Yongjian; Zhang, Jie; Li, Wei
2016-01-01
Copper oxide (CuO)-decorated cerium oxide (CeO2) nanoparticles were synthesized and used to detect glucose non-enzymatically. The morphological characteristics and structure of the nanoparticles were characterized through transmission electron microscopy, X-ray photoelectron spectroscopy, and X-ray diffraction. The sensor responses of the electrodes to glucose were investigated via an electrochemical method. The CuO/CeO2 nanocomposite exhibited a reasonably good sensitivity of 2.77 μA mM−1cm−2, an estimated detection limit of 10 μM, and good anti-interference ability. The sensor was also fairly stable under ambient conditions. PMID:28335287
NASA Astrophysics Data System (ADS)
Ganot, Noam; Gal-Yam, Avishay; Ofek, Eran. O.; Sagiv, Ilan; Waxman, Eli; Lapid, Ofer; Kulkarni, Shrinivas R.; Ben-Ami, Sagi; Kasliwal, Mansi M.; The ULTRASAT Science Team; Chelouche, Doron; Rafter, Stephen; Behar, Ehud; Laor, Ari; Poznanski, Dovi; Nakar, Ehud; Maoz, Dan; Trakhtenbrot, Benny; WTTH Consortium, The; Neill, James D.; Barlow, Thomas A.; Martin, Christofer D.; Gezari, Suvi; the GALEX Science Team; Arcavi, Iair; Bloom, Joshua S.; Nugent, Peter E.; Sullivan, Mark; Palomar Transient Factory, The
2016-03-01
The radius and surface composition of an exploding massive star, as well as the explosion energy per unit mass, can be measured using early UV observations of core-collapse supernovae (SNe). We present the first results from a simultaneous GALEX/PTF search for early ultraviolet (UV) emission from SNe. Six SNe II and one Type II superluminous SN (SLSN-II) are clearly detected in the GALEX near-UV (NUV) data. We compare our detection rate with theoretical estimates based on early, shock-cooling UV light curves calculated from models that fit existing Swift and GALEX observations well, combined with volumetric SN rates. We find that our observations are in good agreement with calculated rates assuming that red supergiants (RSGs) explode with fiducial radii of 500 R ⊙, explosion energies of 1051 erg, and ejecta masses of 10 M ⊙. Exploding blue supergiants and Wolf-Rayet stars are poorly constrained. We describe how such observations can be used to derive the progenitor radius, surface composition, and explosion energy per unit mass of such SN events, and we demonstrate why UV observations are critical for such measurements. We use the fiducial RSG parameters to estimate the detection rate of SNe during the shock-cooling phase (<1 day after explosion) for several ground-based surveys (PTF, ZTF, and LSST). We show that the proposed wide-field UV explorer ULTRASAT mission is expected to find >85 SNe per year (˜0.5 SN per deg2), independent of host galaxy extinction, down to an NUV detection limit of 21.5 mag AB. Our pilot GALEX/PTF project thus convincingly demonstrates that a dedicated, systematic SN survey at the NUV band is a compelling method to study how massive stars end their life.
NASA Technical Reports Server (NTRS)
Riggs, George A.; Hall, Dorothy K.; Foster, James L.
2009-01-01
Monitoring of snow cover extent and snow water equivalent (SWE) in boreal forests is important for determining the amount of potential runoff and the beginning date of snowmelt. The great expanse of the boreal forest necessitates the use of satellite measurements to monitor snow cover. Snow cover in the boreal forest can be mapped with either the Moderate Resolution Imaging Spectroradiometer (MODIS) or the Advanced Microwave Scanning Radiometer for EOS (AMSR-E) microwave instrument. The extent of snow cover is estimated from the MODIS data and SWE is estimated from the AMSR-E. Environmental limitations affect the two sensors in different ways, limiting their ability to detect snow in some situations. Forest density, snow wetness, and snow depth are factors that limit the effectiveness of both sensors for snow detection. Cloud cover is a significant hindrance to monitoring snow cover extent using MODIS but is not a hindrance to the use of the AMSR-E. These limitations could be mitigated by combining MODIS and AMSR-E data to allow for improved interpretation of snow cover extent and SWE on a daily basis and to provide temporal continuity of snow mapping across the boreal forest regions in Canada. The purpose of this study is to investigate whether temporal monitoring of snow cover using a combination of MODIS and AMSR-E data could yield a better interpretation of changing snow cover conditions. The MODIS snow mapping algorithm is based on snow detection using the Normalized Difference Snow Index (NDSI) and the Normalized Difference Vegetation Index (NDVI) to enhance snow detection in dense vegetation. (Other spectral threshold tests are also used to map snow using MODIS.) Snow cover under a forest canopy may have an effect on the NDVI; thus we use the NDVI in snow detection. A MODIS snow fraction product is also generated but not used in this study.
In this study the NDSI and NDVI components of the snow mapping algorithm were calculated and analyzed to determine how they changed through the seasons. A blended snow product, from the Air Force Weather Agency-NASA (ANSA) snow algorithm, has recently been developed. The ANSA algorithm blends the MODIS snow cover and AMSR-E SWE products into a single snow product that has been shown to improve the performance of snow cover mapping. In this study, components of the ANSA snow algorithm are used along with additional MODIS data to monitor daily changes in snow cover over the period 1 February to 30 June 2008.
NASA Astrophysics Data System (ADS)
Yang, Mingxi; Prytherch, John; Kozlova, Elena; Yelland, Margaret J.; Parenkat Mony, Deepulal; Bell, Thomas G.
2016-11-01
In recent years several commercialised closed-path cavity-based spectroscopic instruments designed for eddy covariance flux measurements of carbon dioxide (CO2), methane (CH4), and water vapour (H2O) have become available. Here we compare the performance of two leading models, the Picarro G2311-f and the Los Gatos Research (LGR) Fast Greenhouse Gas Analyzer (FGGA), at a coastal site. Both instruments can compute dry mixing ratios of CO2 and CH4 based on concurrently measured H2O, temperature, and pressure. Additionally, we used a high-throughput Nafion dryer to physically remove H2O from the Picarro airstream. Observed air-sea CO2 and CH4 fluxes from these two analysers, averaging about 12 and 0.12 mmol m-2 day-1 respectively, agree within the measurement uncertainties. For the purpose of quantifying dry CO2 and CH4 fluxes downstream of a long inlet, the numerical H2O corrections appear to be reasonably effective and, on average, lead to results comparable to physical removal of H2O with a Nafion dryer. We estimate the high-frequency attenuation of fluxes in our closed-path set-up, which was relatively small (≤10 %) for CO2 and CH4 but very large for the more polar H2O. The Picarro showed significantly lower noise and flux detection limits than the LGR. The hourly flux detection limit for the Picarro was about 2 mmol m-2 day-1 for CO2 and 0.02 mmol m-2 day-1 for CH4. For the LGR these detection limits were about 8 and 0.05 mmol m-2 day-1. Using global maps of monthly mean air-sea CO2 flux as reference, we estimate that the Picarro and LGR can resolve hourly CO2 fluxes over roughly 40 % and 4 % of the world's oceans, respectively. Averaging over longer timescales would be required in regions with smaller fluxes. Hourly flux detection limits for CH4 from both instruments are generally higher than the expected emissions from the open ocean, though the signal-to-noise of this measurement may improve closer to the coast.
Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm
NASA Astrophysics Data System (ADS)
Neri, P.
2017-05-01
Recent papers introduced Non-Harmonic Fourier Analysis for bladed-wheel damage detection. This technique showed its potential for estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that some hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm that can detect a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing non-contact methods to be used for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving valuable information about the wheel's health. This configuration determines an acquisition time for each blade which becomes shorter as the machine's rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis results in poor frequency resolution and is not suitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, in contrast, showed high reliability in vibration frequency estimation even with data samples collected over a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
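What lets a non-harmonic approach outperform the DFT on short records is that the trial frequencies need not lie on the DFT bins. A minimal sketch of that idea, using least-squares projection onto sin/cos pairs over a continuous trial-frequency grid (the sample rate, grid bounds, and test signal here are hypothetical, not taken from the paper):

```python
import numpy as np

def estimate_frequency(signal, t, f_grid):
    """Estimate a sinusoid's frequency from a short record by
    least-squares projection onto a cos/sin pair at each trial
    frequency; the residual is minimised over a grid that is not
    restricted to DFT bins."""
    best_f, best_res = None, np.inf
    for f in f_grid:
        # design matrix: cosine and sine at trial frequency f
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
        res = np.sum((signal - A @ coef) ** 2)
        if res < best_res:
            best_f, best_res = f, res
    return best_f

# short record: fewer than two periods of a 103.7 Hz tone
t = np.linspace(0.0, 0.015, 60)
x = np.sin(2 * np.pi * 103.7 * t + 0.4)
f_hat = estimate_frequency(x, t, np.arange(90.0, 120.0, 0.1))
```

A DFT of the same 0.015 s record would have a bin spacing of roughly 67 Hz, far too coarse to see a small crack-induced shift, whereas the grid search above resolves 0.1 Hz.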
Sankaran, Sindhuja; Panigrahi, Suranjan; Mallik, Sanku
2011-03-15
Detection of food-borne bacteria present in food products is critical to prevent the spread of infectious diseases. Intelligent quality sensors are being developed for detecting bacterial pathogens such as Salmonella in beef. One of our research thrusts was to develop novel sensing materials sensitive to specific indicator alcohols at low concentrations. The present work focuses on developing olfactory sensors that mimic an insect odorant binding protein to detect alcohols at low concentrations at room temperature. A quartz crystal microbalance (QCM) based sensor in conjunction with a synthetic peptide was developed to detect volatile organic compounds indicative of Salmonella contamination in packaged beef. The peptide sequence used as the sensing material was derived from the amino acid sequence of the Drosophila odorant binding protein LUSH. The sensors were used to detect two alcohols: 3-methyl-1-butanol and 1-hexanol. The sensors were sensitive to these alcohols, with estimated lower detection limits of <5 ppm. Thus, the LUSH-derived QCM sensors exhibited potential to detect alcohols at low ppm concentrations. Copyright © 2011. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Levesque, M.
Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update object orbital parameters. The real satellite positions are provided by the detection of the satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology comprises several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
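The matched-filter step for streaks can be illustrated with a toy example: correlate the background-subtracted image with a normalized line kernel and take the response peak. The sketch below handles only a horizontal kernel on a small synthetic image; a real detector would iterate over streak orientations and lengths, as the paper's iterative scheme does:

```python
import numpy as np

def matched_filter_line(image, length=5):
    """Slide a horizontal line kernel over each image row and return
    the correlation response; the peak marks the most streak-like
    location. Mean subtraction stands in for background removal."""
    img = image - image.mean()          # crude background estimate/removal
    kernel = np.ones(length) / length   # normalized line kernel
    resp = np.zeros_like(img)
    for r in range(img.shape[0]):
        resp[r] = np.convolve(img[r], kernel, mode="same")
    return resp

img = np.zeros((9, 9))
img[4, 2:7] = 1.0                       # faint horizontal streak
resp = matched_filter_line(img, length=5)
peak = tuple(int(v) for v in np.unravel_index(np.argmax(resp), resp.shape))
```

The peak lands at the streak centre, row 4, column 4, because the kernel integrates the full streak there while isolated noise pixels are averaged down.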
Gait parameter and event estimation using smartphones.
Pepa, Lucia; Verdini, Federica; Spalazzi, Luca
2017-09-01
The use of smartphones can greatly help with gait parameter estimation during daily living, but their accuracy needs a deeper evaluation against a gold standard. The objective of the paper is a step-by-step assessment of smartphone performance in heel strike, step count, step period, and step length estimation. The influence of smartphone placement and orientation on estimation performance is evaluated as well. This work relies on a smartphone app developed to acquire, process, and store inertial sensor data and rotation matrices about device position. Smartphone alignment was evaluated by expressing the acceleration vector in three reference frames. Two smartphone placements were tested. Three methods for heel strike detection were considered. On the basis of estimated heel strikes, step count is performed, step period is obtained, and the inverted pendulum model is applied for step length estimation. Pearson correlation coefficient, absolute and relative errors, ANOVA, and Bland-Altman limits of agreement were used to compare smartphone estimation with stereophotogrammetry on eleven healthy subjects. High correlations were found between smartphone and stereophotogrammetric measures: up to 0.93 for step count, 0.99 for heel strike, 0.96 for step period, and 0.92 for step length. Error ranges are comparable to those in the literature. Smartphone placement did not affect the performance. The major influence of acceleration reference frames and heel strike detection method was found in step count. This study provides detailed information about the expected accuracy when a smartphone is used as a gait monitoring tool. The obtained results encourage real-life applications. Copyright © 2017 Elsevier B.V. All rights reserved.
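The inverted-pendulum step-length model mentioned above has a simple closed form. The sketch below uses the common Zijlstra-style expression, where the vertical centre-of-mass excursion h would in practice come from double-integrating trunk acceleration; the numeric values are illustrative, and many implementations multiply the result by a subject-specific correction factor:

```python
import math

def step_length_inverted_pendulum(leg_length_m, com_excursion_m):
    """Inverted-pendulum step-length estimate:
    step_length = 2 * sqrt(2*l*h - h**2), with l the pendulum (leg)
    length and h the vertical centre-of-mass excursion per step.
    A per-subject correction factor, common in practice, is omitted."""
    l, h = leg_length_m, com_excursion_m
    if not 0.0 < h < l:
        raise ValueError("excursion must be positive and below leg length")
    return 2.0 * math.sqrt(2.0 * l * h - h * h)

# 0.9 m leg, 2.5 cm vertical excursion -> roughly 0.42 m step
sl = step_length_inverted_pendulum(0.9, 0.025)
```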
Peng, Mei; Jaeger, Sara R; Hautus, Michael J
2014-03-01
Psychometric functions are predominantly used for estimating detection thresholds in vision and audition. However, the requirement of large data quantities for fitting psychometric functions (>30 replications) reduces their suitability in olfactory studies, because olfactory response data are often limited (<4 replications) due to the susceptibility of human olfactory receptors to fatigue and adaptation. This article introduces a new method for fitting individual-judge psychometric functions to olfactory data obtained using the current standard protocol, American Society for Testing and Materials (ASTM) E679. The slope parameter of the individual-judge psychometric function is fixed to be the same as that of the group function; the same-shaped symmetrical sigmoid function is fitted only using the intercept. This study evaluated the proposed method by comparing it with 2 available methods. Comparison with conventional psychometric functions (fitted slope and intercept) indicated that the assumption of a fixed slope did not compromise the precision of the threshold estimates. No systematic difference was obtained between the proposed method and the ASTM method in terms of group threshold estimates or threshold distributions, but there were changes in the rank, by threshold, of judges in the group. Overall, the fixed-slope psychometric function is recommended for obtaining relatively reliable individual threshold estimates when the quantity of data is limited.
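The fixed-slope idea can be sketched as follows: keep the slope at the group value and fit only each judge's intercept (threshold). The sigmoid below uses a 1/3 lower asymptote, the chance level of the 3-AFC task used in ASTM E679; the slope value, concentration steps, and response data are hypothetical, and a grid search stands in for whatever optimiser the authors used:

```python
import numpy as np

def logistic(x, alpha, beta, gamma=1.0 / 3.0):
    """Psychometric function with lower asymptote gamma
    (chance level of a 3-AFC test, as in ASTM E679)."""
    return gamma + (1.0 - gamma) / (1.0 + np.exp(-beta * (x - alpha)))

def fit_threshold_fixed_slope(conc, p_correct, beta, alphas):
    """Fit only the threshold alpha of an individual judge, keeping
    the slope beta fixed at the group value; a simple grid search on
    squared error illustrates the fixed-slope constraint."""
    errs = [np.sum((logistic(conc, a, beta) - p_correct) ** 2)
            for a in alphas]
    return float(alphas[int(np.argmin(errs))])

conc = np.log10(np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0]))
# a judge who starts answering correctly around the fourth step
p_correct = np.array([1/3, 1/3, 0.5, 0.8, 1.0, 1.0])
alpha_hat = fit_threshold_fixed_slope(conc, p_correct, beta=4.0,
                                      alphas=np.linspace(-1.0, 2.0, 301))
```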
Fu, Jianjie; Gao, Yan; Cui, Lin; Wang, Thanh; Liang, Yong; Qu, Guangbo; Yuan, Bo; Wang, Yawei; Zhang, Aiqian; Jiang, Guibin
2016-01-01
Paired serum and urine samples were collected from workers in a fluorochemical plant from 2008 to 2012 (n = 302) to investigate the levels, temporal trends, and half-lives of perfluoroalkyl acids (PFAAs) in the workers. High levels of perfluorohexane sulfonate (PFHxS), perfluorooctanoic acid (PFOA), and perfluorooctanesulfonate (PFOS) were detected in serum, with median concentrations of 764, 427, and 1725 ng mL−1, respectively. The half-lives of PFAAs in the workers were estimated from daily clearance rates and from annual decline rates of PFAAs in serum using a first-order model. The geometric mean and median values for PFHxS, PFOA, and PFOS were 14.7 and 11.7, 4.1 and 4.0, and 32.6 and 21.6 years, respectively, by the daily clearance rates, and 3.6, 1.7, and 1.9 years as estimated by the annual decline rates. The half-lives estimated from the limited clearance-route information can be considered upper limits for PFAAs; however, the large difference between the two estimation approaches indicates that there are other important elimination pathways of PFAAs besides renal clearance in humans. The half-lives estimated by annual decline rates in the present study were the shortest values ever reported, and the intrinsic half-lives might be even shorter given the high levels of ongoing exposure to PFAAs. PMID:27905562
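The annual-decline estimate assumes first-order elimination, under which the half-life follows directly from two serum concentrations separated in time. A minimal sketch (the end concentration and interval below are hypothetical, chosen only to illustrate the arithmetic, not values from the study):

```python
import math

def half_life_from_decline(c_start, c_end, years):
    """First-order elimination: C(t) = C0 * exp(-k t), so
    k = ln(C_start / C_end) / years and the half-life is ln(2)/k.
    Ongoing exposure during the interval biases this estimate long."""
    if not (c_start > c_end > 0.0 and years > 0.0):
        raise ValueError("need a positive decline over a positive interval")
    k = math.log(c_start / c_end) / years
    return math.log(2.0) / k

# hypothetical: serum PFOA falling from 427 to 190 ng/mL over 4 years
t_half = half_life_from_decline(427.0, 190.0, 4.0)   # ~3.4 years
```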
On the radio properties of the intermediate-mass black hole candidate ESO 243-49 HLX-1
NASA Astrophysics Data System (ADS)
Cseh, D.; Webb, N. A.; Godet, O.; Barret, D.; Corbel, S.; Coriat, M.; Falcke, H.; Farrell, S. A.; Körding, E.; Lenc, E.; Wrobel, J. M.
2015-02-01
We present follow-up radio observations of ESO 243-49 HLX-1 from 2012 using the Australia Telescope Compact Array (ATCA) and the Karl G. Jansky Very Large Array (VLA). We report the detection of radio emission at the location of HLX-1 during its hard X-ray state using the ATCA. Assuming that the `Fundamental Plane' of accreting black holes is applicable, we provide an independent estimate of the black hole mass of M_{BH} ≤ 2.8^{+7.5}_{-2.1} × 10^6 M⊙ at 90 per cent confidence. However, we argue that the detected radio emission is likely to be Doppler-boosted and our mass estimate is an upper limit. We discuss other possible origins of the radio emission, such as a radio nebula, star formation, or later interaction of the flares with the large-scale environment; none of these was found to be adequate. The VLA observations were carried out during the X-ray outburst; however, no new radio flare was detected, possibly due to sparse time sampling. The deepest, combined VLA data suggest a variable radio source, and we briefly discuss the properties of the previously detected flares and compare them with microquasars and active galactic nuclei.
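A fundamental-plane mass estimate inverts an empirical relation between radio luminosity, X-ray luminosity, and black-hole mass. The sketch below uses the widely cited Merloni et al. (2003) coefficients, which may differ from the calibration the paper used, and the example luminosities are hypothetical; the result should be read as an order-of-magnitude estimate only:

```python
def fp_black_hole_mass(log_Lr, log_Lx):
    """Black-hole mass (log10, solar masses) from the fundamental
    plane of black-hole activity, using Merloni et al. (2003):
    log L_R = 0.60 log L_X + 0.78 log M + 7.33
    (luminosities in erg/s). Coefficients vary between calibrations,
    and Doppler boosting of L_R biases the mass high."""
    return (log_Lr - 0.60 * log_Lx - 7.33) / 0.78

# hypothetical luminosities: log L_R = 37.0, log L_X = 42.0 (erg/s)
log_m = fp_black_hole_mass(37.0, 42.0)   # ~5.7, i.e. ~5e5 solar masses
```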
Estimate of within population incremental selection through branch imbalance in lineage trees
Liberman, Gilad; Benichou, Jennifer I.C.; Maman, Yaakov; Glanville, Jacob; Alter, Idan; Louzoun, Yoram
2016-01-01
Incremental selection within a population, defined as limited fitness changes following mutation, is an important aspect of many evolutionary processes. Strongly advantageous or deleterious mutations are detected using the ratio of synonymous to non-synonymous mutations. However, there are currently no precise methods to estimate incremental selection. Here we provide such a detailed method for the first time and show its precision in multiple cases of micro-evolution. The proposed method is a novel mixed lineage-tree/sequence-based method to detect within-population selection, as defined by the effect of mutations on the average number of offspring. Specifically, we propose to measure the log of the ratio between the number of leaves in lineage-tree branches following synonymous and non-synonymous mutations. The method requires a sufficiently large number of sequences and a large enough number of independent mutations. It assumes that all mutations are independent events. It does not require a baseline model and is practically unaffected by sampling biases. We show the method's wide applicability by testing it on multiple cases of micro-evolution. We show that it can detect selection in genes and inter-genic regions using the selection rate, and can detect selection pressures in viral proteins and in the immune response to pathogens. PMID:26586802
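The core statistic is a log ratio of mean leaf counts below the two kinds of branches. A toy sketch (the leaf counts are invented for illustration, and the paper's estimator handles tree-sampling details this omits):

```python
import math

def incremental_selection_score(leaves_nonsyn, leaves_syn):
    """Log ratio of the average number of leaves below branches that
    follow non-synonymous vs synonymous mutations: values near 0
    suggest neutrality, positive values suggest that non-synonymous
    mutations leave more offspring (positive selection)."""
    if not leaves_nonsyn or not leaves_syn:
        raise ValueError("need leaf counts for both mutation classes")
    mean_ns = sum(leaves_nonsyn) / len(leaves_nonsyn)
    mean_s = sum(leaves_syn) / len(leaves_syn)
    return math.log(mean_ns / mean_s)

# hypothetical leaf counts under branches carrying each mutation type
score = incremental_selection_score([8, 5, 7, 4], [3, 2, 4, 3])
```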
Deep Independence Network Analysis of Structural Brain Imaging: Application to Schizophrenia
Castro, Eduardo; Hjelm, R. Devon; Plis, Sergey M.; Dinh, Laurent; Turner, Jessica A.; Calhoun, Vince D.
2016-01-01
Linear independent component analysis (ICA) is a standard signal processing technique that has been extensively used on neuroimaging data to detect brain networks with coherent brain activity (functional MRI) or covarying structural patterns (structural MRI). However, its formulation assumes that the measured brain signals are generated by a linear mixture of the underlying brain networks, and this assumption limits its ability to detect the inherent nonlinear nature of brain interactions. In this paper, we apply nonlinear independent component estimation (NICE) to structural MRI data to detect abnormal patterns of gray matter concentration in schizophrenia patients. For this biomedical application, we further addressed the issue of model regularization of nonlinear ICA by performing dimensionality reduction prior to NICE, together with appropriate control of the complexity of the model and the use of a proper approximation of the probability distribution functions of the estimated components. We show that our results are consistent with previous findings in the literature, but we also demonstrate that the incorporation of nonlinear associations in the data enables the detection of spatial patterns that are not identified by linear ICA. Specifically, we show networks including basal ganglia, cerebellum and thalamus that show significant differences in patients versus controls, some of which show distinct nonlinear patterns. PMID:26891483
IR/THz Double Resonance Spectroscopy Approach for Remote Chemical Detection at Atmospheric Pressure
NASA Astrophysics Data System (ADS)
Tanner, Elizabeth A.; Phillips, Dane J.; De Lucia, Frank C.; Everitt, Henry O.
2013-06-01
A remote sensing methodology based on infrared/terahertz (IR/THz) double resonance (DR) spectroscopy is shown to overcome limitations traditionally associated with either IR or THz spectroscopic approaches for detecting trace gases in an atmosphere. The applicability of IR/THz DR spectroscopy is explored by estimating the IR and THz power requirements for detecting a 100 part-per-million-meter cloud of methyl fluoride, methyl chloride, or methyl bromide at ranges up to 1 km in three atmospheric windows below 0.3 THz. These prototypical molecules are used to ascertain the dependence of the DR signal-to-noise ratio on IR and THz beam power. A line-tunable CO2 laser with 100 ps pulse duration generates a DR signature in four rotational transitions on a time scale commensurate with collisional relaxations caused by atmospheric N2 and O2. A continuous wave THz beam is frequency tuned to probe one of these rotational transitions so that laser-induced absorption variations in the analyte cloud are detected as temporal power fluctuations synchronized with the laser pulses. A combination of molecule-specific physics and scenario-dependent atmospheric conditions are used to predict the signal-to-noise ratio (SNR) for detecting an analyte as a function of cloud column density. A methodology is presented by which the optimal IR/THz pump/probe frequencies are identified. These estimates show the potential for low concentration chemical detection in a challenging atmospheric scenario with currently available or near term hardware components.
Feng, Jianyuan; Turksoy, Kamuran; Samadi, Sediqeh; Hajizadeh, Iman; Littlejohn, Elizabeth; Cinar, Ali
2017-12-01
Supervision and control systems rely on signals from sensors to receive information to monitor the operation of a system and adjust manipulated variables to achieve the control objective. However, sensor performance is often limited by working conditions, and sensors may also be subject to interference from other devices. Many different types of sensor errors, such as outliers, missing values, drifts and corruption with noise, may occur during process operation. A hybrid online sensor error detection and functional redundancy system is developed to detect errors in online signals and replace erroneous or missing values with model-based estimates. The proposed hybrid system relies on two techniques, an outlier-robust Kalman filter (ORKF) and a locally-weighted partial least squares (LW-PLS) regression model, which leverage the advantages of automatic measurement error elimination with ORKF and data-driven prediction with LW-PLS. The system includes a nominal angle analysis (NAA) method to distinguish between signal faults and large changes in sensor values caused by real dynamic changes in process operation. The performance of the system is illustrated with clinical data from continuous glucose monitoring (CGM) sensors worn by people with type 1 diabetes. More than 50,000 CGM sensor errors were added to the original CGM signals from 25 clinical experiments, and the performance of the error detection and functional redundancy algorithms was then analyzed. The results indicate that the proposed system can successfully detect most erroneous signals and substitute them with reasonable estimates computed by the functional redundancy system.
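The key behaviour of an outlier-robust filter, discounting measurements with implausibly large innovations, can be approximated in a few lines with an innovation gate on a scalar Kalman filter. This is a simplified stand-in for the paper's ORKF, not its actual algorithm, and the noise parameters and data are invented:

```python
def gated_kalman_1d(zs, q=0.01, r=1.0, gate=3.0):
    """Scalar random-walk Kalman filter with an innovation gate:
    measurements whose innovation exceeds `gate` standard deviations
    are treated as outliers and skipped, so the estimate coasts on
    the model. q: process variance, r: measurement variance."""
    x, p = zs[0], 1.0
    estimates = [x]
    for z in zs[1:]:
        p = p + q                       # predict
        s = p + r                       # innovation variance
        innov = z - x
        if innov * innov <= gate * gate * s:
            k = p / s                   # update only if the gate passes
            x = x + k * innov
            p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# a glucose-like trace with one gross outlier (50.0)
est = gated_kalman_1d([5.0, 5.1, 5.0, 50.0, 5.2, 5.1])
```

The outlier is rejected by the gate, so the estimate at that step simply repeats the previous one instead of jumping toward 50.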
Biological dosimetry in a group of radiologists by the analysis of dicentrics and translocations.
Montoro, A; Rodríguez, P; Almonacid, M; Villaescusa, J I; Verdú, G; Caballín, M R; Barrios, L; Barquinero, J F
2005-11-01
The results of a cytogenetic study carried out in a group of nine radiologists are presented. Chromosome aberrations were detected by fluorescence plus Giemsa staining and fluorescence in situ hybridization. Dose estimates were obtained by extrapolating the yield of dicentrics and translocations to their respective dose-effect curves. In seven individuals, the 95% confidence limits of the doses estimated by dicentrics did not include 0 Gy. The 99 dicentrics observed in 17,626 cells gave a collective estimated dose of 115 mGy (95% confidence limits 73-171). For translocations, five individuals had estimated doses that were clearly higher than the total accumulated recorded dose. The 82 total apparently simple translocations observed in 9722 cells gave a collective estimated dose of 275 mGy (132-496). The mean genomic frequencies (×100 ± SE) of complete and total apparently simple translocations observed in the group of radiologists (1.91 ± 0.30 and 2.67 ± 0.34, respectively) were significantly higher than those observed in a matched control group (0.53 ± 0.10 and 0.87 ± 0.13, P < 0.01 in both cases) and in another occupationally exposed matched group (0.79 ± 0.12 and 1.14 ± 0.14, P < 0.03 and P < 0.01, respectively). The discrepancies observed between the physically recorded doses and the biologically estimated doses indicate that the radiologists did not always wear their dosimeters or that the dosimeters were not always in the radiation field.
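Extrapolating an aberration yield to a dose-effect curve amounts to inverting a linear-quadratic relation. A sketch with placeholder coefficients (each laboratory fits its own background c, alpha, and beta; the values below are illustrative only, not the calibration used in the study):

```python
import math

def dose_from_yield(y, c=0.001, alpha=0.02, beta=0.06):
    """Invert a linear-quadratic dose-effect curve
    Y = c + alpha*D + beta*D**2 for the absorbed dose D (Gy),
    taking the positive root of the quadratic.
    c, alpha, beta are illustrative placeholder coefficients."""
    disc = alpha * alpha + 4.0 * beta * (y - c)
    if disc < 0.0:
        raise ValueError("yield below background")
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)

# round trip: the yield produced by 0.5 Gy maps back to 0.5 Gy
d = dose_from_yield(0.001 + 0.02 * 0.5 + 0.06 * 0.5 ** 2)
```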
NASA Astrophysics Data System (ADS)
Gibbons, Steven J.; Näsholm, S. P.; Ruigrok, E.; Kværna, T.
2018-04-01
Seismic arrays enhance signal detection and parameter estimation by exploiting the time-delays between arriving signals on sensors at nearby locations. Parameter estimates can suffer due to both signal incoherence, with diminished waveform similarity between sensors, and aberration, with time-delays between coherent waveforms poorly represented by the wave-front model. Sensor-to-sensor correlation approaches to parameter estimation have an advantage over direct beamforming approaches in that individual sensor-pairs can be omitted without necessarily omitting entirely the data from each of the sensors involved. Specifically, we can omit correlations between sensors for which signal coherence in an optimal frequency band is anticipated to be poor or for which anomalous time-delays are anticipated. In practice, this usually means omitting correlations between more distant sensors. We present examples from International Monitoring System seismic arrays with poor parameter estimates resulting when classical f-k analysis is performed over the full array aperture. We demonstrate improved estimates and slowness grid displays using correlation beamforming restricted to correlations between sufficiently closely spaced sensors. This limited sensor-pair correlation (LSPC) approach has lower slowness resolution than would ideally be obtained by considering all sensor-pairs. However, this ideal estimate may be unattainable due to incoherence and/or aberration and the LSPC estimate can often exploit all channels, with the associated noise-suppression, while mitigating the complications arising from correlations between very distant sensors. The greatest need for the method is for short-period signals on large aperture arrays although we also demonstrate significant improvement for secondary regional phases on a small aperture array. LSPC can also provide a robust and flexible approach to parameter estimation on three-component seismic arrays.
Using the Detectability Index to Predict P300 Speller Performance
Mainsah, B.O.; Collins, L.M.; Throckmorton, C.S.
2017-01-01
Objective. The P300 speller is a popular brain-computer interface (BCI) system that has been investigated as a potential communication alternative for individuals with severe neuromuscular limitations. To achieve acceptable accuracy levels for communication, the system requires repeated data measurements in a given signal condition to enhance the signal-to-noise ratio of elicited brain responses. These elicited brain responses, which are used as control signals, are embedded in noisy electroencephalography (EEG) data. The discriminability between target and non-target EEG responses defines a user’s performance with the system. A previous P300 speller model has been proposed to estimate system accuracy given a certain amount of data collection. However, the approach was limited to a static stopping algorithm, i.e. averaging over a fixed number of measurements, and the row-column paradigm. A generalized method that is also applicable to dynamic stopping algorithms and other stimulus paradigms is desirable. Approach. We developed a new probabilistic model-based approach to predicting BCI performance, where performance functions can be derived analytically or via Monte Carlo methods. Within this framework, we introduce a new model for the P300 speller with the Bayesian dynamic stopping (DS) algorithm, by simplifying a multi-hypothesis to a binary hypothesis problem using the likelihood ratio test. Under a normality assumption, the performance functions for the Bayesian algorithm can be parameterized with the detectability index, a measure which quantifies the discriminability between target and non-target EEG responses. Main results. Simulations with synthetic and empirical data provided initial verification of the proposed method of estimating performance with Bayesian DS using the detectability index. Analysis of results from previous online studies validated the proposed method. Significance. The proposed method could serve as a useful tool to initially assess BCI performance without extensive online testing, in order to estimate the amount of data required to achieve a desired accuracy level. PMID:27705956
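Under the normality assumption, the detectability index and the resulting decision accuracy have closed forms. The sketch below reduces the speller to an equal-prior binary decision, a deliberate simplification of the paper's multi-hypothesis model, with invented score statistics:

```python
from math import erf, sqrt

def detectability_index(mu_t, sigma_t, mu_n, sigma_n):
    """d' between target and non-target classifier score
    distributions: mean separation over the RMS standard deviation."""
    return (mu_t - mu_n) / sqrt(0.5 * (sigma_t ** 2 + sigma_n ** 2))

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_class_accuracy(d):
    """Accuracy of an equal-prior decision between two unit-variance
    normals separated by d: the optimal threshold sits midway, so
    accuracy = Phi(d/2). A stand-in for the multi-hypothesis case."""
    return phi(d / 2.0)

# invented score statistics: target scores ~ N(1, 1), non-target ~ N(0, 1)
acc = two_class_accuracy(detectability_index(1.0, 1.0, 0.0, 1.0))
```

With d' = 1 this gives roughly 69 % single-trial accuracy, which is why the speller averages repeated measurements to drive the effective d' up.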
NASA Astrophysics Data System (ADS)
Suzuki, Yoshinari; Ohara, Ryota; Matsunaga, Kirara
2017-09-01
Nuclear power plant accidents release radioactive strontium-90 (90Sr) into the environment. Monitoring of 90Sr, although important, is difficult and time consuming because it emits only beta radiation. We have developed a new analytical system that enables real-time analysis of 90Sr in atmospheric particulate matter with an analytical run time of only 10 min. Briefly, after passage of an air sample through an impactor, a small fraction of the sample is introduced into a gas-exchange device, where the air is replaced by Ar. The sample is then directly introduced into an inductively coupled plasma tandem mass spectrometry (ICP-MS/MS) system equipped with a collision/reaction cell to eliminate isobaric interferences on 90Sr from 90Zr+, 89Y1H+, and 90Y+. Experiments with various reaction gas conditions revealed that these interferences could be minimized under the following optimized conditions: 1.0 mL min-1 O2, 10.0 mL min-1 H2, and 1.0 mL min-1 NH3. The estimated background equivalent concentration and estimated detection limit of the system were 9.7 × 10-4 and 3.6 × 10-4 ng m-3, respectively, which are equivalent to 4.9 × 10-6 and 1.8 × 10-6 Bq cm-3. The recovery of Sr in PM2.5 measured by real-time analysis, compared to that obtained by simultaneous collection on a filter, was 53 ± 23%, and using this recovery, the detection limit for PM2.5 was estimated to be (3.4 ± 1.5) × 10-6 Bq cm-3. That is, this system enabled detection of 90Sr at concentrations < 5 × 10-6 Bq cm-3 even considering the incomplete fusion/vaporization/ionization efficiency of Sr in PM2.5.
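The background-equivalent concentration and detection limit quoted above follow the standard calibration-based definitions: BEC = mean blank signal / sensitivity, and LOD = 3 × blank standard deviation / sensitivity. A sketch with invented signal values (the counts and slope below are not from the study):

```python
def calibration_figures(blank_mean, blank_sd, slope):
    """Calibration figures of merit for a mass-spectrometric method:
    BEC = mean blank signal / sensitivity (slope),
    LOD = 3 * standard deviation of the blank / sensitivity.
    Units: signals in counts, slope in counts per (ng m-3)."""
    bec = blank_mean / slope
    lod = 3.0 * blank_sd / slope
    return bec, lod

# hypothetical: 500-count blank, SD 12 counts, 1e6 counts per ng m-3
bec, lod = calibration_figures(blank_mean=500.0, blank_sd=12.0, slope=1.0e6)
```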
Effects of Phasor Measurement Uncertainty on Power Line Outage Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Chen; Wang, Jianhui; Zhu, Hao
2014-12-01
Phasor measurement unit (PMU) technology provides an effective tool to enhance the wide-area monitoring systems (WAMSs) in power grids. Although extensive studies have been conducted to develop several PMU applications in power systems (e.g., state estimation, oscillation detection and control, voltage stability analysis, and line outage detection), the uncertainty aspects of PMUs have not been adequately investigated. This paper focuses on quantifying the impact of PMU uncertainty on power line outage detection and identification, in which a limited number of PMUs installed at a subset of buses are utilized to detect and identify the line outage events. Specifically, the line outage detection problem is formulated as a multi-hypothesis test, and a general Bayesian criterion is used for the detection procedure, in which the PMU uncertainty is analytically characterized. We further apply the minimum detection error criterion for the multi-hypothesis test and derive the expected detection error probability in terms of PMU uncertainty. The framework proposed provides fundamental guidance for quantifying the effects of PMU uncertainty on power line outage detection. Case studies are provided to validate our analysis and show how PMU uncertainty influences power line outage detection.
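A minimum-error Bayesian decision over outage hypotheses reduces to picking the hypothesis with the highest posterior. The sketch below assumes Gaussian PMU measurement uncertainty with a covariance shared across hypotheses and invented hypothesis means; the paper's formulation is richer, but the decision rule has this shape:

```python
import numpy as np

def map_outage_hypothesis(z, means, cov, priors):
    """Minimum-error (MAP) decision over outage hypotheses: each
    hypothesis k predicts phasor-angle changes ~ N(means[k], cov),
    where cov models the PMU measurement uncertainty. Returns the
    index of the highest-posterior hypothesis."""
    cov_inv = np.linalg.inv(cov)
    log_post = []
    for mu, prior in zip(means, priors):
        d = z - mu
        log_post.append(np.log(prior) - 0.5 * (d @ cov_inv @ d))
    return int(np.argmax(log_post))

# invented example: "no outage" plus two candidate line outages,
# observed via angle changes at two PMU-equipped buses
means = [np.zeros(2), np.array([0.5, -0.2]), np.array([-0.1, 0.4])]
cov = 0.01 * np.eye(2)          # PMU angle-measurement uncertainty
k = map_outage_hypothesis(np.array([0.48, -0.22]), means, cov,
                          [1 / 3, 1 / 3, 1 / 3])
```

Larger PMU uncertainty (a wider cov) brings the hypothesis likelihoods closer together, which is exactly how measurement uncertainty raises the expected detection error probability.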
Olive tree, Olea europaea L., leaves as a bioindicator of atmospheric PCB contamination.
Sofuoglu, Sait C; Yayla, Burak; Kavcar, Pınar; Ates, Duygu; Turgut, Cafer; Sofuoglu, Aysun
2013-09-01
Olive tree leaf samples were collected to investigate their possible use for biomonitoring of lipophilic toxic substances. The samples were analyzed for 28 polychlorinated biphenyl (PCB) congeners. Twelve congeners were detected in the samples, with detection frequencies ranging from 32 % for PCB-52 to 97 % for PCB-81; PCB-60, 77, 81, 89, 105, 114, and 153 were the most frequently detected congeners. The Σ12PCBs concentration varied from below the detection limit to 248 ng/g wet weight in the sampling area, while the mean congener concentrations ranged from 0.06 ng/g (PCB-128 + 167) to 64.2 ng/g wet weight (PCB-60). Constructed concentration maps showed that olive tree leaves can be employed for the estimation of the spatial distribution of these congeners.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartholomew, Rachel A.; Ozanich, Richard M.; Arce, Jennifer S.
2017-02-01
The goal of this testing was to evaluate the ability of currently available commercial off-the-shelf (COTS) biological indicator tests and immunoassays to detect Bacillus anthracis (Ba) spores and ricin. In general, immunoassays provide more specific identification of biological threats as compared to indicator tests [3]. Many of these detection products are widely used by first responders and other end users. In most cases, performance data for these instruments are supplied directly from the manufacturer, but have not been verified by an external, independent assessment [1]. Our test plan modules included assessments of inclusivity (ability to generate true positive results), commonly encountered hoax powders (which can cause potential interferences or false positives), and estimation of limit of detection (LOD) (sensitivity) testing.
An exploratory wastewater analysis study of drug use in Auckland, New Zealand.
Lai, Foon Yin; Wilkins, Chris; Thai, Phong; Mueller, Jochen F
2017-09-01
New Zealand is considered to have unusual drug use patterns by international standards. However, this understanding has largely been obtained from social surveys where respondents self-report use. The aim of this paper is to conduct the first wastewater study of drug use in Auckland. Wastewater sampling was completed from 2 May to 18 July 2014 at 2 Auckland wastewater treatment plants which service 1.3 million people. Samples were analysed for 17 drug residues by using liquid chromatography-tandem mass spectrometry. Consumption of methamphetamine, 3,4-methylenedioxymethamphetamine (MDMA), cocaine, codeine and methadone (mg/day/1000 people) was estimated by using a back-calculation formula. Methamphetamine, codeine, morphine and methadone were detected with high frequency (80-100%), followed by amphetamine (~60%), MDMA (~7%, i.e. 8 occasions) and methylone (3 occasions). An overall mean of 360 mg of methamphetamine and 60 mg of MDMA was estimated to have been consumed per day per 1000 people. Methamphetamine consumption was found at similar levels in both catchments (377 and 351 mg/day/1000 people). Cocaine was only detected in 1 catchment and on only 8 occasions. JWH-018 was detected in 1 catchment and only on 1 occasion. Methamphetamine, codeine and other opioids were detected at a consistent level throughout the week. 3,4-Methylenedioxymethamphetamine and methylone were detected only during the weekends. Wastewater analysis confirms that methamphetamine was one of the most commonly detected illegal drugs in Auckland and was detected consistently throughout the week. In contrast, cocaine and MDMA were rarely detected, with detection limited to weekends. [Lai FY, Wilkins C, Thai P, Mueller JF. An exploratory wastewater analysis study of drug use in Auckland, New Zealand. Drug Alcohol Rev 2017;00:000-000]. © 2017 Australasian Professional Society on Alcohol and other Drugs.
A game theory approach to target tracking in sensor networks.
Gu, Dongbing
2011-02-01
In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability, but with limited range, and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape detection by maximizing the estimation error. This adversarial behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst-case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented by modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the proposed target tracking algorithm provides satisfactory results.
Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses
Jarvi, Susan I.; Bianchi, Kiara R.
2006-01-01
Population-level studies of genetic diversity can provide information about population structure, individual genetic distinctiveness and former population size. They are especially important for rare and threatened species like the Alala, where they can be used to assess extinction risks and evolutionary potential. In an ideal situation, multiple methods should be used to detect variation, and these methods should be comparable across studies. In this report, we discuss AFLP (Amplified Fragment Length Polymorphism) as a genetic approach for detecting variation in the Alala, describe our findings, and discuss these in relation to mtDNA and microsatellite data reported elsewhere in this same population. AFLP is a technique for DNA fingerprinting that has wide applications. Because little or no prior knowledge of the particular species is required to carry out this method of analysis, AFLP can be used universally across varied taxonomic groups. Within individuals, estimates of diversity or heterozygosity across genomes may be complex because levels of diversity differ between and among genes. One of the more traditional methods of estimating diversity employs the use of codominant markers such as microsatellites. Codominant markers detect each allele at a locus independently. Hence, one can readily distinguish heterozygotes from homozygotes, directly assess allele frequencies and calculate other population-level statistics. Dominant markers (for example, AFLP) are scored as either present or absent (null), so heterozygotes cannot be directly distinguished from homozygotes. However, the presence or absence data can be converted to expected heterozygosity estimates which are comparable to those determined by codominant markers. High allelic diversity and heterozygosity inherent in microsatellites make them excellent tools for studies of wild populations and they have been used extensively.
One limitation to the use of microsatellites is that heterozygosity estimates are affected by the mutation rate at microsatellite loci, thus introducing a bias. Also, the number of loci that can be studied is frequently limited to fewer than 10. This theoretically represents a maximum of one marker for each of 10 chromosomes. Dominant markers like AFLP allow a larger fraction of the genome to be screened. Large numbers of loci can be screened by AFLP to resolve very small individual differences that can be used for identification of individuals, estimates of pairwise relatedness and, in some cases, for parentage analyses. Since AFLP is a dominant marker (it cannot distinguish a +/+ homozygote from a +/- heterozygote), it has limitations for parentage analyses. Only when both parents are homozygous for the absence of alleles (-/-) and offspring show a presence (+/+ or +/-) can the parents be excluded. In this case, microsatellites become preferable, as they have the potential to exclude individual parents when the other parent is unknown. Another limitation of AFLP is that the loci are generally less polymorphic (only two alleles/locus) than microsatellite loci (often >10 alleles/locus). While generally fewer than 10 highly polymorphic microsatellite loci are enough to exclude and assign parentage, it might require up to 100 or more AFLP loci. While there are pros and cons to different methodologies, the total number of loci evaluated by AFLP generally offsets the limitations imposed by the dominant nature of this approach, and end results between methods are generally comparable. Overall objectives of this study were to evaluate the level of genetic diversity in the captive population of Alala, to compare genetic data with currently available pedigree information, and to determine the extent of relatedness of mating pairs and among founding individuals.
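The presence/absence-to-expected-heterozygosity conversion mentioned above can be sketched as follows, assuming Hardy-Weinberg proportions so that the frequency of the band-absent phenotype equals the squared null-allele frequency. The band counts are hypothetical, not the Alala data.

```python
import math

def aflp_expected_heterozygosity(n_present, n_total):
    """Convert dominant (band present/absent) AFLP scores at one locus
    into an expected-heterozygosity estimate, assuming Hardy-Weinberg
    proportions: freq(absent phenotype) = q**2 for the null allele."""
    q = math.sqrt((n_total - n_present) / n_total)  # null-allele frequency
    p = 1.0 - q                                     # band-allele frequency
    return 2.0 * p * q                              # He for a biallelic locus

# Hypothetical example: band absent in 16 of 100 birds
# -> q = 0.4, p = 0.6, He = 2 * 0.6 * 0.4 = 0.48
he = aflp_expected_heterozygosity(84, 100)
```

Averaging this quantity over many AFLP loci gives a genome-wide diversity estimate comparable to microsatellite-based heterozygosity, which is the comparison the report relies on.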
Estimating population trends with a linear model: Technical comments
Sauer, John R.; Link, William A.; Royle, J. Andrew
2004-01-01
Controversy has sometimes arisen over whether there is a need to accommodate the limitations of survey design in estimating population change from the count data collected in bird surveys. Analyses of surveys such as the North American Breeding Bird Survey (BBS) can be quite complex; it is natural to ask if the complexity is necessary, or whether the statisticians have run amok. Bart et al. (2003) propose a very simple analysis involving nothing more complicated than simple linear regression, and contrast their approach with model-based procedures. We review the assumptions implicit to their proposed method, and document that these assumptions are unlikely to be valid for surveys such as the BBS. One fundamental limitation of a purely design-based approach is the absence of controls for factors that influence detection of birds at survey sites. We show that failure to model observer effects in survey data leads to substantial bias in estimation of population trends from BBS data for the 20 species that Bart et al. (2003) used as the basis of their simulations. Finally, we note that the simulations presented in Bart et al. (2003) do not provide a useful evaluation of their proposed method, nor do they provide a valid comparison to the estimating- equations alternative they consider.
Jiao, York; Gipson, Keith E; Bonde, Pramod; Mangi, Abeel; Hagberg, Robert; Rosinski, David J; Gross, Jeffrey B; Schonberger, Robert B
Prolonged use of venoarterial extracorporeal membrane oxygenation (VA ECMO) may be complicated by end-organ dysfunction. Although gaseous microemboli (GME) are thought to damage end organs during cardiopulmonary bypass, patient exposures to GME have not been well characterized during VA ECMO. We therefore performed an observational study of GME in adult VA ECMO patients, with correlation to clinical events during routine patient care. After institutional review board (IRB) approval, we used two Doppler probes to detect GME noninvasively in extracorporeal membrane oxygenation (ECMO) circuits on four patients for 15 hours total while also recording patient care events. We then conducted in vitro trials to compare Doppler signals with gold-standard measurements using an Emboli Detection and Classification (EDAC) quantifier (Luna Innovations, Inc., Roanoke, VA; Terumo Cardiovascular, Ann Arbor, MI) during simulated clinical interventions. Correlations between Doppler and EDAC data were used to estimate GME counts and volumes represented by clinical Doppler data. A total of 503 groups of Doppler peaks representing GME showers were observed, including 194 statistically larger showers during patient care activities containing 92% of total Doppler peaks. Intravenous injections accounted for an estimated 68% of GME and 88% of GME volume, whereas care involving movement accounted for an estimated 6% of GME and 3% of volume. The overall estimated embolic rate of 24,000 GME totaling 4 μL/hr rivals reported GME rates during cardiopulmonary bypass. Numerous GME are present in the postmembrane circuit during VA ECMO, raising concern for effects on the microcirculation and organ dysfunction. Strategies to detect and minimize GME may be warranted to limit embolic exposures experienced by VA ECMO patients.
Naimo, T.J.; Damschen, E.D.; Rada, R.G.; Monroe, E.M.
1998-01-01
In long-lived unionid mussels, many short-term measures of growth are of limited value. Changes in physiological condition may be an early indication of stress, because the increased energy demand associated with stress often results in a depletion of glycogen reserves, the principal storage form of carbohydrates in unionid mussels. Our goal was to nonlethally extract tissue from freshwater mussels and then to develop a rapid and dependable method for the analysis of glycogen in the tissue extracts. A biopsy technique was developed to remove between 5 and 10 mg of foot tissue in Amblema plicata plicata. The survival rate did not differ between biopsied and non-biopsied mussels during a 581-d observation period, demonstrating that the biopsy technique will allow nonlethal evaluation of the physiological condition of individual mussels through measurement of changes in contaminant, genetic, and biochemical indicators in tissue. We also modified the standard alkaline digestion and phenol-sulfuric acid analysis of glycogen for use on the small samples of biopsied tissue and to reduce analysis time and cost. We present quality control data, including method detection limits and estimates of precision and bias. The modified analytical method is rapid and accurate and has a method detection limit of 0.014 mg glycogen. Glycogen content in the biopsied samples was well above the method detection limit; it ranged from 0.09 to 0.36 mg, indicating that the method should be applicable to native mussels.
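A method detection limit of the kind quoted above is conventionally computed from replicate low-level spikes as a one-sided 99th-percentile Student-t multiple of their standard deviation (the EPA-style MDL). A minimal sketch with illustrative glycogen readings, not the study's data:

```python
import statistics

def method_detection_limit(replicates, t_99=3.143):
    """EPA-style MDL: one-sided 99th-percentile Student-t value times the
    standard deviation of replicate low-level spike measurements.
    The default t value is for 7 replicates (6 degrees of freedom)."""
    return t_99 * statistics.stdev(replicates)

# Hypothetical glycogen readings (mg) for 7 replicate low-level spikes
spikes = [0.021, 0.018, 0.024, 0.019, 0.022, 0.020, 0.017]
mdl = method_detection_limit(spikes)
```

Any sample result above `mdl` can be reported as a detection at 99% confidence; results below it are the "nondetects" that censored-data methods must then handle.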
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-01-01
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method. PMID:28383505
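A minimal sketch of the EKF idea described above, assuming a single Hall channel modeled as y = A sin(θ) + kA sin(3θ) with state [A, k] (fundamental amplitude and third-harmonic fraction); the paper's actual two-sensor model, tuning, and position reconstruction step may differ.

```python
import math

def ekf_harmonic(thetas, ys, A0=0.8, k0=0.0, R=1e-4, Q=1e-9):
    """Estimate fundamental amplitude A and third-harmonic fraction k of
    a Hall signal y = A*sin(th) + k*A*sin(3*th) with a scalar-measurement
    EKF over the state x = [A, k], modeled as a slow random walk."""
    A, k = A0, k0
    P = [[1.0, 0.0], [0.0, 1.0]]              # state covariance
    for th, y in zip(thetas, ys):
        s1, s3 = math.sin(th), math.sin(3 * th)
        P[0][0] += Q; P[1][1] += Q            # predict: only P grows
        # linearize h(x) = A*s1 + k*A*s3 around the current estimate
        H = [s1 + k * s3, A * s3]
        S = (H[0] * (P[0][0] * H[0] + P[0][1] * H[1])
             + H[1] * (P[1][0] * H[0] + P[1][1] * H[1]) + R)
        K = [(P[0][0] * H[0] + P[0][1] * H[1]) / S,
             (P[1][0] * H[0] + P[1][1] * H[1]) / S]
        r = y - (A * s1 + k * A * s3)         # innovation
        A += K[0] * r; k += K[1] * r
        # covariance update: P = (I - K H) P
        P = [[(1 - K[0] * H[0]) * P[0][0] - K[0] * H[1] * P[1][0],
              (1 - K[0] * H[0]) * P[0][1] - K[0] * H[1] * P[1][1]],
             [-K[1] * H[0] * P[0][0] + (1 - K[1] * H[1]) * P[1][0],
              -K[1] * H[0] * P[0][1] + (1 - K[1] * H[1]) * P[1][1]]]
    return A, k

# Noise-free synthetic Hall signal with A = 1.0 and a 5% third harmonic
thetas = [0.01 * t for t in range(4000)]
ys = [1.0 * math.sin(th) + 0.05 * 1.0 * math.sin(3 * th) for th in thetas]
A_est, k_est = ekf_harmonic(thetas, ys)
```

Once A and k are estimated, the harmonic term kA sin(3θ) can be subtracted from each raw sample, leaving a clean fundamental from which the mover position follows.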
Montoro, Alegría; Sebastià, Natividad; Candela-Juan, Cristian; Barquinero, Joan Francesc; Soriano, José Miguel; Almonacid, Miguel; Alonso, Oscar; Guasp, Miguel; Marques-Sule, Elena; Cervera, José; Such, Esperanza; Arnal, Clara; Villaescusa, Juan Ignacio
2013-11-01
To survey the possible presence of chromosomal damage and internal contamination in a group of Ukrainian children and adolescents, 20 years after the accident at the Chernobyl Nuclear Power Plant. Cytogenetic procedures were performed according to the dicentric assay in 55 Ukrainian children and adolescents (29 boys and 26 girls) living near Chernobyl. In addition, a whole-body detector and urinalysis were used to detect internal contamination. 36 dicentrics were found in a total of 53,477 metaphases scored in these children, which reflected a frequency of dicentrics below the background level. On the other hand, internal contamination was not detected in any subject studied. Since the estimated absorbed dose is below the detection limit, according to both biological and physical dosimetry, radiation overexposure during the last 3-5 years has not been detected in the considered subjects.
West Nile Virus Range Expansion into British Columbia
Henry, Bonnie; Mak, Sunny; Fraser, Mieke; Taylor, Marsha; Li, Min; Cooper, Ken; Furnell, Allen; Wong, Quantine; Morshed, Muhammad
2010-01-01
In 2009, an expansion of West Nile virus (WNV) into the Canadian province of British Columbia was detected. Two locally acquired cases of infection in humans and 3 cases of infection in horses were detected by ELISA and plaque-reduction neutralization tests. Ten positive mosquito pools were detected by reverse transcription PCR. Most WNV activity in British Columbia in 2009 occurred in the hot and dry southern Okanagan Valley. Virus establishment and amplification in this region was likely facilitated by above average nightly temperatures and a rapid accumulation of degree-days in late summer. Estimated exposure dates for humans and initial detection of WNV-positive mosquitoes occurred concurrently with a late summer increase in Culex tarsalis mosquitoes (which spread western equine encephalitis) in the southern Okanagan Valley. The conditions present during this range expansion suggest that temperature and Cx. tarsalis mosquito abundance may be limiting factors for WNV transmission in this portion of the Pacific Northwest. PMID:20678319
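The degree-day accumulation invoked above is a simple heat sum over a developmental threshold. A sketch using the plain daily min/max average; the 14.3 °C base (a commonly used threshold for WNV extrinsic incubation) and the temperature data are assumptions for illustration, not values from the study.

```python
def degree_days(daily_min_max, base=14.3):
    """Accumulate growing degree-days: for each day, add the amount by
    which the mean of (Tmin, Tmax) exceeds the base temperature (deg C).
    The 14.3 C base for WNV extrinsic incubation is an assumption here."""
    total = 0.0
    for tmin, tmax in daily_min_max:
        mean = (tmin + tmax) / 2.0
        total += max(0.0, mean - base)
    return total

# Hypothetical late-summer week of (Tmin, Tmax) pairs in deg C
week = [(14, 30), (15, 32), (16, 33), (15, 31), (13, 29), (14, 30), (16, 34)]
dd = degree_days(week)
```

Seasonal totals of this index are what the authors mean by a "rapid accumulation of degree-days": warm nights raise the daily mean and hence the sum, shortening the extrinsic incubation period in mosquitoes.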
Wang, Bin; Cancilla, John C; Torrecilla, Jose S; Haick, Hossam
2014-02-12
The use of molecularly modified Si nanowire field effect transistors (SiNW FETs) for selective detection in the liquid phase has been successfully demonstrated. In contrast, selective detection of chemical species in the gas phase has been rather limited. In this paper, we show that the application of artificial intelligence on deliberately controlled SiNW FET device parameters can provide high selectivity toward specific volatile organic compounds (VOCs). The obtained selectivity allows identifying VOCs in both single-component and multicomponent environments as well as estimating the constituent VOC concentrations. The effect of the structural properties (functional group and/or chain length) of the molecular modifications on the accuracy of VOC detection is presented and discussed. The reported results have the potential to serve as a launching pad for the use of SiNW FET sensors in real-world counteracting conditions and/or applications.
Pino, Flavio; Ivandini, Tribidasari A; Nakata, Kazuya; Fujishima, Akira; Merkoçi, Arben; Einaga, Yasuaki
2015-01-01
A simple and reliable enzymatic system for organophosphorus pesticide detection was successfully developed by exploiting the synergy between the collection capacity of magnetic beads and the outstanding electrochemical properties of boron-doped diamond (BDD) electrodes. The determination of an organophosphate pesticide, chlorpyrifos (CPF), was performed based on the inhibition of the enzyme acetylcholinesterase (AChE) bonded to magnetic beads through a biotin-streptavidin complex system. Better sensitivity was found for the system with magnetic beads in the concentration range of 10⁻⁹ to 10⁻⁵ M. The limit of detection based on IC10 (10% AChE inhibition) was estimated and optimized to be 5.7 × 10⁻¹⁰ M CPF. Spiked water samples from Yokohama (Japan) were measured to validate the efficiency of the enzymatic system. The results suggested that the use of magnetic beads to immobilize biomolecules or biosensing agents is suitable for maintaining the superiority of BDD electrodes.
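An IC10-type detection limit is read off the inhibition-versus-concentration curve. A sketch using linear interpolation on a log-concentration axis; the inhibition curve below is invented for illustration and is not the paper's data.

```python
import math

def ic_level(concs, inhibitions, level=10.0):
    """Estimate the concentration giving `level` % enzyme inhibition by
    linear interpolation of % inhibition vs log10(concentration).
    Assumes concentrations ascending and inhibition monotonic."""
    points = list(zip(concs, inhibitions))
    for (c1, i1), (c2, i2) in zip(points, points[1:]):
        if i1 <= level <= i2:
            frac = (level - i1) / (i2 - i1)
            return 10 ** (math.log10(c1)
                          + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("requested level outside the measured range")

# Hypothetical AChE inhibition curve for chlorpyrifos
concs = [1e-10, 1e-9, 1e-8, 1e-7, 1e-6]   # M CPF
inhib = [3.0, 12.0, 35.0, 62.0, 85.0]     # % inhibition
ic10 = ic_level(concs, inhib, 10.0)
```

The same routine with `level=50.0` yields the more familiar IC50; using IC10 as the reporting threshold is what lets the assay quote a sub-nanomolar detection limit.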
A sulfur hexafluoride sensor using quantum cascade and CO2 laser-based photoacoustic spectroscopy.
Rocha, Mila; Sthel, Marcelo; Lima, Guilherme; da Silva, Marcelo; Schramm, Delson; Miklós, András; Vargas, Helion
2010-01-01
The increase in greenhouse gas emissions is a serious environmental problem and has stimulated the scientific community to pay attention to the need for detection and monitoring of gases released into the atmosphere. In this regard, the development of sensitive and selective gas sensors has been the subject of several research programs. An important greenhouse gas is sulfur hexafluoride (SF6), an almost non-reactive gas widely employed in industrial processes worldwide; its radiative forcing is estimated at 0.52 W/m². This work compares two photoacoustic spectrometers, one coupled to a CO2 laser and the other to a quantum cascade (QC) laser, for the detection of SF6. The laser photoacoustic spectrometers described in this work have been developed for gas detection at small concentrations. Detection limits of 20 ppbv for the CO2 laser and 50 ppbv for the quantum cascade laser were obtained.
Dynamical models to explain observations with SPHERE in planetary systems with double debris belts
NASA Astrophysics Data System (ADS)
Lazzoni, C.; Desidera, S.; Marzari, F.; Boccaletti, A.; Langlois, M.; Mesa, D.; Gratton, R.; Kral, Q.; Pawellek, N.; Olofsson, J.; Bonnefoy, M.; Chauvin, G.; Lagrange, A. M.; Vigan, A.; Sissa, E.; Antichi, J.; Avenhaus, H.; Baruffolo, A.; Baudino, J. L.; Bazzon, A.; Beuzit, J. L.; Biller, B.; Bonavita, M.; Brandner, W.; Bruno, P.; Buenzli, E.; Cantalloube, F.; Cascone, E.; Cheetham, A.; Claudi, R. U.; Cudel, M.; Daemgen, S.; De Caprio, V.; Delorme, P.; Fantinel, D.; Farisato, G.; Feldt, M.; Galicher, R.; Ginski, C.; Girard, J.; Giro, E.; Janson, M.; Hagelberg, J.; Henning, T.; Incorvaia, S.; Kasper, M.; Kopytova, T.; LeCoroller, H.; Lessio, L.; Ligi, R.; Maire, A. L.; Ménard, F.; Meyer, M.; Milli, J.; Mouillet, D.; Peretti, S.; Perrot, C.; Rouan, D.; Samland, M.; Salasnich, B.; Salter, G.; Schmidt, T.; Scuderi, S.; Sezestre, E.; Turatto, M.; Udry, S.; Wildi, F.; Zurlo, A.
2018-03-01
Context. A large number of systems harboring a debris disk show evidence for a double-belt architecture. One hypothesis for explaining the gap between the debris belts in these disks is the presence of one or more planets dynamically carving it. For this reason these disks represent prime targets for searching for planets using direct imaging instruments, like the Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) at the Very Large Telescope. Aims. The goal of this work is to investigate this scenario in systems harboring debris disks divided into two components, placed, respectively, in the inner and outer parts of the system. All the targets in the sample were observed with the SPHERE instrument, which performs high-contrast direct imaging, during the SHINE guaranteed time observations. Positions of the inner and outer belts were estimated by spectral energy distribution fitting of the infrared excesses or, when available, from resolved images of the disk. Very few planets have been observed so far in debris disk gaps, and we intended to test whether such non-detections depend on the observational limits of the present instruments. This aim is achieved by deriving theoretical predictions of masses, eccentricities, and semi-major axes of planets able to open the observed gaps and comparing such parameters with detection limits obtained with SPHERE. Methods: The relation between the gap and the planet is due to the chaotic zone neighboring the orbit of the planet. The radial extent of this zone depends on the mass ratio between the planet and the star, on the semi-major axis, and on the eccentricity of the planet, and it can be estimated analytically. We first tested the different analytical predictions using a numerical tool for the detection of chaotic behavior and then selected the best formula for estimating a planet's physical and dynamical properties required to open the observed gap.
We then applied the formalism to the case of a single planet on a circular or eccentric orbit, and then considered multi-planetary systems: two and three equal-mass planets on circular orbits, and two equal-mass planets on eccentric orbits in a packed configuration. As a final step, we compared each pair of values (Mp, ap), derived from the dynamical analysis of the single and multiple planetary models, with the detection limits obtained with SPHERE. Results: For a single planet on a circular orbit we obtain conclusive results that allow us to exclude such a hypothesis, since in most cases this configuration requires massive planets which should have been detected by our observations. The case of a single planet on an eccentric orbit is also unsatisfactory: it requires high masses and/or eccentricities which are still at odds with observations. Introducing multi-planetary architectures is encouraging, because for the case of three packed equal-mass planets on circular orbits we obtain quite low masses for the perturbing planets, which would remain undetected by our SPHERE observations. The case of two equal-mass planets on eccentric orbits is also of interest, since it suggests the possible presence of planets with masses lower than the detection limits and with moderate eccentricity. Our results show that the apparent lack of planets in gaps between double belts could be explained by the presence of a system of two or more planets, possibly of low mass and on eccentric orbits, whose sizes are below the present detection limits. Based on observations collected at Paranal Observatory, ESO (Chile), Program IDs: 095.C-0298, 096.C-0241, 097.C-0865, and 198.C-0209.
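The single-planet circular case can be sketched with Wisdom's chaotic-zone scaling, Δa ≈ 1.3 a_p μ^(2/7), inverted for the mass ratio μ needed to clear the belt-to-belt gap. The belt edges, mid-gap planet placement, and single-formula treatment below are simplifying assumptions for illustration; the paper compares several such prescriptions.

```python
def planet_mass_for_gap(a_in, a_out):
    """Invert Wisdom's chaotic-zone half-width, da = 1.3 * a_p * mu**(2/7),
    to get the mass ratio mu = m_p / m_star of a single planet on a
    circular orbit centred in the gap between two belt edges (au)."""
    a_p = 0.5 * (a_in + a_out)          # planet assumed mid-gap
    half_width = 0.5 * (a_out - a_in)   # chaotic zone must reach each edge
    return (half_width / (1.3 * a_p)) ** 3.5

# Illustrative system: inner belt edge at 10 au, outer at 40 au
mu = planet_mass_for_gap(10.0, 40.0)    # mass ratio for a 1-Msun star
m_jup = mu / 9.54e-4                    # 1 M_Jup ~ 9.54e-4 Msun
```

For a wide gap like this, the required companion lands well above the deuterium-burning limit, illustrating why the single circular planet hypothesis is excluded: such an object would have been detected by SPHERE.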
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles
Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián
2016-01-01
In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR. PMID:27403044
2013-01-01
Background: Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expected to yield more valid findings. Therefore, we aimed to estimate the LRL of quantitative G6PD detection in healthy term neonates by using statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods: This cross-sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD deficiency. Neonates that satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and fathers of female neonates were from the same mothers’ tribe. The G6PD activity (Units/gram Hemoglobin) was measured spectrophotometrically by an automated kit. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in the presence and absence of outliers. Results: 207 male and 188 female term neonates who had cord blood quantitative G6PD testing met the inclusion criteria. Horn's method detected 20 G6PD values as outliers (8 males and 12 females). Distributions of quantitative cord G6PD values exhibited a normal distribution only in the absence of the outliers. The Harris-Boyd method and proportion criteria revealed that combined-gender LRLs were reliable.
The combined bootstrap LRL in the presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in the absence of the outliers was 11.0 (95% CI: 10.5-11.3). Conclusion: These results contribute to the LRL of quantitative cord G6PD detection in full-term neonates. They are transferable to another laboratory when pre-analytical factors and testing methods are comparable and the IFCC-CLSI requirements of transference are satisfied. We suggest using the LRL estimated in the absence of the outliers, since mislabeling G6PD-deficient neonates as normal is intolerable whereas mislabeling G6PD-normal neonates as deficient is tolerable. PMID:24016342
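The nonparametric (bootstrap) LRL reported above can be sketched as the empirical 2.5th percentile with a percentile-bootstrap 95% CI. The data below are synthetic draws from an assumed roughly normal activity distribution, not the study's cord-blood measurements, and the quantile convention is one of several CLSI-compatible choices.

```python
import random

def bootstrap_lrl(values, n_boot=2000, pctile=0.025, seed=1):
    """Lower reference limit as the 2.5th percentile of the sample,
    with a percentile-bootstrap 95% confidence interval."""
    rng = random.Random(seed)

    def q(data, p):  # linear-interpolation quantile
        s = sorted(data)
        idx = p * (len(s) - 1)
        lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])

    point = q(values, pctile)
    boots = sorted(q([rng.choice(values) for _ in values], pctile)
                   for _ in range(n_boot))
    return point, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

# Synthetic cord-G6PD activities (U/g Hb), roughly normal around 15
rng = random.Random(0)
g6pd = [rng.gauss(15.0, 2.0) for _ in range(395)]
lrl, ci = bootstrap_lrl(g6pd)
```

The parametric alternative used by the study (mean minus 1.96 SD after confirming normality) would target the same quantity; the bootstrap version makes no distributional assumption, which is why it is preferred when outliers are retained.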
Gaylor, David W; Lutz, Werner K; Conolly, Rory B
2004-01-01
Statistical analyses of nonmonotonic dose-response curves are proposed, experimental designs to detect low-dose effects of J-shaped curves are suggested, and sample sizes are provided. For quantal data such as cancer incidence rates, much larger numbers of animals are required than for continuous data such as biomarker measurements. For example, 155 animals per dose group are required to have at least an 80% chance of detecting a decrease from a 20% incidence in controls to an incidence of 10% at a low dose. For a continuous measurement, only 14 animals per group are required to have at least an 80% chance of detecting a change of the mean by one standard deviation of the control group. Experimental designs based on three dose groups plus controls are discussed to detect nonmonotonicity or to estimate the zero equivalent dose (ZED), i.e., the dose that produces a response equal to the average response in the controls. Cell proliferation data in the nasal respiratory epithelium of rats exposed to formaldehyde by inhalation are used to illustrate the statistical procedures. Statistically significant departures from a monotonic dose response were obtained for time-weighted average labeling indices with an estimated ZED at a formaldehyde dose of 5.4 ppm, with a lower 95% confidence limit of 2.7 ppm. It is concluded that demonstration of a statistically significant bi-phasic dose-response curve, together with estimation of the resulting ZED, could serve as a point-of departure in establishing a reference dose for low-dose risk assessment.
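The 155-animals-per-group figure for quantal data is consistent with a standard two-proportion power calculation. A sketch using the normal approximation with one-sided α = 0.05 and 80% power; the published number may rest on a slightly different approximation (e.g., a continuity correction or arcsine transform), so treat the exact value as illustrative.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80, one_sided=True):
    """Normal-approximation sample size per group to detect a change in
    incidence from p1 (controls) to p2 (low dose) at the given alpha
    and power, using the pooled-variance two-proportion formula."""
    z = NormalDist().inv_cdf
    za = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    zb = z(power)
    pbar = (p1 + p2) / 2
    num = (za * math.sqrt(2 * pbar * (1 - pbar))
           + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Detecting a drop from 20% incidence in controls to 10% at a low dose
n_quantal = n_per_group(0.20, 0.10)   # close to the ~155/group quoted above
```

The same machinery explains the contrast with continuous endpoints: shifting a mean by one control-group standard deviation is a much larger standardized effect than a 20%-to-10% incidence drop, hence the far smaller group size.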
Review of Methods and Algorithms for Dynamic Management of CBRNE Collection Assets
2013-07-01
where they should be looking. An example sensor is a satellite with a limited energy budget, which may have power to operate, say, only 10 percent of... calculations by incorporating sensor data with initial dispersion estimates. DTRA and the Joint Science and Technology Office for Chem-Bio Defense (JSTO-CBD)... detection performance through remote processing and fusion of sensor data and modeling of the operational environment. DTRA is actively developing
Nanoparticle size detection limits by single particle ICP-MS for 40 elements.
Lee, Sungyun; Bi, Xiangyu; Reed, Robert B; Ranville, James F; Herckes, Pierre; Westerhoff, Paul
2014-09-02
The quantification and characterization of natural, engineered, and incidental nano- to micro-size particles are beneficial to assessing a nanomaterial's performance in manufacturing, their fate and transport in the environment, and their potential risk to human health. Single particle inductively coupled plasma mass spectrometry (spICP-MS) can sensitively quantify the amount and size distribution of metallic nanoparticles suspended in aqueous matrices. To accurately obtain the nanoparticle size distribution, it is critical to have knowledge of the size detection limit (denoted as Dmin) using spICP-MS for a wide range of elements (other than a few available assessed ones) that have been or will be synthesized into engineered nanoparticles. Herein is described a method to estimate the size detection limit using spICP-MS and then apply it to nanoparticles composed of 40 different elements. The calculated Dmin values correspond well for a few of the elements with their detectable sizes that are available in the literature. Assuming each nanoparticle sample is composed of one element, Dmin values vary substantially among the 40 elements: Ta, U, Ir, Rh, Th, Ce, and Hf showed the lowest Dmin values, ≤10 nm; Bi, W, In, Pb, Pt, Ag, Au, Tl, Pd, Y, Ru, Cd, and Sb had Dmin in the range of 11-20 nm; Dmin values of Co, Sr, Sn, Zr, Ba, Te, Mo, Ni, V, Cu, Cr, Mg, Zn, Fe, Al, Li, and Ti were located at 21-80 nm; and Se, Ca, and Si showed high Dmin values, greater than 200 nm. A range of parameters that influence the Dmin, such as instrument sensitivity, nanoparticle density, and background noise, is demonstrated. It is observed that, when the background noise is low, the instrument sensitivity and nanoparticle density dominate the Dmin significantly. Approaches for reducing the Dmin, e.g., collision cell technology (CCT) and analyte isotope selection, are also discussed. 
To validate the Dmin estimation approach, size distributions for three engineered nanoparticle samples were obtained using spICP-MS. The use of this methodology confirms that the observed minimum detectable sizes are consistent with the calculated Dmin values. Overall, this work identifies the elements and nanoparticles to which current spICP-MS approaches can be applied, in order to enable quantification of very small nanoparticles at low concentrations in aqueous media.
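The geometry behind a mass-based size detection limit can be sketched as follows: assuming a spherical, single-element particle, the smallest detectable diameter follows from the per-event mass detection limit and the material density. The mass limit and density below are illustrative placeholders, not values from the study.

```python
import math

def dmin_nm(mass_detection_limit_fg, density_g_cm3, mass_fraction=1.0):
    """Estimate the minimum detectable particle diameter (nm) for spICP-MS.

    Assumes a spherical particle: the smallest detectable particle is the one
    whose analyte mass equals the per-event mass detection limit.
    mass_detection_limit_fg : smallest analyte mass (fg) distinguishable from background
    density_g_cm3           : bulk density of the particle material
    mass_fraction           : analyte mass fraction in the particle (1.0 for a pure element)
    """
    mass_g = mass_detection_limit_fg * 1e-15 / mass_fraction
    volume_cm3 = mass_g / density_g_cm3
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)  # sphere: V = pi*d^3/6
    return d_cm * 1e7  # cm -> nm

# Illustrative only: a hypothetical 0.1 fg mass limit for gold (19.3 g/cm^3)
print(round(dmin_nm(0.1, 19.3), 1))  # → 21.5
```

Because diameter scales with the cube root of the mass limit, a tenfold better mass sensitivity only halves Dmin, which is why element-to-element differences in sensitivity and density translate into the wide Dmin range reported above.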
Pickering, John W; Than, Martin P; Cullen, Louise; Aldous, Sally; Ter Avest, Ewoud; Body, Richard; Carlton, Edward W; Collinson, Paul; Dupuy, Anne Marie; Ekelund, Ulf; Eggers, Kai M; Florkowski, Christopher M; Freund, Yonathan; George, Peter; Goodacre, Steve; Greenslade, Jaimi H; Jaffe, Allan S; Lord, Sarah J; Mokhtari, Arash; Mueller, Christian; Munro, Andrew; Mustapha, Sebbane; Parsonage, William; Peacock, W Frank; Pemberton, Christopher; Richards, A Mark; Sanchis, Juan; Staub, Lukas P; Troughton, Richard; Twerenbold, Raphael; Wildi, Karin; Young, Joanna
2017-05-16
High-sensitivity assays for cardiac troponin T (hs-cTnT) are sometimes used to rapidly rule out acute myocardial infarction (AMI). The aim was to estimate the ability of a single hs-cTnT concentration below the limit of detection (<0.005 µg/L), combined with a nonischemic electrocardiogram (ECG), to rule out AMI in adults presenting to the emergency department (ED) with chest pain. EMBASE and MEDLINE were searched without language restrictions (1 January 2008 to 14 December 2016). Included were cohort studies of adults presenting to the ED with possible acute coronary syndrome in whom an ECG and hs-cTnT measurements were obtained and AMI outcomes were adjudicated during initial hospitalization. Investigators of the studies provided data on the number of low-risk patients (no new ischemia on ECG and hs-cTnT measurements <0.005 µg/L) and the number who had AMI during hospitalization (primary outcome) or a major adverse cardiac event (MACE) or death within 30 days (secondary outcomes), by risk classification (low or not low risk). Two independent epidemiologists rated the risk of bias of the studies. Of 9241 patients in 11 cohort studies, 2825 (30.6%) were classified as low risk. Fourteen (0.5%) low-risk patients had AMI. Sensitivity of the risk classification for AMI ranged from 87.5% to 100% in individual studies. Pooled estimated sensitivity was 98.7% (95% CI, 96.6% to 99.5%). Sensitivity for 30-day MACEs ranged from 87.9% to 100%; pooled sensitivity was 98.0% (CI, 94.7% to 99.3%). No low-risk patients died. Limitations include the small number of studies, variation in timing and methods of reference standard troponin tests, and heterogeneity of risk and prevalence of AMI across studies. A single hs-cTnT concentration below the limit of detection, in combination with a nonischemic ECG, may successfully rule out AMI in patients presenting to EDs with possible acute coronary syndrome. Primary funding source: Emergency Care Foundation.
Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.
Bauer, Martin; Trahms, Lutz; Sander, Tilmann
2015-04-01
The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The magnetometer system is included because it is expected to be more sensitive to brain stem sources than a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strengths, even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted to a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.
Sánchez, Eduardo Munera; Alcobendas, Manuel Muñoz; Noguera, Juan Fco. Blanes; Gilabert, Ginés Benet; Simó Ten, José E.
2013-01-01
This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation that has been developed for the RoboCup Standard Platform League environment. Firstly, a complete vision system has been implemented in the Nao robot platform that enables the detection of relevant field markers. The detection of field markers provides some estimation of distances for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures have been developed and described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games have been developed: ranging from degradation in position estimation to total loss of position (due to falls, ‘kidnapped robot’, or penalization). The self-localization method developed is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy. Our approach reduces the CPU computing time required for each iteration and so eases the limited resource availability problem that is common in robot platforms such as Nao. The experimental results show the quality of the new algorithm in terms of localization and CPU time consumption. PMID:24193098
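The particle selection (resampling) step whose CPU cost the paper targets can be sketched generically. The systematic low-variance scheme below is a common O(N) choice, shown for illustration; it is not the paper's specific selection strategy.

```python
import random

def resample_low_variance(particles, weights):
    """Systematic (low-variance) resampling for a particle filter.

    Draws one random offset and then takes N evenly spaced points through the
    cumulative weight distribution, so the whole selection step is a single
    O(N) pass -- a common way to keep the per-iteration CPU cost low on
    resource-limited platforms.
    """
    n = len(particles)
    total = sum(weights)
    step = total / n
    start = random.uniform(0.0, step)
    out, cum, i = [], weights[0], 0
    for k in range(n):
        u = start + k * step           # k-th evenly spaced sample point
        while u > cum:                 # advance to the particle covering u
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

# A particle carrying all the weight is selected every time
print(resample_low_variance(["a", "b", "c", "d"], [0.0, 1.0, 0.0, 0.0]))
```

With uniform weights the scheme returns each particle exactly once, which illustrates its low variance compared with independent multinomial draws.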
Fast clustering using adaptive density peak detection.
Wang, Xiao-Feng; Xu, Yifan
2017-12-01
Common limitations of clustering methods include slow algorithm convergence, instability under the pre-specification of a number of intrinsic parameters, and lack of robustness to outliers. A recent clustering approach proposed a fast search algorithm for cluster centers based on their local densities. However, the selection of the key intrinsic parameters in the algorithm was not systematically investigated. It is relatively difficult to estimate the "optimal" parameters since the original definition of the local density in the algorithm is based on a truncated counting measure. In this paper, we propose a clustering procedure with adaptive density peak detection, where the local density is estimated through nonparametric multivariate kernel estimation. The model parameters can then be calculated from equations with theoretical statistical justification. We also develop an automatic cluster centroid selection method through maximizing an average silhouette index. The advantage and flexibility of the proposed method are demonstrated through simulation studies and the analysis of a few benchmark gene expression data sets. The method runs in a single step without iteration; it is therefore fast and has great potential for big data analysis. A user-friendly R package ADPclust is developed for public use.
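The density-peak idea, with a Gaussian kernel in place of the original truncated counting measure, can be sketched as follows. The bandwidth and the "large density × large delta" center criterion here are illustrative simplifications, not the paper's adaptive kernel estimates or silhouette-based selection.

```python
import math

def density_peaks(points, bandwidth=1.0):
    """Density-peak statistics for each point:
    rho   -- Gaussian-kernel local density (smooth analogue of the truncated count)
    delta -- distance to the nearest point of strictly higher density
    Cluster centers are points where both rho and delta are large.
    """
    n = len(points)
    dist = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(math.exp(-(dist[i][j] / bandwidth) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # the global density maximum gets the largest distance by convention
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

On two well-separated point clouds, ranking points by the product rho × delta puts one point from each cloud at the top, which is the decision rule the density-peak approach exploits.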
Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas.
Schires, Elliott; Georgiou, Pantelis; Lande, Tor Sverre
2018-04-01
Radar devices can be used nonintrusively to monitor vital signs through clothes or behind walls. By detecting and extracting body motion linked to physiological activity, accurate simultaneous estimates of both heart rate (HR) and respiration rate (RR) are possible. However, most research to date has focused on front monitoring of superficial chest motion. In this paper, body penetration of electromagnetic (EM) waves is investigated to perform back monitoring of human subjects. Using body-coupled antennas and an ultra-wideband (UWB) pulsed radar, in-body monitoring of lung and heart motion was achieved. An optimised measurement location on the back of a subject is presented to enhance the signal-to-noise ratio and limit attenuation of reflected radar signals. Phase-based detection techniques are then investigated for back measurements of vital signs, in conjunction with frequency estimation methods that reduce the impact of parasitic signals. Finally, an algorithm combining these techniques is presented to allow robust, real-time estimation of both HR and RR. Static and dynamic tests were conducted and demonstrated the possibility of using this sensor in future health monitoring systems, especially in the form of a smart car seat for driver monitoring.
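The frequency-estimation step behind HR/RR extraction can be sketched as a band-limited search for the dominant DFT peak of a motion signal; restricting the search to a physiological band is one simple way to reject out-of-band parasitic components. This is a generic illustration, not the paper's combined algorithm.

```python
import cmath
import math

def dominant_rate_bpm(signal, fs_hz, lo_hz, hi_hz):
    """Estimate a periodic rate (beats/breaths per minute) as the dominant
    DFT peak of a motion signal within a physiological frequency band,
    e.g. roughly 0.1-0.7 Hz for respiration or 0.8-3 Hz for heartbeat."""
    n = len(signal)
    mean = sum(signal) / n
    best_f, best_mag = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs_hz / n
        if lo_hz <= f <= hi_hz:
            # DFT bin k of the mean-removed signal
            s = sum((signal[i] - mean) * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
            if abs(s) > best_mag:
                best_f, best_mag = f, abs(s)
    return best_f * 60.0
```

For example, a pure 0.3 Hz respiration-like sinusoid sampled at 20 Hz for 10 s yields 18 breaths per minute.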
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address whether and how these difficulties can be overcome for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we use the Bayesian Information Criterion. We find that the computational model strikes a better balance than a conventional logistic model. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, integrating psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
Chatziprodromidou, I P; Apostolou, T
2018-04-01
The aim of the study was to estimate the sensitivity and specificity of enzyme-linked immunosorbent assay (ELISA) and immunoblot (IB) for detecting antibodies to Neospora caninum in dairy cows, in the absence of a gold standard. The study complies with STRADAS-paratuberculosis guidelines for reporting test accuracy. We first tried to apply Bayesian models that do not require conditional independence of the tests under evaluation, but as convergence problems appeared, we used a Bayesian methodology that does not assume conditional dependence of the tests. Informative prior probability distributions were constructed based on scientific inputs regarding the sensitivity and specificity of the IB test and the prevalence of disease in the studied populations. IB sensitivity and specificity were estimated to be 98.8% and 91.3%, respectively, while the respective estimates for ELISA were 60% and 96.7%. A sensitivity analysis, in which modified prior probability distributions concerning IB diagnostic accuracy were applied, showed a limited effect on the posterior estimates. We concluded that ELISA can be used first to screen bulk milk, and IB can then be applied whenever needed.
Sources of variation in detection of wading birds from aerial surveys in the Florida Everglades
Conroy, M.J.; Peterson, J.T.; Bass, O.L.; Fonnesbeck, C.J.; Howell, J.E.; Moore, C.T.; Runge, J.P.
2008-01-01
We conducted dual-observer trials to estimate detection probabilities (probability that a group that is present and available is detected) for fixed-wing aerial surveys of wading birds in the Everglades system, Florida. Detection probability ranged from <0.2 to ~0.75 and varied according to species, group size, observer, and the observer's position in the aircraft (front or rear seat). Aerial-survey simulations indicated that incomplete detection can have a substantial effect on assessment of population trends, particularly over relatively short intervals (<= 3 years) and small annual changes in population size (<= 3%). We conclude that detection bias is an important consideration for interpreting observations from aerial surveys of wading birds, potentially limiting the use of these data for comparative purposes and trend analyses. We recommend that workers conducting aerial surveys for wading birds endeavor to reduce observer and other controllable sources of detection bias and account for uncontrollable sources through incorporation of dual-observer or other calibration methods as part of survey design (e.g., using double sampling).
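A dual-observer trial yields detection probabilities through a capture-recapture argument, sketched below under the simplifying assumption of independent, homogeneous detections (the study itself models species, group size, observer, and seat effects, which this sketch omits).

```python
def dual_observer_detection(seen_front_only, seen_rear_only, seen_both):
    """Lincoln-Petersen-style estimates from a dual-observer trial.

    Under independent detections, the front observer's detection probability
    is the fraction of the rear observer's groups that the front observer
    also saw (and vice versa). Returns (p_front, p_rear, estimated_total).
    """
    x1 = seen_front_only + seen_both   # all groups detected by the front observer
    x2 = seen_rear_only + seen_both    # all groups detected by the rear observer
    p_front = seen_both / x2
    p_rear = seen_both / x1
    total = x1 * x2 / seen_both        # Lincoln-Petersen estimate of groups present
    return p_front, p_rear, total

# Hypothetical counts: 20 front-only, 10 rear-only, 30 seen by both
print(dual_observer_detection(20, 10, 30))
```

With these hypothetical counts the front and rear detection probabilities are 0.75 and 0.60, and about 67 groups are estimated present although only 60 were detected by anyone, which is exactly the incomplete-detection correction the abstract argues for.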
A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.
NASA Astrophysics Data System (ADS)
Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.
1996-02-01
The authors have developed a method based on wavelet transforms (WT) to efficiently detect sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to obtain upper limits on the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background plus sources were used to test the overall algorithm performance, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm's sensitivity. Actual PSPC images of galaxies and star clusters show that the algorithm performs well even for extended sources and crowded fields.
Multifunctional Hyperbolic Nanogroove Metasurface for Submolecular Detection.
Jiang, Li; Zeng, Shuwen; Xu, Zhengji; Ouyang, Qingling; Zhang, Dao-Hua; Chong, Peter Han Joo; Coquet, Philippe; He, Sailing; Yong, Ken-Tye
2017-08-01
Metasurfaces serve as a promising plasmonic sensing platform for engineering enhanced light-matter interactions. Here, a hyperbolic metasurface with a subwavelength nanogroove structure is designed. This metasurface is able to modify the wavefront and wavelength of the surface plasmon wave by varying the nanogroove width or periodicity. At a specific optical frequency, surface plasmon polaritons are tightly confined and propagate diffraction-free due to the epsilon-near-zero effect. Most importantly, the nanogroove hyperbolic metasurface can enhance plasmonic sensing with an ultrahigh phase sensitivity of 30,373 deg RIU⁻¹ and a Goos-Hänchen shift sensitivity of 10.134 mm RIU⁻¹. A detection resolution of 10⁻⁸ RIU for the refractive index change of a glycerol solution is achieved based on the phase measurement. The detection limit for the bovine serum albumin (BSA) molecule is measured as low as 0.1 × 10⁻¹⁸ M (1 × 10⁻¹⁹ mol L⁻¹), which corresponds to a submolecular detection level (0.13 BSA mm⁻²). For the low-weight biotin molecule, the detection limit is estimated below 1 × 10⁻¹⁵ M (1 × 10⁻¹⁵ mol L⁻¹, 1300 biotin mm⁻²). This plasmonic sensing performance is two orders of magnitude higher than that of current state-of-the-art plasmonic metamaterials and metasurfaces. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function: a sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
NASA Astrophysics Data System (ADS)
Yadollahi, Azadeh
Tracheal respiratory sounds analysis has been investigated as a non-invasive method to estimate respiratory flow and upper airway obstruction. However, the flow-sound relationship is highly variable among subjects, which makes it challenging to estimate flow in general applications. Therefore, a robust model for acoustical flow estimation in a large group of individuals did not exist before. On the other hand, a major application of acoustical flow estimation is to detect flow limitations in patients with obstructive sleep apnea (OSA) during sleep. However, previously the flow-sound relationship was only investigated during wakefulness among healthy individuals. Therefore, it was necessary to examine the flow-sound relationship during sleep in OSA patients. This thesis takes on the above challenges and offers innovative solutions. First, a modified linear flow-sound model was proposed to estimate respiratory flow from tracheal sounds. To remove the individual-based calibration process, the statistical correlation between the model parameters and anthropometric features of 93 healthy volunteers was investigated. The results show that gender, height and smoking are the most significant factors that affect the model parameters. Hence, a general acoustical flow estimation model was proposed for people of similar height and gender. Second, the flow-sound relationship during sleep and wakefulness was studied among 13 OSA patients. The results show that during sleep and wakefulness, the flow-sound relationship follows a power law, but with different parameters. Therefore, for acoustical flow estimation during sleep, the model parameters should be extracted from sleep data to keep errors small. The results confirm the reliability of acoustical flow estimation for investigating flow variations during both sleep and wakefulness.
Finally, a new method for sleep apnea detection and monitoring was developed, which only requires recording the tracheal sounds and blood oxygen saturation (SaO2) data. It automatically classifies the sound segments into breath, snore and noise. A weighted average of features extracted from the sound segments and the SaO2 signal was used to detect apnea and hypopnea events. The performance of the proposed approach was evaluated on data from 66 patients. The results show a high correlation (0.96, p < 0.0001) between the outcomes of our system and those of polysomnography. Also, the sensitivity and specificity of the proposed method in differentiating simple snorers from OSA patients were found to be more than 91%. These results are superior or comparable to those of existing commercialized portable sleep apnea monitors.
The detectability of brown dwarfs - Predictions and uncertainties
NASA Technical Reports Server (NTRS)
Nelson, L. A.; Rappaport, S.; Joss, P. C.
1993-01-01
In order to determine the likelihood for the detection of isolated brown dwarfs in ground-based observations as well as in future space-based astronomy missions, and in order to evaluate the significance of any detections that might be made, we must first know the expected surface density of brown dwarfs on the celestial sphere as a function of limiting magnitude, wavelength band, and Galactic latitude. It is the purpose of this paper to provide theoretical estimates of this surface density, as well as the range of uncertainty in these estimates resulting from various theoretical uncertainties. We first present theoretical cooling curves for low-mass stars that we have computed with the latest version of our stellar evolution code. We use our evolutionary results to compute theoretical brown-dwarf luminosity functions for a wide range of assumed initial mass functions and stellar birth rate functions. The luminosity functions, in turn, are utilized to compute theoretical surface density functions for brown dwarfs on the celestial sphere. We find, in particular, that for reasonable theoretical assumptions, the currently available upper bounds on the brown-dwarf surface density are consistent with the possibility that brown dwarfs contribute a substantial fraction of the mass of the Galactic disk.
Auditory performance in an open sound field
NASA Astrophysics Data System (ADS)
Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy
2003-04-01
Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions, such as the type of sound, distance to the sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources located 25 to 800 m from the listening position. Data were also collected on meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m, and that listeners grossly underestimated distances. Specific results will be presented.
Copper detection in the Asiatic clam Corbicula fluminea: optimum valve closure response.
Tran, Damien; Fournier, Elodie; Durrieu, Gilles; Massabuau, Jean-Charles
2004-02-25
When exposed to a contaminant, bivalves close their shell as a protective strategy. The aim of the present study was to estimate the maximum expected dissolved copper sensitivity in the freshwater bivalve Corbicula fluminea, using a new approach to determine their potential and limits for detecting contaminants. To take into account the rate of spontaneous closures, we addressed the stress problems associated with fixation by a valve in usual valvometers and the spontaneous rhythm associated with nycthemeral activity, optimizing the response for conditions where the probability of spontaneous closing was lowest. Moreover, we used an original impedance valvometry system, with lightweight impedance electrodes, to study free-ranging animals in low-stress conditions, combined with an analytical approach describing dose-response curves by logistic regression, with valve closure reaction as a function of response time and concentration of contaminant. In C. fluminea, we estimated that copper concentrations >4 microg/l (95% confidence interval (CI95%), 2.3-8.8 microg/l) must be detected within 5 h after Cu addition. Lower values could not be distinguished from background noise. The threshold values were 2.5 times lower than the values reported in the literature.
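The logistic dose-response description can be sketched as follows; a detection threshold then falls out of the fitted curve by inverting it at a chosen response probability. The coefficients below are hypothetical placeholders, not the fitted values from the study.

```python
import math

def response_prob(conc_ug_l, b0, b1):
    """Two-parameter logistic dose-response on log concentration:
    P(valve closure | concentration). b0, b1 are regression coefficients."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(conc_ug_l))))

def detection_threshold(p, b0, b1):
    """Concentration at which the closure probability reaches p
    (inverse of the logistic curve above)."""
    return math.exp((math.log(p / (1.0 - p)) - b0) / b1)

# Hypothetical coefficients for illustration only
b0, b1 = -2.0, 1.4
print(round(detection_threshold(0.5, b0, b1), 2))  # concentration at 50% closure, in µg/l
```

Inverting the curve at different probabilities (e.g. 0.5 vs 0.95) is what lets a threshold like ">4 µg/l within 5 h" be stated with a confidence interval rather than as a raw observation.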
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common post-processing procedure that groups the point cloud into a number of clusters to simplify the data for the subsequent modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimating the normal at each point, which prevents errors in normal estimation from propagating into the segmentation. Both an indoor and an outdoor scene are used in an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data
Lux, Markus; Kruger, Jan; Rinke, Christian; ...
2016-12-20
A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data.
In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
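The reference-free oligonucleotide signature that such tools cluster can be sketched as a normalized 4-mer frequency vector; this is a minimal illustration of the idea, not acdc's actual feature pipeline (which adds dimensionality reduction and automatic cluster-number estimation).

```python
from itertools import product

def tetranucleotide_signature(seq):
    """Normalized 4-mer frequency vector (length 256) of a DNA sequence.

    Sequences from different organisms tend to have distinct signatures,
    so clustering these vectors can separate contaminant contigs from the
    target genome without any reference database.
    """
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]
    counts = {k: 0 for k in kmers}
    seq = seq.upper()
    n = 0
    for i in range(len(seq) - 3):
        k = seq[i:i + 4]
        if k in counts:          # skip windows containing N or other ambiguity codes
            counts[k] += 1
            n += 1
    return [counts[k] / n if n else 0.0 for k in kmers]
```

Each contig maps to one such vector; a distance between vectors (e.g. Euclidean) then feeds any standard clustering algorithm.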
acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lux, Markus; Kruger, Jan; Rinke, Christian
A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aidmore » the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically profound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data. 
In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
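A toy sketch of the reference-free idea described above (hypothetical, not acdc's actual implementation): contigs are summarized by normalized oligonucleotide (k-mer) frequency signatures, and an unknown contig is assigned to the nearest signature, which is the nearest-centroid step of a clustering approach.

```python
from itertools import product
from collections import Counter

def kmer_signature(seq, k=4):
    """Normalized k-mer frequency vector over the A/C/G/T alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Two synthetic "genomes" with distinct compositional signatures.
target = "ATGC" * 500        # balanced composition
contaminant = "GGCC" * 500   # GC-rich composition

sig_t = kmer_signature(target)
sig_c = kmer_signature(contaminant)

# Assign an unknown contig to the closer signature.
unknown = "GGCC" * 100
d_t = euclidean(kmer_signature(unknown), sig_t)
d_c = euclidean(kmer_signature(unknown), sig_c)
label = "contaminant" if d_c < d_t else "target"
```

Real tools operate on such signatures after non-linear dimensionality reduction and with clustering algorithms that estimate the number of clusters automatically; this sketch only shows why composition alone can separate a contaminant from the target genome.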
[Estimation of time detection limit for human cytochrome b in females of Lutzomyia evansi].
Vergara, José Gabriel; Verbel-Vergara, Daniel; Montesino, Ana Milena; Pérez-Doria, Alveiro; Bejarano, Eduar Elías
2017-03-29
Molecular biology techniques have allowed a better knowledge of sources of blood meals in vector insects. However, the usefulness of these techniques depends on both the quantity of ingested blood and the digestion process in the insect. To identify the time limit for detection of the human cytochrome b (Cyt b) gene in experimentally fed females of Lutzomyia evansi. Eight groups of L. evansi females were fed on human blood and sacrificed at intervals of 24 hours post-ingestion. Total DNA was extracted from each female and a segment of 358 bp of Cyt b was amplified. In order to eliminate false positives, amplification products were subjected to a restriction fragment length polymorphism (RFLP) analysis. The human Cyt b gene segment was detected in 86% (49/57) of the females of L. evansi, from 0 to 168 hours after blood ingestion. In 7% (4/57) of the individuals we amplified insect DNA, while in the remaining 7%, the band of interest was not amplified. We did not find any statistical differences between groups of females sacrificed at different times post-blood meal regarding the amplification of the human Cyt b gene segment or the number of samples amplified. The human Cyt b gene segment was detectable in L. evansi females up to 168 hours after blood ingestion.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and can comprehensively evaluate medical imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared with expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing precise comparison of imaging systems and conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory because of system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
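A hedged sketch of the Hotelling detectability computation from channelized data (synthetic channel outputs are used here; the paper's model operates on channelized image data from the angiography systems): the detectability index is d = sqrt(Δv' S⁻¹ Δv), with Δv the mean difference of channel outputs between signal-present and signal-absent images and S the pooled channel covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical channelized data: n trials x nch channel outputs for
# signal-present and signal-absent images (e.g., disk present/absent).
n, nch = 2000, 6
absent = rng.normal(0.0, 1.0, size=(n, nch))
present = rng.normal(0.5, 1.0, size=(n, nch))  # mean shift = the disk signal

# Hotelling detectability index from the channelized samples.
dv = present.mean(axis=0) - absent.mean(axis=0)
S = 0.5 * (np.cov(present, rowvar=False) + np.cov(absent, rowvar=False))
di = float(np.sqrt(dv @ np.linalg.solve(S, dv)))
```

With independent unit-variance channels and a 0.5 mean shift, the expected DI is roughly sqrt(6 × 0.25) ≈ 1.22; the sample estimate fluctuates around that value.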
Acosta-Pérez, Gabriel; Rodríguez-Ábrego, Gabriela; Longoria-Revilla, Ernesto; Castro-Mussot, María Eugenia
2012-01-01
To estimate the prevalence of methicillin-resistant Staphylococcus aureus (MRSA) in clinical isolates and to compare different methods for detection of MRSA in a laboratory with limited personnel and resources. 140 Staphylococcus aureus strains isolated from patients in several departments were assayed for β-lactamase production, MIC-Vitek 2 oxacillin, ChromID MRSA, disk diffusion in agar for cefoxitin 30 μg, and PBP2a detection. The results of conventional tests were compared with the "gold standard" PCR test for the mecA gene. Cohen's kappa index was also calculated in order to evaluate the agreement between the methods used. The observed prevalence was 90.7%. Sensitivity and specificity were: disk diffusion for cefoxitin 97 and 92%, respectively; MIC Vitek 2-XL 97 and 69%; ChromID MRSA 97 and 85%; and PBP2a detection 98 and 100%. All methods performed very well for detecting MRSA; the choice of method will depend on each laboratory's infrastructure.
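The sensitivity, specificity, and Cohen's kappa reported above are computed from a 2×2 table against the PCR gold standard; a sketch with hypothetical counts (not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 table.

    tp/fp/fn/tn are counts vs. the gold standard (here, mecA PCR).
    """
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n  # observed agreement
    # Chance agreement: P(test+)P(gold+) + P(test-)P(gold-)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Hypothetical counts for one method compared with the PCR result.
sens, spec, kappa = diagnostic_metrics(tp=123, fp=1, fn=4, tn=12)
```

Note that with a prevalence as high as 90.7%, observed agreement is dominated by true positives, so kappa is the more informative agreement measure.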
On-chip detection of non-classical light by scalable integration of single-photon detectors
Najafi, Faraz; Mower, Jacob; Harris, Nicholas C.; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K.; Englund, Dirk
2015-01-01
Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit—required for scalable quantum photonic circuits—has been limited to <0.2%. Here we introduce a micrometer-scale flip-chip process that enables scalable integration of SNSPDs on a range of photonic circuits. Ten low-jitter detectors are integrated on one circuit with 100% device yield. With an average system detection efficiency beyond 10%, and estimated on-chip detection efficiency of 14–52% for four detectors operated simultaneously, we demonstrate, to the best of our knowledge, the first on-chip photon correlation measurements of non-classical light. PMID:25575346
Detection of cow milk adulteration in yak milk by ELISA.
Ren, Q R; Zhang, H; Guo, H Y; Jiang, L; Tian, M; Ren, F Z
2014-10-01
In the current study, a simple, sensitive, and specific ELISA assay using a high-affinity anti-bovine β-casein monoclonal antibody was developed for the rapid detection of cow milk in adulterated yak milk. The developed ELISA was highly specific and could be applied to detect bovine β-casein (10-8,000 μg/mL) and cow milk (1:1,300 to 1:2 dilution) in yak milk. Cross-reactivity was <1% when tested against yak milk. The linear range of adulterant concentration was 1 to 80% (vol/vol) and the minimum detection limit was 1% (vol/vol) cow milk in yak milk. Different treatments, including heating, acidification, and rennet addition, did not interfere with the assay. Moreover, the results were highly reproducible (coefficient of variation <10%) and we detected no significant differences between known and estimated values. Therefore, this assay is appropriate for the routine analysis of yak milk adulterated with cow milk. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
DNA-magnetic bead detection using disposable cards and the anisotropic magnetoresistive sensor
NASA Astrophysics Data System (ADS)
Hien, L. T.; Quynh, L. K.; Huyen, V. T.; Tu, B. D.; Hien, N. T.; Phuong, D. M.; Nhung, P. H.; Giang, D. T. H.; Duc, N. H.
2016-12-01
A disposable card incorporating specific DNA probes targeting the 16S rRNA gene of Streptococcus suis was developed for detection of magnetically labeled target DNA. Single-stranded target DNA was hybridized with the DNA probe on the as-prepared SPA/APTES/PDMS/Si card and subsequently labeled with superparamagnetic beads for detection using an anisotropic magnetoresistive (AMR) sensor. A nearly linear response between the AMR sensor output signal and the amount of single-stranded target DNA was observed over the range 4.5 to 18 pmol. From the response of the sensor output signal to the mass of magnetic beads immobilized directly on the disposable card surface, the limit of detection was estimated at about 312 ng of ferrite, corresponding to 3.8 μemu. Compared with conventional DNA detection by biosensors based on magnetic bead labeling, disposable cards offer higher efficiency and performance, ease of use, and lower running costs for consumables in biomedical analysis systems operating with immobilized bioreceptors.
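A sketch of a common way such a detection limit is estimated from a calibration line (hypothetical numbers; the 3σ criterion used here is an assumption, not necessarily the authors' exact procedure): fit signal vs. bead mass, then find the mass whose signal equals three standard deviations of the blank.

```python
import numpy as np

# Hypothetical calibration: AMR sensor signal (arb. units) vs. bead mass (ng).
mass = np.array([0.0, 100.0, 200.0, 400.0, 800.0])
signal = np.array([0.05, 1.30, 2.55, 5.00, 9.90])

# Least-squares slope and intercept of the calibration line.
slope, intercept = np.polyfit(mass, signal, 1)

# 3-sigma detection limit: smallest mass distinguishable from blank noise.
sigma_blank = 0.05  # assumed standard deviation of the blank signal
lod = 3.0 * sigma_blank / slope
```

With these illustrative numbers the LOD comes out near 12 ng; the real value depends entirely on the measured slope and blank noise.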
Incorporating detection probability into northern Great Plains pronghorn population estimates
Jacques, Christopher N.; Jenks, Jonathan A.; Grovenburg, Troy W.; Klaver, Robert W.; DePerno, Christopher S.
2014-01-01
Pronghorn (Antilocapra americana) abundances commonly are estimated using fixed-wing surveys, but these estimates are likely to be negatively biased because of violations of key assumptions underpinning line-transect methodology. Reducing bias and improving precision of abundance estimates through use of detection probability and mark-resight models may allow for more responsive pronghorn management actions. Given their potential application in population estimation, we evaluated detection probability and mark-resight models for use in estimating pronghorn population abundance. We used logistic regression to quantify probabilities that detecting pronghorn might be influenced by group size, animal activity, percent vegetation, cover type, and topography. We estimated pronghorn population size by study area and year using mixed logit-normal mark-resight (MLNM) models. Pronghorn detection probability increased with group size, animal activity, and percent vegetation; overall detection probability was 0.639 (95% CI = 0.612–0.667) with 396 of 620 pronghorn groups detected. Despite model selection uncertainty, the best detection probability models were 44% (range = 8–79%) and 180% (range = 139–217%) greater than traditional pronghorn population estimates. Similarly, the best MLNM models were 28% (range = 3–58%) and 147% (range = 124–180%) greater than traditional population estimates. Detection probability of pronghorn was not constant but depended on both intrinsic and extrinsic factors. When pronghorn detection probability is a function of animal group size, animal activity, landscape complexity, and percent vegetation, traditional aerial survey techniques will result in biased pronghorn abundance estimates. Standardizing survey conditions, increasing resighting occasions, or accounting for variation in individual heterogeneity in mark-resight models will increase the accuracy and precision of pronghorn population estimates.
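The correction logic described above can be sketched as a Horvitz-Thompson-style estimator with an entirely hypothetical logistic detection model (the coefficients and covariates below are illustrative, not the fitted values from the study): each detected group contributes its size divided by its estimated detection probability, so low-detectability groups are weighted up.

```python
import math

def detection_prob(group_size, activity, pct_veg,
                   beta=(-1.0, 0.15, 0.6, -0.02)):
    """Hypothetical logistic model: logit(p) = b0 + b1*size + b2*active + b3*veg."""
    b0, b1, b2, b3 = beta
    eta = b0 + b1 * group_size + b2 * activity + b3 * pct_veg
    return 1.0 / (1.0 + math.exp(-eta))

# Detected groups as (group size, active flag, % vegetation cover).
detected_groups = [(5, 1, 10), (12, 0, 30), (3, 1, 50)]

# Horvitz-Thompson-style abundance: sum of n_i / p_i over detected groups.
abundance = sum(n / detection_prob(n, act, veg)
                for n, act, veg in detected_groups)
naive_count = sum(n for n, _, _ in detected_groups)
```

Because every detection probability is below 1, the corrected abundance always exceeds the naive count, which is why uncorrected aerial surveys are negatively biased.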
Estimating Crop Growth Stage by Combining Meteorological and Remote Sensing Based Techniques
NASA Astrophysics Data System (ADS)
Champagne, C.; Alavi-Shoushtari, N.; Davidson, A. M.; Chipanshi, A.; Zhang, Y.; Shang, J.
2016-12-01
Estimations of seeding, harvest and phenological growth stage of crops are important sources of information for monitoring crop progress and crop yield forecasting. Growth stage has been traditionally estimated at the regional level through surveys, which rely on field staff to collect the information. Automated techniques to estimate growth stage have included agrometeorological approaches that use temperature and day length information to estimate accumulated heat and photoperiod, with thresholds used to determine when these stages are most likely. These approaches, however, are crop- and hybrid-dependent, and can give widely varying results depending on the method used, particularly if the seeding date is unknown. Methods to estimate growth stage from remote sensing have progressed greatly in the past decade, with time series information from the Normalized Difference Vegetation Index (NDVI) the most common approach. Time series NDVI provide information on growth stage through a variety of techniques, including fitting functions to a series of measured NDVI values or smoothing these values and using thresholds to detect changes in slope that are indicative of rapidly increasing or decreasing 'greenness' in the vegetation cover. The key limitations of these techniques for agriculture are frequent cloud cover in optical data, which leads to errors in estimating local features in the time series function, and the incongruity between changes in greenness and traditional agricultural growth stages. There is great potential to combine both meteorological approaches and remote sensing to overcome the limitations of each technique. This research will examine the accuracy of both meteorological and remote sensing approaches over several agricultural sites in Canada, and look at the potential to integrate these techniques to provide improved estimates of crop growth stage for common field crops.
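A minimal sketch of the threshold-on-slope idea described above (synthetic NDVI values and an arbitrary slope threshold; operational analyses use more robust smoothing, e.g. Savitzky-Golay or fitted logistic functions): smooth the series, take its slope, and flag green-up where the slope first exceeds a threshold.

```python
import numpy as np

# Hypothetical 10-day composite NDVI time series across one growing season.
ndvi = np.array([0.18, 0.19, 0.20, 0.24, 0.35, 0.52, 0.68, 0.74,
                 0.73, 0.66, 0.52, 0.38, 0.27, 0.22])

# Light moving-average smoothing (edge-padded) to suppress cloud noise.
kernel = np.ones(3) / 3.0
smooth = np.convolve(np.pad(ndvi, 1, mode="edge"), kernel, mode="valid")
slope = np.gradient(smooth)

# Green-up: first composite period where the slope exceeds a chosen threshold.
green_up = int(np.argmax(slope > 0.05))
# Peak greenness: maximum of the smoothed curve.
peak = int(np.argmax(smooth))
```

Cloud-contaminated composites perturb the local slope, which is exactly the failure mode the abstract notes; combining the slope detector with heat-unit accumulation is one way to constrain such false transitions.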
Two-species occupancy modeling accounting for species misidentification and nondetection
Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B.
2018-01-01
In occupancy studies, species misidentification can lead to false-positive detections, which can cause severe estimator biases. Currently, all models that account for false-positive errors only consider omnibus sources of false detections and are limited to single-species occupancy. However, false detections for a given species often occur because of misidentification with another, closely related species. To exploit this explicit source of false-positive detection error, we develop a two-species occupancy model that accounts for misidentifications between two species of interest. As with other false-positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site × occasions. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods ("confirmatory data"). We performed three simulation studies to (1) assess the model's performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy, and (3) compare the performance of this two-species model with that of the single-species false-positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g. 5%). It also clearly outperforms the single-species model. We illustrate application of this model using a 4-year dataset on two sympatric species of lungless salamanders: the US federally endangered Shenandoah salamander Plethodon shenandoah, and its presumed competitor, the red-backed salamander Plethodon cinereus.
Occupancy of red-backed salamanders appeared very stable across the 4 years of study, whereas the Shenandoah salamander displayed substantial turnover in occupancy of forest habitats among years. Given the extent of species misidentification issues in occupancy studies, this modelling approach should help improve the reliability of estimates of species distribution, which is the goal of many studies and monitoring programmes. Further developments, to account for different forms of state uncertainty, can be readily undertaken under our general approach.
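A small simulation (hypothetical parameters, not the authors' model) illustrating why misidentification biases naive occupancy estimates upward: at sites the focal species does not occupy, a look-alike species is occasionally recorded as a detection, so the fraction of sites with at least one "detection" overshoots true occupancy.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: species A occupies a fraction psi of sites; at
# unoccupied sites, species B is sometimes misidentified as A.
n_sites, n_visits = 500, 5
psi, p_det, p_false = 0.4, 0.6, 0.1

occupied = rng.random(n_sites) < psi
detections = np.where(
    occupied[:, None],
    rng.random((n_sites, n_visits)) < p_det,    # true detections of A
    rng.random((n_sites, n_visits)) < p_false,  # B misidentified as A
)

# Naive occupancy: fraction of sites with at least one recorded detection.
naive_psi = detections.any(axis=1).mean()
```

With these parameters the naive estimate lands near 0.64 against a true occupancy of 0.4, which is the kind of severe bias that confirmatory data and explicit misidentification models are designed to remove.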
Coalescent genealogy samplers: windows into population history
Kuhner, Mary K.
2016-01-01
Coalescent genealogy samplers attempt to estimate past qualities of a population, such as its size, growth rate, patterns of gene flow or time of divergence from another population, based on samples of molecular data. Genealogy samplers are increasingly popular because of their potential to disentangle complex population histories. In the last decade they have been widely applied to systems ranging from humans to viruses. Findings include detection of unexpected reproductive inequality in fish, new estimates of historical whale abundance, exoneration of humans for the prehistoric decline of bison and inference of a selective sweep on the human Y chromosome. This review summarizes available genealogy-sampler software, including data requirements and limitations on the use of each program. PMID:19101058
NASA Technical Reports Server (NTRS)
Fordyce, J. S.; Sheibley, D. W.
1974-01-01
Samples of ASTM type A jet fuel were analyzed for trace-element content by instrumental neutron activation techniques. Forty-nine elements were sought. Only ten, aluminum, gold, indium, lanthanum, titanium, vanadium, barium, dysprosium, tellurium, and uranium, were observed at levels above the detection limits encountered; of these only aluminum, titanium, and barium were present at concentrations greater than 0.1 ppm. Estimates of exhaust gas concentrations are made, and the ambient contribution at or near airports is calculated by using the Los Angeles International Airport dispersion model. It is shown that the ambient contribution is about an order of magnitude below typical urban levels for virtually all elements sought.
Al-Hunayan, A; Al-Ateeqi, A; Kehinde, E O; Thalib, L; Loutfi, I; Mojiminiyi, O A
2008-01-01
To determine the diagnostic accuracy of spot urine creatinine concentration (UCC) as a new test for the evaluation of differential renal function in obstructed kidneys (DRF(ok)) drained by percutaneous nephrostomy tube (PCNT). In patients with obstructed kidneys drained by PCNT, DRF(ok) was derived from UCC by comparing the value of UCC in the obstructed kidney to the value in the contralateral kidney, and was derived from dimercaptosuccinic acid (DMSA) renal scans and creatinine clearance (CCr) using standard methods. Subsequently, the results of UCC were compared to the results of DMSA and CCr. 61 patients were enrolled. Bland-Altman plots to compare DMSA and UCC showed that the upper limit of agreement was 14.8% (95% CI 10.7-18.5) and the lower limit was -19.9% (95% CI -23.8 to -16.1). The sensitivity and specificity of detecting DMSA DRF(ok) < or = 35% using UCC was 85.2 and 91.2%, respectively. When UCC was compared to CCr, Bland-Altman tests gave an upper limit of agreement of 10.4% (95% CI 7.9-12.8) and a lower limit of agreement of -11.3% (95% CI -13.8 to -8.9). UCC is accurate in the estimation of DRF(ok) drained by PCNT. 2008 S. Karger AG, Basel
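The Bland-Altman limits of agreement used above are straightforward to compute: the mean of the paired differences ± 1.96 times their standard deviation. A sketch with hypothetical paired measurements (not the study's data):

```python
import numpy as np

# Hypothetical paired differential renal function (%) by the reference
# method (DMSA) and the candidate method (UCC) in the same patients.
dmsa = np.array([32.0, 45.0, 50.0, 28.0, 41.0, 36.0, 48.0, 30.0])
ucc = np.array([30.0, 47.0, 46.0, 31.0, 40.0, 38.0, 45.0, 27.0])

# Bland-Altman: bias and 95% limits of agreement of the differences.
diff = ucc - dmsa
bias = diff.mean()
sd = diff.std(ddof=1)          # sample SD of the differences
upper = bias + 1.96 * sd
lower = bias - 1.96 * sd
```

The limits of agreement (not a correlation coefficient) are the right summary here because two methods can correlate strongly while still disagreeing by a clinically important margin.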
Accounting for Effects of Orography in LDAS Precipitation Forcing Data
NASA Astrophysics Data System (ADS)
Schaake, J.; Higgins, W.; Cong, S.; Shi, W.; Duan, Q.; Yarosh, E.
2001-05-01
Precipitation analysis procedures that are widely used to make gridded precipitation estimates do not work well in mountainous areas because the gage density is too sparse relative to the spatial frequency content of the actual precipitation field. Moreover, in the western U.S. most of the precipitation observations are at low elevations and may not even detect the occurrence of storms at high elevations. Although there are indeed significant limits to how accurately actual fields of orographic precipitation can be estimated from gage data alone, it is possible to make estimates for each time period that, over a period of time, have a climatology that should approximate the true climatology of the actual events. Analysis schemes that use the PRISM precipitation climatology to aid the precipitation analysis are being tested. The results of these tests will be presented.
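A deliberately simplified sketch of the climatology-aided idea (hypothetical numbers; operational schemes use distance-weighted, gage-specific ratios rather than a single mean ratio): interpolate the ratio of observed gage precipitation to the PRISM climatology, then multiply by the climatology on the grid, so high-elevation cells inherit their climatologically enhanced amounts.

```python
import numpy as np

# Hypothetical event: three low-elevation gages and three grid cells,
# one of which is high terrain with a large climatological value.
gage_precip = np.array([12.0, 20.0, 8.0])   # observed event totals (mm)
gage_clim = np.array([30.0, 50.0, 20.0])    # PRISM climatology at the gages
grid_clim = np.array([25.0, 60.0, 100.0])   # PRISM climatology at grid cells

# One-ratio-for-all simplification: mean gage/climatology ratio for the event.
ratio = (gage_precip / gage_clim).mean()
grid_estimate = ratio * grid_clim
```

The high-climatology cell receives the largest estimate even though no gage sits there; over many events, the gridded estimates then reproduce the PRISM climatology, which is the property the abstract argues for.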
Apparatus for sensor failure detection and correction in a gas turbine engine control system
NASA Technical Reports Server (NTRS)
Spang, H. A., III; Wanger, R. P. (Inventor)
1981-01-01
A gas turbine engine control system maintains a selected level of engine performance despite the failure or abnormal operation of one or more engine parameter sensors. The control system employs a continuously updated engine model which simulates engine performance and generates signals representing real time estimates of the engine parameter sensor signals. The estimate signals are transmitted to a control computational unit which utilizes them in lieu of the actual engine parameter sensor signals to control the operation of the engine. The estimate signals are also compared with the corresponding actual engine parameter sensor signals and the resulting difference signals are utilized to update the engine model. If a particular difference signal exceeds specific tolerance limits, the difference signal is inhibited from updating the model and a sensor failure indication is provided to the engine operator.
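The failure-detection logic described above (compare each sensor signal with the engine-model estimate, use in-tolerance residuals to update the model, and inhibit updates from out-of-tolerance sensors) can be sketched as follows; the sensor names, readings, and tolerance values are hypothetical:

```python
def check_sensors(measured, estimated, tolerance):
    """Compare sensor readings with engine-model estimates.

    Returns (updates, failures): residuals usable for updating the model,
    and names of sensors whose residual exceeds its tolerance limit.
    """
    updates, failures = {}, []
    for name, meas in measured.items():
        residual = meas - estimated[name]
        if abs(residual) > tolerance[name]:
            failures.append(name)      # inhibit update; flag failed sensor
        else:
            updates[name] = residual   # difference signal updates the model
    return updates, failures

# Hypothetical readings: fan speed agrees with the model; the turbine
# temperature sensor has drifted well outside its tolerance band.
measured = {"fan_speed": 5020.0, "turbine_temp": 1190.0}
estimated = {"fan_speed": 5000.0, "turbine_temp": 1100.0}
tolerance = {"fan_speed": 50.0, "turbine_temp": 40.0}
updates, failures = check_sensors(measured, estimated, tolerance)
```

In the patented system the model estimate also substitutes for the failed sensor in the control computation, so a single sensor failure does not degrade engine performance.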
Directional MTF measurement using sphere phantoms for a digital breast tomosynthesis system
NASA Astrophysics Data System (ADS)
Lee, Changwoo; Baek, Jongduk
2015-03-01
Digital breast tomosynthesis (DBT) has been widely used as a diagnostic imaging modality for breast cancer because of its potential for structure noise reduction, better detectability, and less breast compression. Since the 3D modulation transfer function (MTF) is one of the quantitative metrics used to assess the spatial resolution of medical imaging systems, measuring the 3D MTF of a DBT system is important for evaluating its resolution performance. To do so, Samei et al. used sphere phantoms and applied Thornton's method to the DBT system. However, due to the limitation of Thornton's method, the low-frequency drop caused by the limited data acquisition angle and reconstruction filters was not measured correctly. To overcome this limitation, we propose a Richardson-Lucy (RL) deconvolution-based estimation method to measure the directional MTF. We reconstructed point and sphere objects using the FDK algorithm within a 40° data acquisition angle. The ideal 3D MTF is obtained by taking the Fourier transform of the reconstructed point object, and three directions (i.e., the fx-, fy-, and fxy-directions) of the ideal 3D MTF are used as a reference. To estimate the directional MTF, the plane integrals of the reconstructed and ideal sphere objects were calculated and used to estimate the directional PSF with the RL deconvolution technique. Finally, the directional MTF was calculated by taking the Fourier transform of the estimated PSF. Compared with the previous method, the proposed method showed good agreement with the ideal directional MTF, especially in low-frequency regions.
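A one-dimensional sketch of the Richardson-Lucy iteration behind the PSF estimation step (illustrative only; the paper applies RL deconvolution to plane integrals of sphere objects, not to this toy signal): each iteration multiplies the current estimate by the back-projected ratio of observed to re-blurred data.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    """1-D Richardson-Lucy deconvolution with a normalized PSF."""
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy test: a point-like profile blurred by a small symmetric PSF.
psf = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
truth = np.zeros(41)
truth[20] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

The iteration concentrates the blurred profile back toward the original point; taking the Fourier transform of such a restored PSF is then what yields the directional MTF in the proposed method.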
Shah, Umang; Patel, Shraddha; Raval, Manan
2018-01-01
High-performance liquid chromatography is an integral analytical tool for assessing drug product stability. HPLC methods should be able to separate, detect, and quantify the various drug-related degradants that can form on storage or manufacturing, and detect any drug-related impurities that may be introduced during synthesis. A simple, economic, selective, precise, and stability-indicating HPLC method has been developed and validated for analysis of Rifampicin (RIFA) and Piperine (PIPE) in bulk drug and in the formulation. Reversed-phase chromatography was performed on a C18 column with potassium dihydrogen orthophosphate buffer (pH 6.5) and acetonitrile (30:70, v/v) as mobile phase at a flow rate of 1 mL min-1. Detection was performed at 341 nm, and sharp peaks were obtained for RIFA and PIPE at retention times of 3.3 ± 0.01 min and 5.9 ± 0.01 min, respectively. The detection limits were found to be 2.385 ng/ml and 0.107 ng/ml and the quantification limits 7.228 ng/ml and 0.325 ng/ml for RIFA and PIPE, respectively. The method was validated for accuracy, precision, reproducibility, specificity, robustness, and detection and quantification limits, in accordance with ICH guidelines. A stress study was performed on RIFA and PIPE; both degraded appreciably under all applied chemical and physical conditions. Thus, the developed RP-HPLC method was found to be suitable for the determination of both drugs in bulk as well as in stability samples of capsules containing various excipients. Copyright © Bentham Science Publishers.
Constraining the High-Energy Emission from Gamma-Ray Bursts with Fermi
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Harding, A. K.; Hays, E.; Racusin, J. L.; Sonbas, E.; Stamatikos, M.; Guirec, S.
2012-01-01
We examine 288 GRBs detected by the Fermi Gamma-ray Space Telescope's Gamma-ray Burst Monitor (GBM) that fell within the field-of-view of Fermi's Large Area Telescope (LAT) during the first 2.5 years of observations, which showed no evidence for emission above 100 MeV. We report the photon flux upper limits in the 0.1-10 GeV range during the prompt emission phase as well as for fixed 30 s and 100 s integrations starting from the trigger time for each burst. We compare these limits with the fluxes that would be expected from extrapolations of spectral fits presented in the first GBM spectral catalog and infer that roughly half of the GBM-detected bursts either require spectral breaks between the GBM and LAT energy bands or have intrinsically steeper spectra above the peak of the νF_ν spectra (E_pk). In order to distinguish between these two scenarios, we perform joint GBM and LAT spectral fits to the 30 brightest GBM-detected bursts and find that a majority of these bursts are indeed softer above E_pk than would be inferred from fitting the GBM data alone. Approximately 20% of this spectroscopic subsample show statistically significant evidence for a cut-off in their high-energy spectra, which, if assumed to be due to γγ attenuation, places limits on the maximum Lorentz factor associated with the relativistic outflow producing this emission. All of these latter bursts have maximum Lorentz factor estimates that are well below the minimum Lorentz factors calculated for LAT-detected GRBs, revealing a wide distribution in the bulk Lorentz factor of GRB outflows and indicating that LAT-detected bursts may represent the high end of this distribution.