Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
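The two-scale structure described above admits a compact likelihood: a sample unit is used with probability ψ, the species is locally present at a station with probability θ given use, and method m detects it with probability p_m given local presence. The sketch below is a minimal single-season illustration assuming conditional independence of methods given local presence; the function and toy data are hypothetical, not the authors' code.

```python
import numpy as np

def unit_likelihood(y, psi, theta, p):
    """Likelihood of one sample unit's detection data under a two-scale
    occupancy model (a minimal sketch, not the paper's full formulation).
    y     : (n_stations, n_methods) 0/1 detection matrix
    psi   : large-scale probability that the unit is used
    theta : small-scale probability of presence at a station, given use
    p     : (n_methods,) method-specific detection probabilities"""
    y, p = np.asarray(y), np.asarray(p)
    # P(station history | locally present): independent method detections
    given_present = (p ** y * (1 - p) ** (1 - y)).prod(axis=1)
    all_zero_station = (y.sum(axis=1) == 0).astype(float)
    station_lik = theta * given_present + (1 - theta) * all_zero_station
    # mixture over unit-level use; all-zero data is also consistent with non-use
    return psi * station_lik.prod() + (1 - psi) * float(y.sum() == 0)

# toy example: 3 stations, 2 methods (e.g. camera trap and hair snare)
print(unit_likelihood([[1, 0], [0, 0], [1, 1]], psi=0.8, theta=0.6, p=[0.5, 0.3]))
```

Summing the log of this quantity over sample units and maximizing with respect to (ψ, θ, p) yields estimates of the kind the abstract describes.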
Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.
2013-01-01
Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.
O'Connell, Allan F.; Talancy, Neil W.; Bailey, Larissa L.; Sauer, John R.; Cook, Robert; Gilbert, Andrew T.
2006-01-01
Large-scale, multispecies monitoring programs are widely used to assess changes in wildlife populations but they often assume constant detectability when documenting species occurrence. This assumption is rarely met in practice because animal populations vary across time and space. As a result, detectability of a species can be influenced by a number of physical, biological, or anthropogenic factors (e.g., weather, seasonality, topography, biological rhythms, sampling methods). To evaluate some of these influences, we estimated site occupancy rates using species-specific detection probabilities for meso- and large terrestrial mammal species on Cape Cod, Massachusetts, USA. We used model selection to assess the influence of different sampling methods and major environmental factors on our ability to detect individual species. Remote cameras detected the most species (9), followed by cubby boxes (7) and hair traps (4) over a 13-month period. Estimated site occupancy rates were similar among sampling methods for most species when detection probabilities exceeded 0.15, but we question estimates obtained from methods with detection probabilities between 0.05 and 0.15, and we consider methods with lower probabilities unacceptable for occupancy estimation and inference. Estimated detection probabilities can be used to accommodate variation in sampling methods, which allows for comparison of monitoring programs using different protocols. Vegetation and seasonality produced species-specific differences in detectability and occupancy, but differences were not consistent within or among species, which suggests that our results should be considered in the context of local habitat features and life history traits for the target species. We believe that site occupancy is a useful state variable and suggest that monitoring programs for mammals using occupancy data consider detectability prior to making inferences about species distributions or population change.
Method for predicting peptide detection in mass spectrometry
Kangas, Lars [West Richland, WA]; Smith, Richard D [Richland, WA]; Petritis, Konstantinos [Richland, WA]
2010-07-13
A method of predicting whether a peptide present in a biological sample will be detected by analysis with a mass spectrometer. The method uses at least one mass spectrometer to perform repeated analysis of a sample containing peptides from proteins with known amino acids. The method then generates a data set of peptides identified as contained within the sample by the repeated analysis. The method then calculates the probability that a specific peptide in the data set was detected in the repeated analysis. The method then creates a plurality of vectors, where each vector has a plurality of dimensions, and each dimension represents a property of one or more of the amino acids present in each peptide and adjacent peptides in the data set. Using these vectors, the method then generates an algorithm from the plurality of vectors and the calculated probabilities that specific peptides in the data set were detected in the repeated analysis. The algorithm is thus capable of calculating the probability that a hypothetical peptide represented as a vector will be detected by a mass spectrometry based proteomic platform, given that the peptide is present in a sample introduced into a mass spectrometer.
Red-shouldered hawk occupancy surveys in central Minnesota, USA
Henneman, C.; McLeod, M.A.; Andersen, D.E.
2007-01-01
Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.
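The likelihood-based estimator referenced here is the standard single-season occupancy model. A minimal sketch, assuming constant ψ and p (no covariates such as call type or breeding stage) and hypothetical detection data:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, detections, visits):
    """Single-season occupancy model with constant psi and p (a minimal
    sketch of the likelihood-based approach cited above).
    detections: detections per site; visits: surveys per site."""
    psi, p = 1 / (1 + np.exp(-params))         # logit scale -> probability
    lik = psi * p**detections * (1 - p)**(visits - detections)
    lik = lik + (1 - psi) * (detections == 0)  # never-detected sites
    return -np.sum(np.log(lik))

# hypothetical data: detections out of K visits at each survey location
detections = np.array([2, 0, 3, 1, 0, 4, 2, 0, 1, 3])
visits = np.full(10, 5)
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(detections, visits))
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```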
NASA Astrophysics Data System (ADS)
Vio, R.; Vergès, C.; Andreani, P.
2017-08-01
The matched filter (MF) is one of the most popular and reliable techniques for detecting signals of known structure and amplitude smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the position of the searched signals, which is usually not available. Recently, it has been shown that, when applied in its standard form, MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity is introduced, the specific probability of false alarm (SPFA), which is able to carry out this computation. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
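The paper's central caution can be reproduced numerically: the Gaussian-tail PFA of the matched filter is valid only at an a priori known position, and thresholding the peak of a filtered map yields a far larger false-alarm rate. The Monte Carlo sketch below is an illustration with arbitrary values, not the paper's peak-PDF correction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
s = np.exp(-0.5 * (np.arange(33) - 16) ** 2 / 4.0)  # known template shape
s /= np.linalg.norm(s)                               # unit-norm filter
u = 4.0                                              # detection threshold

# At an a priori KNOWN position, T = x.s ~ N(0,1) under H0, so:
pfa_known = norm.sf(u)

# When the position is unknown and the PEAK of the filtered map is
# thresholded instead, the false-alarm rate is much larger. Monte Carlo:
n_maps, n_pix = 2000, 4096
exceed = 0
for _ in range(n_maps):
    noise = rng.standard_normal(n_pix)
    t_map = np.convolve(noise, s, mode="valid")      # standardized MF output
    exceed += t_map.max() >= u
print(f"PFA at known position: {pfa_known:.1e}")
print(f"PFA of the map peak:   {exceed / n_maps:.3f}")
```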
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 implemented in the R package surveillance and two other methods. We estimated false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of time series. Past or current outbreak size and duration strongly influenced detection performances. PMID:28715489
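As an illustration of the performance measures compared in the study, the sketch below computes them from a week-by-week alarm series. Note that the paper's POD is defined per simulated outbreak, whereas this simplified version scores individual weeks; the series are illustrative.

```python
import numpy as np

def performance(alarms, outbreaks):
    """Week-by-week comparison of an algorithm's alarms (0/1) with the
    true outbreak indicator (0/1); a sketch of the measures in the study."""
    alarms, outbreaks = np.asarray(alarms), np.asarray(outbreaks)
    tp = np.sum((alarms == 1) & (outbreaks == 1))
    fp = np.sum((alarms == 1) & (outbreaks == 0))
    tn = np.sum((alarms == 0) & (outbreaks == 0))
    fn = np.sum((alarms == 0) & (outbreaks == 1))
    sens = tp / (tp + fn)            # sensitivity
    spec = tn / (tn + fp)            # specificity
    fpr = fp / (fp + tn)             # false positive rate
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)
    return dict(sens=sens, spec=spec, fpr=fpr, ppv=ppv, npv=npv, f1=f1)

print(performance(alarms=[0, 1, 1, 0, 0, 1, 0], outbreaks=[0, 1, 1, 0, 1, 0, 0]))
```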
Dunham, Jason B.; Chelgren, Nathan D.; Heck, Michael P.; Clark, Steven M.
2013-01-01
We evaluated the probability of detecting larval lampreys using different methods of backpack electrofishing in wadeable streams in the U.S. Pacific Northwest. Our primary objective was to compare capture of lampreys using electrofishing with standard settings for salmon and trout to settings specifically adapted for capture of lampreys. Field work consisted of removal sampling by means of backpack electrofishing in 19 sites in streams representing a broad range of conditions in the region. Captures of lampreys at these sites were analyzed with a modified removal-sampling model and Bayesian estimation to measure the relative odds of capture using the lamprey-specific settings compared with the standard salmonid settings. We found that the odds of capture were 2.66 (95% credible interval, 0.87–78.18) times greater for the lamprey-specific settings relative to standard salmonid settings. When estimates of capture probability were applied to estimating the probabilities of detection, we found high (>0.80) detectability when the actual number of lampreys in a site was greater than 10 individuals and effort was at least two passes of electrofishing, regardless of the settings used. Further work is needed to evaluate key assumptions in our approach, including the evaluation of individual-specific capture probabilities and population closure. For now our results suggest comparable results are possible for detection of lampreys by using backpack electrofishing with salmonid- or lamprey-specific settings.
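The high detectability reported for more than 10 individuals and two passes follows from a simple closure argument: if each of N individuals is captured on a pass with probability p, no individual is captured in k passes with probability (1-p)^(kN). A sketch with illustrative values:

```python
# Probability of detecting lampreys at a site: if N individuals are present
# and each pass captures each individual independently with probability p,
# then no individual is captured in k passes with probability (1-p)**(k*N),
# so P(detect) = 1 - (1-p)**(k*N). A sketch under the closure assumption
# noted in the abstract; p and N below are illustrative values.
def p_detect(p, n_individuals, passes):
    return 1 - (1 - p) ** (passes * n_individuals)

for n in (1, 5, 10, 20):
    print(n, round(p_detect(p=0.10, n_individuals=n, passes=2), 3))
```

Even with a modest per-pass capture probability of 0.10, two passes over 10 individuals already give a detection probability near 0.88, consistent with the abstract's >0.80 figure.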
NASA Astrophysics Data System (ADS)
Jenkins, Colleen; Jordan, Jay; Carlson, Jeff
2007-02-01
This paper presents parameter estimation techniques useful for detecting background changes in a video sequence with extreme foreground activity. A specific application of interest is automated detection of the covert placement of threats (e.g., a briefcase bomb) inside crowded public facilities. We propose that a histogram of pixel intensity acquired from a fixed mounted camera over time for a series of images will be a mixture of two Gaussian functions: the foreground probability distribution function and background probability distribution function. We will use Pearson's Method of Moments to separate the two probability distribution functions. The background function can then be "remembered" and changes in the background can be detected. Subsequent comparisons of background estimates are used to detect changes. Changes are flagged to alert security forces to the presence and location of potential threats. Results are presented that indicate the significant potential for robust parameter estimation techniques as applied to video surveillance.
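A minimal sketch of the two-Gaussian decomposition on synthetic intensities follows. Pearson's Method of Moments, used in the paper, is replaced here by expectation-maximization (scikit-learn's GaussianMixture), a common stand-in that recovers the same two-component structure; all values are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# synthetic pixel-intensity samples over time: a narrow background component
# plus broad foreground traffic, mimicking the two-Gaussian mixture above
background = rng.normal(loc=90, scale=5, size=8000)
foreground = rng.normal(loc=140, scale=25, size=2000)
intensities = np.concatenate([background, foreground]).reshape(-1, 1)

# EM (used here) stands in for Pearson's Method of Moments
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
order = np.argsort(gmm.covariances_.ravel())   # narrower component = background
bg = order[0]
print("background mean/std:",
      gmm.means_.ravel()[bg], np.sqrt(gmm.covariances_.ravel()[bg]))
```

The "remembered" background component is then the (mean, std) pair of the narrow Gaussian, and a drift in these parameters between time windows flags a background change.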
Detection of sea otters in boat-based surveys of Prince William Sound, Alaska
Udevitz, Mark S.; Bodkin, James L.; Costa, Daniel P.
1995-01-01
Boat-based surveys have been commonly used to monitor sea otter populations, but there has been little quantitative work to evaluate detection biases that may affect these surveys. We used ground-based observers to investigate sea otter detection probabilities in a boat-based survey of Prince William Sound, Alaska. We estimated that 30% of the otters present on surveyed transects were not detected by boat crews. Approximately half (53%) of the undetected otters were missed because the otters left the transects, apparently in response to the approaching boat. Unbiased estimates of detection probabilities will be required for obtaining unbiased population estimates from boat-based surveys of sea otters. Therefore, boat-based surveys should include methods to estimate sea otter detection probabilities under the conditions specific to each survey. Unbiased estimation of detection probabilities with ground-based observers requires either that the ground crews detect all of the otters in observed subunits, or that there are no errors in determining which crews saw each detected otter. Ground-based observer methods may be appropriate in areas where nearly all of the sea otter habitat is potentially visible from ground-based vantage points.
Helble, Tyler A; D'Spain, Gerald L; Hildebrand, John A; Campbell, Gregory S; Campbell, Richard L; Heaney, Kevin D
2013-09-01
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. A common mistake in the analysis of marine mammal acoustic data is formulating conclusions about these animals without first understanding how environmental properties such as bathymetry, sediment properties, water column sound speed, and ocean acoustic noise influence the detection and character of vocalizations in the acoustic data. The approach in this paper is to use Monte Carlo simulations with a full wave field acoustic propagation model to characterize the site specific probability of detection of six types of humpback whale calls at three passive acoustic monitoring locations off the California coast. Results show that the probability of detection can vary by factors greater than ten when comparing detections across locations, or comparing detections at the same location over time, due to environmental effects. Effects of uncertainties in the inputs to the propagation model are also quantified, and the model accuracy is assessed by comparing calling statistics amassed from 24,690 humpback units recorded in the month of October 2008. Under certain conditions, the probability of detection can be estimated with uncertainties sufficiently small to allow for accurate density estimates.
NASA Astrophysics Data System (ADS)
Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei
2017-06-01
In this paper, lock-in vibrothermography (LVT) is used for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used to analyse the influence of different test conditions, i.e. engagement force, excitation intensity, and modulation frequency. Multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) for the fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized method for selecting test conditions in vibrothermography systems with enhanced detection ability.
Bird Radar Validation in the Field by Time-Referencing Line-Transect Surveys
Dokter, Adriaan M.; Baptist, Martin J.; Ens, Bruno J.; Krijgsveld, Karen L.; van Loon, E. Emiel
2013-01-01
Track-while-scan bird radars are widely used in ornithological studies, but often the precise detection capabilities of these systems are unknown. Quantification of radar performance is essential to avoid observational biases, which requires practical methods for validating a radar’s detection capability in specific field settings. In this study a method to quantify the detection capability of a bird radar is presented, as well as a demonstration of this method in a case study. By time-referencing line-transect surveys, visually identified birds were automatically linked to individual tracks using their transect crossing time. Detection probabilities were determined as the fraction of the total set of visual observations that could be linked to radar tracks. To avoid ambiguities in assigning radar tracks to visual observations, the observer’s accuracy in determining a bird’s transect crossing time was taken into account. The accuracy was determined by examining the effect of a time lag applied to the visual observations on the number of matches found with radar tracks. Effects of flight altitude, distance, surface substrate and species size on the detection probability by the radar were quantified in a marine intertidal study area. Detection probability varied strongly with all these factors, as well as species-specific flight behaviour. The effective detection range for single birds flying at low altitude for an X-band marine radar based system was estimated at ∼1.5 km. Within this range the fraction of individual flying birds that were detected by the radar was 0.50±0.06 with a detection bias towards higher flight altitudes, larger birds and high tide situations. Besides radar validation, which we consider essential when quantification of bird numbers is important, our method of linking radar tracks to ground-truthed field observations can facilitate species-specific studies using surveillance radars. The methodology may prove equally useful for optimising tracking algorithms. PMID:24066103
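A sketch of the time-referencing idea follows: shift the visual observers' clock by a candidate lag, count visual observations whose transect crossing times match a radar track within a tolerance, and take the lag that maximizes matches; the matched fraction then estimates the radar's detection probability. Times, offsets, and the tolerance below are hypothetical.

```python
import numpy as np

def match_counts(visual_times, radar_times, lag, tol=2.0):
    """Count visual observations whose transect crossing time matches a
    radar track to within tol seconds, after shifting the visual clock
    by `lag` seconds (a sketch of the time-referencing idea above)."""
    radar_times = np.sort(radar_times)
    shifted = visual_times + lag
    idx = np.clip(np.searchsorted(radar_times, shifted), 1, len(radar_times) - 1)
    nearest = np.minimum(np.abs(radar_times[idx] - shifted),
                         np.abs(radar_times[idx - 1] - shifted))
    return int(np.sum(nearest <= tol))

rng = np.random.default_rng(2)
radar = np.sort(rng.uniform(0, 3600, 200))            # track crossing times (s)
seen = np.sort(rng.choice(radar, 60, replace=False))  # birds also seen visually
visual = seen + rng.normal(5.0, 1.0, 60)              # observer clock offset ~5 s

# scan candidate lags; the peak locates the observers' timing offset,
# and the matched fraction estimates the radar's detection probability
lags = np.arange(-15, 16)
best = max(lags, key=lambda lag: match_counts(visual, radar, lag))
print("best lag:", best, "matches:", match_counts(visual, radar, best))
```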
A multimodal detection model of dolphins to estimate abundance validated by field experiments.
Akamatsu, Tomonari; Ura, Tamaki; Sugimatsu, Harumi; Bahl, Rajendar; Behera, Sandeep; Panda, Sudarsan; Khan, Muntaz; Kar, S K; Kar, C S; Kimura, Satoko; Sasaki-Yamamoto, Yukiko
2013-09-01
Abundance estimation of marine mammals requires matching detections of an animal or a group of animals by two independent means. A multimodal detection model using visual and acoustic cues (surfacing and phonation) that enables abundance estimation of dolphins is proposed. The method does not require a specific time window for matching the cues of the two means when applying the mark-recapture method. The proposed model was evaluated using data obtained in field observations of Ganges River dolphins and Irrawaddy dolphins, as examples of dispersed and condensed distributions of animals, respectively. The acoustic detection probability was approximately 80%, 20% higher than that of visual detection for both species, regardless of the distribution of the animals in the present study sites. The abundance estimates of Ganges River dolphins and Irrawaddy dolphins agreed fairly well with the numbers reported in previous monitoring studies. The detection probability for single animals was smaller than that for larger cluster sizes, as predicted by the model and confirmed by field data. However, dense groups of Irrawaddy dolphins showed differences in the cluster sizes observed by visual and acoustic methods. The lower detection probability of single clusters of this species seemed to be caused by its clumped distribution.
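The underlying logic of matching two independent means is that of two-sample mark-recapture. The sketch below uses the classical Chapman estimator with illustrative counts; the paper's multimodal model is more general in that it avoids a fixed time window for matching cues.

```python
def chapman_estimate(n_visual, n_acoustic, n_both):
    """Classical two-sample (Chapman) mark-recapture abundance estimate
    from dual visual/acoustic detections; a simplified stand-in for the
    paper's multimodal model, which additionally handles cue matching."""
    return (n_visual + 1) * (n_acoustic + 1) / (n_both + 1) - 1

# illustrative counts: groups seen, groups heard, groups both seen and heard
print(round(chapman_estimate(n_visual=24, n_acoustic=32, n_both=19), 1))
```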
On the timing problem in optical PPM communications.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1971-01-01
Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Of significant importance is the presence of a residual, or irreducible, error probability, due entirely to the timing system, that cannot be overcome by the data channel.
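The residual error floor can be illustrated with a photon-counting model: a timing offset leaks a fraction of the signal energy into the wrong slot, so errors persist even at high signal levels. The two-slot Monte Carlo sketch below is a simplification under stated assumptions, not the paper's analysis; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def ppm_error_rate(Ks=20.0, Kb=1.0, offset=0.1, trials=200_000):
    """Monte Carlo bit-error rate for binary PPM with direct (photon-
    counting) detection; a sketch, not the paper's derivation. `offset`
    is the timing error as a fraction of the slot width, assumed to leak
    that fraction of the mean signal count Ks into the wrong slot; Kb is
    the mean background count per slot."""
    correct = rng.poisson(Ks * (1 - offset) + Kb, trials)
    wrong = rng.poisson(Ks * offset + Kb, trials)
    # decide in favor of the larger count; ties are split evenly
    errors = (wrong > correct).sum() + 0.5 * (wrong == correct).sum()
    return errors / trials

for off in (0.0, 0.05, 0.1, 0.2):
    print(f"offset {off:.2f}: Pe ~ {ppm_error_rate(offset=off):.4f}")
```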
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
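The dependence mechanism at issue is easy to reproduce: when paired methods both condition on the animal being near the shared station, their detections covary even though each method is independent given local presence. A small simulation with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_sites(n_sites=500, psi=0.6, theta=0.7, p_cam=0.5, p_track=0.6):
    """Simulate paired methods whose detections covary because both depend
    on the animal being near the shared station (a sketch of the dependence
    mechanism discussed above; all parameter values are illustrative)."""
    occupied = rng.random(n_sites) < psi
    near = occupied & (rng.random(n_sites) < theta)   # shared local presence
    cam = near & (rng.random(n_sites) < p_cam)        # e.g. remote camera
    track = near & (rng.random(n_sites) < p_track)    # e.g. snow tracking
    return cam, track

cam, track = simulate_sites()
both = np.mean(cam & track)
indep = np.mean(cam) * np.mean(track)
print(f"P(both detect) = {both:.3f} vs independence predicts {indep:.3f}")
```

The joint detection frequency exceeds the product of the marginals, which is exactly the covariance that models assuming independent methods fail to absorb.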
Dynamical complexity changes during two forms of meditation
NASA Astrophysics Data System (ADS)
Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng
2011-06-01
Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes for heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of the sine function. The base-scale entropy method may be used on a wider range of physiologic signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letant, S E; Kane, S R; Murphy, G A
2008-05-30
This note presents a comparison of Most-Probable-Number Rapid Viability (MPN-RV) PCR and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN RV-PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10^4, 10^2, and 10 spores), even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN RV-PCR method is specific, and compatible with automated high-throughput sample processing and analysis protocols.
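For reference, the generic MPN calculation solves a one-dimensional maximum-likelihood equation across the dilution series. A sketch with hypothetical tube counts (not the study's data):

```python
import numpy as np
from scipy.optimize import brentq

def mpn(positives, tubes, volumes):
    """Most Probable Number: maximum-likelihood concentration from a
    dilution series (the generic MPN calculation; counts below are
    illustrative). positives: positive tubes per dilution; tubes: tubes
    per dilution; volumes: sample volume per tube at each dilution (mL)."""
    p, n, v = map(np.asarray, (positives, tubes, volumes))

    def score(lam):
        # derivative of the log-likelihood; its root is the MLE, which
        # satisfies sum(p*v / (1 - exp(-lam*v))) = sum(n*v)
        return np.sum(p * v / -np.expm1(-lam * v)) - np.sum(n * v)

    return brentq(score, 1e-9, 1e9)

# 3 dilutions, 5 tubes each, tenfold volume steps
print(f"MPN = {mpn([5, 3, 1], [5, 5, 5], [0.1, 0.01, 0.001]):.1f} per mL")
```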
White, P. Lewis; Archer, Alice E.; Barnes, Rosemary A.
2005-01-01
The accepted limitations associated with classic culture techniques for the diagnosis of invasive fungal infections have led to the emergence of many non-culture-based methods. With superior sensitivities and quicker turnaround times, non-culture-based methods may aid the diagnosis of invasive fungal infections. In this review of the diagnostic service, we assessed the performances of two antigen detection techniques (enzyme-linked immunosorbent assay [ELISA] and latex agglutination) with a molecular method for the detection of invasive Candida infection and invasive aspergillosis. The specificities for all three assays were high (≥97%), although the Candida PCR method had enhanced sensitivity over both ELISA and latex agglutination with values of 95%, 75%, and 25%, respectively. However, calculating significant sensitivity values for the Aspergillus detection methods was not feasible due to a low number of proven/probable cases. Despite enhanced sensitivity, the PCR method failed to detect nucleic acid in a probable case of invasive Candida infection that was detected by ELISA. In conclusion, both PCR and ELISA techniques should be used in unison to aid the detection of invasive fungal infections. PMID:15872239
A double-observer approach for estimating detection probability and abundance from point counts
Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.
2000-01-01
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
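A minimal numerical sketch follows. Note that the paper uses a dependent primary/secondary protocol with alternating roles, fit with program SURVIV; the simpler independent-observer (Lincoln-Petersen style) estimator below conveys the core idea with illustrative counts.

```python
def double_observer(n1_only, n2_only, both):
    """Independent double-observer estimator (Lincoln-Petersen form); a
    simplified stand-in for the paper's dependent primary/secondary
    protocol. Counts are illustrative."""
    n1 = n1_only + both               # birds detected by observer 1
    n2 = n2_only + both               # birds detected by observer 2
    p1 = both / n2                    # P(obs 1 detects | bird present)
    p2 = both / n1
    p_any = 1 - (1 - p1) * (1 - p2)   # detected by at least one observer
    n_hat = (n1 + n2 - both) / p_any  # abundance corrected for detection
    return p1, p2, p_any, n_hat

p1, p2, p_any, n_hat = double_observer(n1_only=12, n2_only=9, both=54)
print(f"p1={p1:.2f}, p2={p2:.2f}, overall p={p_any:.3f}, N_hat={n_hat:.1f}")
```

With detection rates like those above, the overall probability of being detected by at least one observer exceeds 0.95, matching the precision argument in the abstract.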
A new prior for bayesian anomaly detection: application to biosurveillance.
Shen, Y; Cooper, G F
2010-01-01
Bayesian anomaly detection computes posterior probabilities of anomalous events by combining prior beliefs and evidence from data. However, the specification of prior probabilities can be challenging. This paper describes a Bayesian prior in the context of disease outbreak detection. The goal is to provide a meaningful, easy-to-use prior that yields a posterior probability of an outbreak that performs at least as well as a standard frequentist approach. If this goal is achieved, the resulting posterior could be usefully incorporated into a decision analysis about how to act in light of a possible disease outbreak. This paper describes a Bayesian method for anomaly detection that combines learning from data with a semi-informative prior probability over patterns of anomalous events. A univariate version of the algorithm is presented here for ease of illustration of the essential ideas. The paper describes the algorithm in the context of disease-outbreak detection, but it is general and can be used in other anomaly detection applications. For this application, the semi-informative prior specifies that an increased count over baseline is expected for the variable being monitored, such as the number of respiratory chief complaints per day at a given emergency department. The semi-informative prior is derived based on the baseline prior, which is estimated using historical data. The evaluation reported here used semi-synthetic data to evaluate the detection performance of the proposed Bayesian method and a control chart method, which is a standard frequentist algorithm that is closest to the Bayesian method in terms of the type of data it uses. The disease-outbreak detection performance of the Bayesian method was statistically significantly better than that of the control chart method when proper baseline periods were used to estimate the baseline behavior to avoid seasonal effects. When using longer baseline periods, the Bayesian method performed as well as the control chart method. The time complexity of the Bayesian algorithm is linear in the number of the observed events being monitored, due to a novel, closed-form derivation that is introduced in the paper. This paper introduces a novel prior probability for Bayesian outbreak detection that is expressive, easy-to-apply, computationally efficient, and performs as well or better than a standard frequentist method.
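A toy version of the Bayesian computation: combine a prior outbreak probability with Poisson likelihoods under baseline and elevated rates. The uniform set of relative increases below is a crude stand-in for the paper's semi-informative prior, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def outbreak_posterior(count, baseline_mean, prior_outbreak=0.01,
                       rel_increases=np.linspace(1.1, 3.0, 20)):
    """Posterior probability of an outbreak given today's count; a sketch
    of the Bayesian idea above, not the paper's closed-form algorithm.
    Outbreak mass is spread uniformly over hypothesized relative
    increases above the baseline rate."""
    lik_normal = poisson.pmf(count, baseline_mean)
    lik_outbreak = poisson.pmf(count, baseline_mean * rel_increases).mean()
    num = prior_outbreak * lik_outbreak
    return num / (num + (1 - prior_outbreak) * lik_normal)

# e.g. 38 respiratory chief complaints today against a baseline mean of 25
print(f"P(outbreak | data) = {outbreak_posterior(38, 25.0):.3f}")
```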
Probabilistic approach to lysozyme crystal nucleation kinetics.
Dimitrov, Ivaylo L; Hodzhaoglu, Feyzim V; Koleva, Dobryana P
2015-09-01
Nucleation of lysozyme crystals in quiescent solutions at a regime of progressive nucleation is investigated under an optical microscope at conditions of constant supersaturation. A method based on the stochastic nature of crystal nucleation and using discrete time sampling of small solution volumes for the presence or absence of detectable crystals is developed. It allows probabilities for crystal detection to be experimentally estimated. One hundred single samplings were used for each probability determination for 18 time intervals and six lysozyme concentrations. Fitting of a particular probability function to experimentally obtained data made possible the direct evaluation of stationary rates for lysozyme crystal nucleation, the time for growth of supernuclei to a detectable size and probability distribution of nucleation times. Obtained stationary nucleation rates were then used for the calculation of other nucleation parameters, such as the kinetic nucleation factor, nucleus size, work for nucleus formation and effective specific surface energy of the nucleus. The experimental method itself is simple and adaptable and can be used for crystal nucleation studies of arbitrary soluble substances with known solubility at particular solution conditions.
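The probability function being fit is P(t) = 1 − exp(−J·V·(t − t_g)) for t > t_g, whose parameters are the stationary nucleation rate J and the time t_g for supernuclei to grow to a detectable size. A sketch with hypothetical detection frequencies (not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

V = 1e-4  # solution volume per sample (cm^3); illustrative

def p_detect(t, J, t_g):
    """Probability that a sample observed at time t contains a detectable
    crystal: stationary nucleation at rate J (cm^-3 s^-1) in volume V,
    offset by the growth time t_g to reach detectable size."""
    return 1.0 - np.exp(-J * V * np.clip(t - t_g, 0.0, None))

# hypothetical detection frequencies from 100 samplings per time point
t = np.array([600, 1200, 1800, 2400, 3600, 5400, 7200.0])    # s
frac = np.array([0.05, 0.18, 0.33, 0.45, 0.63, 0.80, 0.90])  # fraction detected

(J_hat, tg_hat), _ = curve_fit(p_detect, t, frac, p0=[3.0, 300.0])
print(f"stationary nucleation rate J = {J_hat:.3g} cm^-3 s^-1, "
      f"growth time t_g = {tg_hat:.0f} s")
```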
NASA Astrophysics Data System (ADS)
Helble, Tyler Adam
Passive acoustic monitoring of marine mammal calls is an increasingly important method for assessing population numbers, distribution, and behavior. Automated methods are needed to aid in the analyses of the recorded data. When a mammal vocalizes in the marine environment, the received signal is a filtered version of the original waveform emitted by the marine mammal. The waveform is reduced in amplitude and distorted due to propagation effects that are influenced by the bathymetry and environment. It is important to account for these effects to determine a site-specific probability of detection for marine mammal calls in a given study area. Knowledge of that probability function over a range of environmental and ocean noise conditions allows vocalization statistics from recordings of single, fixed, omnidirectional sensors to be compared across sensors, and at the same sensor over time, with less bias and uncertainty than direct comparison of the raw statistics. This dissertation focuses both on the development of new tools needed to automatically detect humpback whale vocalizations from single fixed omnidirectional sensors and on the determination of the site-specific probability of detection for monitoring sites off the coast of California. Using these tools, detected humpback calls are "calibrated" for environmental properties using the site-specific probability of detection values and presented as call densities (calls per square kilometer per time). A two-year monitoring effort using these calibrated call densities reveals important biological and ecological information on migrating humpback whales off the coast of California. Call density trends are compared between the monitoring sites and at the same monitoring site over time. Call densities also are compared to several natural and human-influenced variables including season, time of day, lunar illumination, and ocean noise. The results reveal substantial differences in call densities between the two sites which were not noticeable using uncorrected (raw) call counts. Additionally, a Lombard effect was observed for humpback whale vocalizations in response to increasing ocean noise. The results presented in this thesis develop techniques to accurately measure marine mammal abundances from passive acoustic sensors.
On the performance of energy detection-based CR with SC diversity over IG channel
NASA Astrophysics Data System (ADS)
Verma, Pappu Kumar; Soni, Sanjay Kumar; Jain, Priyanka
2017-12-01
Cognitive radio (CR) is a viable 5G technology to address the scarcity of the spectrum. Energy detection-based sensing is known to be the simplest method as far as hardware complexity is concerned. In this paper, the performance of the energy detection-based spectrum sensing technique in CR networks over an inverse Gaussian channel with selection combining diversity is analysed. More specifically, accurate analytical expressions for the average detection probability under different detection scenarios, i.e. a single channel (no diversity) and diversity reception, are derived and evaluated. Further, the detection threshold parameter is optimised by minimising the probability of error over several diversity branches. The results clearly show a significant improvement in the probability of detection when the optimised threshold parameter is applied. The impact of shadowing parameters on the performance of the energy detector is studied in terms of the complementary receiver operating characteristic curve. To verify the correctness of our analysis, the derived analytical expressions are corroborated via exact results and Monte Carlo simulations.
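A Monte Carlo cross-check of this setup is straightforward: conditioned on SNR γ, the energy detector's detection probability is the generalized Marcum Q function, computable via the noncentral chi-square survival function, and selection combining takes the best of L shadowed branches. The inverse Gaussian parameterization below is an assumption, as are all numeric values; the paper derives exact analytical expressions.

```python
import numpy as np
from scipy.stats import chi2, invgauss, ncx2

rng = np.random.default_rng(5)

def pd_sc_energy_detector(thresh, u=5, mean_snr=5.0, shape=2.0,
                          branches=3, draws=100_000):
    """Average detection probability of an energy detector with selection
    combining over i.i.d. inverse Gaussian shadowed branches, by Monte
    Carlo (a sketch; parameterization and values are assumptions).
    u: time-bandwidth product, so the statistic has 2u degrees of freedom;
    given SNR g, Pd = Q_u(sqrt(2g), sqrt(thresh)) = ncx2.sf(thresh, 2u, 2g)."""
    mu = mean_snr / shape                      # scipy invgauss: mean = mu*scale
    g = invgauss.rvs(mu, scale=shape, size=(draws, branches), random_state=rng)
    g_sc = g.max(axis=1)                       # selection combining
    return ncx2.sf(thresh, 2 * u, 2 * g_sc).mean()

u, pfa = 5, 0.01
thresh = chi2.isf(pfa, 2 * u)                  # threshold fixing Pf = 0.01
for L in (1, 2, 3):
    print(f"{L} branch(es): Pd ~ {pd_sc_energy_detector(thresh, branches=L):.3f}")
```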
Optimizing Probability of Detection Point Estimate Demonstration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. These NDE methods are used to detect real flaws such as cracks and crack-like flaws. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method, which is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for probability density. Normally, a set of 29 flaws of the same size (within some tolerance) is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF) while keeping the flaw sizes in the set as small as possible.
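The arithmetic behind the 29-flaw point-estimate demonstration is binomial: requiring zero misses in 29 trials demonstrates 0.90 POD at roughly 95% confidence, since a true POD of 0.90 would pass such a demonstration only about 5% of the time. A sketch, with a misses_allowed generalization included for illustration:

```python
from scipy.stats import binom

# Point-estimate POD demonstration: detect all n flaws of one size to pass.
n = 29
print(f"P(pass | POD=0.90) = {0.90**n:.3f}")   # ~0.047 -> ~95% confidence

def ppd(true_pod, n=29, misses_allowed=0):
    """Probability of passing the demonstration (PPD) for a given true
    POD; passing means at most misses_allowed misses out of n flaws
    (illustrative, following the binomial framework described above)."""
    return binom.cdf(misses_allowed, n, 1 - true_pod)

for pod in (0.90, 0.95, 0.99):
    print(f"true POD {pod:.2f}: PPD = {ppd(pod):.3f}")
```

Trading off n, misses_allowed, and flaw size against PPD and the probability of false calls is exactly the optimization the abstract describes.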
Enumeration of Vibrio cholerae O1 in Bangladesh waters by fluorescent-antibody direct viable count.
Brayton, P R; Tamplin, M L; Huq, A; Colwell, R R
1987-01-01
A field trial to enumerate Vibrio cholerae O1 in aquatic environments in Bangladesh was conducted, comparing fluorescent-antibody direct viable count with culture detection by the most-probable-number index. Specificity of a monoclonal antibody prepared against the O1 antigen was assessed and incorporated into the fluorescence staining method. All pond and water samples yielded higher counts of viable V. cholerae O1 by fluorescent-antibody direct viable count than by the most-probable-number index. Fluorescence microscopy is a more sensitive detection system than culture methods because it allows the enumeration of both culturable and nonculturable cells and therefore provides more precise monitoring of microbiological water quality. PMID:3324967
Ensemble learning and model averaging for material identification in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.
2017-05-01
In this paper we present a method for identifying the material contained in a pixel or region of pixels in a hyperspectral image. An identification process can be performed on a spectrum from image pixels that have been pre-determined to be of interest, generally by comparing the spectrum from the image to spectra in an identification library. The metric for comparison used in this paper is a Bayesian probability for each material. This probability can be computed either from Bayes' theorem applied to normal distributions for each library spectrum or using model averaging. Using probabilities has the advantage that the probabilities can be summed over spectra for any material class to obtain a class probability. For example, the probability that the spectrum of interest is a fabric is equal to the sum of all probabilities for fabric spectra in the library. We can do the same to determine the probability for a specific type of fabric, or any level of specificity contained in our library. Probabilities not only tell us which material is most likely, they tell us how confident we can be in the material's presence; a probability close to 1 indicates near certainty of the presence of a material in the given class, and a probability close to 0.5 indicates that we cannot know if the material is present at the given level of specificity. This is much more informative than a detection score from a target detection algorithm or a label from a classification algorithm. In this paper we present results in the form of a hierarchical tree with probabilities for each node. We use Forest Radiance imagery with 159 bands.
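A minimal sketch of the class-probability computation: per-spectrum posteriors from Bayes' theorem with a normal distribution for each library spectrum and a uniform prior, summed within each class. The four-band library below is a tiny hypothetical stand-in for a real 159-band identification library.

```python
import numpy as np
from scipy.stats import multivariate_normal

def class_probabilities(pixel, library_means, library_covs, labels):
    """Posterior probability of each material class for one pixel spectrum:
    normal likelihood per library spectrum, uniform prior, spectrum
    posteriors summed within a class (a sketch of the approach above)."""
    lik = np.array([multivariate_normal.pdf(pixel, m, c)
                    for m, c in zip(library_means, library_covs)])
    post = lik / lik.sum()                  # posterior per library spectrum
    classes = sorted(set(labels))
    return {cls: post[[l == cls for l in labels]].sum() for cls in classes}

# 4-band toy library: two fabric spectra and two paint spectra
means = [np.array([.2, .4, .5, .3]), np.array([.25, .45, .5, .35]),
         np.array([.6, .6, .2, .1]), np.array([.55, .65, .25, .1])]
covs = [np.eye(4) * 0.01] * 4
labels = ["fabric", "fabric", "paint", "paint"]
print(class_probabilities(np.array([.22, .42, .5, .33]), means, covs, labels))
```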
Spering, Cynthia C.; Hobson, Valerie; Lucas, John A.; Menon, Chloe V.; Hall, James R.
2012-01-01
Background. To validate and extend the findings of a raised cut score of O’Bryant and colleagues (O’Bryant SE, Humphreys JD, Smith GE, et al. Detecting dementia with the mini-mental state examination in highly educated individuals. Arch Neurol. 2008;65(7):963–967.) for the Mini-Mental State Examination in detecting cognitive dysfunction in a bilingual sample of highly educated ethnically diverse individuals. Methods. Archival data were reviewed from participants enrolled in the National Alzheimer's Coordinating Center minimum data set. Data on 7,093 individuals with 16 or more years of education were analyzed, including 2,337 cases with probable and possible Alzheimer's disease, 1,418 mild cognitive impairment patients, and 3,088 nondemented controls. Ethnic composition was characterized as follows: 6,296 Caucasians, 581 African Americans, 4 American Indians or Alaska natives, 2 native Hawaiians or Pacific Islanders, 149 Asians, 43 “Other,” and 18 of unknown origin. Results. Diagnostic accuracy estimates (sensitivity, specificity, and likelihood ratio) of Mini-Mental State Examination cut scores in detecting probable and possible Alzheimer's disease were examined. A standard Mini-Mental State Examination cut score of 24 (≤23) yielded a sensitivity of 0.58 and a specificity of 0.98 in detecting probable and possible Alzheimer's disease across ethnicities. A cut score of 27 (≤26) resulted in an improved balance of sensitivity and specificity (0.79 and 0.90, respectively). In the cognitively impaired group (mild cognitive impairment and probable and possible Alzheimer's disease), the standard cut score yielded a sensitivity of 0.38 and a specificity of 1.00 while raising the cut score to 27 resulted in an improved balance of 0.59 and 0.96 of sensitivity and specificity, respectively. Conclusions. These findings cross-validate our previous work and extend them to an ethnically diverse cohort. A higher cut score is needed to maximize diagnostic accuracy of the Mini-Mental State Examination in individuals with college degrees. PMID:22396476
Effectiveness of scat detection dogs for detecting forest carnivores
Long, Robert A.; Donovan, T.M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.
2007-01-01
We assessed the detection and accuracy rates of detection dogs trained to locate scats from free-ranging black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus). During the summers of 2003-2004, 5 detection teams located 1,565 scats (747 putative black bear, 665 putative fisher, and 153 putative bobcat) at 168 survey sites throughout Vermont, USA. Of 347 scats genetically analyzed for species identification, 179 (51.6%) yielded a positive identification, 131 (37.8%) failed to yield DNA information, and 37 (10.7%) yielded DNA but provided no species confirmation. For 70 survey sites where confirmation of a putative target species' scat was not possible, we assessed the probability that ≥1 of the scats collected at the site was deposited by the target species (probability of correct identification; P_ID). Based on species confirmations or P_ID values, we detected bears at 57.1% (96) of sites, fishers at 61.3% (103) of sites, and bobcats at 12.5% (21) of sites. We estimated that the mean probability of detecting the target species (when present) during a single visit to a site was 0.86 for black bears, 0.95 for fishers, and 0.40 for bobcats. The probability of detecting black bears was largely unaffected by site- or visit-specific covariates, but the probability of detecting fishers varied by detection team. We found little or no effect of topographic ruggedness, vegetation density, or local weather (e.g., temp, humidity) on detection probability for fishers or black bears (data were insufficient for bobcat analyses). Detection dogs were highly effective at locating scats from forest carnivores and provided an efficient and accurate method for collecting detection-nondetection data on multiple species.
2013-09-01
…of sperm whales. Although the methods developed in those papers demonstrate feasibility, they are not applicable to … location clicks (Marques et al., 2009) instead of detecting individual animals or groups of animals; these cue-counting methods will not be specifically …
Chan, Cheng Leng; Rudrappa, Sowmya; Ang, Pei San; Li, Shu Chuen; Evans, Stephen J W
2017-08-01
The ability to detect safety concerns from spontaneous adverse drug reaction reports in a timely and efficient manner remains important in public health. This paper explores the behaviour of the Sequential Probability Ratio Test (SPRT) and its ability to detect signals of disproportionate reporting (SDRs) in the Singapore context. We used SPRT with a combination of two hypothesised relative risks (hRRs) of 2 and 4.1 to detect signals of both common and rare adverse events in our small database. We compared SPRT with other methods in terms of the number of signals detected and whether labelled adverse drug reactions were detected or the reaction terms were considered serious. The other methods used were the reporting odds ratio (ROR), Bayesian Confidence Propagation Neural Network (BCPNN) and Gamma Poisson Shrinker (GPS). The SPRT produced 2187 signals in common with all methods, 268 unique signals, and 70 signals in common with at least one other method; it did not produce signals in 178 cases where two other methods detected them, and 403 signals were unique to one of the other methods. In terms of sensitivity, ROR performed better than the other methods, but the SPRT method found more new signals. The performances of the methods were similar for negative predictive value and specificity. Using a combination of hRRs for SPRT could be a useful screening tool for regulatory agencies, and more detailed investigation of the medical utility of the system is merited.
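A sketch of an SPRT screen on a cumulating count series, with Poisson likelihoods under H0 (relative risk 1) versus H1 (relative risk hRR) and standard Wald boundaries; the counts and expected values are illustrative, not the database's.

```python
import numpy as np

def sprt_poisson(counts, expected, h_rr=2.0, alpha=0.05, beta=0.10):
    """Sequential Probability Ratio Test on a drug-event count series:
    H0 RR=1 vs H1 RR=h_rr with Poisson likelihoods (a sketch of the
    screening idea; data below are illustrative).
    Returns 'signal', 'no signal', or 'continue'."""
    a = np.log((1 - beta) / alpha)      # upper (signal) boundary
    b = np.log(beta / (1 - alpha))      # lower (no-signal) boundary
    llr = 0.0
    for n, e in zip(counts, expected):
        # log likelihood ratio of one period: Poisson(n; e*rr) vs Poisson(n; e)
        llr += n * np.log(h_rr) - e * (h_rr - 1)
        if llr >= a:
            return "signal"
        if llr <= b:
            return "no signal"
    return "continue"

print(sprt_poisson(counts=[3, 5, 4, 6], expected=[1.2, 1.5, 1.4, 1.6]))
```

Running the same series against two hypothesised relative risks (e.g. 2 and 4.1, as in the study) and signalling if either crosses its boundary mirrors the combined screen described above.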
Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
NASA Astrophysics Data System (ADS)
Kaftan, Jens N.; Kiraly, Atilla P.; Bakai, Annemarie; Das, Marco; Novak, Carol L.; Aach, Til
2008-03-01
Pulmonary vascular tree segmentation has numerous applications in medical imaging and computer-aided diagnosis (CAD), including detection and visualization of pulmonary emboli (PE), improved lung nodule detection, and quantitative vessel analysis. We present a novel approach to pulmonary vessel segmentation based on a fuzzy segmentation concept, combining the strengths of both threshold and seed point based methods. The lungs of the original image are first segmented and a threshold-based approach identifies core vessel components with a high specificity. These components are then used to automatically identify reliable seed points for a fuzzy seed point based segmentation method, namely fuzzy connectedness. The output of the method consists of the probability of each voxel belonging to the vascular tree. Hence, our method provides the possibility to adjust the sensitivity/specificity of the segmentation result a posteriori according to application-specific requirements, through definition of a minimum vessel-probability required to classify a voxel as belonging to the vascular tree. The method has been evaluated on contrast-enhanced thoracic CT scans from clinical PE cases and demonstrates overall promising results. For quantitative validation we compare the segmentation results to randomly selected, semi-automatically segmented sub-volumes and present the resulting receiver operating characteristic (ROC) curves. Although we focus on contrast enhanced chest CT data, the method can be generalized to other regions of the body as well as to different imaging modalities.
The effect of timing errors in optical digital systems.
NASA Technical Reports Server (NTRS)
Gagliardi, R. M.
1972-01-01
The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.
A generic nuclei detection method for histopathological breast images
NASA Astrophysics Data System (ADS)
Kost, Henning; Homeyer, André; Bult, Peter; Balkenhol, Maschenka C. A.; van der Laak, Jeroen A. W. M.; Hahn, Horst K.
2016-03-01
The detection of cell nuclei plays a key role in various histopathological image analysis problems. Considering the high variability of its applications, we propose a novel generic and trainable detection approach. Adaption to specific nuclei detection tasks is done by providing training samples. A trainable deconvolution and classification algorithm is used to generate a probability map indicating the presence of a nucleus. The map is processed by an extended watershed segmentation step to identify the nuclei positions. We have tested our method on data sets with different stains and target nuclear types. We obtained F1-measures between 0.83 and 0.93.
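A compact stand-in for the probability-map-to-positions step, assuming the trainable front end has already produced `prob_map`; the watershed here is plain marker-based (the paper uses an extended watershed), and `p_min`/`min_dist` are hypothetical tuning parameters.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def detect_nuclei(prob_map, p_min=0.5, min_dist=5):
    # Turn a per-pixel nucleus-probability map into nucleus positions
    mask = prob_map > p_min
    seeds = peak_local_max(prob_map, min_distance=min_dist, threshold_abs=p_min)
    markers = np.zeros(prob_map.shape, dtype=int)
    markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
    labels = watershed(-prob_map, markers, mask=mask)  # split touching nuclei
    return seeds, labels
```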
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Jang, Dae-Heung
2015-09-30
Similar to regression, many measures have been developed to detect influential data points in discriminant analysis, most following the same principles as the diagnostic measures used in linear regression. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
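A minimal sketch of the case-deletion diagnostic, assuming linear discriminant analysis as the classifier; the function and its arguments are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def posterior_shift(X, y, omit):
    # Change in every point's posterior class probabilities when
    # observation `omit` is left out of the training data
    full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)
    keep = np.arange(len(y)) != omit
    reduced = LinearDiscriminantAnalysis().fit(X[keep], y[keep]).predict_proba(X)
    return reduced - full  # one row per point; large rows flag influence
```

Plotting the rows of this matrix for each omitted point is one way to build the proposed graphical display.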
Can camera traps monitor Komodo dragons a large ectothermic predator?
Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S
2013-01-01
Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that, overall, camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
The relationship between species detection probability and local extinction probability
Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.
2004-01-01
In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
Smith, Christopher D.; Quist, Michael C.; Hardy, Ryan S.
2015-01-01
Research comparing different sampling techniques helps improve the efficiency and efficacy of sampling efforts. We compared the effectiveness of three sampling techniques (small-mesh hoop nets, benthic trawls, boat-mounted electrofishing) for 30 species in the Green (WY, USA) and Kootenai (ID, USA) rivers by estimating conditional detection probabilities (probability of detecting a species given its presence at a site). Electrofishing had the highest detection probabilities (generally greater than 0.60) for most species (88%), but hoop nets also had high detectability for several taxa (e.g., adult burbot Lota lota, juvenile northern pikeminnow Ptychocheilus oregonensis). Benthic trawls had low detection probabilities (<0.05) for most taxa (84%). Gear-specific effects were present for most species indicating large differences in gear effectiveness among techniques. In addition to gear effects, habitat characteristics also influenced detectability of fishes. Most species-specific habitat relationships were idiosyncratic and reflected the ecology of the species. Overall findings of our study indicate that boat-mounted electrofishing and hoop nets are the most effective techniques for sampling fish assemblages in large, coldwater rivers.
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
2017-11-01
When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.
Gotelli, Nicholas J.; Dorazio, Robert M.; Ellison, Aaron M.; Grossman, Gary D.
2010-01-01
Quantifying patterns of temporal trends in species assemblages is an important analytical challenge in community ecology. We describe methods of analysis that can be applied to a matrix of counts of individuals that is organized by species (rows) and time-ordered sampling periods (columns). We first developed a bootstrapping procedure to test the null hypothesis of random sampling from a stationary species abundance distribution with temporally varying sampling probabilities. This procedure can be modified to account for undetected species. We next developed a hierarchical model to estimate species-specific trends in abundance while accounting for species-specific probabilities of detection. We analysed two long-term datasets on stream fishes and grassland insects to demonstrate these methods. For both assemblages, the bootstrap test indicated that temporal trends in abundance were more heterogeneous than expected under the null model. We used the hierarchical model to estimate trends in abundance and identified sets of species in each assemblage that were steadily increasing, decreasing or remaining constant in abundance over more than a decade of standardized annual surveys. Our methods of analysis are broadly applicable to other ecological datasets, and they represent an advance over most existing procedures, which do not incorporate effects of incomplete sampling and imperfect detection.
Environmental DNA as a Tool for Inventory and Monitoring of Aquatic Vertebrates
2017-07-01
[Fragmentary record] The surviving method text describes channel surveys (reaches initially selected based on access and visual indicators, with geomorphic calculations and a description of each reach) and a laboratory protocol for designing species-specific qPCR assays for environmental DNA; eDNA sampling is compared to traditional field surveys with respect to sensitivity, detection probabilities, and cost efficiency.
Peck, Michael W; Plowman, June; Aldus, Clare F; Wyatt, Gary M; Izurieta, Walter Penaloza; Stringer, Sandra C; Barker, Gary C
2010-10-01
The highly potent botulinum neurotoxins are responsible for botulism, a severe neuroparalytic disease. Strains of nonproteolytic Clostridium botulinum form neurotoxins of types B, E, and F and are the main hazard associated with minimally heated refrigerated foods. Recent developments in quantitative microbiological risk assessment (QMRA) and food safety objectives (FSO) have made food safety more quantitative and include, as inputs, probability distributions for the contamination of food materials and foods. A new method that combines a selective enrichment culture with multiplex PCR has been developed and validated to enumerate specifically the spores of nonproteolytic C. botulinum. Key features of this new method include the following: (i) it is specific for nonproteolytic C. botulinum (and does not detect proteolytic C. botulinum), (ii) the detection limit has been determined for each food tested (using carefully structured control samples), and (iii) a low detection limit has been achieved by the use of selective enrichment and large test samples. The method has been used to enumerate spores of nonproteolytic C. botulinum in 637 samples of 19 food materials included in pasta-based minimally heated refrigerated foods and in 7 complete foods. A total of 32 samples (5 egg pastas and 27 scallops) contained spores of nonproteolytic C. botulinum type B or F. The majority of samples contained <100 spores/kg, but one sample of scallops contained 444 spores/kg. Nonproteolytic C. botulinum type E was not detected. Importantly, for QMRA and FSO, the construction of probability distributions will enable the frequency of packs containing particular levels of contamination to be determined.
Isothermal amplification detection of nucleic acids by a double-nicked beacon.
Shi, Chao; Zhou, Meiling; Pan, Mei; Zhong, Guilin; Ma, Cuiping
2016-03-01
Isothermal and rapid amplification detection of nucleic acids is an important technology in environmental monitoring, foodborne pathogen detection, and point-of-care clinical diagnostics. Here we have developed a novel method of isothermal signal amplification for single-stranded DNA (ssDNA) detection. The ssDNA target could be used as an initiator, coupled with a double-nicked molecular beacon, to originate amplification cycles, achieving cascade signal amplification. In addition, the method showed good specificity and strong anti-jamming capability. Overall, it is a one-pot, isothermal strand displacement amplification method with no stepwise procedure, which greatly simplifies the experimental workflow and decreases the probability of sample contamination. Given these advantages, the method should be very useful for detecting nucleic acids at the point of care or in the field.
A Gibbs sampler for Bayesian analysis of site-occupancy data
Dorazio, Robert M.; Rodriguez, Daniel Taylor
2012-01-01
1. A Bayesian analysis of site-occupancy data containing covariates of species occurrence and species detection probabilities is usually completed using Markov chain Monte Carlo methods in conjunction with software programs that can implement those methods for any statistical model, not just site-occupancy models. Although these software programs are quite flexible, considerable experience is often required to specify a model and to initialize the Markov chain so that summaries of the posterior distribution can be estimated efficiently and accurately. 2. As an alternative to these programs, we develop a Gibbs sampler for Bayesian analysis of site-occupancy data that include covariates of species occurrence and species detection probabilities. This Gibbs sampler is based on a class of site-occupancy models in which probabilities of species occurrence and detection are specified as probit-regression functions of site- and survey-specific covariate measurements. 3. To illustrate the Gibbs sampler, we analyse site-occupancy data of the blue hawker, Aeshna cyanea (Odonata, Aeshnidae), a common dragonfly species in Switzerland. Our analysis includes a comparison of results based on Bayesian and classical (non-Bayesian) methods of inference. We also provide code (based on the R software program) for conducting Bayesian and classical analyses of site-occupancy data.
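A minimal sketch of such a sampler, assuming the Albert-Chib probit data augmentation with N(0, I) priors on both coefficient vectors; the variable names and priors are illustrative, and the article's own R code should be preferred in practice.

```python
import numpy as np
from scipy.special import ndtr          # standard normal CDF
from scipy.stats import truncnorm

def _trunc_norm(mu, pos, rng):
    # Latent normals truncated to (0, inf) where pos, else (-inf, 0)
    a = np.where(pos, -mu, -np.inf)     # bounds are standardized about loc=mu
    b = np.where(pos, np.inf, -mu)
    return truncnorm.rvs(a, b, loc=mu, random_state=rng)

def gibbs_occupancy(y, X, W, n_iter=2000, seed=1):
    # y: (sites, visits) 0/1 detections; X: (sites, p) occupancy covariates;
    # W: (sites, visits, q) detection covariates
    rng = np.random.default_rng(seed)
    n, J = y.shape
    p, q = X.shape[1], W.shape[2]
    beta, alpha = np.zeros(p), np.zeros(q)
    detected = y.sum(axis=1) > 0
    draws_b, draws_a = [], []
    for _ in range(n_iter):
        psi, pdet = ndtr(X @ beta), ndtr(W @ alpha)
        # 1. latent occupancy state for sites with all-zero histories
        q0 = psi * np.prod(1.0 - pdet, axis=1)
        z = np.where(detected, 1.0, rng.random(n) < q0 / (q0 + 1.0 - psi))
        # 2. occupancy coefficients via probit latent utilities
        u = _trunc_norm(X @ beta, z > 0, rng)
        V = np.linalg.inv(X.T @ X + np.eye(p))
        beta = rng.multivariate_normal(V @ (X.T @ u), V)
        # 3. detection coefficients from surveys at occupied sites only
        occ = z > 0
        Wo, yo = W[occ].reshape(-1, q), y[occ].ravel()
        v = _trunc_norm(Wo @ alpha, yo > 0, rng)
        Va = np.linalg.inv(Wo.T @ Wo + np.eye(q))
        alpha = rng.multivariate_normal(Va @ (Wo.T @ v), Va)
        draws_b.append(beta)
        draws_a.append(alpha)
    return np.array(draws_b), np.array(draws_a)  # discard burn-in before use
```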
Heart sounds analysis using probability assessment.
Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P
2017-07-31
This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
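A sketch of the first stage only (the 15-90 Hz amplitude envelope with candidate S1/S2 peaks), assuming a Hilbert envelope and a 200 ms refractory spacing; both choices are illustrative rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, find_peaks, hilbert, sosfiltfilt

def s1s2_candidates(x, fs):
    # Amplitude envelope in the 15-90 Hz band, then peak picking
    sos = butter(4, (15, 90), btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))
    peaks, _ = find_peaks(env, distance=int(0.2 * fs))  # >= 200 ms apart
    return env, peaks
```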
Schmelzle, Molly C; Kinziger, Andrew P
2016-07-01
Environmental DNA (eDNA) monitoring approaches promise to greatly improve detection of rare, endangered and invasive species in comparison with traditional field approaches. Herein, eDNA approaches and traditional seining methods were applied at 29 research locations to compare method-specific estimates of detection and occupancy probabilities for endangered tidewater goby (Eucyclogobius newberryi). At each location, multiple paired seine hauls and water samples for eDNA analysis were taken, ranging from two to 23 samples per site, depending upon habitat size. Analysis using a multimethod occupancy modelling framework indicated that the probability of detection using eDNA was nearly double (0.74) the rate of detection for seining (0.39). The higher detection rates afforded by eDNA allowed determination of tidewater goby occupancy at two locations where they have not been previously detected and at one location considered to be locally extirpated. Additionally, eDNA concentration was positively related to tidewater goby catch per unit effort, suggesting eDNA could potentially be used as a proxy for local tidewater goby abundance. Compared to traditional field sampling, eDNA provided improved occupancy parameter estimates and can be applied to increase management efficiency across a broad spatial range and within a diversity of habitats.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Fujiwara, T.; Lin, S.
1986-01-01
In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
Human versus automation in responding to failures: an expected-value analysis
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Parasuraman, R.
2000-01-01
A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both human and automation being considered, as well as factors independent of the choice--namely, costs and benefits of incorrect and correct decisions as well as the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
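The criterion reduces to comparing one expected-value expression per candidate agent. A minimal sketch with hypothetical costs, benefits, and probabilities:

```python
def expected_value(p_fail, p_miss, p_fa, benefit_hit, cost_miss,
                   benefit_correct_reject, cost_fa):
    # Expected value of assigning the failure-detection task to an agent
    # characterised by its miss and false-alarm probabilities
    return (p_fail * ((1 - p_miss) * benefit_hit - p_miss * cost_miss)
            + (1 - p_fail) * ((1 - p_fa) * benefit_correct_reject
                              - p_fa * cost_fa))

# Hypothetical numbers: automate only if its expected value is higher
ev_human = expected_value(0.01, 0.10, 0.05, 100, 1000, 1, 50)
ev_auto = expected_value(0.01, 0.02, 0.20, 100, 1000, 1, 50)
print(ev_auto > ev_human)
```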
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
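A minimal sketch of the mapping idea, assuming a binned peak current distribution (PCD), per-site signal thresholds, simple 1/distance range attenuation, and a fixed minimum number of reporting sites for a solution; all of these are stand-ins for the paper's network-specific parameters.

```python
import numpy as np

def detection_efficiency(grid_xy, sites_xy, thresholds, currents, pcd,
                         n_required=3):
    # For each grid point, the probability (over the PCD) that a flash
    # there is received above threshold by at least n_required sites
    eff = np.zeros(len(grid_xy))
    for gi, (gx, gy) in enumerate(grid_xy):
        d = np.hypot(sites_xy[:, 0] - gx, sites_xy[:, 1] - gy)
        for current, p_current in zip(currents, pcd):
            if np.sum(current / d >= thresholds) >= n_required:
                eff[gi] += p_current
    return eff  # contour these values to map detection efficiency
```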
2014-01-01
Background: The objective of this study was to perform a systematic review and a meta-analysis in order to estimate the diagnostic accuracy of diffusion weighted imaging (DWI) in the preoperative assessment of deep myometrial invasion in patients with endometrial carcinoma. Methods: Studies evaluating DWI for the detection of deep myometrial invasion in patients with endometrial carcinoma were systematically searched for in MEDLINE, EMBASE, and the Cochrane Library from January 1995 to January 2014. Methodologic quality was assessed by using the Quality Assessment of Diagnostic Accuracy Studies tool. Bivariate random-effects meta-analytic methods were used to obtain pooled estimates of sensitivity, specificity, diagnostic odds ratio (DOR) and receiver operating characteristic (ROC) curves. The study also evaluated the clinical utility of DWI in preoperative assessment of deep myometrial invasion. Results: Seven studies enrolling a total of 320 individuals met the study inclusion criteria. The summary area under the ROC curve was 0.91. There was no evidence of publication bias (P = 0.90, bias coefficient analysis). Sensitivity and specificity of DWI for detection of deep myometrial invasion across all studies were 0.90 and 0.89, respectively. Positive and negative likelihood ratios with DWI were 8 and 0.11, respectively. In patients with high pre-test probabilities, DWI enabled confirmation of deep myometrial invasion; in patients with low pre-test probabilities, DWI enabled exclusion of deep myometrial invasion. In the worst-case scenario (pre-test probability, 50%), post-test probabilities were 89% and 10% for positive and negative DWI results, respectively. Conclusion: DWI has high sensitivity and specificity for detecting deep myometrial invasion and, more importantly, can reliably rule out deep myometrial invasion. It would therefore be worthwhile to add a DWI sequence to standard MRI protocols in the preoperative evaluation of endometrial cancer in order to detect deep myometrial invasion, which, along with other poor prognostic factors such as age, tumor grade, and LVSI, would be useful in stratifying risk groups and thereby in tailoring the surgical approach for patients with low-risk endometrial carcinoma.
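The reported post-test probabilities follow directly from Bayes' theorem in odds form with the pooled likelihood ratios. A worked check of the paper's 50% pre-test scenario:

```python
def post_test_prob(pre_test, likelihood_ratio):
    # Bayes' theorem in odds form: posterior odds = prior odds * LR
    odds = pre_test / (1 - pre_test) * likelihood_ratio
    return odds / (1 + odds)

print(round(post_test_prob(0.50, 8.0), 2))    # ~0.89 after a positive DWI
print(round(post_test_prob(0.50, 0.11), 2))   # ~0.10 after a negative DWI
```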
Quantifying seining detection probability for fishes of Great Plains sand‐bed rivers
Mollenhauer, Robert; Logue, Daniel R.; Brewer, Shannon K.
2018-01-01
Species detection error (i.e., imperfect and variable detection probability) is an essential consideration when investigators map distributions and interpret habitat associations. When fish detection error that is due to highly variable instream environments needs to be addressed, sand‐bed streams of the Great Plains represent a unique challenge. We quantified seining detection probability for diminutive Great Plains fishes across a range of sampling conditions in two sand‐bed rivers in Oklahoma. Imperfect detection resulted in underestimates of species occurrence using naïve estimates, particularly for less common fishes. Seining detection probability also varied among fishes and across sampling conditions. We observed a quadratic relationship between water depth and detection probability, in which the exact nature of the relationship was species‐specific and dependent on water clarity. Similarly, the direction of the relationship between water clarity and detection probability was species‐specific and dependent on differences in water depth. The relationship between water temperature and detection probability was also species dependent, where both the magnitude and direction of the relationship varied among fishes. We showed how ignoring detection error confounded an underlying relationship between species occurrence and water depth. Despite imperfect and heterogeneous detection, our results support that determining species absence can be accomplished with two to six spatially replicated seine hauls per 200‐m reach under average sampling conditions; however, required effort would be higher under certain conditions. Detection probability was low for the Arkansas River Shiner Notropis girardi, which is federally listed as threatened, and more than 10 seine hauls per 200‐m reach would be required to assess presence across sampling conditions. Our model allows scientists to estimate sampling effort to confidently assess species occurrence, which maximizes the use of available resources. Increased implementation of approaches that consider detection error promote ecological advancements and conservation and management decisions that are better informed.
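The replicate-effort recommendation follows from the probability of at least one detection in n independent hauls, 1 - (1 - p)^n. A small sketch (the confidence level and example detection probabilities are illustrative):

```python
import math

def hauls_needed(p_detect, confidence=0.95):
    # Smallest n with 1 - (1 - p)^n >= confidence at an occupied reach
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect))

print(hauls_needed(0.40))  # ~6 hauls under favourable conditions
print(hauls_needed(0.25))  # >10 hauls for low-detectability species
```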
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
Woldegebriel, Michael; Derks, Eduard
2017-01-17
In this work, a novel probabilistic untargeted feature detection algorithm for liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) using an artificial neural network (ANN) is presented. The feature detection process is approached as a pattern recognition problem, and thus ANN was utilized as an efficient feature recognition tool. Unlike most existing feature detection algorithms, with this approach any suspected chromatographic profile (i.e., shape of a peak) can easily be incorporated by training the network, avoiding the need to perform computationally expensive regression methods with specific mathematical models. In addition, with this method we have shown that the high-resolution raw data can be fully utilized without applying any arbitrary thresholds or data reduction, therefore improving the sensitivity of the method for compound identification purposes. Furthermore, as opposed to existing deterministic (binary) approaches, this method estimates the probability of a feature being present or absent at a given point of interest, giving every data point a chance to be propagated down the data analysis pipeline, weighted by its probability. The algorithm was tested with data sets generated from spiked samples in forensic and food safety contexts and has shown promising results by detecting features for all compounds in a computationally reasonable time.
Schmidt, Benedikt R
2003-08-01
The evidence for amphibian population declines is based on count data that were not adjusted for detection probabilities. Such data are not reliable even when collected using standard methods. The formula C = Np (where C is a count, N the true parameter value, and p is a detection probability) relates count data to demography, population size, or distributions. With unadjusted count data, one assumes a linear relationship between C and N and that p is constant. These assumptions are unlikely to be met in studies of amphibian populations. Amphibian population data should be based on methods that account for detection probabilities.
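The pitfall follows directly from C = Np: if p changes between surveys, trends in C need not reflect trends in N. A two-line simulation with made-up numbers makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)
N = np.array([100, 100])        # true population size in two years
p = np.array([0.6, 0.3])        # detection probability halves in year 2
C = rng.binomial(N, p)          # unadjusted counts, E[C] = N * p
print(C)                        # suggests a ~50% "decline" although N is constant
```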
Nielson, Ryan M.; Gray, Brian R.; McDonald, Lyman L.; Heglund, Patricia J.
2011-01-01
Estimation of site occupancy rates when detection probabilities are <1 is well established in wildlife science. Data from multiple visits to a sample of sites are used to estimate detection probabilities and the proportion of sites occupied by focal species. In this article we describe how site occupancy methods can be applied to estimate occupancy rates of plants and other sessile organisms. We illustrate this approach and the pitfalls of ignoring incomplete detection using spatial data for 2 aquatic vascular plants collected under the Upper Mississippi River's Long Term Resource Monitoring Program (LTRMP). Site occupancy models considered include: a naïve model that ignores incomplete detection, a simple site occupancy model assuming a constant occupancy rate and a constant probability of detection across sites, several models that allow site occupancy rates and probabilities of detection to vary with habitat characteristics, and mixture models that allow for unexplained variation in detection probabilities. We used information theoretic methods to rank competing models and bootstrapping to evaluate the goodness-of-fit of the final models. Results of our analysis confirm that ignoring incomplete detection can result in biased estimates of occupancy rates. Estimates of site occupancy rates for 2 aquatic plant species were 19–36% higher compared to naive estimates that ignored probabilities of detection <1. Simulations indicate that final models have little bias when 50 or more sites are sampled, and little gains in precision could be expected for sample sizes >300. We recommend applying site occupancy methods for monitoring presence of aquatic species.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2002-01-01
Use of point-count surveys is a popular method for collecting data on abundance and distribution of birds. However, analyses of such data often ignore potential differences in detection probability. We adapted a removal model to directly estimate detection probability during point-count surveys. The model assumes that singing frequency is a major factor influencing probability of detection when birds are surveyed using point counts. This may be appropriate for surveys in which most detections are by sound. The model requires counts to be divided into several time intervals. Point counts are often conducted for 10 min, where the number of birds recorded is divided into those first observed in the first 3 min, the subsequent 2 min, and the last 5 min. We developed a maximum-likelihood estimator for the detectability of birds recorded during counts divided into those intervals. This technique can easily be adapted to point counts divided into intervals of any length. We applied this method to unlimited-radius counts conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. We found differences in detection probability among species. Species that sing frequently such as Winter Wren (Troglodytes troglodytes) and Acadian Flycatcher (Empidonax virescens) had high detection probabilities (∼90%) and species that call infrequently such as Pileated Woodpecker (Dryocopus pileatus) had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. We used the same approach to estimate detection probability and density for a subset of the observations with limited-radius point counts.
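A minimal maximum-likelihood sketch of the removal estimator, assuming a constant per-minute detection rate and the 3-2-5 min intervals above; the example counts are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def removal_mle(counts, edges=(0.0, 3.0, 5.0, 10.0)):
    # counts: birds first detected in each interval of the point count
    counts = np.asarray(counts, dtype=float)
    t = np.asarray(edges)

    def nll(r):  # multinomial likelihood, conditioned on being detected
        cell = np.exp(-r * t[:-1]) - np.exp(-r * t[1:])
        return -np.sum(counts * np.log(cell / cell.sum()))

    r = minimize_scalar(nll, bounds=(1e-4, 2.0), method="bounded").x
    return r, 1.0 - np.exp(-r * t[-1])  # per-minute rate, overall detectability

rate, p_detect = removal_mle([60, 15, 15])
print(round(p_detect, 2))  # high detectability, as for frequent singers
```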
Blauch, A J; Schiano, J L; Ginsberg, M D
2000-06-01
The performance of a nuclear resonance detection system can be quantified using binary detection theory. Within this framework, signal averaging increases the probability of a correct detection and decreases the probability of a false alarm by reducing the variance of the noise in the average signal. In conjunction with signal averaging, we propose another method based on feedback control concepts that further improves detection performance. By maximizing the nuclear resonance signal amplitude, feedback raises the probability of correct detection. Furthermore, information generated by the feedback algorithm can be used to reduce the probability of false alarm. We discuss the advantages afforded by feedback that cannot be obtained using signal averaging. As an example, we show how this method is applicable to the detection of explosives using nuclear quadrupole resonance.
Hostetter, Nathan J.; Evans, Allen F.; Cramer, Bradley M.; Collis, Ken; Lyons, Donald E.; Roby, Daniel D.
2015-01-01
Accurate assessment of specific mortality factors is vital to prioritize recovery actions for threatened and endangered species. For decades, tag recovery methods have been used to estimate fish mortality due to avian predation. Predation probabilities derived from fish tag recoveries on piscivorous waterbird colonies typically reflect minimum estimates of predation due to an unknown and unaccounted-for fraction of tags that are consumed but not deposited on-colony (i.e., deposition probability). We applied an integrated tag recovery modeling approach in a Bayesian context to estimate predation probabilities that accounted for predator-specific tag detection and deposition probabilities in a multiple-predator system. Studies of PIT tag deposition were conducted across three bird species nesting at seven different colonies in the Columbia River basin, USA. Tag deposition probabilities differed significantly among predator species (Caspian terns Hydroprogne caspia: deposition probability = 0.71, 95% credible interval [CRI] = 0.51–0.89; double-crested cormorants Phalacrocorax auritus: 0.51, 95% CRI = 0.34–0.70; California gulls Larus californicus: 0.15, 95% CRI = 0.11–0.21) but showed little variation across trials within a species or across years. Data from a 6-year study (2008–2013) of PIT-tagged juvenile Snake River steelhead Oncorhynchus mykiss (listed as threatened under the Endangered Species Act) indicated that colony-specific predation probabilities ranged from less than 0.01 to 0.17 and varied by predator species, colony location, and year. Integrating the predator-specific deposition probabilities increased the predation probabilities by a factor of approximately 1.4 for Caspian terns, 2.0 for double-crested cormorants, and 6.7 for California gulls compared with traditional minimum predation rate methods, which do not account for deposition probabilities. Results supported previous findings on the high predation impacts from strictly piscivorous waterbirds nesting in the Columbia River estuary (i.e., terns and cormorants), but our findings also revealed greater impacts of a generalist predator species (i.e., California gulls) than were previously documented. Approaches used in this study allow for direct comparisons among multiple fish mortality factors and considerably improve the reliability of tag recovery models for estimating predation probabilities in multiple-predator systems.
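At its core, the correction divides the minimum (recovery-based) estimate by the detection and deposition probabilities; the full model is Bayesian, so the function below is only an illustrative point estimate with invented counts.

```python
def predation_probability(n_recovered, n_tagged, p_detection, p_deposition):
    # Tags recovered on-colony, corrected for imperfect on-colony tag
    # detection and for tags consumed but never deposited on-colony
    return n_recovered / (n_tagged * p_detection * p_deposition)

# Dividing by deposition rescales minimum estimates by ~1/0.71 (terns),
# ~1/0.51 (cormorants) and ~1/0.15 (gulls), matching the factors above
print(predation_probability(50, 10_000, 0.9, 0.15))  # ~0.037 for gulls
```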
Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.
Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo
2018-01-01
Automatic early detection of acromegaly is theoretically possible from facial photographs, which could lessen the prevalence and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle box, and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, a NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early with high sensitivity and specificity.
Removal of anti-Stokes emission background in STED microscopy by FPGA-based synchronous detection
NASA Astrophysics Data System (ADS)
Castello, M.; Tortarolo, G.; Coto Hernández, I.; Deguchi, T.; Diaspro, A.; Vicidomini, G.
2017-05-01
In stimulated emission depletion (STED) microscopy, the role of the STED beam is to de-excite, via stimulated emission, the fluorophores that have been previously excited by the excitation beam. This condition, together with specific beam intensity distributions, allows obtaining true sub-diffraction spatial resolution images. However, if the STED beam has a non-negligible probability to excite the fluorophores, a strong fluorescent background signal (anti-Stokes emission) reduces the effective resolution. For STED scanning microscopy, different synchronous detection methods have been proposed to remove this anti-Stokes emission background and recover the resolution. However, every method works only for a specific STED microscopy implementation. Here we present a user-friendly synchronous detection method compatible with any STED scanning microscope. It exploits a data acquisition (DAQ) card based on a field-programmable gate array (FPGA), which is progressively used in STED microscopy. In essence, the FPGA-based DAQ card synchronizes the fluorescent signal registration, the beam deflection, and the excitation beam interruption, providing a fully automatic pixel-by-pixel synchronous detection method. We validate the proposed method in both continuous wave and pulsed STED microscope systems.
USDA-ARS's Scientific Manuscript database
An algorithm is presented to fuse the Normalized Difference Vegetation Index (NDVI) with Light Detection and Ranging (LiDAR) elevation data to produce a map potentially useful for the site-specific scouting and pest management of several insect pests. In cotton, these pests include the Tarnished Pl...
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced from the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating two components of the probability a bird is detected during a count into (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
Meixell, Brandt W.; Arnold, Todd W.; Lindberg, Mark S.; Smith, Matthew M.; Runstadler, Jonathan A.; Ramey, Andy M.
2016-01-01
Methods: We used molecular methods to screen blood samples and cloacal/oropharyngeal swabs collected from 1347 ducks of five species during May-August 2010, in interior Alaska, for the presence of hematozoa, Influenza A Virus (IAV), and IAV antibodies. Using models to account for imperfect detection of parasites, we estimated seasonal variation in prevalence of three parasite genera (Haemoproteus, Plasmodium, Leucocytozoon) and investigated how co-infection with parasites and viruses were related to the probability of infection. Results: We detected parasites from each hematozoan genus in adult and juvenile ducks of all species sampled. Seasonal patterns in detection and prevalence varied by parasite genus and species, age, and sex of duck hosts. The probabilities of infection for Haemoproteus and Leucocytozoon parasites were strongly positively correlated, but hematozoa infection was not correlated with IAV infection or serostatus. The probability of Haemoproteus infection was negatively related to body condition in juvenile ducks; relationships between Leucocytozoon infection and body condition varied among host species. Conclusions: We present prevalence estimates for Haemoproteus, Leucocytozoon, and Plasmodium infections in waterfowl at the interface of the sub-Arctic and Arctic and provide evidence for local transmission of all three parasite genera. Variation in prevalence and molecular detection of hematozoa parasites in wild ducks is influenced by seasonal timing and a number of host traits. A positive correlation in co-infection of Leucocytozoon and Haemoproteus suggests that infection probability by parasites in one or both genera is enhanced by infection with the other, or that encounter rates of hosts and genus-specific vectors are correlated. Using size-adjusted mass as an index of host condition, we did not find evidence for strong deleterious consequences of hematozoa infection in wild ducks.
QQ-SNV: single nucleotide variant detection at low frequency by comparing the quality quantiles.
Van der Borght, Koen; Thys, Kim; Wetzels, Yves; Clement, Lieven; Verbist, Bie; Reumers, Joke; van Vlijmen, Herman; Aerssens, Jeroen
2015-11-10
Next generation sequencing enables studying heterogeneous populations of viral infections. When the sequencing is done at high coverage depth ("deep sequencing"), low frequency variants can be detected. Here we present QQ-SNV (http://sourceforge.net/projects/qqsnv), a logistic regression classifier model developed for the Illumina sequencing platforms that uses the quantiles of the quality scores to distinguish true single nucleotide variants from sequencing errors based on the estimated SNV probability. To train the model, we created a dataset of an in silico mixture of five HIV-1 plasmids. Testing of our method in comparison to the existing methods LoFreq, ShoRAH, and V-Phaser 2 was performed on two HIV and four HCV plasmid mixture datasets and one influenza H1N1 clinical dataset. For default application of QQ-SNV, variants were called using a SNV probability cutoff of 0.5 (QQ-SNV(D)). To improve the sensitivity we used a SNV probability cutoff of 0.0001 (QQ-SNV(HS)). To also increase specificity, SNVs called were overruled when their frequency was below the 80th percentile calculated on the distribution of error frequencies (QQ-SNV(HS-P80)). When comparing QQ-SNV versus the other methods on the plasmid mixture test sets, QQ-SNV(D) performed similarly to the existing approaches. QQ-SNV(HS) was more sensitive on all test sets but with more false positives. QQ-SNV(HS-P80) was found to be the most accurate method over all test sets by balancing sensitivity and specificity. When applied to a paired-end HCV sequencing study, with lowest spiked-in true frequency of 0.5%, QQ-SNV(HS-P80) revealed a sensitivity of 100% (vs. 40-60% for the existing methods) and a specificity of 100% (vs. 98.0-99.7% for the existing methods). In addition, QQ-SNV required the least overall computation time to process the test sets. Finally, when testing on a clinical sample, four putative true variants with frequency below 0.5% were consistently detected by QQ-SNV(HS-P80) from different generations of Illumina sequencers. We developed and successfully evaluated a novel method, called QQ-SNV, for highly efficient single nucleotide variant calling on Illumina deep sequencing virology data.
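The classifier's core idea, quality-score quantiles as features for a logistic regression, fits in a few lines. A sketch assuming per-site lists of base quality scores; the quantile set is illustrative, while the 0.5 and 0.0001 cutoffs mirror the description above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def quality_quantile_features(site_quals, qs=(10, 25, 50, 75, 90)):
    # One row per candidate site: quantiles of its base-quality scores
    return np.array([np.percentile(q, qs) for q in site_quals])

# train_quals/test_quals: per-site quality-score lists; y_train: labels
# (true SNV vs. sequencing error)
# clf = LogisticRegression().fit(quality_quantile_features(train_quals), y_train)
# p_snv = clf.predict_proba(quality_quantile_features(test_quals))[:, 1]
# Call a variant when p_snv >= 0.5 (default) or >= 1e-4 for high sensitivity
```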
Hagihara, Rie; Jones, Rhondda E; Sobtzick, Susan; Cleguer, Christophe; Garrigue, Claire; Marsh, Helene
2018-01-01
The probability of an aquatic animal being available for detection is typically <1. Accounting for covariates that reduce the probability of detection is important for obtaining robust estimates of the population abundance and determining its status and trends. The dugong (Dugong dugon) is a bottom-feeding marine mammal and a seagrass community specialist. We hypothesized that the probability of a dugong being available for detection is dependent on water depth and that dugongs spend more time underwater in deep-water seagrass habitats than in shallow-water seagrass habitats. We tested this hypothesis by quantifying the depth use of 28 wild dugongs fitted with GPS satellite transmitters and time-depth recorders (TDRs) at three sites with distinct seagrass depth distributions: 1) open waters supporting extensive seagrass meadows to 40 m deep (Torres Strait, 6 dugongs, 2015); 2) a protected bay (average water depth 6.8 m) with extensive shallow seagrass beds (Moreton Bay, 13 dugongs, 2011 and 2012); and 3) a mixture of lagoon, coral and seagrass habitats to 60 m deep (New Caledonia, 9 dugongs, 2013). The fitted instruments were used to measure the times the dugongs spent in the experimentally determined detection zones under various environmental conditions. The estimated probability of detection was applied to aerial survey data previously collected at each location. In general, dugongs were least available for detection in Torres Strait, and the population estimates increased 6-7 fold using depth-specific availability correction factors compared with earlier estimates that assumed homogeneous detection probability across water depth and location. Detection probabilities were higher in Moreton Bay and New Caledonia than Torres Strait because the water transparency in these two locations was much greater than in Torres Strait and the effect of correcting for depth-specific detection probability much less. The methodology has application to visual survey of coastal megafauna including surveys using Unmanned Aerial Vehicles.
NASA Astrophysics Data System (ADS)
Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming
2013-03-01
The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results demonstrate that utilizing HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
Comparison of probability statistics for automated ship detection in SAR imagery
NASA Astrophysics Data System (ADS)
Henschel, Michael D.; Rey, Maria T.; Campbell, J. W. M.; Petrovic, D.
1998-12-01
This paper discusses the initial results of a recent operational trial of the Ocean Monitoring Workstation's (OMW) ship detection algorithm, which is essentially a Constant False Alarm Rate filter applied to Synthetic Aperture Radar data. The choice of probability distribution and methodologies for calculating scene-specific statistics are discussed in some detail. An empirical basis for the choice of probability distribution used is discussed. We compare the results using a 1-look K-distribution function with various parameter choices and methods of estimation. As a special case of sea clutter statistics, the application of a χ2-distribution is also discussed. Comparisons are made with reference to RADARSAT data collected during the Maritime Command Operation Training exercise conducted in Atlantic Canadian waters in June 1998. Reference is also made to previously collected statistics. The OMW is a commercial software suite that provides modules for automated vessel detection, oil spill monitoring, and environmental monitoring. This work has been undertaken to fine-tune the OMW algorithms, with special emphasis on the false alarm rate of each algorithm.
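A simplified two-parameter CFAR sketch: each pixel is compared with the mean and standard deviation of a surrounding training ring, with the threshold multiplier set by the design false-alarm rate. Gaussian clutter is assumed here purely for brevity; the OMW trial discussed above fits scene-specific statistics such as a K-distribution.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def cfar_detect(img, guard=2, train=8, pfa=1e-6):
    # Ring statistics via box-filter differences: (big window) - (guard window)
    k = 2 * (guard + train) + 1
    g = 2 * guard + 1
    n_ring = k * k - g * g
    s1 = uniform_filter(img, k) * k * k - uniform_filter(img, g) * g * g
    s2 = uniform_filter(img**2, k) * k * k - uniform_filter(img**2, g) * g * g
    mu = s1 / n_ring
    sd = np.sqrt(np.maximum(s2 / n_ring - mu**2, 0.0))
    t = norm.isf(pfa)  # threshold multiplier for the design false-alarm rate
    return img > mu + t * sd  # boolean map of ship candidates
```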
Finding Kuiper Belt Objects Below the Detection Limit
NASA Astrophysics Data System (ADS)
Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew
2018-01-01
We demonstrate a novel approach for uncovering the signatures of moving objects (e.g. Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we will employ a matched filter moving at specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we will instead integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful, and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitudes over CPU implementations.
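The essential difference from shift-and-stack is that per-epoch detection evidence, rather than pixel flux, is accumulated along each candidate trajectory. A sketch assuming precomputed per-epoch log-likelihood images of a point source being present at each pixel:

```python
import numpy as np

def orbit_score(loglike_images, times, x0, y0, vx, vy):
    # Accumulate point-source log-likelihoods along a constant-rate
    # trajectory; no pixels are shifted or resampled
    score = 0.0
    for L, t in zip(loglike_images, times):
        x, y = int(round(x0 + vx * t)), int(round(y0 + vy * t))
        if 0 <= y < L.shape[0] and 0 <= x < L.shape[1]:
            score += L[y, x]
    return score  # accept the orbit when the score clears a calibrated threshold
```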
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method for radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimizing the method are provided.
Exploiting vibrational resonance in weak-signal detection
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing; Chapeau-Blondeau, François; Abbott, Derek
2017-08-01
In this paper, we investigate for the first time the exploitation of the vibrational resonance (VR) effect to detect weak signals in the presence of strong background noise. By injecting a series of sinusoidal interference signals of the same amplitude but with different frequencies into a generalized correlation detector, we show that the detection probability can be maximized at an appropriate interference amplitude. Based on a dual-Dirac probability density model, we compare the VR method with the stochastic resonance approach via adding dichotomous noise. The compared results indicate that the VR method can achieve a higher detection probability for a wider variety of noise distributions.
Wang, Zhuo; Camino, Acner; Zhang, Miao; Wang, Jie; Hwang, Thomas S; Wilson, David J; Huang, David; Li, Dengwang; Jia, Yali
2017-12-01
Diabetic retinopathy is a pathology where microvascular circulation abnormalities ultimately result in photoreceptor disruption and, consequently, permanent loss of vision. Here, we developed a method that automatically detects photoreceptor disruption in mild diabetic retinopathy by mapping ellipsoid zone reflectance abnormalities from en face optical coherence tomography images. The algorithm uses a fuzzy c-means scheme with a redefined membership function to assign a defect severity level on each pixel and generate a probability map of defect category affiliation. A novel scheme of unsupervised clustering optimization allows accurate detection of the affected area. The achieved accuracy, sensitivity and specificity were about 90% on a population of thirteen diseased subjects. This method shows potential for accurate and fast detection of early biomarkers in diabetic retinopathy evolution.
Video Shot Boundary Detection Using QR-Decomposition and Gaussian Transition Detection
NASA Astrophysics Data System (ADS)
Amiri, Ali; Fathy, Mahmood
2010-12-01
This article explores the problem of video shot boundary detection and presents a novel detection algorithm based on QR-decomposition and the modeling of gradual transitions by Gaussian functions. Specifically, the authors address the challenges of detecting gradual shots and of extracting appropriate spatiotemporal features that affect the ability of algorithms to detect shot boundaries efficiently. The algorithm exploits the properties of QR-decomposition to extract a block-wise probability function that indicates the probability of video frames being in shot transitions. The probability function changes abruptly at hard cuts and shows semi-Gaussian behavior in gradual transitions. The algorithm detects these transitions by analyzing the probability function. Finally, we report experimental results on the large-scale test set provided by TRECVID 2006, which includes assessments for hard-cut and gradual shot boundary detection. These results confirm the high performance of the proposed algorithm.
Kahbazi, Manijeh; Sarmadian, Hossein; Ahmadi, Azam; Didgar, Farshideh; Sadrnia, Maryam; Poolad, Toktam; Arjomandzadegan, Mohammad
2018-04-16
In clinical isolates of Mycobacterium tuberculosis (MTB), resistance to pyrazinamide arises from mutations at any position of the pncA gene (NC_000962.3), especially at nucleotides 359 and 374. In this study we examined the pncA gene sequence in clinical isolates of MTB. Genomic DNA of 33 clinical isolates of MTB was extracted by the Chelex 100 method. Polymerase chain reactions (PCR) were performed using specific primers to amplify a 744 bp amplicon comprising the coding sequence (CDS) of the pncA gene, and PCR products were sequenced on an automated Bioscience sequencing system. Additionally, semi-nested allele-specific PCR (sNASP) and polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) methods were carried out to verify probable mutations at nucleotides 359 and 374. Sequencing showed that nine of the 33 MTB clinical isolates, all pyrazinamide-resistant, had mutations, whereas no mutation was detected in the 24 susceptible strains across the entire 561 bp of the pncA gene. Moreover, a new G→A mutation at position 3 of the pncA gene was identified in some of the resistant isolates. Results showed that the sNASP method could detect mutations at nucleotides 359 and 374 of the pncA gene, but the PCR-RFLP method with the SacII enzyme could not. In conclusion, the identification of new mutations in the pncA gene confirms that mutations can occur at any nucleotide of the pncA gene sequence in resistant isolates of MTB.
Statistics provide guidance for indigenous organic carbon detection on Mars missions.
Sephton, Mark A; Carter, Jonathan N
2014-08-01
Data from the Viking and Mars Science Laboratory missions indicate the presence of organic compounds that are not definitively martian in origin. Both contamination and confounding mineralogies have been suggested as alternatives to indigenous organic carbon. Intuitive thought suggests that we are repeatedly obtaining data that confirms the same level of uncertainty. Bayesian statistics may suggest otherwise. If an organic detection method has a true positive to false positive ratio greater than one, then repeated organic matter detection progressively increases the probability of indigeneity. Bayesian statistics also reveal that methods with higher ratios of true positives to false positives give higher overall probabilities and that detection of organic matter in a sample with a higher prior probability of indigenous organic carbon produces greater confidence. Bayesian statistics, therefore, provide guidance for the planning and operation of organic carbon detection activities on Mars. Suggestions for future organic carbon detection missions and instruments are as follows: (i) On Earth, instruments should be tested with analog samples of known organic content to determine their true positive to false positive ratios. (ii) On the mission, for an instrument with a true positive to false positive ratio above one, it should be recognized that each positive detection of organic carbon will result in a progressive increase in the probability of indigenous organic carbon being present; repeated measurements, therefore, can overcome some of the deficiencies of a less-than-definitive test. (iii) For a fixed number of analyses, the highest true positive to false positive ratio method or instrument will provide the greatest probability that indigenous organic carbon is present. (iv) On Mars, analyses should concentrate on samples with highest prior probability of indigenous organic carbon; intuitive desires to contrast samples of high prior probability and low prior probability of indigenous organic carbon should be resisted.
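The updating argument is ordinary Bayes' rule in odds form. Here is a minimal sketch under the assumption of independent measurements with a constant true-positive to false-positive ratio (all numbers illustrative):

```python
def posterior_after_detections(prior, tp_rate, fp_rate, n_positive):
    """P(indigenous organic carbon | n positive detections), assuming
    independent tests; repeated positives help only when tp/fp > 1."""
    prior_odds = prior / (1.0 - prior)
    likelihood_ratio = (tp_rate / fp_rate) ** n_positive
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A weak test (likelihood ratio 2 per positive) from a sceptical prior:
for n in range(6):
    print(n, round(posterior_after_detections(0.1, 0.6, 0.3, n), 3))
```

The loop shows the progressive increase described in point (ii): five positives from a test with a true-positive to false-positive ratio of 2 lift a 10% prior to roughly 78%.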
High lifetime probability of screen-detected cervical abnormalities.
Pankakoski, Maiju; Heinävaara, Sirpa; Sarkeala, Tytti; Anttila, Ahti
2017-12-01
Objective: Regular screening and follow-up is an important key to cervical cancer prevention; however, screening inevitably detects mild or borderline abnormalities that would never progress to a more severe stage. We analysed the cumulative probability and recurrence of cervical abnormalities in the Finnish organized screening programme during a 22-year follow-up. Methods: Screening histories were collected for 364,487 women born between 1950 and 1965. Data consisted of 1,207,017 routine screens and 88,143 follow-up screens between 1991 and 2012. Probabilities of cervical abnormalities by age were estimated using logistic regression and generalized estimating equations methodology. Results: The probability of experiencing any abnormality at least once at ages 30-64 was 34.0% (95% confidence interval [CI]: 33.3-34.6%). The probability was 5.4% (95% CI: 5.0-5.8%) for results warranting referral and 2.2% (95% CI: 2.0-2.4%) for results with histologically confirmed findings. Previous occurrences were associated with an increased risk of detecting new ones, specifically in older women. Conclusion: A considerable proportion of women experience at least one abnormal screening result during their lifetime, and yet very few eventually develop an actual precancerous lesion. Re-evaluation of diagnostic criteria concerning mild abnormalities might improve the balance of harms and benefits of screening. Special monitoring of women with recurrent abnormalities, especially at older ages, may also be needed.
Estimation of the POD function and the LOD of a qualitative microbiological measurement method.
Wilrich, Cordula; Wilrich, Peter-Theodor
2009-01-01
Qualitative microbiological measurement methods in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected) are discussed. The performance of such a measurement method is described by its probability of detection as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kaerber method.
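A hedged sketch of fitting such a complementary log-log model, POD(c) = 1 - exp(-exp(a + b*log(c))), by direct maximum likelihood, and of reading off LOD(p); the toy data, starting values, and optimizer are illustrative assumptions, not the authors' procedure:

```python
import numpy as np
from scipy.optimize import minimize

def fit_cloglog(contamination, detected):
    """MLE for POD(c) = 1 - exp(-exp(a + b*log(c))) from 0/1 results."""
    logc = np.log(contamination)

    def neg_loglik(theta):
        a, b = theta
        pod = np.clip(1.0 - np.exp(-np.exp(a + b * logc)), 1e-12, 1 - 1e-12)
        return -np.sum(detected * np.log(pod) + (1 - detected) * np.log(1 - pod))

    return minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead").x

def lod(a, b, p=0.5):
    """Contamination (CFU/g) detected with probability p under the fit."""
    return (np.exp(-a) * (-np.log(1.0 - p))) ** (1.0 / b)

# Toy spiking experiment: contamination levels and 0/1 detection outcomes.
c = np.array([0.1, 0.1, 0.5, 0.5, 1.0, 1.0, 5.0, 5.0])
y = np.array([0, 0, 0, 1, 1, 0, 1, 1])
a_hat, b_hat = fit_cloglog(c, y)
print("LOD50 =", lod(a_hat, b_hat, 0.5))
```

Solving POD(c) = p for c gives the closed form used in lod(): c = (exp(-a) * (-ln(1 - p)))^(1/b).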
A critical evaluation of a flow cytometer used for detecting enterococci in recreational waters.
King, Dawn N; Brenner, Kristen P; Rodgers, Mark R
2007-06-01
The current U.S. Environmental Protection Agency-approved method for enterococci (Method 1600) in recreational water is a membrane filter (MF) method that takes 24 hours to obtain results. If the recreational water is not in compliance with the standard, exposure to enteric pathogens may occur before the water is identified as hazardous. Because flow cytometry combined with specific fluorescent antibodies has the potential to be used as a rapid detection method for microorganisms, this technology was evaluated as a rapid, same-day method to detect enterococci in bathing beach waters. The flow cytometer chosen for this study was a laser microbial detection system designed to detect labeled antibodies. A comparison of MF counts with flow cytometry counts of enterococci in phosphate buffer and sterile-filtered recreational water showed good agreement between the two methods. However, when flow cytometry was used, the counts were several orders of magnitude higher than the MF counts, with no correlation to Enterococcus spike concentrations. The unspiked sample controls frequently had higher counts than the samples spiked with enterococci. Particles within the spiked water samples were probably counted as target cells by the flow cytometer because of autofluorescence or non-specific adsorption of antibody, along with carryover to subsequent samples. For these reasons, this technology may not be suitable for enterococci detection in recreational waters. Improvements in research and instrument design that eliminate high background and carryover may make this a viable technology in the future.
Maximum likelihood estimation for the double-count method with independent observers
Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.
1996-01-01
Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models were considered which vary from (1) the simplest, where the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models where probability of detection is different for the two observers and depends on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey and some recommendations are given for design of a survey over the larger Chukchi Sea between Russia and the United States.
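Once observer-specific detection probabilities are fitted, the visibility-bias correction amounts to a Horvitz-Thompson-style weighting: each detected group counts as 1/p groups, where p is the probability that at least one observer saw it. A minimal sketch with hypothetical fitted probabilities (not data from the survey):

```python
import numpy as np

def ht_abundance(p1, p2):
    """Double-count abundance estimate from per-group detection
    probabilities p1, p2 (e.g. fitted by logistic regression on
    perpendicular distance and group size) for the two observers."""
    p_seen = 1.0 - (1.0 - p1) * (1.0 - p2)  # missed only if both miss
    return np.sum(1.0 / p_seen)

# Five detected bear groups with hypothetical fitted probabilities:
p1 = np.array([0.9, 0.7, 0.5, 0.4, 0.8])
p2 = np.array([0.8, 0.6, 0.5, 0.3, 0.9])
print(ht_abundance(p1, p2))  # estimated number of groups present
```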
A discrimination method for the detection of pneumonia using chest radiograph.
Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R
2010-03-01
This paper presents a statistical method for the detection of lobar pneumonia using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures, which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to follow a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study support detecting pneumonia by constructing probability ellipsoids or discriminant functions using the maximum energy and maximum column sum energy texture measures, for which misclassification probabilities were below 0.15.
Detection of a Serum Siderophore by LC-MS/MS as a Potential Biomarker of Invasive Aspergillosis
Carroll, Cassandra S.; Amankwa, Lawrence N.; Pinto, Linda J.; Fuller, Jeffrey D.; Moore, Margo M.
2016-01-01
Invasive aspergillosis (IA) is a life-threatening systemic mycosis caused primarily by Aspergillus fumigatus. Early diagnosis of IA is based, in part, on an immunoassay for circulating fungal cell wall carbohydrate, galactomannan (GM). However, a wide range of sensitivity and specificity rates have been reported for the GM test across various patient populations. To obtain iron in vivo, A. fumigatus secretes the siderophore, N,N',N"-triacetylfusarinine C (TAFC) and we hypothesize that TAFC may represent a possible biomarker for early detection of IA. We developed an ultra performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) method for TAFC analysis from serum, and measured TAFC in serum samples collected from patients at risk for IA. The method showed lower and upper limits of quantitation (LOQ) of 5 ng/ml and 750 ng/ml, respectively, and complete TAFC recovery from spiked serum. As proof of concept, we evaluated 76 serum samples from 58 patients with suspected IA that were investigated for the presence of GM. Fourteen serum samples obtained from 11 patients diagnosed with probable or proven IA were also analyzed for the presence of TAFC. Control sera (n = 16) were analyzed to establish a TAFC cut-off value (≥6 ng/ml). Of the 36 GM-positive samples (≥0.5 GM index) from suspected IA patients, TAFC was considered positive in 25 (69%). TAFC was also found in 28 additional GM-negative samples. TAFC was detected in 4 of the 14 samples (28%) from patients with proven/probable aspergillosis. Log-transformed TAFC and GM values from patients with proven/probable IA, healthy individuals and SLE patients showed a significant correlation with a Pearson r value of 0.77. In summary, we have developed a method for the detection of TAFC in serum that revealed this fungal product in the sera of patients at risk for invasive aspergillosis. A prospective study is warranted to determine whether this method provides improved early detection of IA.
Spering, Cynthia C; Hobson, Valerie; Lucas, John A; Menon, Chloe V; Hall, James R; O'Bryant, Sid E
2012-08-01
To validate and extend the findings of a raised cut score of O'Bryant and colleagues (O'Bryant SE, Humphreys JD, Smith GE, et al. Detecting dementia with the mini-mental state examination in highly educated individuals. Arch Neurol. 2008;65(7):963-967.) for the Mini-Mental State Examination in detecting cognitive dysfunction in a bilingual sample of highly educated ethnically diverse individuals. Archival data were reviewed from participants enrolled in the National Alzheimer's Coordinating Center minimum data set. Data on 7,093 individuals with 16 or more years of education were analyzed, including 2,337 cases with probable and possible Alzheimer's disease, 1,418 mild cognitive impairment patients, and 3,088 nondemented controls. Ethnic composition was characterized as follows: 6,296 Caucasians, 581 African Americans, 4 American Indians or Alaska natives, 2 native Hawaiians or Pacific Islanders, 149 Asians, 43 "Other," and 18 of unknown origin. Diagnostic accuracy estimates (sensitivity, specificity, and likelihood ratio) of Mini-Mental State Examination cut scores in detecting probable and possible Alzheimer's disease were examined. A standard Mini-Mental State Examination cut score of 24 (≤23) yielded a sensitivity of 0.58 and a specificity of 0.98 in detecting probable and possible Alzheimer's disease across ethnicities. A cut score of 27 (≤26) resulted in an improved balance of sensitivity and specificity (0.79 and 0.90, respectively). In the cognitively impaired group (mild cognitive impairment and probable and possible Alzheimer's disease), the standard cut score yielded a sensitivity of 0.38 and a specificity of 1.00 while raising the cut score to 27 resulted in an improved balance of 0.59 and 0.96 of sensitivity and specificity, respectively. These findings cross-validate our previous work and extend them to an ethnically diverse cohort. A higher cut score is needed to maximize diagnostic accuracy of the Mini-Mental State Examination in individuals with college degrees.
A removal model for estimating detection probabilities from point-count surveys
Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.
2000-01-01
We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds, when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), and species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found that detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
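A minimal sketch of the removal-model likelihood under the simplest assumption, a constant detection (singing) rate r, using hypothetical first-detection counts in three intervals; the interval boundaries and counts are illustrative, and the full models also allow heterogeneity among species, observers, and times:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# First-detection counts in the 0-2, 2-5 and 5-10 min intervals (toy data).
bounds = [(0.0, 2.0), (2.0, 5.0), (5.0, 10.0)]
counts = np.array([60, 25, 15])

def neg_loglik(r):
    # P(first detected in (t0, t1]) under a constant rate r, conditioned
    # on the bird being detected at all during the 10-min count.
    cell = np.array([np.exp(-r * t0) - np.exp(-r * t1) for t0, t1 in bounds])
    cell /= 1.0 - np.exp(-10.0 * r)
    return -np.sum(counts * np.log(cell))

r_hat = minimize_scalar(neg_loglik, bounds=(1e-4, 5.0), method="bounded").x
print("P(detected during 10-min count) =", 1.0 - np.exp(-10.0 * r_hat))
```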
Petitot, Maud; Manceau, Nicolas; Geniez, Philippe; Besnard, Aurélien
2014-09-01
Setting up effective conservation strategies requires precise determination of the targeted species' distribution area and, if possible, its local abundance. However, detection issues make these objectives complex for most vertebrates: detection probability is usually less than 1 and is highly dependent on species phenology and other environmental variables. The aim of this study was to define an optimized survey protocol for the Mediterranean amphibian community, that is, to determine the most favorable periods and the most effective sampling techniques for detecting all species present at a site in a minimum number of field sessions and with a minimum amount of prospecting effort. We visited 49 ponds located in the Languedoc region of southern France on four occasions between February and June 2011. Amphibians were detected using three methods: nighttime call counts, nighttime visual encounters, and daytime netting. The detection-nondetection data obtained were then modeled using site-occupancy models. The detection probability of amphibians differed sharply between species, survey methods, and survey dates, and these three covariates interacted. Thus, a minimum of three visits spread over the breeding season, using a combination of all three survey methods, is needed to reach a 95% detection level for all species in the Mediterranean region. Synthesis and applications: detection-nondetection surveys combined with a site-occupancy modeling approach are powerful tools for estimating detection probability and determining the prospecting effort necessary to assert that a species is absent from a site.
Qi, Peng; Zhang, Dun; Wan, Yi
2014-11-01
Sulfate-reducing bacteria (SRB) have been extensively studied in corrosion and environmental science. However, fast enumeration of SRB populations is still a difficult task. This work presents a novel specific SRB detection method based on inhibition of cysteine protease activity. The hydrolytic activity of cysteine protease was inhibited by taking advantage of sulfide, the characteristic metabolic product of SRB, which attacks the active cysteine thiol group in cysteine protease catalytic sites. This active-thiol S-sulfhydration process could be used for SRB detection, since the amount of sulfide accumulated in the culture medium is closely related to the initial bacterial concentration. The working conditions of the cysteine protease were optimized to obtain better detection capability, and the SRB detection performance was evaluated in this work. The proposed SRB detection method based on inhibition of cysteine protease activity avoids the use of biological recognition elements. In addition, compared with the widely used most probable number (MPN) method, which takes at least 15 days to complete, the method based on inhibition of papain activity can detect SRB in 2 days, with a detection limit of 5.21×10² CFU mL⁻¹. The detection time for quantitative analysis of SRB populations is thus greatly shortened.
Wilcox, Taylor M; Mckelvey, Kevin S.; Young, Michael K.; Sepulveda, Adam; Shepard, Bradley B.; Jane, Stephen F; Whiteley, Andrew R.; Lowe, Winsor H.; Schwartz, Michael K.
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive independent estimates of eDNA production rates and downstream persistence from brook trout (Salvelinus fontinalis) in streams. We use these estimates to parameterize models comparing the false negative detection rates of eDNA sampling and traditional backpack electrofishing. We find that using the protocols in this study eDNA had reasonable detection probabilities at extremely low animal densities (e.g., probability of detection 0.18 at densities of one fish per stream kilometer) and very high detection probabilities at population-level densities (e.g., probability of detection > 0.99 at densities of ≥ 3 fish per 100 m). This is substantially more sensitive than traditional electrofishing for determining the presence of brook trout and may translate into important cost savings when animals are rare. Our findings are consistent with a growing body of literature showing that eDNA sampling is a powerful tool for the detection of aquatic species, particularly those that are rare and difficult to sample using traditional methods.
NASA Astrophysics Data System (ADS)
Kim, Hannah; Hong, Helen
2014-03-01
We propose an automatic method for nipple detection on 3D automated breast ultrasound (3D ABUS) images using coronal slab-average-projection and a cumulative probability map. First, to identify coronal images that show a marked distinction between the nipple-areola region and the skin, the skewness of each coronal image is measured and negatively skewed images are selected. Then, a coronal slab-average-projection image is reformatted from the selected images. Second, to localize the nipple-areola region, an elliptical ROI covering it is detected using the Hough ellipse transform in the coronal slab-average-projection image. Finally, to separate the nipple from the areola region, 3D Otsu thresholding is applied to the elliptical ROI and a cumulative probability map is generated within it by assigning high probability to low-intensity regions. Falsely detected small components are eliminated using morphological opening, and the center point of the detected nipple region is calculated. Experimental results show that our method achieves a 94.4% nipple detection rate.
Quantum temporal probabilities in tunneling systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anastopoulos, Charis, E-mail: anastop@physics.upatras.gr; Savvidou, Ntina, E-mail: ksavvidou@physics.upatras.gr
We study the temporal aspects of quantum tunneling as manifested in time-of-arrival experiments in which the detected particle tunnels through a potential barrier. In particular, we present a general method for constructing temporal probabilities in tunneling systems that (i) defines ‘classical’ time observables for quantum systems and (ii) applies to relativistic particles interacting through quantum fields. We show that the relevant probabilities are defined in terms of specific correlation functions of the quantum field associated with tunneling particles. We construct a probability distribution with respect to the time of particle detection that contains all information about the temporal aspects of the tunneling process. In specific cases, this probability distribution leads to the definition of a delay time that, for parity-symmetric potentials, reduces to the phase time of Bohm and Wigner. We apply our results to piecewise constant potentials, by deriving the appropriate junction conditions on the points of discontinuity. For the double square potential, in particular, we demonstrate the existence of (at least) two physically relevant time parameters, the delay time and a decay rate that describes the escape of particles trapped in the inter-barrier region. Finally, we propose a resolution to the paradox of apparent superluminal velocities for tunneling particles. We demonstrate that the idea of faster-than-light speeds in tunneling follows from an inadmissible use of classical reasoning in the description of quantum systems.
Fosgate, G T; Motimele, B; Ganswindt, A; Irons, P C
2017-09-15
Accurate diagnosis of pregnancy is an essential component of an effective reproductive management plan for dairy cattle. Indirect methods of pregnancy detection can be performed soon after breeding and offer an advantage over traditional direct methods in not requiring an experienced veterinarian and in having potential for automation. The objective of this study was to estimate the sensitivity and specificity of a pregnancy-associated glycoprotein (PAG) detection ELISA and transrectal ultrasound (TRUS) in dairy cows of South Africa using a Bayesian latent class approach. Commercial dairy cattle from the five important dairy regions in South Africa were enrolled in a short-term prospective cohort study. Cattle were examined at 28-35 days after artificial insemination (AI) and then followed up 14 days later. At both sampling times, TRUS was performed to detect pregnancy and commercially available PAG detection ELISAs were performed on collected serum and milk. A total of 1236 cows were sampled and 1006 had complete test information for use in the Bayesian latent class model. The estimated sensitivity (95% probability interval) and specificity for the PAG detection serum ELISA were 99.4% (98.5, 99.9) and 97.4% (94.7, 99.2), respectively. The estimated sensitivity and specificity for the PAG detection milk ELISA were 99.2% (98.2, 99.8) and 93.4% (89.7, 96.1), respectively. Sensitivity of veterinarian-performed TRUS at 28-35 days post-AI varied between 77.8% and 90.5%, and specificity varied between 94.7% and 99.8%. In summary, indirect detection of pregnancy using PAG ELISA is an accurate method for use in dairy cattle. The method is descriptively more sensitive than veterinarian-performed TRUS and therefore could be an economically viable addition to a reproductive management plan.
Seo, K H; Valentin-Bon, I E; Brackett, R E
2006-03-01
Salmonellosis caused by Salmonella Enteritidis (SE) is a significant cause of foodborne illness in the United States. Consumption of undercooked eggs and egg-containing products has been the primary risk factor for the disease. Bacterial enumeration techniques have gained importance with the quantitative risk analysis of SE in shell eggs, yet traditional enumeration depends mainly on slow and tedious most-probable-number (MPN) methods. Therefore, specific, sensitive, and rapid methods for SE quantitation are needed to collect sufficient data for risk assessment and food safety policy development. We previously developed a real-time quantitative PCR assay for the direct detection and enumeration of SE and, in this study, applied it to naturally contaminated ice cream samples with and without enrichment. The detection limit of the real-time PCR assay was determined with artificially inoculated ice cream. When applied to the direct detection and quantification of SE in ice cream, the real-time PCR assay was as sensitive as the conventional plate count method in frequency of detection. However, populations of SE derived from real-time quantitative PCR were approximately 1 log higher than the MPN and CFU values obtained by conventional culture methods. Detection and enumeration of SE in naturally contaminated ice cream can be completed in 3 h by this real-time PCR method, whereas the cultural enrichment method requires 5 to 7 days. A commercial immunoassay for the specific detection of SE was also included in the study. The real-time PCR assay proved to be a valuable tool that may be useful to the food industry in monitoring its processes to improve product quality and safety.
Comparison of methods for estimating density of forest songbirds from point counts
Jennifer L. Reidy; Frank R. Thompson; J. Wesley. Bailey
2011-01-01
New analytical methods have been promoted for estimating the probability of detection and density of birds from count data but few studies have compared these methods using real data. We compared estimates of detection probability and density from distance and time-removal models and survey protocols based on 5- or 10-min counts and outer radii of 50 or 100 m. We...
A bayesian analysis for identifying DNA copy number variations using a compound poisson process.
Chen, Jie; Yiğiter, Ayten; Wang, Yu-Ping; Deng, Hong-Wen
2010-01-01
To study chromosomal aberrations that may lead to cancer formation or genetic diseases, the array-based Comparative Genomic Hybridization (aCGH) technique is often used for detecting DNA copy number variants (CNVs). Various methods have been developed for gaining CNVs information based on aCGH data. However, most of these methods make use of the log-intensity ratios in aCGH data without taking advantage of other information such as the DNA probe (e.g., biomarker) positions/distances contained in the data. Motivated by the specific features of aCGH data, we developed a novel method that takes into account the estimation of a change point or locus of the CNV in aCGH data with its associated biomarker position on the chromosome using a compound Poisson process. We used a Bayesian approach to derive the posterior probability for the estimation of the CNV locus. To detect loci of multiple CNVs in the data, a sliding window process combined with our derived Bayesian posterior probability was proposed. To evaluate the performance of the method in the estimation of the CNV locus, we first performed simulation studies. Finally, we applied our approach to real data from aCGH experiments, demonstrating its applicability.
Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin
2014-03-01
To evaluate and adjust for verification bias in screening or diagnostic tests. The inverse-probability weighting method was used to adjust the sensitivity and specificity of diagnostic tests, with an example from cervical cancer screening used to introduce the CompareTests package in R, with which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared to the results from the inverse-probability weighting method in the randomly sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with verification by the gold standard randomly missing, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
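A minimal sketch of the weighting step itself, assuming each subject's probability of being verified has already been estimated (e.g. from a logistic model of verification on the screening result); names and numbers are illustrative:

```python
import numpy as np

def ipw_sens_spec(test, disease, verified, p_verify):
    """Verification-bias-adjusted sensitivity and specificity.
    `disease` needs valid values only where verified == 1 (unverified
    subjects get weight 0); p_verify is P(verified) per subject."""
    w = verified / p_verify
    sens = np.sum(w * test * disease) / np.sum(w * disease)
    spec = np.sum(w * (1 - test) * (1 - disease)) / np.sum(w * (1 - disease))
    return sens, spec

# Toy data: screening result, gold standard (0 placeholder if unverified),
# verification indicator, and estimated verification probabilities.
test = np.array([1, 1, 0, 0, 1, 0])
disease = np.array([1, 0, 0, 0, 1, 1])
verified = np.array([1, 1, 1, 0, 1, 1])
p_verify = np.array([0.9, 0.9, 0.5, 0.5, 0.9, 0.5])
print(ipw_sens_spec(test, disease, verified, p_verify))
```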
Löfström, Charlotta; Knutsson, Rickard; Axelsson, Charlotta Engdahl; Rådström, Peter
2004-01-01
A PCR procedure has been developed for routine analysis of viable Salmonella spp. in feed samples. The objective was to develop a simple PCR-compatible enrichment procedure to enable DNA amplification without any sample pretreatment such as DNA extraction or cell lysis. PCR inhibition by 14 different feed samples and natural background flora was circumvented by the use of the DNA polymerase Tth. This DNA polymerase was found to exhibit a high level of resistance to PCR inhibitors present in these feed samples compared to DyNAzyme II, FastStart Taq, Platinum Taq, Pwo, rTth, Taq, and Tfl. The specificity of the Tth assay was confirmed by testing 101 Salmonella and 43 non-Salmonella strains isolated from feed and food samples. A sample preparation method based on culture enrichment in buffered peptone water and DNA amplification with Tth DNA polymerase was developed. The probability of detecting small numbers of salmonellae in feed, in the presence of natural background flora, was accurately determined and found to follow a logistic regression model. From this model, the probability of detecting 1 CFU per 25 g of feed in artificially contaminated soy samples was calculated and found to be 0.81. The PCR protocol was evaluated on 155 naturally contaminated feed samples and compared to an established culture-based method, NMKL-71. Eight percent of the samples were positive by PCR, compared with 3% with the conventional method. The reasons for the differences in sensitivity are discussed. Use of this method in the routine analysis of animal feed samples would improve safety in the food chain.
Probabilistic pipe fracture evaluations for leak-rate-detection applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, S.; Ghadiali, N.; Paul, D.
1995-04-01
Regulatory Guide 1.45, "Reactor Coolant Pressure Boundary Leakage Detection Systems," was published by the U.S. Nuclear Regulatory Commission (NRC) in May 1973, and provides guidance on leak detection methods and system requirements for Light Water Reactors. Additionally, leak detection limits are specified in plant Technical Specifications and are different for Boiling Water Reactors (BWRs) and Pressurized Water Reactors (PWRs). These leak detection limits are also used in leak-before-break evaluations performed in accordance with Draft Standard Review Plan, Section 3.6.3, "Leak Before Break Evaluation Procedures," where a margin of 10 on the leak detection limit is used in determining the crack size considered in subsequent fracture analyses. This study was requested by the NRC to: (1) evaluate the conditional failure probability for BWR and PWR piping for pipes that were leaking at the allowable leak detection limit, and (2) evaluate the margin of 10 to determine if it was unnecessarily large. A probabilistic approach was undertaken to conduct fracture evaluations of circumferentially cracked pipes for leak-rate-detection applications. Sixteen nuclear piping systems in BWR and PWR plants were analyzed to evaluate conditional failure probability and the effects of crack-morphology variability on the current margins used in leak rate detection for leak-before-break.
Chen, Peichen; Liu, Shih-Chia; Liu, Hung-I; Chen, Tse-Wei
2011-01-01
For quarantine sampling, it is of fundamental importance to determine the probability of finding an infestation when a specified number of units are inspected. In general, current sampling procedures assume a 100% probability (perfect detection) of finding a pest if it is present within a unit. Ideally, a nematode extraction method should remove all stages of all species with 100% efficiency regardless of season, temperature, or other environmental conditions; in practice, however, no method approaches these criteria. In this study we determined the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The required sample size and the risk involved in detecting nematode infestations with imperfect extraction efficacy are also presented. Moreover, we developed a computer program to calculate confidence levels for different scenarios with varying proportions of infestation and efficacies of detection. In addition, a case study presenting the extraction efficacy of the modified Baermann's funnel method on Aphelenchoides besseyi is used to exemplify the use of our program to calculate the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The results have important implications for quarantine programs and highlight the need for a very large number of samples if perfect extraction efficacy is not achieved. We believe that the results of this study will be useful for setting realistic goals in the implementation of quarantine sampling.
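The confidence-level calculation reduces to a binomial argument: if each inspected unit is infested with probability p and an infested unit yields a positive with probability e (the extraction efficacy), then n units detect the infestation with probability 1 - (1 - pe)^n. A minimal sketch (numbers illustrative):

```python
import math

def detection_probability(n, p_infest, efficacy):
    """P(at least one positive among n inspected units)."""
    return 1.0 - (1.0 - p_infest * efficacy) ** n

def required_sample_size(confidence, p_infest, efficacy):
    """Smallest n reaching the target confidence of detection."""
    return math.ceil(math.log(1.0 - confidence)
                     / math.log(1.0 - p_infest * efficacy))

# 95% confidence of detecting a 1% infestation:
print(required_sample_size(0.95, 0.01, 1.0))  # perfect extraction: 299 units
print(required_sample_size(0.95, 0.01, 0.4))  # 40% efficacy: 748 units
```

The comparison makes the paper's point directly: for low infestation proportions, imperfect extraction inflates the required sample size by roughly a factor of 1/e.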
Surveying Europe's Only Cave-Dwelling Chordate Species (Proteus anguinus) Using Environmental DNA.
Vörös, Judit; Márton, Orsolya; Schmidt, Benedikt R; Gál, Júlia Tünde; Jelić, Dušan
2017-01-01
In surveillance of subterranean fauna, especially in the case of rare or elusive aquatic species, traditional techniques used for epigean species are often not feasible. We developed a non-invasive survey method based on environmental DNA (eDNA) to detect the presence of the red-listed cave-dwelling amphibian, Proteus anguinus, in the caves of the Dinaric Karst. We tested the method in fifteen caves in Croatia, from which the species was previously recorded or expected to occur. We successfully confirmed the presence of P. anguinus from ten caves and detected the species for the first time in five others. Using a hierarchical occupancy model we compared the availability and detection probability of eDNA of two water sampling methods, filtration and precipitation. The statistical analysis showed that both availability and detection probability depended on the method and estimates for both probabilities were higher using filter samples than for precipitation samples. Combining reliable field and laboratory methods with robust statistical modeling will give the best estimates of species occurrence.
NASA Astrophysics Data System (ADS)
Kim, Kyungmin; Harry, Ian W.; Hodge, Kari A.; Kim, Young-Min; Lee, Chang-Hwan; Lee, Hyun Kyu; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.
2015-12-01
We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). Multi-dimensional samples consisting of statistical and physical quantities from the coherent search pipeline are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our results show that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased by about 8%-14% for the considered waveform models. We also evaluate a few seconds of gravitational-wave data segments using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to short GRBs.
Fuzzy-logic detection and probability of hail exploiting short-range X-band weather radar
NASA Astrophysics Data System (ADS)
Capozzi, Vincenzo; Picciotti, Errico; Mazzarella, Vincenzo; Marzano, Frank Silvio; Budillon, Giorgio
2018-03-01
This work proposes a new method for hail precipitation detection and probability based on single-polarization X-band radar measurements. Using a dataset consisting of reflectivity volumes, ground-truth observations, and atmospheric sounding data, a probability-of-hail index, which provides a simple estimate of the hail potential, has been trained and adapted for the Naples metropolitan study area. The probability of hail is calculated starting from four different hail detection methods. The first two, based on (1) reflectivity data and temperature measurements and (2) the vertically integrated liquid density product, respectively, have been selected from the available literature. The other two techniques combine criteria from the above methods: (3) one is based on linear discriminant analysis, whereas (4) the other relies on a fuzzy-logic approach. The latter is an innovative criterion based on a fuzzification step performed with ramp membership functions. The performance of the four methods has been tested using an independent dataset: the results highlight that the fuzzy-oriented combined method performs slightly better in terms of false alarm ratio, critical success index, and area under the relative operating characteristic. An example of the application of the proposed hail detection and probability products is also presented for a relevant hail event that occurred on 21 July 2014.
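A toy illustration of the fuzzification step with ramp membership functions; the predictors, thresholds, and the simple averaging used for aggregation are assumptions for illustration only, not the tuned operational memberships:

```python
import numpy as np

def ramp(x, lo, hi):
    """Ramp membership: 0 below lo, 1 above hi, linear in between."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def hail_probability(refl_dbz, echo_top_km, freezing_lvl_km, vil_density):
    """Aggregate fuzzy memberships into a 0-1 probability-of-hail index."""
    memberships = [
        ramp(refl_dbz, 45.0, 60.0),                     # reflectivity core
        ramp(echo_top_km - freezing_lvl_km, 1.0, 6.0),  # growth above 0 C level
        ramp(vil_density, 1.0, 3.5),                    # VIL density, g/m^3
    ]
    return sum(memberships) / len(memberships)

print(hail_probability(55.0, 11.0, 4.0, 3.0))  # ~0.82 for this toy case
```

In the operational setting the membership breakpoints would be trained against the ground-truth hail observations and sounding data, and the resulting index compared against the other three detection criteria.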
Dim target detection method based on salient graph fusion
NASA Astrophysics Data System (ADS)
Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun
2018-02-01
Dim target detection is a key problem in digital image processing. With the development of multi-spectrum imaging sensors, fusing information from different spectral images has become a trend for improving dim target detection performance. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, Gabor filters with multiple directions and contrast filters with multiple scales are combined to construct a salient graph from each digital image. Then, a maximum-salience fusion strategy is designed to fuse the salient graphs from different spectral images. A top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
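A compact sketch of the fusion-and-detection stage, assuming the per-band salient graphs have already been computed; the maximum fusion rule follows the description above, while the top-hat structuring size and threshold are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def fuse_and_detect(salient_maps, tophat_size=5, rel_threshold=0.5):
    """Maximum-salience fusion across spectral bands, then white top-hat
    filtering to isolate small bright (dim-target-like) structures."""
    fused = np.max(np.stack(salient_maps), axis=0)
    residue = ndimage.white_tophat(fused, size=(tophat_size, tophat_size))
    return residue > rel_threshold * residue.max()  # candidate target mask

# Two hypothetical single-band salient maps with a dim target at (10, 12):
rng = np.random.default_rng(0)
a = rng.random((32, 32)) * 0.2
b = rng.random((32, 32)) * 0.2
b[10, 12] = 1.0
print(np.argwhere(fuse_and_detect([a, b])))
```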
Probability of detection evaluation results for railroad tank cars : final report.
DOT National Transportation Integrated Search
2016-08-01
The Transportation Technology Center, Inc. (TTCI) used the approach developed for the National Aeronautics and Space : Association to determine the probability of detection (POD) for various nondestructive test (NDT) methods used during inspection : ...
A SVM-based quantitative fMRI method for resting-state functional network detection.
Song, Xiaomu; Chen, Nan-kuei
2014-09-01
Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, such a threshold cannot adapt to variations in data characteristics across sessions and subjects, and generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, resting-state network mapping is formulated as an outlier detection process implemented using a one-class support vector machine (SVM). The results are refined using a spatial-feature-domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by thresholding. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data, and a comparison study was performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to a variety of quantitative resting-state fMRI studies.
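A minimal sketch of the outlier-detection stage on synthetic per-voxel features; it illustrates only the one-class SVM step, not the prototype selection, two-class reclassification, or the final probability comparison of the full method:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Synthetic per-voxel feature vectors: a large "unconnected" background
# population plus a small, distinct "connected" population.
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(2000, 2))
network = rng.normal(3.0, 0.5, size=(100, 2))
features = np.vstack([background, network])

# Voxels that do not fit the majority distribution are flagged as
# outliers (-1), i.e. candidate members of the resting-state network.
ocsvm = OneClassSVM(nu=0.1, kernel="rbf", gamma="scale").fit(features)
labels = ocsvm.predict(features)
print("candidate network voxels:", int(np.sum(labels == -1)))
```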
Probability of detection of defects in coatings with electronic shearography
NASA Astrophysics Data System (ADS)
Maddux, Gary A.; Horton, Charles M.; Lansing, Matthew D.; Gnacek, William J.; Newton, Patrick L.
1994-07-01
The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was the paint system currently applied to the external casings of the NASA Space Transportation System (STS) Revised Solid Rocket Motor (RSRM) boosters. The population of samples was to be large enough to determine the minimum defect size detectable with 90 percent probability at 95 percent confidence (90/95 POD) on these coatings. The research also sought the best methods of exciting coatings on aerospace components to induce deformations measurable by electronic shearography.
Optimizing probability of detection point estimate demonstration
NASA Astrophysics Data System (ADS)
Koshti, Ajay M.
2017-04-01
The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The optimization seeks an acceptable probability of passing the demonstration (PPD) and an acceptable probability of false (POF) calls while keeping the flaw sizes in the set as small as possible. The POD point estimate method is used by NASA for qualifying special NDE procedures and uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. Traditionally, the largest flaw size in the set is considered a conservative estimate of the flaw size with minimum 90% probability and 95% confidence, denoted α90/95PE. The paper investigates the relationship between the range of flaw sizes and α90, the 90% probability flaw size, required to provide a desired PPD. The range of flaw sizes is expressed as a proportion of the standard deviation of the probability density distribution, as is the difference between the median or average of the 29 flaws and α90. In general, it is concluded that if the probability of detection increases with flaw size, the average of the 29 flaw sizes will always be larger than or equal to α90 and is an acceptable measure of α90/95PE. If the NDE technique has sufficient sensitivity and signal-to-noise ratio, the 29-flaw set can be optimized to meet requirements on minimum PPD, maximum allowable POF, flaw size tolerance about the mean flaw size, and flaw size detectability. The paper provides a procedure for optimizing flaw sizes in the point estimate demonstration flaw set.
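The binomial logic behind the classical 29-flaw demonstration can be checked directly: if the true POD were 0.90, finding all 29 flaws would happen with probability 0.90^29 ≈ 0.047 < 0.05, which is what supports the 90/95 claim. A short sketch (the 29-of-29 pass criterion is the classical rule; the candidate POD values are illustrative):

```python
from scipy.stats import binom

def prob_passing_demo(true_pod, n=29, min_hits=29):
    """Probability of passing the demonstration (PPD) for a true POD."""
    return float(binom.sf(min_hits - 1, n, true_pod))

print(prob_passing_demo(0.90))  # ~0.047: a 0.90-POD method rarely passes
for pod in (0.95, 0.98, 0.99):
    print(pod, prob_passing_demo(pod))  # PPD rises with the true POD
```

Optimizing the flaw-size set then amounts to trading this PPD against the probability of false calls and the size tolerances discussed above.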
NASA Technical Reports Server (NTRS)
Ussery, Warren; Johnson, Kenneth; Walker, James; Rummel, Ward
2008-01-01
This slide presentation reviews the use of terahertz imaging and backscatter radiography in a probability of detection study of external tank (ET) foam, which can shed and damage the shuttle orbiter. Non-destructive Examination (NDE) is performed as one method of preventing critical foam debris during launch. Conventional NDE methods for inspection of the foam are assessed and their deficiencies reviewed. Two NDE inspection methods are reviewed: Backscatter Radiography (BSX) and Terahertz (THZ) Imaging. The purpose of the probability of detection (POD) study was to assess the performance and reliability of BSX and/or THZ as an appropriate NDE method. The study used a test article with inserted defects, with a sample of blanks included to test for false positives. The results of the POD study are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Kevin J.; Parke, Stephen J.
Quantum mechanical interactions between neutrinos and matter along the path of propagation, the Wolfenstein matter effect, are of particular importance for the upcoming long-baseline neutrino oscillation experiments, specifically the Deep Underground Neutrino Experiment (DUNE). Here, we explore specifically what about the matter density profile can be measured by DUNE, considering both the shape and normalization of the profile between the neutrinos' origin and detection. Additionally, we explore the capability of a perturbative method for calculating neutrino oscillation probabilities and whether this method is suitable for DUNE. We also briefly quantitatively explore the ability of DUNE to measure the Earth's matter density, and the impact of performing this measurement on measuring standard neutrino oscillation parameters.
Cao, Youfang; Liang, Jie
2013-01-01
Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.
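A simplified, fixed-bias illustration of the weighted-SSA idea underlying such methods: the birth reaction of a birth-death process is biased upward so that a rare threshold crossing happens more often, and each trajectory carries an explicit likelihood-ratio weight. This is not the adaptive, look-ahead biasing of ABSIS itself, and all rates and thresholds are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_hit_probability(n0=0, target=25, t_max=10.0, birth=1.0,
                             death=0.1, bias=2.0, n_traj=20000):
    """Importance-sampled SSA estimate of P(population reaches `target`
    before t_max) for birth rate `birth` and death rate `death * n`."""
    total = 0.0
    for _ in range(n_traj):
        n, t, w = n0, 0.0, 1.0
        while n < target:
            a = np.array([birth, death * n])         # true propensities
            q = np.array([birth * bias, death * n])  # biased: favour births
            A, Q = a.sum(), q.sum()
            tau = rng.exponential(1.0 / Q)
            if t + tau >= t_max:
                break                # censored: target not reached in time
            t += tau
            j = rng.choice(2, p=q / Q)
            # Likelihood ratio of this (tau, reaction) step, true vs biased.
            w *= (a[j] / q[j]) * np.exp(-(A - Q) * tau)
            n += 1 if j == 0 else -1
        if n >= target:
            total += w               # weighted contribution of a hit
    return total / n_traj

print(weighted_hit_probability())
```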
NASA Astrophysics Data System (ADS)
James, P.
2011-12-01
With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Moreover, previous modelling software limits the shapes of bodies, so typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance assessed. The density of survey points required to achieve a required probability of detection can be calculated. The software aids discriminating choice of technique, improves survey design, and increases the likelihood of survey success; all factors sought in the engineering industry. As a simple example, the response from magnetometry, gravimetry, and gravity gradient techniques above an example 3 m deep, 1 m cube air cavity in limestone across a 15 m grid was calculated. The maximum responses above the cavity are small (amplitudes of 0.018 nT, 0.0013 mGal and 8.3 eotvos, respectively), but at typical site noise levels the detection reliability is over 50% for the gravity gradient method on a single survey line. Increasing the number of survey points across the site increases the reliability of detection of the anomaly by combining probabilities across points (see the sketch below). We can calculate the probability of detection at different profile spacings to assess the best possible survey design. At 1 m spacing the overall probability of detection by the gravity gradient method is over 90%, and over 60% for magnetometry (at 3 m spacing the probability drops to 32%). The use of modelling in near-surface surveys is a useful tool to assess the feasibility of a range of techniques to detect subtle signals. Future work will integrate this work with borehole-measured parameters.
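The "combining probabilities across points" idea can be illustrated with the complement rule. Below is a minimal sketch (in Python rather than the study's Matlab), assuming each survey point detects the cavity independently with the same single-point probability; real survey lines are spatially correlated, so this is an intuition aid, not the software's full calculation.

```python
# Minimal sketch: combining per-point detection probabilities across a survey
# grid, assuming each point detects the anomaly independently with the same
# probability p. (Illustrative only; the study's software models the actual
# anomaly response and site noise.)

def combined_pod(p_single: float, n_points: int) -> float:
    """Probability that at least one of n independent points detects."""
    return 1.0 - (1.0 - p_single) ** n_points

# Example: if one survey line gives a 50% chance of detecting the cavity,
# four independent lines give roughly a 94% overall chance.
print(combined_pod(0.5, 4))  # ~0.9375
```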
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2016-11-03
Copy Number Variation (CNV) is envisaged to be a major source of large structural variations in the human genome. In recent years, many studies have applied Next Generation Sequencing (NGS) data to CNV detection; however, more accurate computational tools are still needed. In this study, mate-pair NGS data are used for CNV detection with a Hidden Markov Model (HMM). The proposed HMM has position-specific emission probabilities, i.e. a Gaussian mixture distribution. Each component in the Gaussian mixture distribution captures a different type of aberration that is observed in the mate pairs after they are mapped to the reference genome. These aberrations may include an increase (or decrease) in the insertion size or a change in the direction of mate pairs mapped to the reference genome. This HMM with Position-Specific Emission probabilities (PSE-HMM) is utilized for the genome-wide detection of deletions and tandem duplications. The performance of PSE-HMM is evaluated on a simulated dataset and on real data from a Yoruban HapMap individual, NA18507. PSE-HMM is effective in taking observation dependencies into account and reaches high accuracy in detecting genome-wide CNVs. MATLAB programs are available at http://bs.ipm.ir/softwares/PSE-HMM/ .
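As a rough illustration of position-specific emissions, the sketch below evaluates a Gaussian-mixture density for an observed insertion size. All component weights, means and variances here are hypothetical; in PSE-HMM they are tied to genomic position and to the aberration type each component represents.

```python
import math

# Minimal sketch of a Gaussian-mixture emission probability, as used in an
# HMM like PSE-HMM. Component parameters are hypothetical stand-ins.

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_emission(x: float, weights, means, sigmas) -> float:
    """Emission density of observation x under a Gaussian mixture."""
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sigmas))

# Hypothetical example: 'normal' insertion sizes ~N(350, 30); deletion-supporting
# mate pairs ~N(850, 60); mixture weights differ by genomic position/state.
print(mixture_emission(820.0, [0.2, 0.8], [350.0, 850.0], [30.0, 60.0]))
```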
Designing Medical Tests: The Other Side of Bayes' Theorem
ERIC Educational Resources Information Center
Ross, Andrew M.
2012-01-01
To compute the probability of having a disease, given a positive test result, is a standard probability problem: the sensitivity and specificity of the test must be given, along with the prevalence of the disease. We ask how a test-maker might determine the tradeoff between sensitivity and specificity. Adding hypothetical costs for detecting or failing to…
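The "standard problem" the abstract alludes to is a one-line application of Bayes' theorem. A minimal sketch, with illustrative numbers:

```python
# Probability of disease given a positive result, from sensitivity,
# specificity, and prevalence (Bayes' theorem).

def positive_predictive_value(sens: float, spec: float, prev: float) -> float:
    true_pos = sens * prev
    false_pos = (1.0 - spec) * (1.0 - prev)
    return true_pos / (true_pos + false_pos)

# Example: a 95%-sensitive, 95%-specific test for a condition with 5%
# prevalence yields only a 50% chance of disease given a positive result.
print(positive_predictive_value(0.95, 0.95, 0.05))  # 0.5
```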
Designing occupancy studies when false-positive detections occur
Clement, Matthew
2016-01-01
1. Recently, estimators have been developed to estimate occupancy probabilities when false-positive detections occur during presence-absence surveys. Some of these estimators combine different types of survey data to improve estimates of occupancy. With these estimators, there is a tradeoff between the number of sample units surveyed, and the number and type of surveys at each sample unit. Guidance on efficient design of studies when false positives occur is unavailable. 2. For a range of scenarios, I identified survey designs that minimized the mean square error of the estimate of occupancy. I considered an approach that uses one survey method and two observation states and an approach that uses two survey methods. For each approach, I used numerical methods to identify optimal survey designs when model assumptions were met and parameter values were correctly anticipated, when parameter values were not correctly anticipated, and when the assumption of no unmodelled detection heterogeneity was violated. 3. Under the approach with two observation states, false-positive detections increased the number of recommended surveys, relative to standard occupancy models. If parameter values could not be anticipated, pessimism about detection probabilities avoided poor designs. Detection heterogeneity could require more or fewer repeat surveys, depending on parameter values. If model assumptions were met, the approach with two survey methods was inefficient. However, with poor anticipation of parameter values, with detection heterogeneity, or with removal sampling schemes, combining two survey methods could improve estimates of occupancy. 4. Ignoring false positives can yield biased parameter estimates, yet false positives greatly complicate the design of occupancy studies. Specific guidance for major types of false-positive occupancy models, and for two assumption violations common in field data, can conserve survey resources. This guidance can be used to design efficient monitoring programs and studies of species occurrence, species distribution, or habitat selection, when false positives occur during surveys.
Modelling detection probabilities to evaluate management and control tools for an invasive species
Christy, M.T.; Yackel Adams, A.A.; Rodda, G.H.; Savidge, J.A.; Tyrrell, C.L.
2010-01-01
For most ecologists, detection probability (p) is a nuisance variable that must be modelled to estimate the state variable of interest (i.e. survival, abundance, or occupancy). However, in the realm of invasive species control, the rate of detection and removal is the rate-limiting step for management of this pervasive environmental problem. For strategic planning of an eradication (removal of every individual), one must identify the least likely individual to be removed, and determine the probability of removing it. To evaluate visual searching as a control tool for populations of the invasive brown treesnake Boiga irregularis, we designed a mark-recapture study to evaluate detection probability as a function of time, gender, size, body condition, recent detection history, residency status, searcher team and environmental covariates. We evaluated these factors using 654 captures resulting from visual detections of 117 snakes residing in a 5-ha semi-forested enclosure on Guam, fenced to prevent immigration and emigration of snakes but not their prey. Visual detection probability was low overall (0.07 per occasion) but reached 0.18 under optimal circumstances. Our results supported sex-specific differences in detectability that were a quadratic function of size, with both small and large females having lower detection probabilities than males of those sizes. There was strong evidence for individual periodic changes in detectability of a few days' duration, roughly doubling detection probability (comparing peak to non-elevated detections). Snakes in poor body condition had estimated mean detection probabilities greater than snakes with high body condition. Search teams with high average detection rates exhibited detection probabilities about twice that of search teams with low average detection rates. Surveys conducted with bright moonlight and strong wind gusts exhibited moderately decreased probabilities of detecting snakes. Synthesis and applications. By emphasizing and modelling detection probabilities, we now know: (i) that eradication of this species by searching is possible, (ii) how much searching effort would be required, (iii) under what environmental conditions searching would be most efficient, and (iv) several factors that are likely to modulate this quantification when searching is applied to new areas. The same approach can be used for evaluation of any control technology or population monitoring programme. © 2009 The Authors. Journal compilation © 2009 British Ecological Society.
Fischer, Jason L.; Pritt, Jeremy J.; Roseman, Edward; Prichard, Carson G.; Craig, Jaquelyn M.; Kennedy, Gregory W.; Manny, Bruce A.
2018-01-01
Egg deposition and use of restored spawning substrates by lithophilic fishes (e.g., Lake Sturgeon Acipenser fulvescens, Lake Whitefish Coregonus clupeaformis, and Walleye Sander vitreus) were assessed throughout the St. Clair–Detroit River system from 2005 to 2016. Bayesian models were used to quantify egg abundance and presence/absence relative to site-specific variables (e.g., depth, velocity, and artificial spawning reef presence) and temperature to evaluate fish use of restored artificial spawning reefs and assess patterns in egg deposition. Lake Whitefish and Walleye egg abundance, probability of detection, and probability of occupancy were assessed with detection-adjusted methods; Lake Sturgeon egg abundance and probability of occurrence were assessed using delta-lognormal methods. The models indicated that the probability of Walleye eggs occupying a site increased with water velocity and that the rate of increase decreased with depth, whereas Lake Whitefish egg occupancy was not correlated with any of the attributes considered. Egg deposition by Lake Whitefish and Walleyes was greater at sites with high water velocities and was lower over artificial spawning reefs. Lake Sturgeon eggs were collected least frequently but were more likely to be collected over artificial spawning reefs and in greater abundances than elsewhere. Detection-adjusted egg abundances were not greater over artificial spawning reefs, indicating that these projects may not directly benefit spawning Walleyes and Lake Whitefish. However, 98% of the Lake Sturgeon eggs observed were collected over artificial spawning reefs, supporting the hypothesis that the reefs provided spawning sites for Lake Sturgeon and could mitigate historic losses of Lake Sturgeon spawning habitat.
Banting, Graham S.; Braithwaite, Shannon; Scott, Candis; Kim, Jinyong; Jeon, Byeonghwa; Ashbolt, Nicholas; Ruecker, Norma; Tymensen, Lisa; Charest, Jollin; Pintar, Katarina; Checkley, Sylvia
2016-01-01
Campylobacter spp. are the leading cause of bacterial gastroenteritis worldwide, and water is increasingly seen as a risk factor in transmission. Here we describe a most-probable-number (MPN)–quantitative PCR (qPCR) assay in which water samples are centrifuged and aliquoted into microtiter plates and the bacteria are enumerated by qPCR. We observed that commonly used Campylobacter molecular assays produced vastly different detection rates. In irrigation water samples, detection rates varied depending upon the PCR assay and culture method used, as follows: 0% by the de Boer Lv1-16S qPCR assay, 2.5% by the Van Dyke 16S and Jensen glyA qPCR assays, and 75% by the Linton 16S endpoint PCR when cultured at 37°C. Primer/probe specificity was the major confounder, with Arcobacter spp. routinely yielding false-positive results. The primers and PCR conditions described by Van Dyke et al. (M. I. Van Dyke, V. K. Morton, N. L. McLellan, and P. M. Huck, J Appl Microbiol 109:1053–1066, 2010, http://dx.doi.org/10.1111/j.1365-2672.2010.04730.x) proved to be the most sensitive and specific for Campylobacter detection in water. Campylobacter occurrence in irrigation water was found to be very low (<2 MPN/300 ml) when this Campylobacter-specific qPCR was used, with the most commonly detected species being C. jejuni, C. coli, and C. lari. Campylobacters in raw sewage were present at ∼10²/100 ml, with incubation at 42°C required to reduce microbial growth competition from arcobacters. Overall, when Campylobacter prevalence and/or concentration in water is reported using molecular methods, considerable validation is recommended when adapting methods largely developed for clinical applications. Furthermore, combining MPN methods with molecular biology-based detection algorithms allows for the detection and quantification of Campylobacter spp. in environmental samples and is potentially suited to quantitative microbial risk assessment for improved public health disease prevention related to food and water exposures. IMPORTANCE The results of this study demonstrate the importance of assay validation to the interpretation of environmental monitoring data for Campylobacter when using molecular biology-based assays. Previous studies describing Campylobacter prevalence in Canada utilized primers that we have determined to be nonspecific due to their cross-amplification of Arcobacter spp. As such, Campylobacter prevalence may have been vastly overestimated in other studies. Additionally, the quantitative assay described in this study will allow accurate determination of Campylobacter concentrations in environmental water samples, allowing more informed decisions to be made about water usage based on quantitative microbial risk assessment. PMID:27235434
Guerriero, S; Ajossa, S; Minguez, J A; Jurado, M; Mais, V; Melis, G B; Alcazar, J L
2015-11-01
To review the diagnostic accuracy of transvaginal ultrasound (TVS) in the preoperative detection of endometriosis in the uterosacral ligaments (USL), rectovaginal septum (RVS), vagina and bladder in patients with clinical suspicion of deep infiltrating endometriosis (DIE). An extensive search was performed in MEDLINE (PubMed) and EMBASE for studies published between January 1989 and December 2014. Studies were considered eligible if they reported on the use of TVS for the preoperative detection of endometriosis in the USL, RVS, vagina and bladder in women with clinical suspicion of DIE using the surgical data as a reference standard. Study quality was assessed using the PRISMA guidelines and QUADAS-2 tool. Of the 801 citations identified, 11 studies (n = 1583) were considered eligible and were included in the meta-analysis. For detection of endometriosis in the USL, the overall pooled sensitivity and specificity of TVS were 53% (95%CI, 35-70%) and 93% (95%CI, 83-97%), respectively. The pretest probability of USL endometriosis was 54%, which increased to 90% when suspicion of endometriosis was present after TVS examination. For detection of endometriosis in the RVS, the overall pooled sensitivity and specificity were 49% (95%CI, 36-62%) and 98% (95%CI, 95-99%), respectively. The pretest probability of RVS endometriosis was 24%, which increased to 89% when suspicion of endometriosis was present after TVS examination. For detection of vaginal endometriosis, the overall pooled sensitivity and specificity were 58% (95%CI, 40-74%) and 96% (95%CI, 87-99%), respectively. The pretest probability of vaginal endometriosis was 17%, which increased to 76% when suspicion of endometriosis was present after TVS assessment. Substantial heterogeneity was found for sensitivity and specificity for all these locations. For detection of bladder endometriosis, the overall pooled sensitivity and specificity were 62% (95%CI, 40-80%) and 100% (95%CI, 97-100%), respectively. Moderate heterogeneity was found for sensitivity and specificity for bladder endometriosis. The pretest probability of bladder endometriosis was 5%, which increased to 92% when suspicion of endometriosis was present after TVS assessment. Overall diagnostic performance of TVS for detecting DIE in uterosacral ligaments, rectovaginal septum, vagina and bladder is fair with high specificity. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena
2004-01-01
Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for identification of such artifactual profiles. Our diagnostics are based on detection of deviations from uniformity, which is assumed as the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity of microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within the slides’ areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of the profile’s quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of patterns that cover them, thus diminishing their input in any further data analysis. PMID:14999086
An integrated framework for detecting suspicious behaviors in video surveillance
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi
2014-03-01
In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports and shopping malls. In particular, people loitering suspiciously, unattended objects left behind, and suspicious objects exchanged between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene/event, analyzing human movements, recognizing controllable objects, and observing the effect of the human movement on those objects. In the proposed framework, a multiple background modeling technique, a high-level motion feature extraction method and embedded Markov chain models are integrated for detecting suspicious behaviors in real-time video surveillance systems. Specifically, the proposed framework employs a probability-based multiple background modeling technique to detect moving objects. Then velocity and distance measures are computed as the high-level motion features of interest. By combining the computed features with the first passage time probabilities of the embedded Markov chain, suspicious behaviors in video surveillance are analyzed for detecting loitering persons, objects left behind and human interactions such as fighting. The proposed framework has been tested using standard public datasets and our own video surveillance scenarios.
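First-passage-time probabilities of a Markov chain can be computed by propagating the state distribution while absorbing mass at the target state. The sketch below is a generic illustration with a hypothetical three-state chain, not the paper's trained model; loitering would show up as little probability of reaching the "left the zone" state within a reasonable horizon.

```python
import numpy as np

# Minimal sketch: first-passage-time probabilities in a discrete Markov chain.
# States and transition matrix below are hypothetical.
P = np.array([
    [0.7, 0.2, 0.1],   # state 0: lingering in the monitored zone
    [0.3, 0.5, 0.2],   # state 1: moving through the zone
    [0.0, 0.0, 1.0],   # state 2: left the zone (absorbing target state)
])

def first_passage_probs(P, start, target, max_steps):
    """f[n] = probability the chain first hits `target` at step n."""
    Q = P.copy()
    Q[target, :] = 0.0                 # mass that arrived does not re-count
    dist = np.zeros(len(P)); dist[start] = 1.0
    probs = []
    for _ in range(max_steps):
        dist = dist @ Q
        probs.append(dist[target])
    return probs

f = first_passage_probs(P, start=0, target=2, max_steps=20)
print(sum(f))  # chance of leaving within 20 steps; low values suggest loitering
```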
Making great leaps forward: Accounting for detectability in herpetological field studies
Mazerolle, Marc J.; Bailey, Larissa L.; Kendall, William L.; Royle, J. Andrew; Converse, Sarah J.; Nichols, James D.
2007-01-01
Detecting individuals of amphibian and reptile species can be a daunting task. Detection can be hindered by various factors such as cryptic behavior, color patterns, or observer experience. These factors complicate the estimation of state variables of interest (e.g., abundance, occupancy, species richness) as well as the vital rates that induce changes in these state variables (e.g., survival probabilities for abundance; extinction probabilities for occupancy). Although ad hoc methods (e.g., counts uncorrected for detection, return rates) typically perform poorly in the face of no detection, they continue to be used extensively in various fields, including herpetology. However, formal approaches that estimate and account for the probability of detection, such as capture-mark-recapture (CMR) methods and distance sampling, are available. In this paper, we present classical approaches and recent advances in methods accounting for detectability that are particularly pertinent for herpetological data sets. Through examples, we illustrate the use of several methods, discuss their performance compared to that of ad hoc methods, and we suggest available software to perform these analyses. The methods we discuss control for imperfect detection and reduce bias in estimates of demographic parameters such as population size, survival, or, at other levels of biological organization, species occurrence. Among these methods, recently developed approaches that no longer require marked or resighted individuals should be particularly of interest to field herpetologists. We hope that our effort will encourage practitioners to implement some of the estimation methods presented herein instead of relying on ad hoc methods that make more limiting assumptions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem remains: the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1- and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and the remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at a specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
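The fusion step can be pictured as multiplying two discretized conditional densities and taking the posterior mean. The sketch below is schematic: the two Gaussians stand in for the intensity-based and atlas-based terms, whose actual forms the paper estimates from patient data.

```python
import numpy as np

# Schematic sketch of Bayesian fusion: two conditional densities over a
# discretized HU axis are multiplied, renormalized, and summarized by the
# posterior mean. The Gaussians are hypothetical stand-ins.
hu = np.linspace(-1000, 2000, 3001)          # candidate HU values

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p_given_intensity = gauss(hu, 250.0, 120.0)  # p(HU | T1/T2 intensity)
p_given_location = gauss(hu, 400.0, 200.0)   # p(HU | atlas position)

posterior = p_given_intensity * p_given_location
posterior /= posterior.sum()

estimate = float((hu * posterior).sum())     # posterior mean HU
print(round(estimate, 1))
```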
Bradley, Phelim; Gordon, N. Claire; Walker, Timothy M.; Dunn, Laura; Heys, Simon; Huang, Bill; Earle, Sarah; Pankhurst, Louise J.; Anson, Luke; de Cesare, Mariateresa; Piazza, Paolo; Votintseva, Antonina A.; Golubchik, Tanya; Wilson, Daniel J.; Wyllie, David H.; Diel, Roland; Niemann, Stefan; Feuerriegel, Silke; Kohl, Thomas A.; Ismail, Nazir; Omar, Shaheed V.; Smith, E. Grace; Buck, David; McVean, Gil; Walker, A. Sarah; Peto, Tim E. A.; Crook, Derrick W.; Iqbal, Zamin
2015-01-01
The rise of antibiotic-resistant bacteria has led to an urgent need for rapid detection of drug resistance in clinical samples, and improvements in global surveillance. Here we show how de Bruijn graph representation of bacterial diversity can be used to identify species and resistance profiles of clinical isolates. We implement this method for Staphylococcus aureus and Mycobacterium tuberculosis in a software package (‘Mykrobe predictor') that takes raw sequence data as input, and generates a clinician-friendly report within 3 minutes on a laptop. For S. aureus, the error rates of our method are comparable to gold-standard phenotypic methods, with sensitivity/specificity of 99.1%/99.6% across 12 antibiotics (using an independent validation set, n=470). For M. tuberculosis, our method predicts resistance with sensitivity/specificity of 82.6%/98.5% (independent validation set, n=1,609); sensitivity is lower here, probably because of limited understanding of the underlying genetic mechanisms. We give evidence that minor alleles improve detection of extremely drug-resistant strains, and demonstrate feasibility of the use of emerging single-molecule nanopore sequencing techniques for these purposes. PMID:26686880
Rácil, Z; Kocmanová, I; Wagnerová, B; Winterová, J; Lengerová, M; Moulis, M; Mayer, J
2008-01-01
PREMISES AND OBJECTIVES: Timely diagnosis is of critical importance for the prognosis of invasive aspergillosis (IA) patients. Over recent years, detection of galactomannan using the ELISA method has assumed growing importance in IA diagnosis. The objective of the study was to analyse the usability of the method in the current clinical practice of a hemato-oncological ward. From May 2003 to October 2006, blood samples were taken from patients at IA risk to detect galactomannan (GM) in serum using the ELISA method. The patients who underwent the tests were classified by the probability of IA presence on the basis of the results of conventional diagnostic methods and section findings. A total of 11,360 serum samples from 911 adult patients were tested for GM presence. IA (probable/proven) was diagnosed in 42 (4.6%) of them. The rates of sensitivity, specificity, and positive and negative predictive value of galactomannan detection for IA diagnosis in our ward were, respectively, 95.2%, 90.0%, 31.5% and 99.7%. The principal causes of the limited positive predictive value of the test were the high percentage of false-positive test results (mainly caused by concomitant administration of some penicillin antibiotics or Plasma-Lyte infusion solution), as well as the fact that a large percentage of the patients we examined fell within the group of patients with hematological malignancy with a very low prevalence of IA. GM detection in serum is associated with high sensitivity and excellent negative predictive value in IA diagnosis in hemato-oncological patients. Knowledge and elimination of possible causes of false-positive results, as well as focusing the screening on patients at greatest risk of infection, are necessary for an even better exploitation of the test.
A Kalman Filter Based Technique for Stator Turn-Fault Detection of the Induction Motors
NASA Astrophysics Data System (ADS)
Ghanbari, Teymoor; Samet, Haidar
2017-11-01
Monitoring of Induction Motors (IMs) through the stator current for diagnosis of different faults has considerable economic and technical advantages in comparison with other techniques in this context. Among the different faults of an IM, stator and bearing faults are the most probable types, and they can be detected by analyzing signatures in the stator currents. One of the most reliable indicators for fault detection in IMs is the lower sidebands of the power frequency in the stator currents. This paper deals with a novel, simple technique for detecting stator turn faults in IMs. Frequencies of the lower sidebands are determined using the motor specifications, and their amplitudes are estimated by a Kalman Filter (KF). The Instantaneous Total Harmonic Distortion (ITHD) of these harmonics is calculated. Since the variation of the ITHD for the three-phase currents is considerable in the case of a stator turn fault, the fault can be detected confidently using this criterion. Different simulation results verify the high performance of the proposed method. The performance of the method is also confirmed by experiments.
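A linear Kalman filter can track a sideband's amplitude because a sinusoid of known frequency is linear in its in-phase and quadrature components. The sketch below assumes a simplified signal model with hypothetical frequencies and noise levels; it estimates the sideband amplitude, from which an ITHD-style indicator could then be formed.

```python
import numpy as np

# Minimal sketch: Kalman filter tracking the amplitude of a known sideband
# frequency, assuming y[k] = a*cos(w*t) + b*sin(w*t) + noise with slowly
# varying a, b. All numbers are illustrative placeholders.
fs, f_side = 1000.0, 44.0              # sample rate and sideband frequency (Hz)
w = 2 * np.pi * f_side

x = np.zeros(2)                        # state [a, b]
P = np.eye(2)
Q = 1e-6 * np.eye(2)                   # process noise: amplitudes drift slowly
R = 0.1                                # measurement noise variance

rng = np.random.default_rng(0)
amp_est = []
for k in range(2000):
    t = k / fs
    y = 0.8 * np.cos(w * t) + 0.3 * np.sin(w * t) + 0.3 * rng.standard_normal()
    H = np.array([np.cos(w * t), np.sin(w * t)])   # time-varying observation row
    P = P + Q                           # predict (random-walk state model)
    S = H @ P @ H + R                   # innovation variance
    K = P @ H / S                       # Kalman gain
    x = x + K * (y - H @ x)             # update state
    P = P - np.outer(K, H @ P)          # update covariance
    amp_est.append(np.hypot(x[0], x[1]))

print(amp_est[-1])                      # ~sqrt(0.8^2 + 0.3^2) ≈ 0.85
```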
Tanadini, Lorenzo G; Schmidt, Benedikt R
2011-01-01
Monitoring is an integral part of species conservation. Monitoring programs must take imperfect detection of species into account in order to be reliable. Theory suggests that detection probability may be determined by population size but this relationship has not yet been assessed empirically. Population size is particularly important because it may induce heterogeneity in detection probability and thereby cause bias in estimates of biodiversity. We used a site occupancy model to analyse data from a volunteer-based amphibian monitoring program to assess how well different variables explain variation in detection probability. An index to population size best explained detection probabilities for four out of six species (to avoid circular reasoning, we used the count of individuals at a previous site visit as an index to current population size). The relationship between the population index and detection probability was positive. Commonly used weather variables best explained detection probabilities for two out of six species. Estimates of site occupancy probabilities differed depending on whether the population index was or was not used to model detection probability. The relationship between the population index and detectability has implications for the design of monitoring and species conservation. Most importantly, because many small populations are likely to be overlooked, monitoring programs should be designed in such a way that small populations are not overlooked. The results also imply that methods cannot be standardized in such a way that detection probabilities are constant. As we have shown here, one can easily account for variation in population size in the analysis of data from long-term monitoring programs by using counts of individuals from surveys at the same site in previous years. Accounting for variation in population size is important because it can affect the results of long-term monitoring programs and ultimately the conservation of imperiled species.
Surveying Europe’s Only Cave-Dwelling Chordate Species (Proteus anguinus) Using Environmental DNA
Márton, Orsolya; Schmidt, Benedikt R.; Gál, Júlia Tünde; Jelić, Dušan
2017-01-01
In surveillance of subterranean fauna, especially in the case of rare or elusive aquatic species, traditional techniques used for epigean species are often not feasible. We developed a non-invasive survey method based on environmental DNA (eDNA) to detect the presence of the red-listed cave-dwelling amphibian, Proteus anguinus, in the caves of the Dinaric Karst. We tested the method in fifteen caves in Croatia, from which the species was previously recorded or expected to occur. We successfully confirmed the presence of P. anguinus from ten caves and detected the species for the first time in five others. Using a hierarchical occupancy model we compared the availability and detection probability of eDNA of two water sampling methods, filtration and precipitation. The statistical analysis showed that both availability and detection probability depended on the method and estimates for both probabilities were higher using filter samples than for precipitation samples. Combining reliable field and laboratory methods with robust statistical modeling will give the best estimates of species occurrence. PMID:28129383
40 CFR 280.40 - General requirements for all UST systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
... release detection that: (1) Can detect a release from any portion of the tank and the connected... shown in the table) with a probability of detection (Pd) of 0.95 and a probability of false alarm (Pfa) of 0.05. Method Section Date after which Pd/Pfa must be demonstrated Manual Tank Gauging 280.43(b...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simon, Joseph; Polin, Abigail; Lommen, Andrea
2014-03-20
The steadily improving sensitivity of pulsar timing arrays (PTAs) suggests that gravitational waves (GWs) from supermassive black hole binary (SMBHB) systems in the nearby universe will be detectable sometime during the next decade. Currently, PTAs assume an equal probability of detection from every sky position, but as evidence grows for a non-isotropic distribution of sources, is there a most likely sky position for a detectable single source of GWs? In this paper, a collection of Galactic catalogs is used to calculate various metrics related to the detectability of a single GW source resolvable above a GW background, assuming that every galaxy has the same probability of containing an SMBHB. Our analyses of these data reveal small probabilities that one of these sources is currently in the PTA band, but as sensitivity is improved, regions of consistent probability density are found in predictable locations, specifically around local galaxy clusters.
NASA Astrophysics Data System (ADS)
Giannini, Valentina; Vignati, Anna; Mazzetti, Simone; De Luca, Massimo; Bracco, Christian; Stasi, Michele; Russo, Filippo; Armando, Enrico; Regge, Daniele
2013-02-01
Prostate specific antigen (PSA)-based screening reduces the rate of death from prostate cancer (PCa) by 31%, but this benefit is associated with a high risk of overdiagnosis and overtreatment. As prostate transrectal ultrasound-guided biopsy, the standard procedure for prostate histological sampling, has a sensitivity of 77% with a considerable false-negative rate, more accurate methods need to be found to detect or rule out significant disease. Prostate magnetic resonance imaging has the potential to improve the specificity of PSA-based screening scenarios as a non-invasive detection tool, in particular by exploiting the combination of anatomical and functional information in a multiparametric framework. The purpose of this study was to describe a computer aided diagnosis (CAD) method that automatically produces a malignancy likelihood map by combining information from dynamic contrast-enhanced MR images and diffusion-weighted images. The CAD system consists of multiple sequential stages, from a preliminary registration of images of different sequences, in order to correct for susceptibility deformation and/or movement artifacts, to a Bayesian classifier, which fuses all the extracted features into a probability map. The promising results (AUROC = 0.87) should be validated on a larger dataset, but they suggest that discrimination on a voxel basis between benign and malignant tissues is feasible with good performance. This method can help improve the diagnostic accuracy of the radiologist, reduce reader variability and speed up reading time by automatically highlighting probable cancer-suspicious regions.
Clinical evaluation of a Mucorales-specific real-time PCR assay in tissue and serum samples.
Springer, Jan; Lackner, Michaela; Ensinger, Christian; Risslegger, Brigitte; Morton, Charles Oliver; Nachbaur, David; Lass-Flörl, Cornelia; Einsele, Hermann; Heinz, Werner J; Loeffler, Juergen
2016-12-01
Molecular diagnostic assays can accelerate the diagnosis of fungal infections and subsequently improve patient outcomes. In particular, the detection of infections due to Mucorales is still challenging for laboratories and physicians. The aim of this study was to evaluate a probe-based Mucorales-specific real-time PCR assay (Muc18S) using tissue and serum samples from patients suffering from invasive mucormycosis (IMM). This assay can detect a broad range of clinically relevant Mucorales species and can be used to complement existing diagnostic tests or to screen high-risk patients. An advantage of the Muc18S assay is that it exclusively detects Mucorales species, allowing detection of Mucorales DNA without sequencing within a few hours. In paraffin-embedded tissue samples this PCR-based method allowed rapid identification of Mucorales in comparison with standard methods and showed 91% sensitivity in the IMM tissue samples. We also evaluated serum samples, an easily accessible material, from patients at risk of IMM. Mucorales DNA was detected in all patients with probable/proven IMM (100%) and in 29% of the possible cases. Detection of Mucorales DNA in serum could enable diagnosis up to 21 days earlier than current methods, including tissue samples, which were mainly obtained post-mortem. A screening strategy for high-risk patients, which would enable targeted treatment to improve patient outcomes, is therefore possible.
Questel, E; Durbise, E; Bardy, A-L; Schmitt, A-M; Josse, G
2015-05-01
To assess an objective method for evaluating the effects of a retinaldehyde-based cream (RA-cream) on solar lentigines, 29 women randomly applied RA-cream to lentigines of one hand and a control cream to the other, once daily for 3 months. A specific method enabling reliable visualisation of the lesions was proposed, using high-magnification colour-calibrated camera imaging. Assessment was performed using clinical evaluation by Physician Global Assessment score and by image analysis. Luminance determination on the numeric images was performed either on the basis of consensus borders from 5 independent experts or by probability map analysis via an algorithm automatically detecting the pigmented area. Both image analysis methods showed a similar lightening of ΔL* = 2 after a 3-month treatment with RA-cream, in agreement with the single-blind clinical evaluation. High-magnification colour-calibrated camera imaging combined with probability map analysis is a fast and precise method to follow lentigo depigmentation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Analysis of a potential trigger of an acute illness.
Becker, Niels G; Salim, Agus; Kelman, Christopher W
2006-01-01
Sometimes certain short-term risk exposures are postulated to act as a trigger for the onset of a specific acute illness. When the incidence of the illness is low it is desirable to investigate this possible association using only data on cases detected during a specific observation period. Here we propose an analysis for such a study based on a model expressed in terms of the probability that the exposure triggers the illness and a random delay from a triggered illness until its diagnosis. Both the natural hazard rate for the illness and the probability that the exposure triggers the illness are assumed to be small and possibly dependent on age and covariates such as sex and duration or severity of the exposure. The method of analysis is illustrated with a study of the association between long flights and hospitalization for venous thromboembolism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morise, A.P.; Duval, R.D.
To determine whether recent refinements in Bayesian methods have led to improved diagnostic ability, 3 methods using Bayes' theorem and the independence assumption for estimating posttest probability after exercise stress testing were compared. Each method differed in the number of variables considered in the posttest probability estimate (method A = 5, method B = 6 and method C = 15). Method C is better known as CADENZA. There were 436 patients (250 men and 186 women) who underwent stress testing (135 had concurrent thallium scintigraphy) followed within 2 months by coronary arteriography. Coronary artery disease (CAD; at least 1 vessel with greater than or equal to 50% diameter narrowing) was seen in 169 (38%). Mean pretest probabilities using each method were not different. However, the mean posttest probabilities for CADENZA were significantly greater than those for method A or B (p less than 0.0001). Each decile of posttest probability was compared to the actual prevalence of CAD in that decile. At posttest probabilities less than or equal to 20%, there was underestimation of CAD. However, at posttest probabilities greater than or equal to 60%, there was overestimation of CAD by all methods, especially CADENZA. Comparison of sensitivity and specificity at every fifth percentile of posttest probability revealed that CADENZA was significantly more sensitive and less specific than methods A and B. Therefore, at lower probability thresholds, CADENZA was a better screening method. However, methods A or B still had merit as a means to confirm higher probabilities generated by CADENZA (especially greater than or equal to 60%).
Probability of Detection (POD) as a statistical model for the validation of qualitative methods.
Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T
2011-01-01
A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
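A common choice is to model POD as a smooth function of concentration, e.g. a logistic curve in log concentration. The sketch below fits such a curve to hypothetical detected/total counts by a coarse grid search over the binomial likelihood; a real validation would use proper maximum-likelihood fitting and confidence bounds.

```python
import math

# Minimal sketch of the POD idea: probability of detection modeled as a
# continuous (logistic) function of concentration. Data are hypothetical.
conc = [0.1, 0.5, 1.0, 2.0, 5.0]        # concentrations (arbitrary units)
detected = [2, 9, 15, 19, 20]           # positives out of 20 replicates each
trials = [20] * 5

def pod(c, a, b):
    return 1.0 / (1.0 + math.exp(-(a + b * math.log(c))))

def log_lik(a, b):
    ll = 0.0
    for c, d, n in zip(conc, detected, trials):
        p = min(max(pod(c, a, b), 1e-9), 1 - 1e-9)
        ll += d * math.log(p) + (n - d) * math.log(1 - p)
    return ll

# Coarse grid search over intercept a and slope b.
best = max(((a / 10, b / 10) for a in range(-50, 51) for b in range(1, 51)),
           key=lambda ab: log_lik(*ab))
print(best, [round(pod(c, *best), 2) for c in conc])
```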
Comparing scat detection dogs, cameras, and hair snares for surveying carnivores
Long, Robert A.; Donovan, T.M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.
2007-01-01
Carnivores typically require large areas of habitat, exist at low natural densities, and exhibit elusive behavior - characteristics that render them difficult to study. Noninvasive survey methods increasingly provide means to collect extensive data on carnivore occupancy, distribution, and abundance. During the summers of 2003-2004, we compared the abilities of scat detection dogs, remote cameras, and hair snares to detect black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus) at 168 sites throughout Vermont. All 3 methods detected black bears; neither fishers nor bobcats were detected by hair snares. Scat detection dogs yielded the highest raw detection rate and probability of detection (given presence) for each of the target species, as well as the greatest number of unique detections (i.e., occasions when only one method detected the target species). We estimated that the mean probability of detecting the target species during a single visit to a site with a detection dog was 0.87 for black bears, 0.84 for fishers, and 0.27 for bobcats. Although the cost of surveying with detection dogs was higher than that of remote cameras or hair snares, the efficiency of this method rendered it the most cost-effective survey method.
Vehicle Detection for RCTA/ANS (Autonomous Navigation System)
NASA Technical Reports Server (NTRS)
Brennan, Shane; Bajracharya, Max; Matthies, Larry H.; Howard, Andrew B.
2012-01-01
Using a stereo camera pair, imagery is acquired and processed through the JPLV stereo processing pipeline. From this stereo data, large 3D blobs are found. These blobs are then described and classified by their shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that involve learning typical vehicle appearances from a large corpus of training data. In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps:

1. Take as input a left disparity image and left rectified image from JPLV stereo.
2. Project the disparity data onto a two-dimensional Cartesian map.
3. Post-process the map built in the previous step in order to clean it up.
4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene.
5. Take these map blobs and reject those that do not meet certain criteria. Build descriptors for the ones that remain. Pass these descriptors to a classifier, which determines whether or not the blob is a vehicle.

The probability of detection is the probability that, if a vehicle is present in the image, visible, and un-occluded, it will be detected by the JVD algorithm. In order to estimate this probability, eight sequences from the RCTA (Robotics Collaborative Technology Alliances) program were ground-truthed, totaling over 4,000 frames with 15 unique vehicles. Since these vehicles were observed at varying ranges, one is able to find the probability of detection as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best on cars seen from the front, rear, or either side, and performed poorly on vehicles seen from oblique angles.
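Estimating detection probability as a function of range reduces to binning ground-truth outcomes by range and taking per-bin detection rates. A minimal sketch with hypothetical frame data:

```python
from collections import defaultdict

# Minimal sketch: probability of detection per range bin from ground-truthed
# frames. The (range, detected) pairs below are hypothetical stand-ins.
frames = [(12, 1), (18, 1), (25, 1), (33, 0), (41, 1), (47, 0), (55, 0), (8, 1)]

bins = defaultdict(lambda: [0, 0])     # bin start -> [detections, frames]
for rng_m, det in frames:
    b = (rng_m // 10) * 10             # 10 m range bins
    bins[b][0] += det
    bins[b][1] += 1

for b in sorted(bins):
    d, n = bins[b]
    print(f"{b:3d}-{b + 10:3d} m: Pd = {d}/{n} = {d / n:.2f}")
```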
Automated detection of retinal whitening in malarial retinopathy
NASA Astrophysics Data System (ADS)
Joshi, V.; Agurto, C.; Barriga, S.; Nemeth, S.; Soliz, P.; MacCormick, I.; Taylor, T.; Lewallen, S.; Harding, S.
2016-03-01
Cerebral malaria (CM) is a severe neurological complication associated with malarial infection. Malaria affects approximately 200 million people worldwide and claims 600,000 lives annually, 75% of them African children under five years of age. Because most of these mortalities are caused by the high incidence of CM misdiagnosis, there is a need for an accurate diagnostic to confirm the presence of CM. The retinal lesions associated with malarial retinopathy (MR), such as retinal whitening, vessel discoloration, and hemorrhages, are highly specific to CM, and their detection can improve the accuracy of CM diagnosis. This paper focuses on the development of an automated method for the detection of retinal whitening, a unique sign of MR that manifests due to retinal ischemia resulting from CM. We propose to detect the whitening region in retinal color images based on multiple color and textural features. First, we preprocess the image using color and textural features of the CMYK and CIE-XYZ color spaces to minimize camera reflex. Next, we utilize color features of the HSL, CMYK, and CIE-XYZ channels, along with the structural features of difference of Gaussians. A watershed segmentation algorithm is used to assign each image region a probability of being inside the whitening, based on the extracted features. The algorithm was applied to a dataset of 54 images (40 with whitening and 14 controls), resulting in an image-based (binary) classification with an AUC of 0.80. This provides 88% sensitivity at a specificity of 65%. For a clinical application that requires a high-specificity setting, the algorithm can be tuned to a specificity of 89% at a sensitivity of 82%. This is the first published method for retinal whitening detection; combining it with detection methods for vessel discoloration and hemorrhages can further improve the detection accuracy for malarial retinopathy.
2011-01-01
Background: Epilepsy is a common neurological disorder characterized by recurrent electrophysiological activities, known as seizures. Without appropriate detection strategies, these seizure episodes can dramatically affect the quality of life of those afflicted. The rationale of this study is to develop an unsupervised algorithm for the detection of seizure states so that it may be implemented along with potential intervention strategies. Methods: A hidden Markov model (HMM) was developed to interpret the state transitions of in vitro rat hippocampal slice local field potentials (LFPs) during seizure episodes. It can be used to estimate the probability of state transitions and the corresponding characteristics of each state. Wavelet features were clustered and used to differentiate the electrophysiological characteristics at each corresponding HMM state. Using an unsupervised training method, the HMM and the clustering parameters were obtained simultaneously. The HMM states were then assigned to the electrophysiological data using an expert-guided technique. Minimum redundancy maximum relevance (mRMR) analysis and the Akaike Information Criterion (AICc) were applied to reduce the effect of over-fitting. The sensitivity, specificity and optimality index of chronic seizure detection were compared for various HMM topologies. The ability to distinguish early and late tonic firing patterns prior to chronic seizures was also evaluated. Results: Significant improvement in state detection performance was achieved when additional wavelet coefficient rates of change were used as features. The final HMM topology, obtained using mRMR and AICc, was able to detect non-ictal (interictal), early and late tonic firing, chronic seizures and postictal activities. A mean sensitivity of 95.7%, mean specificity of 98.9% and optimality index of 0.995 in the detection of chronic seizures was achieved. The detection of early and late tonic firing was validated with experimental intracellular electrical recordings of seizures. Conclusions: The HMM implementation of a seizure dynamics detector is an improvement over existing approaches using visual detection and complexity measures. The subjectivity involved in partitioning the observed data prior to training can be eliminated. It can also decipher the probabilities of seizure state transitions using the magnitude and rate-of-change wavelet information of the LFPs. PMID:21504608
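State probabilities in such an HMM are obtained with the forward recursion. The sketch below uses a hypothetical three-state topology and made-up emission likelihoods; the study instead learns its states and emissions without supervision from clustered wavelet features.

```python
import numpy as np

# Minimal sketch of the forward filter for inferring seizure-state
# probabilities from a feature sequence. All parameters are hypothetical.
A = np.array([[0.95, 0.05, 0.00],      # interictal -> {interictal, tonic, ictal}
              [0.02, 0.90, 0.08],      # tonic firing
              [0.05, 0.00, 0.95]])     # chronic seizure
pi = np.array([1.0, 0.0, 0.0])         # start in the interictal state

def forward_filter(emission_liks):
    """Return P(state_t | observations up to t) for each t."""
    alpha = pi * emission_liks[0]
    alpha /= alpha.sum()
    out = [alpha]
    for lik in emission_liks[1:]:
        alpha = (alpha @ A) * lik      # propagate, then weight by evidence
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

# Hypothetical per-state likelihoods of three successive wavelet-feature frames:
liks = np.array([[0.8, 0.15, 0.05], [0.2, 0.6, 0.2], [0.05, 0.25, 0.7]])
print(forward_filter(liks))
```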
Park, Douglas L; Coates, Scott; Brewer, Vickery A; Garber, Eric A E; Abouzied, Mohamed; Johnson, Kurt; Ritter, Bruce; McKenzie, Deborah
2005-01-01
Performance Tested Method multiple-laboratory validations for the detection of peanut protein in 4 different food matrixes were conducted under the auspices of the AOAC Research Institute. In this blind study, 3 commercially available ELISA test kits were validated: Neogen Veratox for Peanut, R-Biopharm RIDASCREEN FAST Peanut, and Tepnel BioKits for Peanut Assay. The food matrixes used were breakfast cereal, cookies, ice cream, and milk chocolate, spiked at 0 and 5 ppm peanut. Analyses of the samples were conducted by laboratories representing industry and international and U.S. governmental agencies. All 3 commercial test kits successfully identified spiked and peanut-free samples. The validation study required 60 analyses on test samples at the target level of 5 µg peanut/g food and 60 analyses at a peanut-free level, a design ensuring that the lower 95% confidence limit for the sensitivity and specificity would not be <90%. The probability that a test sample contains an allergen, given a prevalence rate of 5% and a positive result from a single test kit analysis with 95% sensitivity and 95% specificity (as demonstrated for these kits), would be 50%. When 2 test kits are run simultaneously on all samples, that probability becomes 95%. It is therefore recommended that all field samples be analyzed with at least 2 of the validated kits.
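The post-test probabilities quoted in the abstract follow directly from Bayes' rule; the short computation below reproduces the 50% and 95% figures, treating the two kits as independent and requiring both to test positive (our reading of the stated design).

```python
# Reproducing the abstract's post-test probabilities with Bayes' rule
# (5% prevalence, 95% sensitivity/specificity per kit).
def ppv(prev, sens, spec):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

print(round(ppv(0.05, 0.95, 0.95), 2))          # single kit -> 0.5

# Two independent kits, both required to test positive: sensitivities
# multiply, as do false-positive rates.
sens2 = 0.95 ** 2
spec2 = 1 - (1 - 0.95) ** 2
print(round(ppv(0.05, sens2, spec2), 2))        # both kits -> 0.95
```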
Environmental DNA (eDNA) Detection Probability Is Influenced by Seasonal Activity of Organisms.
de Souza, Lesley S; Godwin, James C; Renshaw, Mark A; Larson, Eric
2016-01-01
Environmental DNA (eDNA) holds great promise for conservation applications like the monitoring of invasive or imperiled species, yet this emerging technique requires ongoing testing in order to determine the contexts over which it is effective. For example, little research to date has evaluated how seasonality of organism behavior or activity may influence detection probability of eDNA. We applied eDNA to survey for two highly imperiled species endemic to the upper Black Warrior River basin in Alabama, US: the Black Warrior Waterdog (Necturus alabamensis) and the Flattened Musk Turtle (Sternotherus depressus). Importantly, these species have contrasting patterns of seasonal activity, with N. alabamensis more active in the cool season (October-April) and S. depressus more active in the warm season (May-September). We surveyed sites historically occupied by these species across cool and warm seasons over two years with replicated eDNA water samples, which were analyzed in the laboratory using species-specific quantitative PCR (qPCR) assays. We then used occupancy estimation with detection probability modeling to evaluate both the effects of landscape attributes on organism presence and season of sampling on detection probability of eDNA. Importantly, we found that season strongly affected eDNA detection probability for both species, with N. alabamensis having higher eDNA detection probabilities during the cool season and S. depressus having higher eDNA detection probabilities during the warm season. These results illustrate the influence of organismal behavior or activity on eDNA detection in the environment and identify an important role for basic natural history in designing eDNA monitoring programs. PMID:27776150
On the choice of statistical models for estimating occurrence and extinction from animal surveys
Dorazio, R.M.
2007-01-01
In surveys of natural animal populations the number of animals that are present and available to be detected at a sample location is often low, resulting in few or no detections. Low detection frequencies are especially common in surveys of imperiled species; however, the choice of sampling method and protocol also may influence the size of the population that is vulnerable to detection. In these circumstances, probabilities of animal occurrence and extinction will generally be estimated more accurately if the models used in data analysis account for differences in abundance among sample locations and for the dependence between site-specific abundance and detection. Simulation experiments are used to illustrate conditions wherein these types of models can be expected to outperform alternative estimators of population site occupancy and extinction. © 2007 by the Ecological Society of America.
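As context for the model choices discussed above, the sketch below fits the basic single-season occupancy model (constant psi and p) by maximum likelihood on simulated detection counts; abundance-aware alternatives such as Royle-Nichols replace the constant-p term with an abundance mixture. Function and variable names are illustrative, not from the paper.

```python
# A minimal sketch of the single-season occupancy likelihood; y[i] is the
# number of detections at site i over K surveys. Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, comb

def neg_log_lik(params, y, K):
    psi, p = expit(params)                     # keep both in (0, 1)
    lik = psi * comb(K, y) * p**y * (1 - p)**(K - y)
    lik = lik + (1 - psi) * (y == 0)           # sites never detected
    return -np.sum(np.log(lik))

rng = np.random.default_rng(1)
z = rng.random(200) < 0.4                      # true occupancy, psi = 0.4
y = rng.binomial(5, 0.3 * z)                   # K = 5 surveys, p = 0.3
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(y, 5))
print(expit(fit.x))                            # approximately [0.4, 0.3]
```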
Zheng, Qianwang; Mikš-Krajnik, Marta; Yang, Yishan; Xu, Wang; Yuk, Hyun-Gyun
2014-09-01
Conventional culture detection methods are time-consuming and labor-intensive. For this reason, an alternative rapid method combining real-time PCR and immunomagnetic separation (IMS) was investigated in this study to detect both healthy and heat-injured Salmonella Typhimurium on raw duck wings. Firstly, the IMS method was optimized by determining the capture efficiency of Dynabeads® for Salmonella cells on raw duck wings with different bead incubation (10, 30 and 60 min) and magnetic separation (3, 10 and 30 min) times. Secondly, three TaqMan primer sets, Sal, invA and ttr, were evaluated to optimize the real-time PCR protocol by comparing five parameters: inclusivity, exclusivity, PCR efficiency, detection probability and limit of detection (LOD). Thirdly, the optimized real-time PCR in combination with IMS (PCR-IMS) assay was compared with a standard ISO method and a real-time PCR (PCR) method by analyzing artificially inoculated raw duck wings carrying healthy and heat-injured Salmonella cells at 10¹ and 10⁰ CFU/25 g. Finally, the optimized PCR-IMS assay was validated for Salmonella detection in naturally contaminated raw duck wing samples. Under optimal IMS conditions (30 min bead incubation and 3 min magnetic separation), approximately 85 and 64% of S. Typhimurium cells were captured by Dynabeads® from pure culture and inoculated raw duck wings, respectively. Although the Sal and ttr primers exhibited 100% inclusivity and exclusivity for 16 Salmonella spp. and 36 non-Salmonella strains, the Sal primer showed a lower LOD (10³ CFU/ml) and higher PCR efficiency (94.1%) than the invA and ttr primers. Moreover, for the Sal and invA primers, 100% detection probability in raw duck wing suspension was observed at 10³ and 10⁴ CFU/ml with and without IMS, respectively. Thus, the Sal primer was chosen for further experiments. The optimized PCR-IMS method was significantly (P=0.0011) better at detecting healthy Salmonella cells after 7-h enrichment than the traditional PCR method. However, there was no significant difference between the two methods with a longer enrichment time (14 h). The diagnostic accuracy of PCR-IMS was shown to be 98.3% in the validation study. These results indicate that the optimized PCR-IMS method could provide a sensitive, specific and rapid detection method for Salmonella on raw duck wings, enabling 10-h detection. However, a longer enrichment time could be needed for resuscitation and reliable detection of heat-injured cells. Copyright © 2014 Elsevier B.V. All rights reserved.
Murn, Campbell; Holloway, Graham J
2016-10-01
Species occurring at low density can be difficult to detect and, if not properly accounted for, imperfect detection will lead to inaccurate estimates of occupancy. Understanding sources of variation in detection probability and how they can be managed is a key part of monitoring. We used sightings data of a low-density and elusive raptor (white-headed vulture Trigonoceps occipitalis) in areas of known occupancy (breeding territories) in a likelihood-based modelling approach to calculate detection probability and the factors affecting it. Because occupancy was known a priori to be 100%, we fixed the model occupancy parameter to 1.0 and focused on identifying sources of variation in detection probability. Using detection histories from 359 territory visits, we assessed nine covariates in 29 candidate models. The model with the highest support indicated that observer speed during a survey, combined with temporal covariates such as time of year and length of time within a territory, had the greatest influence on detection probability. Averaged detection probability was 0.207 (s.e. 0.033); on this basis, the mean number of visits required to determine with 95% confidence that white-headed vultures are absent from a breeding area is 13 (95% CI: 9-20). Topographical and habitat covariates contributed little to the best models and had little effect on detection probability. We highlight that the low detection probabilities of some species mean that emphasizing habitat covariates could lead to spurious results in occupancy models that do not also incorporate temporal components. While variation in detection probability is complex and influenced by effects at both temporal and spatial scales, temporal covariates can and should be controlled as part of robust survey methods. Our results emphasize the importance of accounting for detection probability in occupancy studies, particularly during presence/absence studies for species such as raptors that are widespread and occur at low densities.
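The abstract's figure of 13 visits can be recovered from the averaged detection probability by requiring the chance of missing the species on every visit to fall below 5%.

```python
# Number of visits n such that (1 - p)^n <= 0.05 for p = 0.207.
import math
p = 0.207
n = math.log(0.05) / math.log(1 - p)
print(math.ceil(n))   # 13
```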
LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.
Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu
2005-01-01
Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
Applying cognitive acuity theory to the development and scoring of situational judgment tests.
Leeds, J Peter
2017-11-09
The theory of cognitive acuity (TCA) treats the response options within items as signals to be detected and uses psychophysical methods to estimate the respondents' sensitivity to these signals. Such a framework offers new methods to construct and score situational judgment tests (SJT). Leeds (2012) defined cognitive acuity as the capacity to discern correctness and distinguish between correctness differences among simultaneously presented situation-specific response options. In this study, SJT response options were paired in order to offer the respondent a two-option choice. The contrast in correctness valence between the two options determined the magnitude of signal emission, with larger signals portending a higher probability of detection. A logarithmic relation was found between correctness valence contrast (signal stimulus) and its detectability (sensation response). Respondent sensitivity to such signals was measured and found to be related to the criterion variables. The linkage between psychophysics and elemental psychometrics may offer new directions for measurement theory.
Fusion of Scores in a Detection Context Based on Alpha Integration.
Soriano, Antonio; Vergara, Luis; Ahmed, Bouziane; Salazar, Addisson
2015-09-01
We present a new method for fusing scores corresponding to different detectors (the two-hypothesis case). It is based on alpha integration, which we have adapted to the detection context. Three optimization methods are presented: least mean square error, maximization of the area under the ROC curve, and minimization of the probability of error. Gradient algorithms are proposed for all three. Different experiments with simulated and real data are included. The simulated data consider the two-detector case to illustrate the factors influencing alpha integration and to demonstrate the improvements obtained by score fusion relative to individual detector performance. Two real data cases have been considered. In the first, multimodal biometric data were processed; this case is representative of scenarios in which the probability of detection is to be maximized for a given probability of false alarm. The second case is the automatic analysis of electroencephalogram and electrocardiogram records with the aim of reproducing the medical expert's detections of arousals during sleep; this case is representative of scenarios in which the probability of error is to be minimized. The generally superior performance of alpha integration confirms the value of optimizing the fusion parameters.
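As a rough illustration of the fusion rule itself (not the paper's optimization of its parameters), the sketch below implements an Amari-style alpha integration of two detector scores; alpha = -1 recovers the weighted arithmetic mean and alpha = 3 the harmonic mean. The formula follows the standard alpha-mean definition, which we assume matches the paper's usage; scores are assumed to lie in (0, 1].

```python
# A minimal sketch of alpha integration; the optimization of alpha and the
# weights (via MSE, AUC, or error-probability gradients) is omitted.
import numpy as np

def alpha_integrate(scores, weights, alpha):
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    if np.isclose(alpha, 1.0):                     # geometric-mean limit
        return np.exp(np.sum(weights * np.log(scores)))
    h = scores ** ((1.0 - alpha) / 2.0)
    return np.sum(weights * h) ** (2.0 / (1.0 - alpha))

s = [0.9, 0.4]                                     # scores from two detectors
print(alpha_integrate(s, [0.5, 0.5], alpha=-1.0))  # arithmetic mean, 0.65
print(alpha_integrate(s, [0.5, 0.5], alpha=3.0))   # harmonic mean, ~0.554
```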
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, and so on. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an 'Appearance Based Method', which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and the numbers of false positive and false negative detections are thereby substantially low.
Hunter, Margaret E.; Oyler-McCance, Sara J.; Dorazio, Robert M.; Fike, Jennifer A.; Smith, Brian J.; Hunter, Charles T.; Reed, Robert N.; Hart, Kristen M.
2015-01-01
Environmental DNA (eDNA) methods are used to detect DNA that is shed into the aquatic environment by cryptic or low density species. Applied in eDNA studies, occupancy models can be used to estimate occurrence and detection probabilities and thereby account for imperfect detection. However, occupancy terminology has been applied inconsistently in eDNA studies, and many have calculated occurrence probabilities while not considering the effects of imperfect detection. Low detection of invasive giant constrictors using visual surveys and traps has hampered the estimation of occupancy and detection estimates needed for population management in southern Florida, USA. Giant constrictor snakes pose a threat to native species and the ecological restoration of the Florida Everglades. To assist with detection, we developed species-specific eDNA assays using quantitative PCR (qPCR) for the Burmese python (Python molurus bivittatus), Northern African python (P. sebae), boa constrictor (Boa constrictor), and the green (Eunectes murinus) and yellow anaconda (E. notaeus). Burmese pythons, Northern African pythons, and boa constrictors are established and reproducing, while the green and yellow anaconda have the potential to become established. We validated the python and boa constrictor assays using laboratory trials and tested all species in 21 field locations distributed in eight southern Florida regions. Burmese python eDNA was detected in 37 of 63 field sampling events; however, the other species were not detected. Although eDNA was heterogeneously distributed in the environment, occupancy models were able to provide the first estimates of detection probabilities, which were greater than 91%. Burmese python eDNA was detected along the leading northern edge of the known population boundary. The development of informative detection tools and eDNA occupancy models can improve conservation efforts in southern Florida and support more extensive studies of invasive constrictors. Generic sampling design and terminology are proposed to standardize and clarify interpretations of eDNA-based occupancy models. PMID:25874630
van der Ster, Björn J P; Bennis, Frank C; Delhaas, Tammo; Westerhof, Berend E; Stok, Wim J; van Lieshout, Johannes J
2017-01-01
Introduction: In the initial phase of hypovolemic shock, mean blood pressure (BP) is maintained by sympathetically mediated vasoconstriction, rendering BP monitoring insensitive for detecting blood loss early. Late detection can result in reduced tissue oxygenation and eventually cellular death. We hypothesized that a machine learning algorithm that interprets currently used and new hemodynamic parameters could facilitate the detection of impending hypovolemic shock. Method: In 42 (27 female) young [mean (SD): 24 (4) years], healthy subjects, central blood volume (CBV) was progressively reduced by application of -50 mmHg lower body negative pressure until the onset of pre-syncope. A support vector machine was trained to classify samples into normovolemia (class 0), initial phase of CBV reduction (class 1), or advanced CBV reduction (class 2). Nine models making use of different features were computed to compare the sensitivity and specificity of different non-invasively derived hemodynamic signals. Model features included: volumetric hemodynamic parameters (stroke volume and cardiac output), BP curve dynamics, near-infrared spectroscopy determined cortical brain oxygenation, end-tidal carbon dioxide pressure, thoracic bio-impedance, and middle cerebral artery transcranial Doppler (TCD) blood flow velocity. Model performance was tested by quantifying the predictions with three methods: sensitivity and specificity, absolute error, and quantification of the log odds ratio of class 2 vs. class 0 probability estimates. Results: The combination with maximal sensitivity and specificity for classes 1 and 2 was found for the model comprising the volumetric features (class 1: 0.73-0.98 and class 2: 0.56-0.96). The overall lowest model error was found for the models comprising TCD curve hemodynamics. Using probability estimates, the best combination of sensitivity for class 1 (0.67) and specificity (0.87) was found for the model that contained the TCD cerebral blood flow velocity derived pulse height. The highest combination for class 2 was found for the model with the volumetric features (0.72 and 0.91). Conclusion: The most sensitive models for the detection of advanced CBV reduction comprised features from volumetric parameters and from cerebral blood flow velocity hemodynamics. In a validated model of hemorrhage in humans, these parameters provide the best indication of the progression of central hypovolemia.
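A minimal sketch of the classification step is given below, using scikit-learn's SVC with probability estimates on synthetic stand-in features; the feature dimensions, class coding, and log-odds evaluation mirror the abstract's description, but all data and numbers are illustrative.

```python
# A minimal sketch, assuming scikit-learn; synthetic features stand in for
# the hemodynamic parameters used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 6))             # e.g. SV, CO, BP-curve features
y = rng.integers(0, 3, 300)                   # 0 normo-, 1 initial, 2 advanced

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", probability=True))
clf.fit(X, y)
proba = clf.predict_proba(X[:5])              # class-probability estimates

# Log odds of advanced CBV reduction vs. normovolemia, as in the paper's
# third evaluation method.
log_odds = np.log(proba[:, 2] / proba[:, 0])
print(log_odds.round(2))
```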
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for the geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for binary outcomes was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, this likelihood function offers an alternative, indirect way to identify the potential cluster, with the test statistic taken as the extreme value of the likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for the test of significance. Both methods were applied to detect spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters were identical. A simulation on independent benchmark data indicates that the test statistic based on the hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
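To make the testing logic concrete, the sketch below scores a single candidate window under a hypergeometric null and obtains a Monte Carlo p-value; the scan over all candidate windows and the exact form of the paper's test statistic are simplified away, so treat this as conceptual only.

```python
# A minimal sketch: one candidate circular window, hypergeometric null,
# Monte Carlo significance. Counts are illustrative.
import numpy as np
from scipy.stats import hypergeom

N, C = 10_000, 120        # total population and total cases in the region
n, c = 800, 25            # population and cases inside the candidate window

obs = -hypergeom.logpmf(c, N, C, n)            # unlikelier window => larger stat

rng = np.random.default_rng(3)
sims = rng.hypergeometric(C, N - C, n, size=9_999)   # null: random labelling
null = -hypergeom.logpmf(sims, N, C, n)
p_value = (1 + np.sum(null >= obs)) / (1 + len(null))
print(p_value)
```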
Spatial patch occupancy patterns of the Lower Keys marsh rabbit
Eaton, Mitchell J.; Hughes, Phillip T.; Nichols, James D.; Morkill, Anne; Anderson, Chad
2011-01-01
Reliable estimates of presence or absence of a species can provide substantial information on management questions related to distribution and habitat use but should incorporate the probability of detection to reduce bias. We surveyed for the endangered Lower Keys marsh rabbit (Sylvilagus palustris hefneri) in habitat patches on 5 Florida Key islands, USA, to estimate occupancy and detection probabilities. We derived detection probabilities using spatial replication of plots and evaluated hypotheses that patch location (coastal or interior) and patch size influence occupancy and detection. Results demonstrate that detection probability, given rabbits were present, was <0.5 and suggest that naïve estimates (i.e., estimates without consideration of imperfect detection) of patch occupancy are negatively biased. We found that patch size and location influenced probability of occupancy but not detection. Our findings will be used by Refuge managers to evaluate population trends of Lower Keys marsh rabbits from historical data and to guide management decisions for species recovery. The sampling and analytical methods we used may be useful for researchers and managers of other endangered lagomorphs and cryptic or fossorial animals occupying diverse habitats.
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, as it makes no special assumption about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
Alba, A; Casal, J; Napp, S; Martin, P A J
2010-11-01
Compulsory surveillance programmes for avian influenza (AI) have been implemented in domestic poultry and wild birds in all European Member States since 2005. The implementation of these programmes is complex and requires close evaluation. A good indicator for assessing their efficacy is the sensitivity (Se) of the surveillance system. In this study, the sensitivities of different sampling designs proposed by the Spanish authorities for the commercial poultry population of Catalonia were assessed using scenario tree modelling. These samplings were stratified throughout the territory of Spain and took into account the species, the types of production and their specific risks. The probabilities of detecting infection at different prevalences, at both the individual and holding level, were estimated, and the subpopulations that contributed most to the Se of the system were identified. The model estimated that all the designs met the requirements of the European Commission. The probability of detecting AI circulating in Catalonian poultry did not change significantly when the within-holding design prevalence varied from 30% to 10%. In contrast, when the among-holding design prevalence decreased from 5% to 1%, the probability of detecting AI was drastically reduced. The sampling of duck and goose holdings, and to a lesser extent turkey and game bird holdings, increased the Se substantially. The Se of passive surveillance in chickens for highly pathogenic avian influenza (HPAI) and low pathogenicity avian influenza (LPAI) was also assessed. The probability of infected birds manifesting apparent clinical signs and the awareness of veterinarians and farmers had a great influence on the probability of detecting AI. To increase the probability of early detection of HPAI in chickens, the probability of performing AI-specific tests when AI is suspected would need to be increased. Copyright © 2010 Elsevier B.V. All rights reserved.
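The holding- and system-level quantities at the core of such scenario-tree calculations reduce, in their simplest form, to compounding per-unit detection probabilities at the design prevalences; the sketch below shows that skeleton with illustrative numbers (the actual model additionally stratifies by species, production type, and risk).

```python
# A minimal sketch of freedom-from-disease surveillance sensitivity;
# all parameter values are illustrative, not the paper's estimates.
def herd_sensitivity(unit_se, within_prev, n_sampled):
    """P(at least 1 positive in a sampled holding that is infected)."""
    return 1 - (1 - unit_se * within_prev) ** n_sampled

def system_sensitivity(herd_se, among_prev, n_holdings):
    """P(at least 1 infected holding detected across sampled holdings)."""
    return 1 - (1 - among_prev * herd_se) ** n_holdings

hse = herd_sensitivity(unit_se=0.9, within_prev=0.30, n_sampled=20)
print(round(system_sensitivity(hse, among_prev=0.05, n_holdings=60), 3))
```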
Integrating count and detection-nondetection data to model population dynamics.
Zipkin, Elise F; Rossman, Sam; Yackulic, Charles B; Wiens, J David; Thorson, James T; Davis, Raymond J; Grant, Evan H Campbell
2017-06-01
There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture-recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection-nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection-nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection-nondetection data (1995-2014) with newly collected count data (2015-2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance. © 2017 by the Ecological Society of America.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
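A simplified residual-bootstrap version of the idea is sketched below around a kernel-smoothing regression; the paper's procedure and its asymptotic guarantees are more involved, so treat this as a conceptual illustration under stated assumptions only.

```python
# A minimal sketch of residual-bootstrap prediction intervals for anomaly
# flagging; the smoother and interval construction are simplified.
import numpy as np

def ksmooth(x_train, y_train, x, h=0.3):
    w = np.exp(-0.5 * ((x[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 6, 200))
y = np.sin(x) + 0.3 * rng.standard_normal(200)

fit = ksmooth(x, y, x)
resid = y - fit
preds = []
for _ in range(500):                          # bootstrap the residuals
    y_b = fit + rng.choice(resid, size=len(x), replace=True)
    fit_b = ksmooth(x, y_b, x)
    preds.append(fit_b + rng.choice(resid, size=len(x), replace=True))
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)

# Flag observations outside the interval as candidate anomalies.
print(np.mean((y < lo) | (y > hi)))           # roughly 0.05 for clean noise
```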
Sensitivity and specificity of oral HPV detection for HPV-positive head and neck cancer.
Gipson, Brooke J; Robbins, Hilary A; Fakhry, Carole; D'Souza, Gypsyamber
2018-02-01
The incidence of HPV-related head and neck squamous cell carcinoma (HPV-HNSCC) is increasing. Oral samples are easy and non-invasive to collect, but the diagnostic accuracy of oral HPV detection methods for classifying HPV-positive HNSCC tumors has not been well explored. In a systematic review, we identified eight studies of HNSCC patients meeting our eligibility criteria of having: (1) HPV detection in oral rinse or oral swab samples, (2) tumor HPV or p16 testing, (3) a publication date within the last 10 years (January 2007-May 2017, as laboratory methods change), and (4) at least 15 HNSCC cases. Data were abstracted from each study and a meta-analysis performed to calculate sensitivity and specificity. Eight articles meeting inclusion criteria were identified. Among people diagnosed with HNSCC, oral HPV detection has good specificity (92%, 95% CI = 82-97%) and moderate sensitivity (72%, 95% CI = 45-89%) for HPV-positive HNSCC tumor. Results were similar when restricted to studies with only oropharyngeal cancer cases, with oral rinse samples, or testing for HPV16 DNA (instead of any oncogenic HPV) in the oral samples. Among those who already have HNSCC, oral HPV detection has few false-positives but may miss one-half to one-quarter of HPV-related cases (false-negatives). Given these findings in cancer patients, the utility of oral rinses and swabs as screening tests for HPV-HNSCC among healthy populations is probably limited. Copyright © 2017 Elsevier Ltd. All rights reserved.
Point counts are a common method for sampling avian distribution and abundance. Though methods for estimating detection probabilities are available, many analyses use raw counts and do not correct for detectability. We use a removal model of detection within an N-mixture approa...
PCR-based detection of gene transfer vectors: application to gene doping surveillance.
Perez, Irene C; Le Guiner, Caroline; Ni, Weiyi; Lyles, Jennifer; Moullier, Philippe; Snyder, Richard O
2013-12-01
Athletes who illicitly use drugs to enhance their athletic performance are at risk of being banned from sports competitions. Consequently, some athletes may seek new doping methods that they expect to be capable of circumventing detection. With advances in gene transfer vector design and therapeutic gene transfer, and demonstrations of safety and therapeutic benefit in humans, there is an increased probability of the pursuit of gene doping by athletes. In anticipation of the potential for gene doping, assays have been established to directly detect complementary DNA of genes that are top candidates for use in doping, as well as vector control elements. The development of molecular assays that are capable of exposing gene doping in sports can serve as a deterrent and may also identify athletes who have illicitly used gene transfer for performance enhancement. PCR-based methods to detect foreign DNA with high reliability, sensitivity, and specificity include TaqMan real-time PCR, nested PCR, and internal threshold control PCR.
Significance of noisy signals in periodograms
NASA Astrophysics Data System (ADS)
Süveges, Maria
2015-08-01
The detection of tiny periodic signals in noisy and irregularly sampled time series is a challenging task. Once a small peak is found in the periodogram, the next step is to assess how probable it is that pure noise produced a peak so extreme - that is to say, to compute its False Alarm Probability (FAP). This useful measure quantifies the statistical plausibility of the found signal among the noise. However, its derivation from statistical principles is very hard due to the specificities of astronomical periodograms, such as oversampling and the ensuing strong correlation among their values at different frequencies. I will present a method to compute the FAP based on extreme-value statistics (Süveges 2014), and compare it to two other methods, proposed by Baluev (2008) and by Paltani (2004) and Schwarzenberg-Czerny (2012), on signals with various shapes and at different signal-to-noise ratios.
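For orientation, the baseline that such methods refine is the textbook independent-frequencies approximation, in which the FAP follows from the single-frequency p-value and an effective number of independent frequencies; estimating that effective number is exactly what is hard for oversampled, correlated periodograms, and is where the extreme-value and Baluev approaches come in.

```python
# The textbook independent-frequencies FAP baseline; m_eff is the
# hard-to-estimate effective number of independent frequencies.
def fap_independent(p_single, m_eff):
    return 1 - (1 - p_single) ** m_eff

# e.g. a peak with single-frequency p-value 1e-4 scanned over ~5000
# effectively independent frequencies:
print(round(fap_independent(1e-4, 5_000), 3))   # ~0.394
```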
Estimating occupancy and abundance using aerial images with imperfect detection
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Bower, Michael R.
2017-01-01
Species distribution and abundance are critical population characteristics for efficient management, conservation, and ecological insight. Point process models are a powerful tool for modelling distribution and abundance, and can incorporate many data types, including count data, presence-absence data, and presence-only data. Aerial photographic images are a natural tool for collecting data to fit point process models, but aerial images do not always capture all animals that are present at a site. Methods for estimating detection probability for aerial surveys usually include collecting auxiliary data to estimate the proportion of time animals are available to be detected. We developed an approach for fitting point process models using an N-mixture model framework to estimate detection probability for aerial occupancy and abundance surveys. Our method uses multiple aerial images taken of animals at the same spatial location to provide temporal replication of sample sites. The intersection of the images provides multiple counts of individuals at different times. We examined this approach using both simulated and real data of sea otters (Enhydra lutris kenyoni) in Glacier Bay National Park, southeastern Alaska. Using our proposed methods, we estimated detection probability of sea otters to be 0.76, the same as visual aerial surveys that have been used in the past. Further, simulations demonstrated that our approach is a promising tool for estimating occupancy, abundance, and detection probability from aerial photographic surveys. Our methods can be readily extended to data collected using unmanned aerial vehicles, as technology and regulations permit. The generality of our methods for other aerial surveys depends on how well surveys can be designed to meet the assumptions of N-mixture models.
Detection of Staphylococcal Enterotoxin in Food
Casman, Ezra P.; Bennett, Reginald W.
1965-01-01
Methods are described for the extraction and serological detection of trace amounts of enterotoxins A and B in foods incriminated in outbreaks of staphylococcal food poisoning. Evidence is presented for the probable applicability of the methods for the detection of unidentified enterotoxins. PMID:14325876
Gray, Brian R.; Holland, Mark D.; Yi, Feng; Starcevich, Leigh Ann Harrod
2013-01-01
Site occupancy models are commonly used by ecologists to estimate the probabilities of species site occupancy and of species detection. This study addresses the influence of variation in species availability among surveys within sites on site occupancy and detection estimates. Such variation in availability may result from temporary emigration, nonavailability of the species for detection, and spatial sampling of sites when species presence is not uniform within sites. We demonstrate, using Monte Carlo simulations and aquatic vegetation data, that variation in availability and heterogeneity in the probability of availability may yield biases in the expected values of the site occupancy and detection estimates that have traditionally been associated with low detection probabilities and heterogeneity in those probabilities. These findings confirm that the effects of availability may be important for ecologists and managers, and that where such effects are expected, modification of sampling designs and/or analytical methods should be considered. Failure to limit the effects of availability may preclude reliable estimation of the probability of site occupancy.
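The availability effect is easy to reproduce by simulation: the sketch below generates detections where the species occupies a site with probability psi but is available in any given survey only with probability theta, so the apparent per-survey detection probability at occupied sites is theta * p rather than p. Parameter values are illustrative.

```python
# A minimal simulation of availability-induced attenuation of detection.
import numpy as np

rng = np.random.default_rng(5)
n_sites, K = 1_000, 4
psi, theta, p = 0.6, 0.5, 0.8                 # occupancy, availability, detection

z = rng.random(n_sites) < psi                 # occupied sites
avail = rng.random((n_sites, K)) < theta      # available per survey
det = (rng.random((n_sites, K)) < p) & avail & z[:, None]

# Apparent per-survey detection probability at occupied sites is theta * p,
# not p, and its heterogeneity across surveys biases standard estimators.
print(det[z].mean())                          # ~0.40 = theta * p
```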
Use of elastin fibre detection in the diagnosis of ventilator associated pneumonia.
el-Ebiary, M.; Torres, A.; González, J.; Martos, A.; Puig de la Bellacasa, J.; Ferrer, M.; Rodriguez-Roisin, R.
1995-01-01
BACKGROUND--Elastin fibre detection could be a simple and reliable marker of ventilator associated pneumonia. To confirm this, a prospective study was undertaken to evaluate the diagnostic yield of elastin fibre detection in the diagnosis of ventilator associated pneumonia. METHODS--Seventy eight mechanically ventilated patients were evaluated by examining endotracheal aspirates for the presence of elastin fibres. All patients were previously treated with antibiotics. Quantitative bacterial cultures of endotracheal aspirates and protected specimen brush samples were also performed. Patients were classified into three diagnostic categories: group 1, definite pneumonia (n = 25); group 2, probable pneumonia (n = 35); and group 3, controls (n = 18). RESULTS--Patients with definite and probable pneumonia were grouped together. The presence of elastin fibres in endotracheal aspirate samples was more frequent in groups 1 and 2, being found in 19 of the 60 patients compared with five of the control group. Although the presence of elastin fibres had a low sensitivity (32%), it was a reasonably specific marker (72%) of pneumonia. This specificity increased to 86% and 81% respectively when only Gram negative bacilli and Pseudomonas aeruginosa pneumonia were considered. Again, calculated sensitivity was 43% and 44% when analysing cases infected by Gram negative bacilli and Ps aeruginosa, respectively. The negative predictive value of the detection of elastin fibres in pneumonia caused by Ps aeruginosa was 81%. Detection was more frequent with infection by Gram negative bacilli (14/19), particularly with Ps aeruginosa (8/14). By contrast, pneumonia due to Gram positive cocci or non-bacterial agents uncommonly resulted in positive elastin fibre preparations (4/19, 21%). When analysing patients with and without chronic obstructive pulmonary disease, the diagnostic value of elastin fibre detection did not change. CONCLUSIONS--Potassium hydroxide preparation of elastin fibres is a rapid and simple specific marker of ventilator associated pneumonia and may be a useful technique to help diagnose pulmonary infections in mechanically ventilated patients, although this assessment is at present limited to patients without adult respiratory distress syndrome. PMID:7886642
Detection performance in clutter with variable resolution
NASA Astrophysics Data System (ADS)
Schmieder, D. E.; Weathersby, M. R.
1983-07-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
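A minimal sketch of the clutter statistic described above appears below, read here as the RMS of the standard deviations of contiguous, roughly target-sized cells (the usual Schmieder-Weathersby form); the cell size and synthetic image are illustrative assumptions.

```python
# A minimal sketch of a Schmieder-Weathersby-style clutter metric and SCR.
import numpy as np

def clutter_metric(image, cell):
    h, w = image.shape
    stds = [image[r:r + cell, c:c + cell].std()
            for r in range(0, h - cell + 1, cell)
            for c in range(0, w - cell + 1, cell)]
    return np.sqrt(np.mean(np.square(stds)))   # RMS of per-cell std devs

rng = np.random.default_rng(6)
scene = rng.normal(100, 20, (256, 256))        # synthetic background
target_contrast = 30.0
scr = target_contrast / clutter_metric(scene, cell=16)
print(round(scr, 2))                           # signal-to-clutter ratio ~1.5
```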
An Experiment Quantifying The Effect Of Clutter On Target Detection
NASA Astrophysics Data System (ADS)
Weathersby, Marshall R.; Schmieder, David E.
1985-01-01
Experiments were conducted to determine the influence of background clutter on target detection criteria. The experiment consisted of placing observers in front of displayed images on a TV monitor. Observer ability to detect military targets embedded in simulated natural and manmade background clutter was measured when there was unlimited viewing time. Results were described in terms of detection probability versus target resolution for various signal to clutter ratios (SCR). The experiments were preceded by a search for a meaningful clutter definition. The selected definition was a statistical measure computed by averaging the standard deviation of contiguous scene cells over the whole scene. The cell size was comparable to the target size. Observer test results confirmed the expectation that the resolution required for a given detection probability was a continuum function of the clutter level. At the lower SCRs the resolution required for a high probability of detection was near 6 line pairs per target (LP/TGT), while at the higher SCRs it was found that a resolution of less than 0.25 LP/TGT would yield a high probability of detection. These results are expected to aid in target acquisition performance modeling and to lead to improved specifications for imaging automatic target screeners.
Space debris detection in optical image sequences.
Xi, Jiangbo; Wen, Desheng; Ersoy, Okan K; Yi, Hongwei; Yao, Dalei; Song, Zongxi; Xi, Shaobo
2016-10-01
We present a high-accuracy, low false-alarm rate, and low computational-cost methodology for removing stars and noise and detecting space debris with low signal-to-noise ratio (SNR) in optical image sequences. First, time-index filtering and bright star intensity enhancement are implemented to remove stars and noise effectively. Then, a multistage quasi-hypothesis-testing method is proposed to detect the pieces of space debris with continuous and discontinuous trajectories. For this purpose, a time-index image is defined and generated. Experimental results show that the proposed method can detect space debris effectively without any false alarms. When the SNR is higher than or equal to 1.5, the detection probability can reach 100%, and when the SNR is as low as 1.3, 1.2, and 1, it can still achieve 99%, 97%, and 85% detection probabilities, respectively. Additionally, two large sets of image sequences are tested to show that the proposed method performs stably and effectively.
Kanankege, Kaushi S. T.; Alkhamis, Moh A.; Phelps, Nicholas B. D.; Perez, Andres M.
2018-01-01
Zebra mussels (ZMs) (Dreissena polymorpha) and Eurasian watermilfoil (EWM) (Myriophyllum spicatum) are aggressive aquatic invasive species posing a conservation burden on Minnesota. Recognizing areas at high risk for invasion is a prerequisite for the implementation of risk-based prevention and mitigation management strategies. The early detection of invasion has been challenging, due in part to the imperfect observation process of invasions including the absence of a surveillance program, reliance on public reporting, and limited resource availability, which results in reporting bias. To predict the areas at high risk for invasions, while accounting for underreporting, we combined network analysis and probability co-kriging to estimate the risk of ZM and EWM invasions. We used network analysis to generate a waterbody-specific variable representing boater traffic, a known high risk activity for human-mediated transportation of invasive species. In addition, co-kriging was used to estimate the probability of species introduction, using waterbody-specific variables. A co-kriging model containing distance to the nearest ZM infested location, boater traffic, and road access was used to recognize the areas at high risk for ZM invasions (AUC = 0.78). The EWM co-kriging model included distance to the nearest EWM infested location, boater traffic, and connectivity to infested waterbodies (AUC = 0.76). Results suggested that, by 2015, nearly 20% of the waterbodies in Minnesota were at high risk of ZM (12.45%) or EWM (12.43%) invasions, whereas only 125/18,411 (0.67%) and 304/18,411 (1.65%) are currently infested, respectively. Prediction methods presented here can support decisions related to solving the problems of imperfect detection, which subsequently improve the early detection of biological invasions. PMID:29354638
Kalaiselvan, Sagadevan; Sankar, Sathish; Ramamurthy, Mageshbabu; Ghosh, Asit Ranjan; Nandagopal, Balaji; Sridharan, Gopalan
2017-08-01
Hantaviruses are emerging viral pathogens that cause hantavirus cardiopulmonary syndrome (HCPS) in the Americas, a severe, sometimes fatal, respiratory disease in humans with a case fatality rate of ≥50%. IgM- and IgG-based serological detection methods are the most common approaches used for the laboratory diagnosis of hantaviruses. Such emerging viral pathogens emphasize the need for improved rapid diagnostic devices and vaccines incorporating pan-specific epitopes of genotypes. We predicted linear B-cell epitopes for hantaviruses that are specific to genotypes causing HCPS in humans using in silico prediction servers. We modeled the Andes and Sin Nombre hantavirus nucleocapsid proteins to locate the identified epitopes. Based on the mean percent prediction probability score, the epitope IMASKSVGS/TAEEKLKKKSAF was identified as the best candidate B-cell epitope specific for hantaviruses causing HCPS. Promiscuous epitopes were identified in the C-terminal region of the protein. Our study is the first to report pan-specific B-cell epitopes for developing immunoassays for the detection of antibodies to hantaviruses causing HCPS. Identification of epitopes with pan-specific recognition of all genotypes causing HCPS could be valuable for the development of immunodiagnostic tools toward pan-detection of hantavirus antibodies in ELISA. J. Cell. Biochem. 118: 2320-2324, 2017. © 2017 Wiley Periodicals, Inc.
Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Nolen, Matthew S.; Wagner, Brian K.
2017-01-01
Crayfish are ecologically important in freshwater systems worldwide and are imperiled in North America and globally. We sought to examine landscape- to local-scale environmental variables related to occupancy and detection probability of a suite of stream-dwelling crayfish species. We used a quantitative kickseine method to sample crayfish presence at 102 perennial stream sites with eight surveys per site. We modeled occupancy (psi) and detection probability (P) and local- and landscape-scale environmental covariates. We developed a set of a priori candidate models for each species and ranked models using (Q)AICc. Detection probabilities and occupancy estimates differed among crayfish species with Orconectes eupunctus, O. marchandi, and Cambarus hubbsi being relatively rare (psi < 0.20) with moderate (0.46–0.60) to high (0.81) detection probability and O. punctimanus and O. ozarkae being relatively common (psi > 0.60) with high detection probability (0.81). Detection probability was often related to local habitat variables current velocity, depth, or substrate size. Important environmental variables for crayfish occupancy were species dependent but were mainly landscape variables such as stream order, geology, slope, topography, and land use. Landscape variables strongly influenced crayfish occupancy and should be considered in future studies and conservation plans.
Bowman, Howard; Filetti, Marco; Janssen, Dirk; Su, Li; Alsufyani, Abdulmajeed; Wyble, Brad
2013-01-01
We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (by using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended the classic Event-Related Potential (ERP) techniques (such as peak-to-peak) by applying Randomisation, a form of Monte Carlo resampling, which we used to detect deception at an individual level. In order to make the deployment of the system simple and rapid, we utilised data from three electrodes only: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method so that each participant was assigned a single p-value, which represents the combined probability that a specific participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient by detecting breakthrough into conscious awareness using EEG. PMID:23372697
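The per-participant combination step is standard Fisher's method; assuming one randomisation-test p-value per electrode (Fz, Cz, Pz), it can be reproduced with SciPy's built-in routine as sketched below, with illustrative p-values.

```python
# Combining per-electrode randomisation-test p-values with Fisher's method.
from scipy.stats import combine_pvalues

p_fz, p_cz, p_pz = 0.04, 0.20, 0.03           # illustrative electrode p-values
stat, p_combined = combine_pvalues([p_fz, p_cz, p_pz], method="fisher")
print(round(p_combined, 4))                   # flag deception if below 0.05
```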
Method for Determining the Sensitivity of a Physical Security System.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Speed, Ann; Gauthier, John H.; Hoffman, Matthew John
Modern systems, such as physical security systems, are often designed to involve complex interactions of technological and human elements. Evaluation of the performance of these systems often overlooks the human element. A method is proposed here to expand the concept of sensitivity—as denoted by d’—from signal detection theory (Green & Swets 1966; Macmillan & Creelman 2005), which came out of the field of psychophysics, to cover not only human threat detection but also other human functions plus the performance of technical systems in a physical security system, thereby including humans in the overall evaluation of system performance. New in this method is the idea that probabilities of hits (accurate identification of threats) and false alarms (saying “threat” when there is not one), which are used to calculate the d’ of the system, can be applied to technologies and, furthermore, to different functions in the system beyond simple yes-no threat detection. At the most succinct level, the method returns a single number that represents the effectiveness of a physical security system; specifically, the balance between the handling of actual threats and the distraction of false alarms. The method can be automated, and the constituent parts revealed, such that given an interaction graph that indicates the functional associations of system elements and the individual probabilities of hits and false alarms for those elements, it will return the d’ of the entire system as well as d’ values for individual parts. The method can also return a measure of the response bias of the system. One finding of this work is that the d’ for a physical security system can be relatively poor in spite of having excellent d’s for each of its individual functional elements.
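The underlying sensitivity index from signal detection theory, which the method extends from human observers to system elements, is computed from hit and false-alarm probabilities as sketched below; the example rates are illustrative.

```python
# d' = z(hit rate) - z(false-alarm rate), plus the criterion c as a
# response-bias measure; applicable to an operator, sensor, or subsystem.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    return -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))

print(round(d_prime(0.90, 0.10), 2))          # 2.56
print(round(criterion(0.90, 0.10), 2))        # 0.0 -> unbiased
```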
A Driving Cycle Detection Approach Using Map Service API
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Gonder, Jeffrey D
Following advancements in smartphone and portable global positioning system (GPS) data collection, wearable GPS data have realized extensive use in transportation surveys and studies. The task of detecting driving cycles (driving or car-mode trajectory segments) from wearable GPS data has been the subject of much research. Specifically, distinguishing driving cycles from other motorized trips (such as taking a bus) is the main research problem in this paper. Many mode detection methods only focus on raw GPS speed data while some studies apply additional information, such as geographic information system (GIS) data, to obtain better detection performance. Procuring and maintaining dedicated road GIS data are costly and not trivial, whereas the technical maturity and broad use of map service application program interface (API) queries offers opportunities for mode detection tasks. The proposed driving cycle detection method takes advantage of map service APIs to obtain high-quality car-mode API route information and uses a trajectory segmentation algorithm to find the best-matched API route. The car-mode API route data combined with the actual route information, including the actual mode information, are used to train a logistic regression machine learning model, which estimates car modes and non-car modes with probability rates. The experimental results show promise for the proposed method's ability to detect vehicle mode accurately.
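The final classification stage described above can be sketched as follows; the features (mean speed, speed variability, agreement with the best-matched car-mode API route) are plausible stand-ins, not the paper's exact inputs:

    # Logistic regression scoring car vs. non-car modes from trajectory
    # features; training data here are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # columns: mean speed (m/s), speed std (m/s), fraction of GPS points
    # matching the best-matched car-mode API route
    X = np.array([[12.0, 3.1, 0.95],
                  [ 6.5, 2.2, 0.40],
                  [11.0, 4.0, 0.90],
                  [ 5.0, 1.8, 0.35]])
    y = np.array([1, 0, 1, 0])   # 1 = car mode, 0 = non-car mode

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba([[10.5, 3.5, 0.88]])[:, 1])   # P(car mode)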
Berkas, Wayne R.
2000-01-01
Water samples from 27 wells completed in and near the Shell Valley aquifer were analyzed for benzene, toluene, ethylbenzene, and xylene (BTEX), polynuclear aromatic hydrocarbons (PAH), polychlorinated biphenyls (PCB), and pentachlorophenol (PCP) using the enzyme-linked immunoassay method. The analyses indicated the presence of PAH, PCB, and PCP in the study area. However, an individual compound at a high concentration or many compounds at low concentrations could cause the detections. Therefore, selected samples were analyzed using the gas chromatography (GC) method, which can detect individual compounds and determine the concentrations of those compounds. Concentrations for all compounds detected using the GC method were less than the minimum reporting levels (MRLs) for each constituent, indicating numerous compounds at low concentrations probably caused the immunoassay detections. The GC method also can detect compounds other than BTEX, PAH, PCB, and PCP. Concentrations for 81 of the additional compounds were determined and were less than the MRLs. Four compounds that could not be quantified accurately using the requested analytical methods also were detected. Acetone was detected in 4 of the 27 wells, 2-butanone was detected in 3 of the 27 wells, prometon was detected in 1 of the 27 wells, and tetrahydrofuran was detected in 9 of the 27 wells. Acetone, 2-butanone, and tetrahydrofuran probably leached from the polyvinyl chloride (PVC) pipe and joint glue and probably are not contaminants from the aquifer. Prometon is a herbicide that controls most annual and many perennial broadleaf weeds and primarily is used on roads and railroad tracks. The one occurrence of prometon could be caused by overspraying for weeds.
Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L
2015-09-15
The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also in the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments were undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP reveal themselves in real data using these methods. Specifically, the goals of this evaluation were to (1) assess if the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which registry-specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the assessment of uncertainty in patient geocodes, relative to existing geocoding related uncertainty metrics.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)
2005-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance system and method having an adaptive sequential probability fault detection test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2006-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)
2008-01-01
System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.
Improving detection of low SNR targets using moment-based detection
NASA Astrophysics Data System (ADS)
Young, Shannon R.; Steward, Bryan J.; Hawks, Michael; Gross, Kevin C.
2016-05-01
Increases in the number of cameras deployed, frame rate, and detector array sizes have led to a dramatic increase in the volume of motion imagery data that is collected. Without a corresponding increase in analytical manpower, much of the data is not analyzed to full potential. This creates a need for fast, automated, and robust methods for detecting signals of interest. Current approaches fall into two categories: detect-before-track (DBT) methods, which are fast but often poor at detecting dim targets, and track-before-detect (TBD) methods, which can offer better performance but are typically much slower. This research seeks to contribute to the near-real-time detection of low SNR, unresolved moving targets through an extension of earlier work on higher-order moments anomaly detection, a method that exploits both spatial and temporal information but is still computationally efficient and massively parallelizable. It was found that intelligent selection of parameters can improve probability of detection by as much as 25% compared to earlier work with higher-order moments. The present method can reduce detection thresholds by 40% compared to the Reed-Xiaoli anomaly detector for low SNR targets (for a given probability of detection and false alarm).
Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2010-01-01
When facing a conjunction between space objects, decision makers must choose whether or not to maneuver for collision avoidance. We apply a well-known decision procedure, the sequential probability ratio test, to this problem. We propose two approaches to the problem solution, one based on a frequentist method, and the other on a Bayesian method. The frequentist method does not require any prior knowledge concerning the conjunction, while the Bayesian method assumes knowledge of prior probability densities. Our results show that both methods achieve desired missed detection rates, but the frequentist method's false alarm performance is inferior to the Bayesian method's.
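For reference, a minimal sketch of the frequentist (Wald) form of the sequential probability ratio test follows; the Gaussian observation model and all parameter values are illustrative, not those of the conjunction problem:

    # Wald SPRT: accumulate log-likelihood ratios until a boundary set by
    # the desired error rates alpha (false alarm) and beta (missed
    # detection) is crossed.
    import math
    from scipy.stats import norm

    def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
        upper = math.log((1 - beta) / alpha)   # decide H1 (e.g., maneuver)
        lower = math.log(beta / (1 - alpha))   # decide H0 (no maneuver)
        llr = 0.0
        for i, x in enumerate(samples, 1):
            llr += norm.logpdf(x, mu1, sigma) - norm.logpdf(x, mu0, sigma)
            if llr >= upper:
                return "H1", i
            if llr <= lower:
                return "H0", i
        return "undecided", len(samples)

    print(sprt([1.1, 0.8, 1.3, 0.9, 1.2], mu0=0.0, mu1=1.0, sigma=0.5))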
Probabilistic Surface Characterization for Safe Landing Hazard Detection and Avoidance (HDA)
NASA Technical Reports Server (NTRS)
Johnson, Andrew E. (Inventor); Ivanov, Tonislav I. (Inventor); Huertas, Andres (Inventor)
2015-01-01
Apparatuses, systems, computer programs and methods for performing hazard detection and avoidance for landing vehicles are provided. Hazard assessment takes into consideration the geometry of the lander. Safety probabilities are computed for a plurality of pixels in a digital elevation map. The safety probabilities are combined for pixels associated with one or more aim points and orientations. A worst case probability value is assigned to each of the one or more aim points and orientations.
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
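A minimal sketch of such an objective Bayesian simulation is given below, with Jeffreys Beta posteriors for sensitivity, specificity, and pretest probability propagated to the positive predictive value; all counts are hypothetical:

    # Monte Carlo interval for the posttest probability (PPV).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    sens = rng.beta(45 + 0.5, 5 + 0.5, n)    # e.g., 45/50 diseased test positive
    spec = rng.beta(90 + 0.5, 10 + 0.5, n)   # e.g., 90/100 healthy test negative
    prev = rng.beta(20 + 0.5, 80 + 0.5, n)   # e.g., 20/100 with the disorder

    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    print(np.percentile(ppv, [2.5, 97.5]))   # 95% interval for the posttest probability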
Powell, Rebecca L R; Ouellette, Ian; Lindsay, Ross W; Parks, Christopher L; King, C Richter; McDermott, Adrian B; Morrow, Gavin
2013-06-01
Results from recent HIV-1 vaccine studies have indicated that high serum antibody (Ab) titers may not be necessary for Ab-mediated protection, and that Abs localized to mucosal sites might be critical for preventing infection. Enzyme-linked immunosorbent assay (ELISA) has been used for decades as the gold standard for Ab measurement, though recently, highly sensitive microsphere-based assays have become available, with potential utility for improved detection of Abs. In this study, we assessed the Bio-Plex® Suspension Array System for the detection of simian immunodeficiency virus (SIV)-specific Abs in rhesus macaques (RMs) chronically infected with SIV, whose serum or mucosal SIV-specific Ab titers were negative by ELISA. We developed a SIVmac239-specific 4-plex bead array for the simultaneous detection of Abs binding to Env, Gag, Pol, and Nef. The 4-plex assay was used to quantify SIV-specific serum IgG and rectal swab IgA titers from control (SIV-naive) and SIVmac239-infected RMs. The Bio-Plex assay specifically detected anti-SIV Abs in specimens from SIV-infected animals for all four analytes when compared to SIV-naive control samples (p≤0.04). Furthermore, in 70% of Env and 79% of Gag ELISA-negative serum samples, specific Ab was detected using the Bio-Plex assay. Similarly, 71% of Env and 48% of Gag ELISA-negative rectal swab samples were identified as positive using the Bio-Plex assay. Importantly, assay specificity (i.e., probability of true negatives) was comparable to ELISA (94%-100%). The results reported here indicate that microsphere-based methods provide a substantial improvement over ELISA for the detection of Ab responses, aid in detecting specific Abs when analyzing samples containing low levels of Abs, such as during the early stages of a vaccine trial, and may be valuable in attempts to link protective efficacy of vaccines with induced Ab responses.
Goal-directed ultrasound in the detection of long-bone fractures
NASA Technical Reports Server (NTRS)
Marshburn, Thomas H.; Legome, Eric; Sargsyan, Ashot; Li, Shannon Melton James; Noble, Vicki A.; Dulchavsky, Scott A.; Sims, Carrie; Robinson, David
2004-01-01
BACKGROUND: New portable ultrasound (US) systems are capable of detecting fractures in the remote setting. However, the accuracy of ultrasound by physicians with minimal ultrasound training is unknown. METHODS: After one hour of standardized training, physicians with minimal US experience clinically evaluated patients presenting with pain and trauma to the upper arm or leg. The investigators then performed a long-bone US evaluation, recording their impression of fracture presence or absence. Results of the examination were compared with routine plain radiography or computed tomography (CT). RESULTS: 58 patients were examined. The sensitivity and specificity of US were 92.9% and 83.3%, and of the physical examination were 78.6% and 90.0%, respectively. US provided improved sensitivity with less specificity compared with physical examination in the detection of fractures in long bones. CONCLUSION: Ultrasound scans by minimally trained clinicians may be used to rule out a long-bone fracture in patients with a medium to low probability of fracture.
Multi-Level Anomaly Detection on Time-Varying Graph Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridges, Robert A; Collins, John P; Ferragut, Erik M
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labeled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating probabilities at finer levels, and these closely related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. To illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
Community Participation in Chagas Disease Vector Surveillance: Systematic Review
Abad-Franch, Fernando; Vega, M. Celeste; Rolón, Miriam S.; Santos, Walter S.; Rojas de Arias, Antonieta
2011-01-01
Background Vector control has substantially reduced Chagas disease (ChD) incidence. However, transmission by household-reinfesting triatomines persists, suggesting that entomological surveillance should play a crucial role in the long-term interruption of transmission. Yet, infestation foci become smaller and harder to detect as vector control proceeds, and highly sensitive surveillance methods are needed. Community participation (CP) and vector-detection devices (VDDs) are both thought to enhance surveillance, but this remains to be thoroughly assessed. Methodology/Principal Findings We searched Medline, Web of Knowledge, Scopus, LILACS, SciELO, the bibliographies of retrieved studies, and our own records. Data from studies describing vector control and/or surveillance interventions were extracted by two reviewers. Outcomes of primary interest included changes in infestation rates and the detection of infestation/reinfestation foci. Most results likely depended on study- and site-specific conditions, precluding meta-analysis, but we re-analysed data from studies comparing vector control and detection methods whenever possible. Results confirm that professional, insecticide-based vector control is highly effective, but also show that reinfestation by native triatomines is common and widespread across Latin America. Bug notification by householders (the simplest CP-based strategy) significantly boosts vector detection probabilities; in comparison, both active searches and VDDs perform poorly, although they might in some cases complement each other. Conclusions/Significance CP should become a strategic component of ChD surveillance, but only professional insecticide spraying seems consistently effective at eliminating infestation foci. Involvement of stakeholders at all process stages, from planning to evaluation, would probably enhance such CP-based strategies. PMID:21713022
Bayesian methods for outliers detection in GNSS time series
NASA Astrophysics Data System (ADS)
Qianqian, Zhang; Qingming, Gui
2013-07-01
This article is concerned with the problem of detecting outliers in GNSS time series based on Bayesian statistical theory. Firstly, a new model is proposed to simultaneously detect different types of outliers by introducing a classification variable for each outlier type; the problem of outlier detection is converted into the computation of the corresponding posterior probabilities, and an algorithm for computing these posterior probabilities based on a standard Gibbs sampler is designed. Secondly, we analyze in depth the causes of masking and swamping when detecting patches of additive outliers, and propose an unmasking Bayesian method for detecting additive outlier patches based on an adaptive Gibbs sampler. Thirdly, the correctness of the theory and methods proposed above is illustrated on simulated data and then on real GNSS observations, such as cycle slip detection in carrier phase data. The examples show that the Bayesian methods for outlier detection in GNSS time series proposed in this paper are capable of detecting not only isolated outliers but also additive outlier patches. Furthermore, they can be used successfully to process cycle slips in phase data, which solves the problem of small cycle slips.
Segmentation and automated measurement of chronic wound images: probability map approach
NASA Astrophysics Data System (ADS)
Ahmad Fauzi, Mohammad Faizal; Khansa, Ibrahim; Catignani, Karen; Gordillo, Gayle; Sen, Chandan K.; Gurcan, Metin N.
2014-03-01
An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need to develop software tools that analyze wound images to characterize wound tissue composition, measure wound size, and monitor changes over time. This process, when done manually, is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that can characterize chronic wounds containing granulation, slough and eschar tissues. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides the region growing segmentation process. The red, yellow and black probability maps are designed to handle the granulation, slough and eschar tissues, respectively, found in wounds, while the white probability map is designed to detect the white label card used for measurement calibration. The innovative aspects of this work include: 1) definition of a wound-characteristics-specific probability map for segmentation; 2) computationally efficient region growing on a 4D map; 3) auto-calibration of measurements with the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth independently generated by the consensus of two clinicians. While the inter-reader agreement between the readers is 85.5%, the computer achieves an accuracy of 80%.
Detection theory for accurate and non-invasive skin cancer diagnosis using dynamic thermal imaging
Godoy, Sebastián E.; Hayat, Majeed M.; Ramirez, David A.; Myers, Stephen A.; Padilla, R. Steven; Krishna, Sanjay
2017-01-01
Skin cancer is the most common cancer in the United States with over 3.5M annual cases. Presently, visual inspection by a dermatologist has good sensitivity (> 90%) but poor specificity (< 10%), especially for melanoma, which leads to a high number of unnecessary biopsies. Here we use dynamic thermal imaging (DTI) to demonstrate a rapid, accurate and non-invasive imaging system for detection of skin cancer. In DTI, the lesion is cooled down and the thermal recovery is recorded using infrared imaging. The thermal recovery curves of the suspected lesions are then utilized in the context of continuous-time detection theory in order to define an optimal statistical decision rule such that the sensitivity of the algorithm is guaranteed to be at a maximum for every prescribed false-alarm probability. The proposed methodology was tested in a pilot study including 140 human subjects demonstrating a sensitivity in excess of 99% for a prescribed specificity in excess of 99% for detection of skin cancer. To the best of our knowledge, this is the highest reported accuracy for any non-invasive skin cancer diagnosis method. PMID:28736673
A Cloud Boundary Detection Scheme Combined with ASLIC and CNN Using ZY-3, GF-1/2 Satellite Imagery
NASA Astrophysics Data System (ADS)
Guo, Z.; Li, C.; Wang, Z.; Kwok, E.; Wei, X.
2018-04-01
Remote sensing optical image cloud detection is one of the most important problems in remote sensing data processing. Aiming at the information loss caused by cloud cover, a cloud detection method based on a convolutional neural network (CNN) is presented in this paper. Firstly, a deep CNN is used to learn a multi-level feature-generation model of cloud from the training samples. Secondly, the adaptive simple linear iterative clustering (ASLIC) method is used to divide the images into superpixels. Finally, the probability that each superpixel belongs to a cloud region is predicted by the trained network model, thereby generating a cloud probability map. Typical regions of GF-1/2 and ZY-3 imagery were selected for the cloud detection tests and compared with the traditional SLIC method. The experimental results show that the average accuracy of cloud detection increased by more than 5%, and that the method detects both thin and thick clouds, as well as whole cloud boundaries, well on different imaging platforms.
Nikolay, Birgit; Salje, Henrik; Sturm-Ramirez, Katharine; Azziz-Baumgartner, Eduardo; Homaira, Nusrat; Iuliano, A. Danielle; Paul, Repon C.; Hossain, M. Jahangir; Cauchemez, Simon; Gurley, Emily S.
2017-01-01
Background The International Health Regulations outline core requirements to ensure the detection of public health threats of international concern. Assessing the capacity of surveillance systems to detect these threats is crucial for evaluating a country’s ability to meet these requirements. Methods and Findings We propose a framework to evaluate the sensitivity and representativeness of hospital-based surveillance and apply it to severe neurological infectious diseases and fatal respiratory infectious diseases in Bangladesh. We identified cases in selected communities within surveillance hospital catchment areas using key informant and house-to-house surveys and ascertained where cases had sought care. We estimated the probability of surveillance detecting different sized outbreaks by distance from the surveillance hospital and compared characteristics of cases identified in the community and cases attending surveillance hospitals. We estimated that surveillance detected 26% (95% CI 18%–33%) of severe neurological disease cases and 18% (95% CI 16%–21%) of fatal respiratory disease cases residing at 10 km distance from a surveillance hospital. Detection probabilities decreased markedly with distance. The probability of detecting small outbreaks (three cases) dropped below 50% at distances greater than 26 km for severe neurological disease and at distances greater than 7 km for fatal respiratory disease. Characteristics of cases attending surveillance hospitals were largely representative of all cases; however, neurological disease cases aged <5 y or from the lowest socioeconomic group and fatal respiratory disease cases aged ≥60 y were underrepresented. Our estimates of outbreak detection rely on suspected cases that attend a surveillance hospital receiving laboratory confirmation of disease and being reported to the surveillance system. The extent to which this occurs will depend on disease characteristics (e.g., severity and symptom specificity) and surveillance resources. Conclusion We present a new approach to evaluating the sensitivity and representativeness of hospital-based surveillance, making it possible to predict its ability to detect emerging threats. PMID:28095468
Probability of detection of nests and implications for survey design
Smith, P.A.; Bart, J.; Lanctot, Richard B.; McCaffery, B.J.; Brown, S.
2009-01-01
Surveys based on double sampling include a correction for the probability of detection by assuming complete enumeration of birds in an intensively surveyed subsample of plots. To evaluate this assumption, we calculated the probability of detecting active shorebird nests by using information from observers who searched the same plots independently. Our results demonstrate that this probability varies substantially by species and stage of the nesting cycle but less by site or density of nests. Among the species we studied, the estimated single-visit probability of nest detection during the incubation period varied from 0.21 for the White-rumped Sandpiper (Calidris fuscicollis), the most difficult species to detect, to 0.64 for the Western Sandpiper (Calidris mauri), the most easily detected species, with a mean across species of 0.46. We used these detection probabilities to predict the fraction of persistent nests found over repeated nest searches. For a species with the mean value for detectability, the detection rate exceeded 0.85 after four visits. This level of nest detection was exceeded in only three visits for the Western Sandpiper, but six to nine visits were required for the White-rumped Sandpiper, depending on the type of survey employed. Our results suggest that the double-sampling method's requirement of nearly complete counts of birds in the intensively surveyed plots is likely to be met for birds with nests that survive over several visits of nest searching. Individuals with nests that fail quickly or individuals that do not breed can be detected with high probability only if territorial behavior is used to identify likely nesting pairs. © The Cooper Ornithological Society, 2009.
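The abstract's visit numbers can be checked with the standard cumulative-detection formula 1 - (1 - p)^k for k independent searches; note that this simple calculation ignores nest failure between visits, which the study's survey-type-specific figures additionally account for:

    # Number of visits needed for cumulative detection to reach a target level.
    import math

    def visits_needed(p, target=0.85):
        return math.ceil(math.log(1 - target) / math.log(1 - p))

    print(visits_needed(0.46))   # mean species: 4 visits (1 - 0.54**4 = 0.915)
    print(visits_needed(0.21))   # White-rumped Sandpiper: 9 visits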
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values; an important consideration since probability of detection is often plotted in linear coordinates. Review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
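As an illustration of the suggestion above, probability of detection can be modelled as a log-normal cumulative distribution function of a stimulus variable such as contrast; the shape and median parameters below are arbitrary:

    # Log-normal model of probability of detection vs. target contrast.
    import numpy as np
    from scipy.stats import lognorm

    s, median = 0.5, 1.0   # shape (sigma of ln x) and median threshold
    contrast = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
    print(lognorm.cdf(contrast, s, scale=median))   # P(detect); 0.5 at the median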
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Shangjie; Hara, Wendy
Purpose: To develop a reliable method to estimate electron density based on anatomic magnetic resonance imaging (MRI) of the brain. Methods and Materials: We proposed a unifying multi-atlas approach for electron density estimation based on standard T1- and T2-weighted MRI. First, a composite atlas was constructed through a voxelwise matching process using multiple atlases, with the goal of mitigating effects of inherent anatomic variations between patients. Next we computed for each voxel 2 kinds of conditional probabilities: (1) electron density given its image intensity on T1- and T2-weighted MR images; and (2) electron density given its spatial location in a reference anatomy, obtained by deformable image registration. These were combined into a unifying posterior probability density function using the Bayesian formalism, which provided the optimal estimates for electron density. We evaluated the method on 10 patients using leave-one-patient-out cross-validation. Receiver operating characteristic analyses for detecting different tissue types were performed. Results: The proposed method significantly reduced the errors in electron density estimation, with a mean absolute Hounsfield unit error of 119, compared with 140 and 144 (P<.0001) using conventional T1-weighted intensity and geometry-based approaches, respectively. For detection of bony anatomy, the proposed method achieved an 89% area under the curve, 86% sensitivity, 88% specificity, and 90% accuracy, which improved upon intensity and geometry-based approaches (area under the curve: 79% and 80%, respectively). Conclusion: The proposed multi-atlas approach provides robust electron density estimation and bone detection based on anatomic MRI. If validated on a larger population, our work could enable the use of MRI as a primary modality for radiation treatment planning.
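The fusion step can be pictured with a small numerical sketch: under a conditional-independence assumption and a flat prior, the two conditional terms multiply into an (unnormalised) posterior whose maximiser is the electron density estimate. The grid and probabilities below are illustrative only:

    # Bayesian fusion of intensity-based and location-based evidence.
    import numpy as np

    density_grid = np.array([-1000.0, 0.0, 1000.0, 2000.0])  # candidate HU values
    p_given_intensity = np.array([0.05, 0.60, 0.30, 0.05])   # from T1/T2 intensities
    p_given_location  = np.array([0.10, 0.40, 0.40, 0.10])   # from atlas registration

    posterior = p_given_intensity * p_given_location
    posterior /= posterior.sum()
    print(density_grid[np.argmax(posterior)])   # MAP estimate (here: 0 HU)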
NASA Technical Reports Server (NTRS)
Gutierrez, Alberto, Jr.
1995-01-01
This dissertation evaluates receiver-based methods for mitigating the effects due to nonlinear bandlimited signal distortion present in high data rate satellite channels. The effects of the nonlinear bandlimited distortion are illustrated for digitally modulated signals. A lucid development of the low-pass Volterra discrete time model for a nonlinear communication channel is presented. In addition, finite-state machine models are explicitly developed for a nonlinear bandlimited satellite channel. A nonlinear fixed equalizer based on Volterra series has previously been studied for compensation of noiseless signal distortion due to a nonlinear satellite channel. This dissertation studies adaptive Volterra equalizers on a downlink-limited nonlinear bandlimited satellite channel. We employ performance in the mean-square-error and probability-of-error senses as figures of merit. In addition, a receiver consisting of a fractionally-spaced equalizer (FSE) followed by a Volterra equalizer (FSE-Volterra) is found to give improvement beyond that gained by the Volterra equalizer. Significant probability of error performance improvement is found for multilevel modulation schemes. Also, it is found that probability of error improvement is more significant for modulation schemes, both constant-amplitude and multilevel, that require higher signal-to-noise ratios (i.e., higher modulation orders) for reliable operation. The maximum likelihood sequence detection (MLSD) receiver for a nonlinear satellite channel, a bank of matched filters followed by a Viterbi detector, serves as a probability of error lower bound for the Volterra and FSE-Volterra equalizers. However, this receiver has not been evaluated for a specific satellite channel. In this work, an MLSD receiver is evaluated for a specific downlink-limited satellite channel. Because of the bank of matched filters, the MLSD receiver may be high in complexity. Consequently, the probability of error performance of a more practical suboptimal MLSD receiver, requiring only a single receive filter, is evaluated.
Recognition of pigment network pattern in dermoscopy images based on fuzzy classification of pixels.
Garcia-Arroyo, Jose Luis; Garcia-Zapirain, Begonya
2018-01-01
One of the most relevant dermoscopic patterns is the pigment network. An innovative pattern recognition method is presented for its detection in dermoscopy images. It consists of two steps. In the first, by means of a supervised machine learning process and after extraction of different colour and texture features, pixels are given a fuzzy classification into the three categories present in the pattern's definition ("net", "hole" and "other"). This enables the three corresponding fuzzy sets to be created and, as a result, the three probability images that map them out are generated. In the second step, the pigment network pattern is characterised through a parameterisation process (derived from the system specification) and the subsequent extraction of different features calculated from combinations of image masks, extracted from the probability images and corresponding to the alpha-cuts obtained from the fuzzy sets. The method was tested on a database of 875 images (by far the largest used in the state of the art to detect pigment network) extracted from a public Atlas of Dermoscopy, obtaining an AUC of 0.912 and 88% accuracy, with 90.71% sensitivity and 83.44% specificity. The main contribution of this method is the design of the algorithm itself, which is highly innovative and could also be applied to other pattern recognition problems of a similar nature. Other contributions are: 1. good performance in discriminating between the pattern and disturbing artefacts (which means that no prior preprocessing is required) and between the pattern and other dermoscopic patterns; 2. a new methodological approach for work of this kind, introducing the system specification as a required step prior to algorithm design and development, this specification being the basis for the required parameterisation of the algorithm (in the form of configurable parameters with their value ranges and set threshold values) and for the subsequent experiments.
Detection of bacteriuria and pyuria by URISCREEN a rapid enzymatic screening test.
Pezzlo, M T; Amsterdam, D; Anhalt, J P; Lawrence, T; Stratton, N J; Vetter, E A; Peterson, E M; de la Maza, L M
1992-01-01
A multicenter study was performed to evaluate the ability of the URISCREEN (Analytab Products, Plainview, N.Y.), a 2-min catalase tube test, to detect bacteriuria and pyuria. This test was compared with the Chemstrip LN (BioDynamics, Division of Boehringer Mannheim Diagnostics, Indianapolis, Ind.), a 2-min enzyme dipstick test; a semiquantitative plate culture method was used as the reference test for bacteriuria, and the Gram stain or a quantitative chamber count method was used as the reference test for pyuria. Each test was evaluated for its ability to detect probable pathogens at greater than or equal to 10^2 CFU/ml and/or greater than or equal to 1 leukocyte per oil immersion field, as determined by the Gram stain method, or greater than 10 leukocytes per microliter, as determined by the quantitative count method. A total of 1,500 urine specimens were included in this evaluation. There were 298 specimens with greater than or equal to 10^2 CFU/ml and 451 specimens with pyuria. Of the 298 specimens with probable pathogens isolated at various colony counts, 219 specimens had colony counts of greater than or equal to 10^5 CFU/ml, 51 specimens had between 10^4 and 10^5 CFU/ml, and 28 specimens had between 10^2 and less than 10^4 CFU/ml. Both the URISCREEN and the Chemstrip LN detected 93% (204 of 219) of the specimens with probable pathogens at greater than or equal to 10^5 CFU/ml. For the specimens with probable pathogens at greater than or equal to 10^2 CFU/ml, the sensitivities of the URISCREEN and the Chemstrip LN were 86% (256 of 298) and 81% (241 of 298), respectively. Of the 451 specimens with pyuria, the URISCREEN detected 88% (398 of 451) and the Chemstrip LN detected 78% (350 of 451). There were 204 specimens with both greater than or equal to 10^2 CFU/ml and pyuria; the sensitivities of both methods were 95% (193 of 204) for these specimens. Overall, there were 545 specimens with probable pathogens at greater than or equal to 10^2 CFU/ml and/or pyuria. The URISCREEN detected 85% (461 of 545), and the Chemstrip LN detected 73% (398 of 545). A majority (76%) of the false-negative results obtained with either method were for specimens without leukocytes in the urine. There were 955 specimens with no probable pathogens or leukocytes. Of these, 28% (270 of 955) were found positive by the URISCREEN and 13% (122 of 955) were found positive by the Chemstrip LN. A majority of the false-positive results were probably due, in part, to the detection by each test system of enzymes present in both bacterial and somatic cells. Overall, the URISCREEN is a rapid, manual, easy-to-perform enzymatic test that yields findings similar to those yielded by the Chemstrip LN for specimens with both greater than or equal to 10^2 CFU/ml and pyuria, or for specimens with greater than or equal to 10^5 CFU/ml with or without pyuria. However, when the data were analyzed for either probable pathogens at less than 10^5 CFU/ml or pyuria, the sensitivity of the URISCREEN was higher (P less than 0.05). PMID:1551986
A new method for detecting small and dim targets in starry background
NASA Astrophysics Data System (ADS)
Yao, Rui; Zhang, Yanning; Jiang, Lei
2011-08-01
Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of noise inherent to the imaging device. Random noise and background movement further increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we establish a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (Scale-Invariant Feature Transform) features from the image frames, calculate the transform relationship between frames, and use it to compensate for movement of the whole visual field. Thirdly, the influence of stars is removed using an interframe difference method, and a segmentation threshold separating candidate targets from noise is found using the OTSU method. Finally, we compute a statistical quantity to judge whether a target is present at every pixel position in the image. Theoretical analysis shows the relationship between false alarm probability and detection probability at different SNRs. Experimental results show that this method detects targets efficiently, even when a target passes in front of stars.
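A rough sketch of the registration-and-differencing stages named above, using OpenCV, is shown below; the function names follow OpenCV's API, but the pipeline details (matcher choice, transform model) are assumptions rather than the paper's exact choices:

    # SIFT registration -> motion compensation -> frame difference -> Otsu.
    # Expects two 8-bit grayscale frames of the same size.
    import cv2
    import numpy as np

    def candidate_target_mask(prev_frame, frame):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(prev_frame, None)
        k2, d2 = sift.detectAndCompute(frame, None)
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        # robustly fit a transform to compensate whole-field movement
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        h, w = frame.shape
        warped = cv2.warpAffine(prev_frame, M, (w, h))
        diff = cv2.absdiff(frame, warped)          # suppresses static stars
        _, mask = cv2.threshold(diff, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return mask    # candidate target pixels for the statistical test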
Mali, Ivana; Duarte, Adam; Forstner, Michael R J
2018-01-01
Abundance estimates play an important part in the regulatory and conservation decision-making process. It is important to correct monitoring data for imperfect detection when using these data to track spatial and temporal variation in abundance, especially in the case of rare and elusive species. This paper presents the first attempt to estimate abundance of the Rio Grande cooter (Pseudemys gorzugi) while explicitly considering the detection process. Specifically, in 2016 we monitored this rare species at two sites along the Black River, New Mexico via traditional baited hoop-net traps and less invasive visual surveys to evaluate the efficacy of these two sampling designs. We fitted the Huggins closed-capture estimator to estimate capture probabilities using the trap data and distance sampling models to estimate detection probabilities using the visual survey data. We found that only the visual survey with the highest number of observed turtles resulted in similar abundance estimates to those estimated using the trap data. However, the estimates of abundance from the remaining visual survey data were highly variable and often underestimated abundance relative to the estimates from the trap data. We suspect this pattern is related to changes in the basking behavior of the species and, thus, the availability of turtles to be detected even though all visual surveys were conducted when environmental conditions were similar. Regardless, we found that riverine habitat conditions limited our ability to properly conduct visual surveys at one site. Collectively, this suggests visual surveys may not be an effective sample design for this species in this river system. When analyzing the trap data, we found capture probabilities to be highly variable across sites and between age classes and that recapture probabilities were much lower than initial capture probabilities, highlighting the importance of accounting for detectability when monitoring this species. Although baited hoop-net traps seem to be an effective sampling design, it is important to note that this method required a relatively high trap effort to reliably estimate abundance. This information will be useful when developing a larger-scale, long-term monitoring program for this species of concern.
Bird, Patrick; Flannery, Jonathan; Crowley, Erin; Agin, James; Goins, David; Monteroso, Lisa; Benesh, DeAnn
2015-01-01
The 3M™ Molecular Detection Assay (MDA) Listeria is used with the 3M Molecular Detection System for the detection of Listeria species in food, food-related, and environmental samples after enrichment. The assay utilizes loop-mediated isothermal amplification to rapidly amplify Listeria target DNA with high specificity and sensitivity, combined with bioluminescence to detect the amplification. The 3M MDA Listeria method was evaluated using an unpaired study design in a multilaboratory collaborative study and compared to the AOAC Official Method of Analysis℠ (OMA) 993.12 Listeria monocytogenes in Milk and Dairy Products reference method for the detection of Listeria species in full-fat (4% milk fat) cottage cheese (25 g test portions). A total of 15 laboratories located in the continental United States and Canada participated. Each matrix had three inoculation levels: an uninoculated control level (0 CFU/test portion), and two levels artificially contaminated with Listeria monocytogenes, a low inoculum level (0.2-2 CFU/test portion) and a high inoculum level (2-5 CFU/test portion) using nonheat-stressed cells. In total, 792 unpaired replicate portions were analyzed. Statistical analysis was conducted according to the probability of detection (POD) model. Results obtained for the low inoculum level test portions produced a difference in cross-laboratory POD value of -0.07 with a 95% confidence interval of (-0.19, 0.06). No statistically significant differences were observed in the number of positive samples detected by the 3M MDA Listeria method versus the AOAC OMA method.
Reliability of programs specified with equational specifications
NASA Astrophysics Data System (ADS)
Nikolik, Borislav
Ultrareliability is desirable (and sometimes a demand of regulatory authorities) for safety-critical applications, such as commercial flight-control programs, medical applications, nuclear reactor control programs, etc. A method is proposed, called the Term Redundancy Method (TRM), for obtaining ultrareliable programs through specification-based testing. Current specification-based testing schemes need a prohibitively large number of test cases for estimating ultrareliability. They assume availability of an accurate program-usage distribution prior to testing, and they assume the availability of a test oracle. It is shown how to obtain ultrareliable programs (probability of failure near zero) with a practical number of test cases, without an accurate usage distribution, and without a test oracle. TRM applies to the class of decision Abstract Data Type (ADT) programs specified with unconditional equational specifications. TRM is restricted to programs that do not exceed certain efficiency constraints in generating test cases. The effectiveness of TRM in failure detection and recovery is demonstrated on formulas from the aircraft collision avoidance system TCAS.
Transit visibility zones of the Solar system planets
NASA Astrophysics Data System (ADS)
Wells, R.; Poppenhaeger, K.; Watson, C. A.; Heller, R.
2018-01-01
The detection of thousands of extrasolar planets by the transit method naturally raises the question of whether potential extrasolar observers could detect the transits of the Solar system planets. We present a comprehensive analysis of the regions in the sky from where transit events of the Solar system planets can be detected. We specify how many different Solar system planets can be observed from any given point in the sky, and find the maximum number to be three. We report the probabilities of a randomly positioned external observer to be able to observe single and multiple Solar system planet transits; specifically, we find a probability of 2.518 per cent to be able to observe at least one transiting planet, 0.229 per cent for at least two transiting planets, and 0.027 per cent for three transiting planets. We identify 68 known exoplanets that have a favourable geometric perspective to allow transit detections in the Solar system and we show how the ongoing K2 mission will extend this list. We use occurrence rates of exoplanets to estimate that there are 3.2 ± 1.2 and 6.6^{+1.3}_{-0.8} temperate Earth-sized planets orbiting GK and M dwarf stars brighter than V = 13 and 16, respectively, that are located in the Earth's transit zone.
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
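A count-based version of the test can be sketched as follows, with a Poisson model for detector counts per interval; the rates and error levels are illustrative, and the patented method additionally estimates the background rate dynamically:

    # SPRT on Poisson counts: background rate lam0 vs. elevated rate lam1.
    import math

    def radiation_sprt(counts, lam0=10.0, lam1=14.0, alpha=1e-3, beta=1e-3):
        upper = math.log((1 - beta) / alpha)
        lower = math.log(beta / (1 - alpha))
        llr = 0.0
        for k in counts:
            # Poisson log-likelihood ratio for one counting interval
            llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
            if llr >= upper:
                return "alarm"
            if llr <= lower:
                return "background"
        return "keep counting"

    print(radiation_sprt([16, 15, 17, 18, 14, 16, 15, 17]))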
Evaluation of nutria (Myocastor coypus) detection methods in Maryland, USA
Pepper, Margaret A.; Herrmann, Valentine; Hines, James; Nichols, James D.; Kendrot, Stephen R
2017-01-01
Nutria (Myocastor coypus), invasive, semi-aquatic rodents native to South America, were introduced into Maryland near Blackwater National Wildlife Refuge (BNWR) in 1943. Irruptive population growth, expansion, and destructive feeding habits resulted in the destruction of thousands of acres of emergent marshes at and surrounding BNWR. In 2002, a partnership of federal, state and private entities initiated an eradication campaign to protect remaining wetlands from further damage and facilitate the restoration of coastal wetlands throughout the Chesapeake Bay region. Program staff removed nearly 14,000 nutria from five infested watersheds in a systematic trapping and hunting program between 2002 and 2014. As part of ongoing surveillance activities, the Chesapeake Bay Nutria Eradication Project uses a variety of tools to detect and remove nutria. Project staff developed a floating raft, or monitoring platform, to determine site occupancy. These platforms are placed along waterways and checked periodically for evidence of nutria visitation. We evaluated the effectiveness of monitoring platforms and three associated detection methods: hair snares, presence of scat, and trail cameras. Our objectives were to (1) determine if platform placement on land or water influenced nutria visitation rates, (2) determine if the presence of hair snares influenced visitation rates, and (3) determine method-specific detection probabilities. Our analyses indicated that platforms placed on land were 1.5–3.0 times more likely to be visited than those placed in water and that platforms without snares were an estimated 1.7–3.7 times more likely to be visited than those with snares. Although the presence of snares appears to have discouraged visitation, seasonal variation may confound interpretation of these results. Scat was the least effective method of determining nutria visitation, while hair snares were as effective as cameras. Estimated detection probabilities provided by occupancy modeling were 0.73 for hair snares, 0.71 for cameras and 0.40 for scat. We recommend the use of hair snares on monitoring platforms as they are the most cost-effective and reliable detection method available at this time. Future research should focus on determining the cause for the observed decrease in nutria visits after snares were applied.
Occupancy as a surrogate for abundance estimation
MacKenzie, D.I.; Nichols, J.D.
2004-01-01
In many monitoring programmes it may be prohibitively expensive to estimate the actual abundance of a bird species in a defined area, particularly at large spatial scales, or where birds occur at very low densities. Often it may be appropriate to consider the proportion of area occupied by the species as an alternative state variable. However, as with abundance estimation, issues of detectability must be taken into account in order to make accurate inferences: the non-detection of the species does not imply the species is genuinely absent. Here we review some recent modelling developments that permit unbiased estimation of the proportion of area occupied, colonization and local extinction probabilities. These methods allow for unequal sampling effort and enable covariate information on sampling locations to be incorporated. We also describe how these models could be extended to incorporate information from marked individuals, which would enable finer questions of population dynamics (such as turnover rate of nest sites by specific breeding pairs) to be addressed. We believe these models may be applicable to a wide range of bird species and may be useful for investigating various questions of ecological interest. For example, with respect to habitat quality, we might predict that a species is more likely to have higher local extinction probabilities, or higher turnover rates of specific breeding pairs, in poor quality habitats.
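As context for models of this kind, the core single-season occupancy likelihood can be written down and maximised in a few lines: a site detected at least once contributes psi * p^d * (1-p)^(K-d) over K visits, while a site never detected contributes psi * (1-p)^K + (1-psi). The detection histories below are hypothetical:

    # Maximum-likelihood fit of (psi, p) for a basic occupancy model.
    import numpy as np
    from scipy.optimize import minimize

    histories = np.array([[1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0]])

    def neg_log_lik(params):
        psi, p = 1 / (1 + np.exp(-params))      # logit scale keeps values in (0, 1)
        ll = 0.0
        for h in histories:
            d, K = h.sum(), len(h)
            lik = psi * p**d * (1 - p)**(K - d)
            if d == 0:                          # site may be occupied but undetected,
                lik += 1 - psi                  # or genuinely unoccupied
            ll += np.log(lik)
        return -ll

    fit = minimize(neg_log_lik, x0=np.zeros(2))
    print(1 / (1 + np.exp(-fit.x)))             # estimates of (psi, p)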
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer vision aided facial medical diagnosis and biometrics. This paper presents an automatic, subject adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection consisting of malignant, precancerous and normal subjects of varied age group. Previous works on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveal that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to veraciously localize face from such a subject specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding followed by ingenious adaptive methods to leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted in any DITI guided facial healthcare or biometric applications.
Reither, Klaus; Manyama, Christina; Clowes, Petra; Rachow, Andrea; Mapamba, Daniel; Steiner, Andreas; Ross, Amanda; Mfinanga, Elirehema; Sasamalo, Mohamed; Nsubuga, Martin; Aloi, Francesco; Cirillo, Daniela; Jugheli, Levan; Lwilla, Fred
2015-04-01
Following endorsement by the World Health Organisation, the Xpert MTB/RIF assay has been widely incorporated into algorithms for the diagnosis of adult tuberculosis (TB). However, data on its performance in children remain scarce. This prospective, multi-centre study evaluated the performance of Xpert MTB/RIF to diagnose pulmonary tuberculosis in children. Children older than eight weeks and younger than 16 years with suspected pulmonary tuberculosis were enrolled at three TB endemic settings in Tanzania and Uganda, and assigned to five well-defined case definition categories: culture-confirmed TB, highly probable TB, probable TB, not TB, or indeterminate. The diagnostic accuracy of Xpert MTB/RIF was assessed using culture-confirmed TB cases as reference standard. In total, 451 children were enrolled: 37 (8%) had culture-confirmed TB, 48 (11%) highly probable TB, and 62 (13%) probable TB. The Xpert MTB/RIF assay had a sensitivity of 68% (95% CI, 50%-82%) and specificity of 100% (95% CI, 97%-100%), detecting 1.7 times more culture-confirmed cases than smear microscopy with a similar time to detection. Xpert MTB/RIF was positive in 2% (1/48) of highly probable and in 3% (2/62) of probable TB cases. Xpert MTB/RIF provided timely results with moderate sensitivity and excellent specificity compared to culture. Low yields in children with highly probable and probable TB remain problematic.
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability of difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we firstly develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Secondly, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
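The per-pixel fusion at the heart of EEM is Dempster's rule of combination; a minimal sketch over the two-class frame {change, no-change} (with 'theta' denoting the whole frame, i.e., ignorance) follows, using illustrative mass values:

    # Dempster's rule for two mass functions over {change, nochange, theta}.
    def dempster(m1, m2):
        conflict = m1["change"] * m2["nochange"] + m1["nochange"] * m2["change"]
        m = {
            "change": (m1["change"] * m2["change"]
                       + m1["change"] * m2["theta"] + m1["theta"] * m2["change"]),
            "nochange": (m1["nochange"] * m2["nochange"]
                         + m1["nochange"] * m2["theta"] + m1["theta"] * m2["nochange"]),
            "theta": m1["theta"] * m2["theta"],
        }
        return {k: v / (1 - conflict) for k, v in m.items()}

    centre = {"change": 0.6, "nochange": 0.2, "theta": 0.2}
    neighbour = {"change": 0.5, "nochange": 0.3, "theta": 0.2}
    print(dempster(centre, neighbour))   # fused belief assignment for the pixel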
Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.
2003-01-01
Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.
Presence-nonpresence surveys of golden-cheeked warblers: detection, occupancy and survey effort
Watson, C.A.; Weckerly, F.W.; Hatfield, J.S.; Farquhar, C.C.; Williamson, P.S.
2008-01-01
Surveys to detect the presence or absence of endangered species may not consistently cover an area, account for imperfect detection or consider that detection and species presence at sample units may change within a survey season. We evaluated a detection–nondetection survey method for the federally endangered golden-cheeked warbler (GCWA), Dendroica chrysoparia. Three study areas were selected across the breeding range of GCWA in central Texas. Within each area, 28-36 detection stations were placed 200 m apart. Each detection station was surveyed nine times during the breeding season in 2 consecutive years. Surveyors remained up to 8 min at each detection station recording GCWA detected by sight or sound. To assess the potential influence of environmental covariates (e.g. slope, aspect, canopy cover, study area) on detection and occupancy and possible changes in occupancy and detection probabilities within breeding seasons, 30 models were analyzed. Using information-theoretic model selection procedures, we found that detection probabilities and occupancy varied among study areas and within breeding seasons. Detection probabilities ranged from 0.20 to 0.80 and occupancy ranged from 0.56 to 0.95. Because study areas with high detection probabilities had high occupancy, a conservative survey effort (erred towards too much surveying) was estimated using the lowest detection probability. We determined that nine surveys of 35 stations were needed to have estimates of occupancy with coefficients of variation <20%. Our survey evaluation evidently captured the key environmental variable that influenced bird detection (GCWA density) and accommodated the changes in GCWA distribution throughout the breeding season.
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007, 8:145) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogeneous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We reran the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we added a new analysis targeted specifically at the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
Rapid detection of pandemic influenza in the presence of seasonal influenza
2010-01-01
Background Key to the control of pandemic influenza are surveillance systems that raise alarms rapidly and sensitively. In addition, they must minimise false alarms during a normal influenza season. We develop a method that uses historical syndromic influenza data from the existing surveillance system 'SERVIS' (Scottish Enhanced Respiratory Virus Infection Surveillance) for influenza-like illness (ILI) in Scotland. Methods We develop an algorithm based on the weekly case ratio (WCR) of reported ILI cases to generate an alarm for pandemic influenza. From the seasonal influenza data from 13 Scottish health boards, we estimate the joint probability distribution of the country-level WCR and the number of health boards showing synchronous increases in reported influenza cases over the previous week. Pandemic cases are sampled with various case reporting rates from simulated pandemic influenza infections and overlaid with seasonal SERVIS data from 2001 to 2007. Using this combined time series we test our method for speed of detection, sensitivity and specificity. Also, the 2008-09 SERVIS ILI cases are used to test the detection performance of the three methods on real pandemic data. Results We compare our method, based on our simulation study, to the moving-average Cumulative Sums (Mov-Avg Cusum) and ILI rate threshold methods and find it to be more sensitive and rapid. For 1% case reporting and detection specificity of 95%, our method is 100% sensitive and has a median detection time (MDT) of 4 weeks while the Mov-Avg Cusum and ILI rate threshold methods are, respectively, 97% and 100% sensitive with MDT of 5 weeks. At 99% specificity, our method remains 100% sensitive with MDT of 5 weeks. Although the threshold method maintains its sensitivity of 100% with MDT of 5 weeks, sensitivity of Mov-Avg Cusum declines to 92% with increased MDT of 6 weeks. For a two-fold decrease in the case reporting rate (0.5%) and 99% specificity, the WCR and threshold methods, respectively, have MDT of 5 and 6 weeks with both having sensitivity close to 100%, while the Mov-Avg Cusum method can only manage sensitivity of 77% with MDT of 6 weeks. However, the WCR and Mov-Avg Cusum methods outperform the ILI threshold method by 1 week in retrospective detection of the 2009 pandemic in Scotland. Conclusions While computationally and statistically simple to implement, the WCR algorithm is capable of raising alarms, rapidly and sensitively, for influenza pandemics against a background of seasonal influenza. Although the algorithm was developed using the SERVIS data, it has the capacity to be used at other geographic scales and for different disease systems where buying some early extra time is critical. PMID:21106071
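As a rough sketch of the weekly-case-ratio idea, simplified to a single country-level series (the paper additionally uses the joint distribution of the WCR and the number of health boards rising in synchrony), an alarm rule might look like the following; the threshold and counts are invented:

```python
# Illustrative WCR alarm: flag weeks whose case ratio against the
# previous week exceeds a threshold; all values are hypothetical.

def wcr_alarm(weekly_cases, threshold=1.5, min_cases=10):
    """Yield (week_index, wcr, alarm) for each week after the first."""
    for t in range(1, len(weekly_cases)):
        prev, curr = weekly_cases[t - 1], weekly_cases[t]
        if prev < min_cases:        # avoid unstable ratios on tiny counts
            continue
        wcr = curr / prev
        yield t, wcr, wcr > threshold

season = [12, 15, 14, 18, 16, 40, 95]   # hypothetical ILI counts per week
for week, wcr, alarm in wcr_alarm(season):
    print(f"week {week}: WCR={wcr:.2f} alarm={alarm}")
```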
2012-09-30
generalized power-law detection algorithm for humpback whale vocalizations. J. Acoust. Soc. Am. 131(4), 2682-2699. Roch, M. A., H. Klinck, S. ... Heaney (2012b). Site specific probability of passive acoustic detection of humpback whale calls from single fixed hydrophones. J. Acoust. Soc. Am. ... monitoring: Correcting humpback call detections for site-specific and time-dependent environmental characteristics. JASA Express Letters, submitted October 2012, 5 pgs plus 3 figs.
Soria, M C; Soria, M A; Bueno, D J; Godano, E I; Gómez, S C; ViaButron, I A; Padin, V M; Rogé, A D
2017-08-01
The performance of detection methods (culture methods and a polymerase chain reaction assay) and plating media used on the same types of samples was determined, as well as the specificity of the PCR primers used to detect Salmonella spp. contamination in layer hen farms. The association of farm characteristics with Salmonella presence was also evaluated. Environmental samples (feces, feed, drinking water, air, boot-swabs) and eggs were taken from 40 layer hen houses. Salmonella spp. was most often detected in boot-swabs taken around the houses (30% and 35% by isolation and PCR, respectively), followed by fecal samples (15.2% and 13.6% by isolation and PCR, respectively). Eggs, drinking water, and air samples were negative for Salmonella detection. Salmonella Schwarzengrund and S. Enteritidis were the most frequently isolated serotypes. For plating media, relative specificity was 1, and relative sensitivity was greater for EF-18 agar than XLDT agar in feed and fecal samples; however, relative sensitivity was greater for XLDT agar than EF-18 agar in boot-swab samples. Agreement was between fair and good depending on the sample, and it was good between isolation and PCR (feces and boot-swabs), with no agreement for feed samples. The Salmonella spp. PCR was positive for all strains, while the S. Typhimurium PCR was negative. The S. Enteritidis PCR used was not specific. Based on the multiple logistic regression analyses, categorization by county was significant for Salmonella spp. presence (P-value = 0.010). This study shows the importance of considering different types of samples, plating media and detection methods during a Salmonella spp. monitoring study. In addition, it is important to incorporate sampling of the floors around layer hen houses to determine whether biosecurity measures should be strengthened to minimize the entry and spread of Salmonella in the houses. Also, the performance of some PCR methods and the S. Enteritidis PCR should be improved, and biosecurity measures on hen farms must be reinforced in the region with the highest concentration of layer hen houses to reduce the probability of Salmonella spp. presence. © 2017 Poultry Science Association Inc.
He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou
2018-06-20
Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability to contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies aimed at timely and effective detection of (un)intentional intrusion events. However, available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not reflect the fact that nodal contamination probability can vary significantly across regions owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method to identify optimal WQSP solutions for WDSs, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Detection of faults and software reliability analysis
NASA Technical Reports Server (NTRS)
Knight, J. C.
1986-01-01
Multiversion or N-version programming was proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. Specific topics addressed are: failure probabilities in N-version systems, consistent comparison in N-version systems, descriptions of the faults found in the Knight and Leveson experiment, analytic models of comparison testing, characteristics of the input regions that trigger faults, fault tolerance through data diversity, and the relationship between failures caused by automatically seeded faults.
Sawicki, B
1977-01-01
Investigations were carried out on 32 male guinea pigs 2 to 3 months of age. STH (produced by BIOMED, Warszawa, Poland) was administered intramuscularly every other day, in 7 injections of 20 Evans's units (E. U.) or 100 E. U./kg body weight each. Thyroid gland sections were stained with haematoxylin and eosin and with the Azan method. The C cells were detected with the modified silver method of Grimelius and with the HCl-toluidine blue and HCl-lead haematoxylin techniques. Moreover, reactions were performed for succinate and alpha-glycerophosphate dehydrogenases and also for non-specific esterases and non-specific acetylcholinesterase. STH evoked proliferation of the C cells and changed their morphology and the activity pattern of the enzymes present therein, probably testifying to an enhanced secretory activity of these cells.
Hepatitis disease detection using Bayesian theory
NASA Astrophysics Data System (ADS)
Maseleno, Andino; Hidayati, Rohmah Zahroh
2017-02-01
This paper presents hepatitis disease diagnosis using Bayesian theory, for better understanding of the theory. In this research, we used Bayesian theory for detecting hepatitis disease and displaying the result of the diagnosis process. Bayes' theorem, rediscovered and refined by Laplace, rests on a basic idea: use the known prior probability and conditional probability density parameters to calculate the corresponding posterior probability via Bayes' theorem, and then use the posterior probability for inference and decision making. Bayesian methods combine existing knowledge, the prior probabilities, with additional knowledge derived from new data, the likelihood function. The initial symptoms of hepatitis include malaise, fever and headache; we compute the probability of hepatitis given the presence of these three symptoms. The results revealed that Bayesian theory successfully identified the existence of hepatitis disease.
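As an illustration of the diagnosis step described above, here is a minimal naive-Bayes sketch computing the posterior probability of hepatitis given malaise, fever and headache; all prior and conditional probabilities are invented placeholders, not clinical values or the paper's figures:

```python
# Naive-Bayes posterior for hepatitis given three symptoms.
# All probabilities below are made-up illustrations.

prior = {"hepatitis": 0.05, "no_hepatitis": 0.95}
# P(symptom present | class), assumed conditionally independent
likelihood = {
    "hepatitis":    {"malaise": 0.80, "fever": 0.70, "headache": 0.60},
    "no_hepatitis": {"malaise": 0.20, "fever": 0.10, "headache": 0.15},
}

symptoms = ["malaise", "fever", "headache"]
unnorm = {}
for cls in prior:
    p = prior[cls]
    for s in symptoms:
        p *= likelihood[cls][s]     # multiply in each symptom's likelihood
    unnorm[cls] = p

total = sum(unnorm.values())
posterior = {cls: p / total for cls, p in unnorm.items()}
print(posterior)   # with these numbers, P(hepatitis | symptoms) ~ 0.85
```

Even with a low prior (5%), three jointly unlikely-under-healthy symptoms push the posterior sharply toward hepatitis, which is the mechanism the abstract describes.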
Using Map Service API for Driving Cycle Detection for Wearable GPS Data: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Gonder, Jeffrey D
Following advancements in smartphone and portable global positioning system (GPS) data collection, wearable GPS data have realized extensive use in transportation surveys and studies. The task of detecting driving cycles (driving or car-mode trajectory segments) from wearable GPS data has been the subject of much research. Specifically, distinguishing driving cycles from other motorized trips (such as taking a bus) is the main research problem in this paper. Many mode detection methods only focus on raw GPS speed data while some studies apply additional information, such as geographic information system (GIS) data, to obtain better detection performance. Procuring and maintaining dedicated road GIS data are costly and not trivial, whereas the technical maturity and broad use of map service application program interface (API) queries offer opportunities for mode detection tasks. The proposed driving cycle detection method takes advantage of map service APIs to obtain high-quality car-mode API route information and uses a trajectory segmentation algorithm to find the best-matched API route. The car-mode API route data combined with the actual route information, including the actual mode information, are used to train a logistic regression machine learning model, which estimates car modes and non-car modes with probability rates. The experimental results show promise for the proposed method's ability to detect vehicle mode accurately.
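A hedged sketch of the final classification stage follows: a logistic regression scoring trajectory segments as car versus non-car. The feature set (API route-match score, mean speed, stop rate) and all values are hypothetical stand-ins for the features used in the paper:

```python
# Logistic regression over per-segment features; data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [route-match score, mean speed (m/s), stop rate (stops/km)]
X_train = np.array([
    [0.92, 13.0, 0.4],   # car-like: close API match, few stops
    [0.88, 11.5, 0.6],
    [0.55,  7.0, 2.5],   # bus-like: frequent stops, poorer match
    [0.48,  6.2, 3.1],
])
y_train = np.array([1, 1, 0, 0])  # 1 = car mode, 0 = non-car motorized

model = LogisticRegression().fit(X_train, y_train)
segment = np.array([[0.81, 10.0, 1.0]])
print(model.predict_proba(segment))  # [P(non-car), P(car)] for the segment
```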
Probability of detecting band-tailed pigeons during call-broadcast versus auditory surveys
Kirkpatrick, C.; Conway, C.J.; Hughes, K.M.; Devos, J.C.
2007-01-01
Estimates of population trend for the interior subspecies of band-tailed pigeon (Patagioenas fasciata fasciata) are not available because no standardized survey method exists for monitoring the interior subspecies. We evaluated 2 potential band-tailed pigeon survey methods (auditory and call-broadcast surveys) from 2002 to 2004 in 5 mountain ranges in southern Arizona, USA, and in mixed-conifer forest throughout the state. Both auditory and call-broadcast surveys produced low numbers of cooing pigeons detected per survey route (x̄ ≤ 0.67) and had relatively high temporal variance in average number of cooing pigeons detected during replicate surveys (CV ≥ 161%). However, compared to auditory surveys, use of call-broadcast increased 1) the percentage of replicate surveys on which ≥1 cooing pigeon was detected by an average of 16%, and 2) the number of cooing pigeons detected per survey route by an average of 29%, with this difference being greatest during the first 45 minutes of the morning survey period. Moreover, probability of detecting a cooing pigeon was 27% greater during call-broadcast (0.80) versus auditory (0.63) surveys. We found that cooing pigeons were most common in mixed-conifer forest in southern Arizona and density of male pigeons in mixed-conifer forest throughout the state averaged 0.004 (SE = 0.001) pigeons/ha. Our results are the first to show that call-broadcast increases the probability of detecting band-tailed pigeons (or any species of Columbidae) during surveys. Call-broadcast surveys may provide a useful method for monitoring populations of the interior subspecies of band-tailed pigeon in areas where other survey methods are inappropriate.
Estimation of descriptive statistics for multiply censored water quality data
Helsel, Dennis R.; Cohn, Timothy A.
1988-01-01
This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than the simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution, and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
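The probability-plotting idea can be sketched as regression on order statistics: fit a line to log concentration versus normal quantiles of the detected values' plotting positions, then impute the censored values from the fitted line. The sketch below assumes a single detection limit and a lognormal parent for brevity, whereas the paper treats multiple limits; all data are invented:

```python
# Regression-on-order-statistics sketch for left-censored data.
import numpy as np
from scipy import stats

detected = np.array([0.8, 1.2, 2.5, 3.1, 6.0])   # measured values
n_censored = 3                                    # values below the limit
n = len(detected) + n_censored

# Plotting positions of the detected (largest) ranks among all n values
ranks = np.arange(n_censored + 1, n + 1)
pp = (ranks - 0.375) / (n + 0.25)                 # Blom plotting positions
z = stats.norm.ppf(pp)

slope, intercept, *_ = stats.linregress(z, np.log(np.sort(detected)))

# Impute censored observations from the fitted lognormal line
z_cens = stats.norm.ppf((np.arange(1, n_censored + 1) - 0.375) / (n + 0.25))
imputed = np.exp(intercept + slope * z_cens)
full = np.concatenate([imputed, np.sort(detected)])
print(full.mean(), full.std(ddof=1))              # ROS-style moment estimates
```

The censored observations contribute through their ranks alone, which is why the abstract notes they carry nearly as much information as uncensored values would.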
Flag-based detection of weak gas signatures in long-wave infrared hyperspectral image sequences
NASA Astrophysics Data System (ADS)
Marrinan, Timothy; Beveridge, J. Ross; Draper, Bruce; Kirby, Michael; Peterson, Chris
2016-05-01
We present a flag manifold based method for detecting chemical plumes in long-wave infrared hyperspectral movies. The method encodes temporal and spatial information related to a hyperspectral pixel into a flag, or nested sequence of linear subspaces. The technique used to create the flags pushes information about the background clutter, ambient conditions, and potential chemical agents into the leading elements of the flags. Exploiting this temporal information allows for a detection algorithm that is sensitive to the presence of weak signals. This method is compared to existing techniques qualitatively on real data and quantitatively on synthetic data to show that the flag-based algorithm consistently performs better on data when the SINRdB is low, and beats the ACE and MF algorithms in probability of detection for low probabilities of false alarm even when the SINRdB is high.
Assessing Aircraft Supply Air to Recommend Compounds for Timely Warning of Contamination
NASA Astrophysics Data System (ADS)
Fox, Richard B.
Taking aircraft out of service for even one day to correct fume-in-cabin events can cost the industry roughly $630 million per year in lost revenue. This quantitative correlational study investigated relationships between measured concentrations of contaminants in bleed air and the probability of odor detectability. Data were collected from 94 aircraft engine and auxiliary power unit (APU) bleed air tests in an archival data set spanning 1997 to 2011. Initial Pearson correlation analysis found no relationships and was followed by regression analysis for individual contaminants. Significant relationships of concentrations of compounds in bleed air to probability of odor detectability were found (p<0.05), as well as between compound concentration and probability of sensory irritancy detectability. Study results may be useful to establish early warning levels. Predictive trend monitoring, a method to identify potential pending failure modes within a mechanical system, may influence scheduled down-time for maintenance as a planned event, rather than repair after a mechanical failure, and thereby reduce operational costs associated with odor-in-cabin events. Twenty compounds (independent variables) were found statistically significant as related to probability of odor detectability (dependent variable 1). Seventeen compounds (independent variables) were found statistically significant as related to probability of sensory irritancy detectability (dependent variable 2). Additional research was recommended to further investigate relationships between concentrations of contaminants and probability of odor detectability or probability of sensory irritancy detectability for all turbine oil brands. Further research on implementation of predictive trend monitoring may be warranted to demonstrate how the monitoring process might be applied to in-flight applications.
A risk-based approach to flammable gas detector spacing.
Defriend, Stephen; Dejmek, Mark; Porter, Leisa; Deshotels, Bob; Natvig, Bernt
2008-11-15
Flammable gas detectors allow an operating company to address leaks before they become serious, by automatically alarming and by initiating isolation and safe venting. Without effective gas detection, there is very limited defense against a flammable gas leak developing into a fire or explosion that could cause loss of life or escalate to cascading failures of nearby vessels, piping, and equipment. While it is commonly recognized that some gas detectors are needed in a process plant containing flammable gas or volatile liquids, there is usually a question of how many are needed. The areas that need protection can be determined by dispersion modeling from potential leak sites. Within the areas that must be protected, the spacing of detectors (or alternatively, number of detectors) should be based on risk. Detector design can be characterized by spacing criteria, which is convenient for design - or alternatively by number of detectors, which is convenient for cost reporting. The factors that influence the risk are site-specific, including process conditions, chemical composition, number of potential leak sites, piping design standards, arrangement of plant equipment and structures, design of isolation and depressurization systems, and frequency of detector testing. Site-specific factors such as those just mentioned affect the size of flammable gas cloud that must be detected (within a specified probability) by the gas detection system. A probability of detection must be specified that gives a design with a tolerable risk of fires and explosions. To determine the optimum spacing of detectors, it is important to consider the probability that a detector will fail at some time and be inoperative until replaced or repaired. A cost-effective approach is based on the combined risk from a representative selection of leakage scenarios, rather than a worst-case evaluation. This means that probability and severity of leak consequences must be evaluated together. In marine and offshore facilities, it is conventional to use computational fluid dynamics (CFD) modeling to determine the size of a flammable cloud that would result from a specific leak scenario. Simpler modeling methods can be used, but the results are not very accurate in the region near the release, especially where flow obstructions are present. The results from CFD analyses on several leak scenarios can be plotted to determine the size of a flammable cloud that could result in an explosion that would generate overpressure exceeding the strength of the mechanical design of the plant. A cloud of this size has the potential to produce a blast pressure or flying debris capable of causing a fatality or subsequent damage to vessels or piping containing hazardous material. In cases where the leak results in a fire, rather than explosion, CFD or other modeling methods can estimate the size of a leak that would cause a fire resulting in subsequent damage to the facility, or would prevent the safe escape of personnel. The gas detector system must be capable of detecting a gas release or vapor cloud, and initiating action to prevent the leak from reaching a size that could cause injury or severe damage upon ignition.
Bio-ALIRT biosurveillance detection algorithm evaluation.
Siegrist, David; Pavlin, J
2004-09-24
Early detection of disease outbreaks by a medical biosurveillance system relies on two major components: 1) the contribution of early and reliable data sources and 2) the sensitivity, specificity, and timeliness of biosurveillance detection algorithms. This paper describes an effort to assess leading detection algorithms by arranging a common challenge problem and providing a common data set. The objectives of this study were to determine whether automated detection algorithms can reliably and quickly identify the onset of natural disease outbreaks that are surrogates for possible terrorist pathogen releases, and do so at acceptable false-alert rates (e.g., once every 2-6 weeks). Historic de-identified data were obtained from five metropolitan areas over 23 months; these data included International Classification of Diseases, Ninth Revision (ICD-9) codes related to respiratory and gastrointestinal illness syndromes. An outbreak detection group identified and labeled two natural disease outbreaks in these data and provided them to analysts for training of detection algorithms. All outbreaks in the remaining test data were identified but not revealed to the detection groups until after their analyses. The algorithms established a probability of outbreak for each day's counts. The probability of outbreak was assessed as an "actual" alert for different false-alert rates. The best algorithms were able to detect all of the outbreaks at false-alert rates of one every 2-6 weeks. They were often able to detect on the same day that human investigators had identified as the true start of the outbreak. Because minimal data exist for an actual biologic attack, determining how quickly an algorithm might detect such an attack is difficult. However, application of these algorithms in combination with other data-analysis methods to historic outbreak data indicates that biosurveillance techniques for analyzing syndrome counts can rapidly detect seasonal respiratory and gastrointestinal illness outbreaks. Further research is needed to assess the value of electronic data sources for predictive detection. In addition, simulations need to be developed and implemented to better characterize the size and type of biologic attack that can be detected by current methods by challenging them under different projected operational conditions.
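The specific algorithms evaluated are not described here, but a classic count-based detector of the kind used in such syndromic-surveillance comparisons is a moving-average CUSUM. The sketch below (window, reference value k and decision threshold h are arbitrary) flags days when the cumulative positive deviation from a moving-average baseline exceeds the threshold:

```python
# Moving-average CUSUM over daily syndrome counts; data are synthetic.
import numpy as np

def movavg_cusum(counts, window=7, k=0.5, h=4.0):
    """Return boolean alarms per day based on standardized deviations."""
    counts = np.asarray(counts, dtype=float)
    alarms = np.zeros(len(counts), dtype=bool)
    s = 0.0
    for t in range(window, len(counts)):
        baseline = counts[t - window:t]
        mu, sd = baseline.mean(), max(baseline.std(ddof=1), 1e-9)
        z = (counts[t] - mu) / sd
        s = max(0.0, s + z - k)      # accumulate only positive drift
        alarms[t] = s > h
        if alarms[t]:
            s = 0.0                  # reset after an alarm
    return alarms

daily = [20, 22, 19, 21, 23, 20, 22, 24, 30, 41, 55, 70]  # synthetic counts
print(np.nonzero(movavg_cusum(daily))[0])  # days on which the alarm fires
```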
A multi-level anomaly detection algorithm for time-varying graph data with interactive visualization
Bridges, Robert A.; Collins, John P.; Ferragut, Erik M.; ...
2016-01-01
This work presents a novel modeling and analysis framework for graph sequences which addresses the challenge of detecting and contextualizing anomalies in labelled, streaming graph data. We introduce a generalization of the BTER model of Seshadhri et al. by adding flexibility to community structure, and use this model to perform multi-scale graph anomaly detection. Specifically, probability models describing coarse subgraphs are built by aggregating node probabilities, and these related hierarchical models simultaneously detect deviations from expectation. This technique provides insight into a graph's structure and internal context that may shed light on a detected event. Additionally, this multi-scale analysis facilitates intuitive visualizations by allowing users to narrow focus from an anomalous graph to particular subgraphs or nodes causing the anomaly. For evaluation, two hierarchical anomaly detectors are tested against a baseline Gaussian method on a series of sampled graphs. We demonstrate that our graph statistics-based approach outperforms both a distribution-based detector and the baseline in a labeled setting with community structure, and it accurately detects anomalies in synthetic and real-world datasets at the node, subgraph, and graph levels. Furthermore, to illustrate the accessibility of information made possible via this technique, the anomaly detector and an associated interactive visualization tool are tested on NCAA football data, where teams and conferences that moved within the league are identified with perfect recall, and precision greater than 0.786.
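A minimal sketch of the multi-scale scoring idea: per-node probabilities from a fitted model are aggregated into subgraph- and graph-level surprise scores (negative log-likelihoods), so an analyst can drill down from an anomalous graph to the nodes driving it. The node probabilities and communities below are invented placeholders, not BTER estimates:

```python
# Hierarchical surprise scores from per-node model probabilities.
import math

# P(observed behaviour | model) for each node in one time slice (invented)
node_prob = {"a": 0.30, "b": 0.28, "c": 0.02, "d": 0.35, "e": 0.01}
communities = {"C1": ["a", "b"], "C2": ["c", "d", "e"]}

node_score = {n: -math.log(p) for n, p in node_prob.items()}
community_score = {c: sum(node_score[n] for n in members)
                   for c, members in communities.items()}
graph_score = sum(node_score.values())

print(sorted(node_score.items(), key=lambda kv: -kv[1])[:2])  # worst nodes
print(community_score)   # which subgraph is anomalous
print(graph_score)       # overall graph-level surprise
```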
Automatic detection of anomalies in screening mammograms
2013-01-01
Background Diagnostic performance in breast screening programs may be influenced by the prior probability of disease. Since breast cancer incidence is roughly half a percent in the general population there is a large probability that the screening exam will be normal. That factor may contribute to false negatives. Screening programs typically exhibit about 83% sensitivity and 91% specificity. This investigation was undertaken to determine if a system could be developed to pre-sort screening-images into normal and suspicious bins based on their likelihood to contain disease. Wavelets were investigated as a method to parse the image data, potentially removing confounding information. The development of a classification system based on features extracted from wavelet transformed mammograms is reported. Methods In the multi-step procedure images were processed using 2D discrete wavelet transforms to create a set of maps at different size scales. Next, statistical features were computed from each map, and a subset of these features was the input for a concerted-effort set of naïve Bayesian classifiers. The classifier network was constructed to calculate the probability that the parent mammography image contained an abnormality. The abnormalities were not identified, nor were they regionalized. The algorithm was tested on two publicly available databases: the Digital Database for Screening Mammography (DDSM) and the Mammographic Images Analysis Society’s database (MIAS). These databases contain radiologist-verified images and feature common abnormalities including: spiculations, masses, geometric deformations and fibroid tissues. Results The classifier-network designs tested achieved sensitivities and specificities sufficient to be potentially useful in a clinical setting. This first series of tests identified networks with 100% sensitivity and up to 79% specificity for abnormalities. This performance significantly exceeds the mean sensitivity reported in literature for the unaided human expert. Conclusions Classifiers based on wavelet-derived features proved to be highly sensitive to a range of pathologies, as a result Type II errors were nearly eliminated. Pre-sorting the images changed the prior probability in the sorted database from 37% to 74%. PMID:24330643
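A hedged sketch of the pipeline described above: a 2-D discrete wavelet transform, simple statistics from each subband, and a naive Bayes classifier that outputs the probability that an image is suspicious. The images are tiny synthetic arrays, not mammograms, and the single-level Haar transform stands in for the multi-scale decomposition used in the paper:

```python
# Wavelet subband statistics feeding a naive Bayes classifier.
import numpy as np
import pywt
from sklearn.naive_bayes import GaussianNB

def wavelet_features(img):
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")     # one decomposition level
    feats = []
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std()]        # per-subband statistics
    return feats

rng = np.random.default_rng(0)
normal = [rng.normal(0, 1, (16, 16)) for _ in range(20)]
suspicious = [rng.normal(0, 1, (16, 16)) + 3 * rng.random((16, 16))
              for _ in range(20)]                 # brighter, textured patches

X = np.array([wavelet_features(im) for im in normal + suspicious])
y = np.array([0] * 20 + [1] * 20)

clf = GaussianNB().fit(X, y)
probe = wavelet_features(rng.normal(0, 1, (16, 16)))
print(clf.predict_proba([probe]))  # [P(normal), P(suspicious)] for the image
```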
Xu, Lan; Verdoodt, Freija; Wentzensen, Nicolas; Bergeron, Christine; Arbyn, Marc
2015-01-01
Background Women with a cytological diagnosis of atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion (ASC-H) are usually immediately referred to colposcopy. However, triage may reduce the burden of diagnostic work-up and avoid over-treatment. Methods A meta-analysis was conducted to assess the accuracy of hrHPV testing, and testing for other molecular markers to detect CIN of grade II or III or worse (CIN2+ or CIN3+) in women with ASC-H. An additional question assessed was whether triage is useful given the relatively high pre-triage probability of underlying precancer. Results The pooled absolute sensitivity and specificity for CIN2+ of HC2 (derived from 19 studies) was 93% (95% CI: 89–95%) and 45% (95% CI: 41–50%), respectively. The p16INK4a staining (only 3 studies) has similar sensitivity (93%, 95% CI:75–100%) but superior specificity (specificity ratio: 1.69) to HC2 for CIN2+. Testing for PAX1 gene methylation (only 1 study) showed a superior specificity of 95% (specificity ratio: 2.08). The average pre-test risk was 34% for CIN2+ and 20% for CIN3+. A negative HC2 result decreased this to 8% and 5%, whereas a positive result upgraded the risk to 47% and 28%. Conclusions Due to the high probability of precancer in ASC-H, the utility of triage is limited. The usual recommendation to refer women with ASC-H to colposcopy is not altered by a positive triage test, whatever the test used. A negative hrHPV DNA or p16INK4a test may allow for repeat testing but this recommendation will depend on local decision thresholds for referral. PMID:26618614
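The quoted risk shifts follow from Bayes' rule; as a worked check using the pooled HC2 sensitivity (93%), specificity (45%) and the 34% average pre-test CIN2+ risk:

```python
# Post-test CIN2+ risk after a positive or negative HC2 triage result,
# computed from the pooled accuracy figures quoted in the abstract.
sens, spec, pretest = 0.93, 0.45, 0.34

p_pos = sens * pretest + (1 - spec) * (1 - pretest)   # P(test positive)
risk_if_pos = sens * pretest / p_pos                  # ~0.47
risk_if_neg = (1 - sens) * pretest / (1 - p_pos)      # ~0.07 (8% quoted,
                                                      # within rounding)
print(f"post-test risk if positive: {risk_if_pos:.2f}")
print(f"post-test risk if negative: {risk_if_neg:.2f}")
```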
Estimating site occupancy rates when detection probabilities are less than one
MacKenzie, D.I.; Nichols, J.D.; Lachman, G.B.; Droege, S.; Royle, J. Andrew; Langtimm, C.A.
2002-01-01
Nondetection of a species at a site does not imply that the species is absent unless the probability of detection is 1. We propose a model and likelihood-based method for estimating site occupancy rates when detection probabilities are less than 1. We estimated site occupancy rates for two anuran species at 32 wetland sites in Maryland, USA, from data collected during 2000 as part of an amphibian monitoring program, Frogwatch USA. Site occupancy rates were estimated as 0.49 for American toads (Bufo americanus), a 44% increase over the proportion of sites at which they were actually observed, and as 0.85 for spring peepers (Pseudacris crucifer), slightly above the observed proportion of 0.83.
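The likelihood behind such estimates treats an all-zero detection history as either an unoccupied site or an occupied site that was never detected. Below is a minimal sketch of this single-season occupancy likelihood with invented detection histories; the published model additionally supports covariates and missing visits:

```python
# Maximum-likelihood fit of occupancy (psi) and detection (p).
import numpy as np
from scipy.optimize import minimize

histories = np.array([   # rows = sites, cols = visits; 1 = detected
    [1, 0, 1], [0, 0, 0], [0, 1, 0], [0, 0, 0], [1, 1, 0], [0, 0, 0],
])

def neg_log_lik(params):
    psi, p = 1 / (1 + np.exp(-params))        # logit scale keeps (0, 1)
    ll = 0.0
    for h in histories:
        d, k = h.sum(), len(h)
        if d > 0:                              # occupied and detected
            ll += np.log(psi * p**d * (1 - p)**(k - d))
        else:                                  # never detected: two cases
            ll += np.log(psi * (1 - p)**k + (1 - psi))
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```

The mixture term for all-zero histories is what lets the estimated occupancy exceed the raw proportion of sites with detections, as in the toad example above.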
Estimating nest detection probabilities for white-winged dove nest transects in Tamaulipas, Mexico
Nichols, J.D.; Tomlinson, R.E.; Waggerman, G.
1986-01-01
Nest transects in nesting colonies provide one source of information on White-winged Dove (Zenaida asiatica asiatica) population status and reproduction. Nests are counted along transects using standardized field methods each year in Texas and northeastern Mexico by personnel associated with Mexico's Office of Flora and Fauna, the Texas Parks and Wildlife Department, and the U.S. Fish and Wildlife Service. Nest counts on transects are combined with information on the size of nesting colonies to estimate total numbers of nests in sampled colonies. Historically, these estimates have been based on the actual nest counts on transects and thus have required the assumption that all nests lying within transect boundaries are detected (seen) with a probability of one. Our objectives were to test the hypothesis that nest detection probability is one and, if rejected, to estimate this probability.
Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping
2017-01-01
Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when the confidentiality has prime importance. However, the resource-constrained property of UWSNs such as sparse deployment and energy constraint makes intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under VLM and OCLMP algorithms, the detection probabilities are calculated, while the topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method in this paper can improve the detection accuracy and save the energy consumption compared with the conventional methods. PMID:28531127
NASA Astrophysics Data System (ADS)
Ham, S.; Oh, Y.; Choi, K.; Lee, I.
2018-05-01
Detecting unregistered buildings from aerial images is an important task for urban management, such as inspection of illegal buildings in green-belt areas or updating of GIS databases. Moreover, the data acquisition platform of photogrammetry is evolving from manned aircraft to UAVs (Unmanned Aerial Vehicles). However, it is very costly and time-consuming to detect unregistered buildings from UAV images, since the interpretation of aerial images still relies on manual effort. To overcome this problem, we propose a system which automatically detects unregistered buildings from UAV images based on deep learning methods. Specifically, we train a deconvolutional network with publicly available geospatial data, semantically segment a given UAV image into a building probability map, and compare the building map with existing GIS data. Through this procedure, we can detect unregistered buildings from UAV images automatically and efficiently. We expect that the proposed system can be applied to various urban management tasks such as monitoring illegal buildings or illegal land-use change.
Beeman, John W.; Hayes, Brian; Wright, Katrina
2012-01-01
A series of in-stream passive integrated transponder (PIT) detection antennas installed across the Klamath River in August 2010 were tested using tagged fish in the summer of 2011. Six pass-by antennas were constructed and anchored to the bottom of the Klamath River at a site between the Shasta and Scott Rivers. Two of the six antennas malfunctioned during the spring of 2011 and two pass-through antennas were installed near the opposite shoreline prior to system testing. The detection probability of the PIT tag detection system was evaluated using yearling coho salmon implanted with a PIT tag and a radio transmitter and then released into the Klamath River slightly downstream of Iron Gate Dam. Cormack-Jolly-Seber capture-recapture methods were used to estimate the detection probability of the PIT tag detection system based on detections of PIT tags there and detections of radio transmitters at radio-telemetry detection systems downstream. One of the 43 PIT- and radio-tagged fish released was detected by the PIT tag detection system and 23 were detected by the radio-telemetry detection systems. The estimated detection probability of the PIT tag detection system was 0.043 (standard error 0.042). Eight PIT-tagged fish from other studies also were detected. Detections at the PIT tag detection system were at the two pass-through antennas and the pass-by antenna adjacent to them. Above average river discharge likely was a factor in the low detection probability of the PIT tag detection system. High discharges dislodged two power cables leaving 12 meters of the river width unsampled for PIT detections and resulted in water depths greater than the read distance of the antennas, which allowed fish to pass over much of the system with little chance of being detected. Improvements in detection probability may be expected under river discharge conditions where water depth over the antennas is within maximum read distance of the antennas. Improvements also may be expected if additional arrays of antennas are used.
Detection of Invasive Mosquito Vectors Using Environmental DNA (eDNA) from Water Samples
Schneider, Judith; Valentini, Alice; Dejean, Tony; Montarsi, Fabrizio; Taberlet, Pierre; Glaizot, Olivier; Fumagalli, Luca
2016-01-01
Repeated introductions and spread of invasive mosquito species (IMS) have been recorded on a large scale these last decades worldwide. In this context, members of the mosquito genus Aedes can present serious risks to public health as they have or may develop vector competence for various viral diseases. While the Tiger mosquito (Aedes albopictus) is a well-known vector for e.g. dengue and chikungunya viruses, the Asian bush mosquito (Ae. j. japonicus) and Ae. koreicus have shown vector competence in the field and the laboratory for a number of viruses including dengue, West Nile fever and Japanese encephalitis. Early detection and identification is therefore crucial for successful eradication or control strategies. Traditional specific identification and monitoring of different and/or cryptic life stages of the invasive Aedes species based on morphological grounds may lead to misidentifications, and are problematic when extensive surveillance is needed. In this study, we developed, tested and applied an environmental DNA (eDNA) approach for the detection of three IMS, based on water samples collected in the field in several European countries. We compared real-time quantitative PCR (qPCR) assays specific for these three species and an eDNA metabarcoding approach with traditional sampling, and discussed the advantages and limitations of these methods. Detection probabilities for eDNA-based approaches were in most of the specific comparisons higher than for traditional survey and the results were congruent between both molecular methods, confirming the reliability and efficiency of alternative eDNA-based techniques for the early and unambiguous detection and surveillance of invasive mosquito vectors. The ease of water sampling procedures in the eDNA approach tested here allows the development of large-scale monitoring and surveillance programs of IMS, especially using citizen science projects. PMID:27626642
Detection probability of least tern and piping plover chicks in a large river system
Roche, Erin A.; Shaffer, Terry L.; Anteau, Michael J.; Sherfy, Mark H.; Stucker, Jennifer H.; Wiltermuth, Mark T.; Dovichin, Colin M.
2014-01-01
Monitoring the abundance and stability of populations of conservation concern is often complicated by an inability to perfectly detect all members of the population. Mark-recapture offers a flexible framework in which one may identify factors contributing to imperfect detection, while at the same time estimating demographic parameters such as abundance or survival. We individually color-marked, recaptured, and re-sighted 1,635 federally listed interior least tern (Sternula antillarum; endangered) chicks and 1,318 piping plover (Charadrius melodus; threatened) chicks from 2006 to 2009 at 4 study areas along the Missouri River and investigated effects of observer-, subject-, and site-level covariates suspected of influencing detection. Increasing the time spent searching and crew size increased the probability of detecting both species regardless of study area and detection methods were not associated with decreased survival. However, associations between detection probability and the investigated covariates were highly variable by study area and species combinations, indicating that a universal mark-recapture design may not be appropriate.
Liu, Datong; Peng, Yu; Peng, Xiyuan
2018-01-01
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels and provide a representation of uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially well suited to anomaly detection for sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold of the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
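A rough sketch of the ROC-PI idea follows: sweep the coverage probability of a prediction interval, record interval width against empirical coverage, and pick the CP optimizing a Youden-style trade-off. The Gaussian predictive distribution and the specific trade-off criterion below are stand-ins for the paper's GPR/RVM outputs and modified Youden index:

```python
# Sweep CP values and score each by coverage minus normalised PI width.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu, sigma = 0.0, 1.0                     # assumed predictive distribution
samples = rng.normal(mu, sigma, 500)     # "observed" in-control sensing data

best = None
for cp in np.arange(0.80, 0.995, 0.005):
    half = stats.norm.ppf(0.5 + cp / 2) * sigma      # half-width of the PI
    coverage = np.mean(np.abs(samples - mu) <= half) # empirical coverage
    width_penalty = half / (stats.norm.ppf(0.9975) * sigma)  # normalised width
    youden_like = coverage - width_penalty           # larger is better
    if best is None or youden_like > best[0]:
        best = (youden_like, cp, coverage)

print(f"chosen CP = {best[1]:.3f}, empirical coverage = {best[2]:.3f}")
```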
The probability of seizures during EEG monitoring in critically ill adults
Westover, M. Brandon; Shafi, Mouhsin M.; Bianchi, Matt T.; Moura, Lidia M.V.R.; O’Rourke, Deirdre; Rosenthal, Eric S.; Chu, Catherine J.; Donovan, Samantha; Hoch, Daniel B.; Kilbride, Ronan D.; Cole, Andrew J.; Cash, Sydney S.
2014-01-01
Objective To characterize the risk for seizures over time in relation to EEG findings in hospitalized adults undergoing continuous EEG monitoring (cEEG). Methods Retrospective analysis of cEEG data and medical records from 625 consecutive adult inpatients monitored at a tertiary medical center. Using survival analysis methods, we estimated the time-dependent probability that a seizure will occur within the next 72-h, if no seizure has occurred yet, as a function of EEG abnormalities detected so far. Results Seizures occurred in 27% (168/625). The first seizure occurred early (<30 min of monitoring) in 58% (98/168). In 527 patients without early seizures, 159 (30%) had early epileptiform abnormalities, versus 368 (70%) without. Seizures were eventually detected in 25% of patients with early epileptiform discharges, versus 8% without early discharges. The 72-h risk of seizures declined below 5% if no epileptiform abnormalities were present in the first two hours, whereas 16 h of monitoring were required when epileptiform discharges were present. 20% (74/388) of patients without early epileptiform abnormalities later developed them; 23% (17/74) of these ultimately had seizures. Only 4% (12/294) experienced a seizure without preceding epileptiform abnormalities. Conclusions Seizure risk in acute neurological illness decays rapidly, at a rate dependent on abnormalities detected early during monitoring. This study demonstrates that substantial risk stratification is possible based on early EEG abnormalities. Significance These findings have implications for patient-specific determination of the required duration of cEEG monitoring in hospitalized patients. PMID:25082090
Efficiency of MY09/11 consensus PCR in the detection of multiple HPV infections.
Şahiner, Fatih; Kubar, Ayhan; Gümral, Ramazan; Ardıç, Medine; Yiğit, Nuri; Şener, Kenan; Dede, Murat; Yapar, Mehmet
2014-09-01
Human papillomavirus (HPV) DNA testing has become an important component of cervical cancer screening programs. In this study, we aimed to evaluate the efficiency of MY09/11 consensus polymerase chain reaction (PCR) for the detection of multiple HPV infections. For this purpose, MY09/11 PCR was compared to an original TaqMan-based type-specific real-time PCR assay, which can detect 20 different HPV types. Of the 654 samples, 34.1% (223/654) were HPV DNA positive according to at least one method. The relative sensitivities of MY09/11 PCR and type-specific PCR were 80.7% (180/223) and 97.8% (218/223), respectively. In all, 352 different HPV isolates (66 low-risk and 286 high-risk or probable high-risk types) were identified in 218 samples, but 5 samples, which were positive by consensus PCR only, could not be genotyped. The distribution of the 286 high-risk or probable high-risk HPVs were as follows: 24.5% HPV-16, 8.4% HPV-52, 7.7% HPV-51, 6.3% HPV-39, 6.3% HPV-82, 5.6% HPV-35, 5.6% HPV-58, 5.6% HPV-66, 5.2% HPV-18, 5.2% HPV-68, and 19.6% the other 8 types. A single HPV type was detected in 57.3% (125/218) of the genotyped samples, and multiple HPV types were found in the remaining 42.7% (93/218). The false-negative rates of MY09/11 PCR were found to be 17.4% in single infections, 23.3% in multiple infections, and 34.6% in multiple infections that contained 3 or more HPV types, with the condition that the low-risk types HPV-6 and HPV-11 be considered as a monotype. These data suggest that broad-range PCR assays may lead to significant data loss and that type-specific PCR assays can provide accurate and reliable results during cervical cancer screening. Copyright © 2014 Elsevier Inc. All rights reserved.
Robust detection of rare species using environmental DNA: the importance of primer specificity.
Wilcox, Taylor M; McKelvey, Kevin S; Young, Michael K; Jane, Stephen F; Lowe, Winsor H; Whiteley, Andrew R; Schwartz, Michael K
2013-01-01
Environmental DNA (eDNA) is being rapidly adopted as a tool to detect rare animals. Quantitative PCR (qPCR) using probe-based chemistries may represent a particularly powerful tool because of the method's sensitivity, specificity, and potential to quantify target DNA. However, there has been little work understanding the performance of these assays in the presence of closely related, sympatric taxa. If related species cause any cross-amplification or interference, false positives and negatives may be generated. These errors can be disastrous if false positives lead to overestimating the abundance of an endangered species or if false negatives prevent detection of an invasive species. In this study we test factors that influence the specificity and sensitivity of TaqMan MGB assays using co-occurring, closely related brook trout (Salvelinus fontinalis) and bull trout (S. confluentus) as a case study. We found qPCR to be substantially more sensitive than traditional PCR, with a high probability of detection at concentrations as low as 0.5 target copies/µl. We also found that the number and placement of base pair mismatches between the TaqMan MGB assay and non-target templates was important to target specificity, and that specificity was most influenced by base pair mismatches in the primers, rather than in the probe. We found that insufficient specificity can result in both false positive and false negative results, particularly in the presence of abundant related species. Our results highlight the utility of qPCR as a highly sensitive eDNA tool, and underscore the importance of careful assay design.
Detecting Anomalies in Process Control Networks
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Kang, Kyoung-Don
This paper presents the estimation-inspection algorithm, a statistical algorithm for anomaly detection in process control networks. The algorithm determines if the payload of a network packet that is about to be processed by a control system is normal or abnormal based on the effect that the packet will have on a variable stored in control system memory. The estimation part of the algorithm uses logistic regression integrated with maximum likelihood estimation in an inductive machine learning process to estimate a series of statistical parameters; these parameters are used in conjunction with logistic regression formulas to form a probability mass function for each variable stored in control system memory. The inspection part of the algorithm uses the probability mass functions to estimate the normalcy probability of a specific value that a network packet writes to a variable. Experimental results demonstrate that the algorithm is very effective at detecting anomalies in process control networks.
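A minimal sketch of the estimation/inspection split described above, assuming a discretized memory variable and using scikit-learn's multinomial logistic regression; the feature layout, class count, and the 0.05 alert threshold are illustrative assumptions, not the authors' specification.

    # Minimal sketch of the estimation/inspection idea, not the authors' exact algorithm.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: process context features and the value each
    # (normal) packet wrote to one control-system memory variable.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 3))          # context features (assumed)
    y_train = rng.integers(0, 4, size=500)       # discretized variable values

    # Estimation: multinomial logistic regression yields a probability mass
    # function over the variable's possible values for any given context.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def normalcy_probability(context, written_value):
        """Inspection: probability mass the model assigns to the value the
        packet is about to write; low mass -> anomalous."""
        pmf = model.predict_proba(context.reshape(1, -1))[0]
        return pmf[written_value]

    packet_context = rng.normal(size=3)
    p = normalcy_probability(packet_context, written_value=2)
    print("anomalous" if p < 0.05 else "normal", p)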
A graph-based system for network-vulnerability analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, L.P.; Phillips, C.
1998-06-01
This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
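The closing idea above, that shortest-path algorithms can surface the most probable attack path, can be sketched as Dijkstra's algorithm over edge weights of -log(p), so that multiplying probabilities along a path becomes adding costs. The toy graph and probabilities below are hypothetical.

    # Illustrative sketch (not the tool described above): find the attack path
    # with the highest probability of success via Dijkstra on -log(p) weights.
    import heapq, math

    # Hypothetical attack graph: node -> list of (next_node, success_probability)
    graph = {
        "outside":    [("web_server", 0.8), ("vpn", 0.3)],
        "web_server": [("db_server", 0.5)],
        "vpn":        [("db_server", 0.9)],
        "db_server":  [],
    }

    def most_likely_path(graph, start, goal):
        pq = [(0.0, start, [start])]           # (accumulated -log p, node, path)
        best = {}
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == goal:
                return math.exp(-cost), path   # convert cost back to a probability
            if best.get(node, float("inf")) <= cost:
                continue
            best[node] = cost
            for nxt, p in graph[node]:
                heapq.heappush(pq, (cost - math.log(p), nxt, path + [nxt]))
        return 0.0, None

    prob, path = most_likely_path(graph, "outside", "db_server")
    print(path, prob)   # ['outside', 'web_server', 'db_server'] with p = 0.4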
Smart sensing surveillance system
NASA Astrophysics Data System (ADS)
Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen
2010-04-01
An effective public safety sensor system for heavily-populated applications requires sophisticated and geographically-distributed infrastructures, centralized supervision, and deployment of large-scale security and surveillance networks. Artificial intelligence in sensor systems is a critical design element for raising awareness levels, improving the performance of the system, and adapting to a changing scenario and environment. In this paper, a highly-distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in crowded environments or restricted areas. Technically, the S4 consists of a number of distributed sensor nodes integrated with specific passive sensors to rapidly collect, process, and disseminate heterogeneous sensor data from near omni-directions. These distributed sensor nodes can work cooperatively to send immediate security information when new objects appear. When new objects are detected, the S4 will smartly select the available node with a Pan-Tilt-Zoom (PTZ) electro-optic/infrared (EO/IR) camera to track the objects and capture associated imagery. The S4 provides applicable advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, configurable alert triggers, etc. Other imaging processes can be updated to meet specific requirements and operations. In the S4, all the sensor nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detection) wireless mesh network using ultra-wideband (UWB) RF technology. This UWB RF technology can provide an ad-hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The Service Oriented Architecture of the S4 enables remote applications to interact with the S4 network and use specific presentation methods. In addition, the S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards to efficiently discover, access, use, and control heterogeneous sensors and their metadata. These S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. The S4 system is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha
2013-09-01
Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
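Under the stated assumption that a whale directly over the hydrophone is detected exactly when it is producing regular clicks, the instantaneous g(0) reduces to the fraction of tag-record time spent clicking. A toy calculation with invented interval times:

    # Minimal sketch of the instantaneous-g(0) idea: g(0) is the fraction of
    # time a tagged whale spends producing clicks. Intervals are made up.
    click_intervals = [(110, 143), (310, 351), (585, 621)]   # (start, end), minutes
    record_length = 720.0                                    # minutes of tag data

    clicking_time = sum(end - start for start, end in click_intervals)
    g0_instant = clicking_time / record_length
    print(f"instantaneous g(0) = {g0_instant:.2f}")          # ~0.15 here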
Unmanned aerial vehicles for surveying marine fauna: assessing detection probability.
Hodgson, Amanda; Peel, David; Kelly, Natalie
2017-06-01
Aerial surveys are conducted for various fauna to assess abundance, distribution, and habitat use over large spatial scales. They are traditionally conducted using light aircraft with observers recording sightings in real time. Unmanned Aerial Vehicles (UAVs) offer an alternative with many potential advantages, including eliminating human risk. To be effective, this emerging platform needs to provide detection rates of animals comparable to traditional methods. UAVs can also acquire new types of information, and these new data require a reevaluation of the traditional analyses used in aerial surveys, including estimation of the probability of detecting animals. We conducted 17 replicate UAV surveys of humpback whales (Megaptera novaeangliae) while simultaneously obtaining a 'census' of the population from land-based observations, to assess UAV detection probability. The ScanEagle UAV, carrying a digital SLR camera, continuously captured images (with 75% overlap) along transects covering the visual range of land-based observers. We also used ScanEagle to conduct focal follows of whale pods (n = 12, mean duration = 40 min), to assess a new method of estimating availability. A comparison of the whale detections from the UAV to the land-based census provided an estimated UAV detection probability of 0.33 (CV = 0.25; incorporating both availability and perception biases), which was not affected by environmental covariates (Beaufort sea state, glare, and cloud cover). According to our focal follows, the mean availability was 0.63 (CV = 0.37), with pods including mother/calf pairs having a higher availability (0.86, CV = 0.20) than those without (0.59, CV = 0.38). The follows also revealed (and provided a potential correction for) a downward bias in group size estimates from the UAV surveys, which resulted from asynchronous diving within whale pods and a relatively short observation window of 9 s. We have shown that UAVs are an effective alternative to traditional methods, providing a detection probability that is within the range of previous studies for our target species. We also describe a method of assessing availability bias that represents the spatial and temporal characteristics of a survey from the same perspective as the survey platform, is benign, and provides additional data on animal behavior.
Yu, Yun; Degnan, James H.; Nakhleh, Luay
2012-01-01
Gene tree topologies have proven a powerful data source for various tasks, including species tree inference and species delimitation. Consequently, methods for computing probabilities of gene trees within species trees have been developed and widely used in probabilistic inference frameworks. All these methods assume an underlying multispecies coalescent model. However, when reticulate evolutionary events such as hybridization occur, these methods are inadequate, as they do not account for such events. Methods that account for both hybridization and deep coalescence in computing the probability of a gene tree topology currently exist for very limited cases. However, no such methods exist for general cases, owing primarily to the fact that it is currently unknown how to compute the probability of a gene tree topology within the branches of a phylogenetic network. Here we present a novel method for computing the probability of gene tree topologies on phylogenetic networks and demonstrate its application to the inference of hybridization in the presence of incomplete lineage sorting. We reanalyze a Saccharomyces species data set for which multiple analyses had converged on a species tree candidate. Using our method, though, we show that an evolutionary hypothesis involving hybridization in this group has better support than one of strict divergence. A similar reanalysis on a group of three Drosophila species shows that the data is consistent with hybridization. Further, using extensive simulation studies, we demonstrate the power of gene tree topologies at obtaining accurate estimates of branch lengths and hybridization probabilities of a given phylogenetic network. Finally, we discuss identifiability issues with detecting hybridization, particularly in cases that involve extinction or incomplete sampling of taxa. PMID:22536161
A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Junghyun; Hayward, Chris; Zeiler, Cleat
Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacings. The four-hour time sequence contained a number of easily identified signals under noise conditions with average RMS amplitudes varying from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, the large aperture, the small aperture combined with the large aperture, and the full array. The full and combined arrays performed best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced results similar to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both the automated detectors and the analysts produced fewer detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD gave the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. Detection probability was impacted most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.
For biomonitoring efforts aimed at early detection of aquatic invasive species (AIS), the ability to detect rare individuals is key and requires accurate species level identification to maintain a low occurrence probability of non-detection errors (failure to detect a present spe...
A PCR procedure for the detection of Giardia intestinalis cysts and Escherichia coli in lettuce.
Ramirez-Martinez, M L; Olmos-Ortiz, L M; Barajas-Mendiola, M A; Giono Cerezo, S; Avila, E E; Cuellar-Mata, P
2015-06-01
Giardia intestinalis is a pathogen associated with foodborne outbreaks, and Escherichia coli is commonly used as a marker of faecal contamination. Implementation of routine identification methods for G. intestinalis is difficult for the analysis of vegetables, and the microbiological detection of E. coli requires several days. This study proposes a PCR-based assay for the detection of E. coli and G. intestinalis cysts using crude DNA isolated from artificially contaminated lettuce. The G. intestinalis and E. coli PCR assays targeted the β-giardin and uidA genes, respectively, and were 100% specific. Forty lettuces from local markets were analysed by both PCR and light microscopy and no cysts were detected; the calculated detection limit was 20 cysts per gram of lettuce. However, by PCR, E. coli was detected in eight of ten randomly selected samples of lettuce. These data highlight the need to validate procedures for routine quality assurance. These PCR-based assays can be employed as alternative methods for the detection of G. intestinalis and E. coli and have the potential to allow for the automation and simultaneous detection of protozoa and bacterial pathogens in multiple samples. Significance and impact of the study: There are few studies of Giardia intestinalis detection in food because methods for its identification are difficult to implement routinely. Here, we developed a PCR-based method as an alternative to the direct observation of cysts in lettuce by light microscopy. Additionally, Escherichia coli was detected by PCR and the sanitary quality of lettuce was evaluated using molecular and standard microbiological methods. Using PCR, the detection probability of Giardia cysts inoculated onto samples of lettuce was improved compared to light microscopy, with the advantage of easy automation. These methods may be employed to perform timely and affordable detection of foodborne pathogens.
NASA Astrophysics Data System (ADS)
He, Jingjing; Wang, Dengjiang; Zhang, Weifang
2015-03-01
This study presents an experimental and modeling study for damage detection and quantification in riveted lap joints. Embedded lead zirconate titanate piezoelectric (PZT) ceramic wafer-type sensors are employed to perform in-situ non-destructive testing during fatigue cyclical loading. A multi-feature integration method is developed to quantify the crack size using the signal features of correlation coefficient, amplitude change, and phase change. In addition, a probability of detection (POD) model is constructed to quantify the reliability of the developed sizing method. Using the developed crack size quantification method and the resulting POD curve, probabilistic fatigue life prediction can be performed to provide comprehensive information for decision-making. The effectiveness of the overall methodology is demonstrated and validated using several aircraft lap joint specimens from different manufacturers and under different loading conditions.
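POD curves of the kind constructed above are commonly modeled as log-logistic in flaw size; this sketch uses that common form with invented parameters, not the fitted model from this study.

    # Sketch of a common log-logistic POD curve in crack size a; parameters
    # here are assumed for illustration, not taken from the paper.
    import math

    mu, sigma = math.log(2.0), 0.4      # assumed: a50 = 2.0 mm, spread 0.4

    def pod(a):
        """Probability of detecting a crack of size a (mm)."""
        return 1.0 / (1.0 + math.exp(-(math.log(a) - mu) / sigma))

    # Crack size detected with 90% probability: invert the curve.
    a90 = math.exp(mu + sigma * math.log(0.9 / 0.1))
    print(pod(2.0), a90)                # 0.5 at a50; a90 ~ 4.8 mm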
Helble, Tyler A; D'Spain, Gerald L; Campbell, Greg S; Hildebrand, John A
2013-11-01
This paper demonstrates the importance of accounting for environmental effects on passive underwater acoustic monitoring results. The situation considered is the reduction in shipping off the California coast between 2008-2010 due to the recession and environmental legislation. The resulting variations in ocean noise change the probability of detecting marine mammal vocalizations. An acoustic model was used to calculate the time-varying probability of detecting humpback whale vocalizations under best-guess environmental conditions and varying noise. The uncorrected call counts suggest a diel pattern and an increase in calling over a two-year period; the corrected call counts show minimal evidence of these features.
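The correction described above amounts, in its simplest form, to dividing raw counts by the modeled, time-varying probability of detection (a Horvitz-Thompson-style adjustment); the paper's model is more elaborate, and the numbers below are invented.

    # Toy version of noise-corrected call counting: weight up counts made
    # under conditions where detection probability was low. Illustrative only.
    raw_counts = [120, 95, 40, 210]          # calls detected per time bin
    p_detect   = [0.80, 0.60, 0.25, 0.85]    # modeled detection probability per bin

    corrected = [n / p for n, p in zip(raw_counts, p_detect)]
    print([round(c) for c in corrected])     # [150, 158, 160, 247]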
The COLA Collision Avoidance Method
NASA Astrophysics Data System (ADS)
Assmann, K.; Berger, J.; Grothkopp, S.
2009-03-01
In the following we present a collision avoidance method named COLA. The method has been designed to predict collisions for Earth orbiting spacecraft on any orbits, including orbit changes, with other space-borne objects. The point in time of a collision and the collision probability are determined. To guarantee effective processing, the COLA method uses a modular design and is composed of several components which are either developed within this work or deduced from existing algorithms: a filtering module, the close approach determination, the collision detection, and the collision probability calculation. A software tool which implements the COLA method has been verified using various test cases built from sample missions. This software has been implemented in the C++ programming language and serves as a universal collision detection tool at LSE Space Engineering & Operations AG.
Escalante-Maldonado, Oscar; Kayali, Ahmad Y.; Yamazaki, Wataru; Vuddhakul, Varaporn; Nakaguchi, Yoshitsugu; Nishibuchi, Mitsuaki
2015-01-01
Vibrio parahaemolyticus is a marine microorganism that can cause seafood-borne gastroenteritis in humans. The infection can be spread and has become a pandemic through the international trade of contaminated seafood. Strains carrying the tdh gene encoding the thermostable direct hemolysin (TDH) and/or the trh gene encoding the TDH-related hemolysin (TRH) are considered to be pathogenic with the former gene being the most frequently found in clinical strains. However, their distribution frequency in environmental isolates is below 1%. Thus, very sensitive methods are required for detection and quantitation of tdh+ strains in seafood. We previously reported a method to detect and quantify tdh+ V. parahaemolyticus in seafood. This method consists of three components: the most-probable-number (MPN), the immunomagnetic separation (IMS) targeting all established K antigens, and the loop-mediated isothermal amplification (LAMP) targeting the tdh gene. However, this method faces regional issues in tropical zones of the world. Technicians have difficulties in securing dependable reagents in high-temperature climates where we found MPN underestimation in samples having tdh+ strains as well as other microorganisms present at high concentrations. In the present study, we solved the underestimation problem associated with the salt polymyxin broth enrichment for the MPN component and with the immunomagnetic bead-target association for the IMS component. We also improved the supply and maintenance of the dependable reagents by introducing a dried reagent system to the LAMP component. The modified method is specific, sensitive, quick and easy and applicable regardless of the concentrations of tdh+ V. parahaemolyticus. Therefore, we conclude this modified method is useful in world tropical, sub-tropical, and temperate zones. PMID:25914681
Pendleton, G.W.; Ralph, C. John; Sauer, John R.; Droege, Sam
1995-01-01
Many factors affect the use of point counts for monitoring bird populations, including sampling strategies, variation in detection rates, and independence of sample points. The most commonly used sampling plans are stratified sampling, cluster sampling, and systematic sampling. Each of these might be most useful for different objectives or field situations. Variation in detection probabilities and lack of independence among sample points can bias estimates and measures of precision. All of these factors should be considered when using point count methods.
Finding a fox: an evaluation of survey methods to estimate abundance of a small desert carnivore.
Dempsey, Steven J; Gese, Eric M; Kluever, Bryan M
2014-01-01
The status of many carnivore species is a growing concern for wildlife agencies, conservation organizations, and the general public. Historically, kit foxes (Vulpes macrotis) were classified as abundant and distributed in the desert and semi-arid regions of southwestern North America, but the species is now considered rare throughout its range. Survey methods have been evaluated for kit foxes, but often in populations where abundance is high, and there is little consensus on which technique is best to monitor abundance. We conducted a 2-year study to evaluate four survey methods (scat deposition surveys, scent station surveys, spotlight surveys, and trapping) for detecting kit foxes and measuring fox abundance. We determined the probability of detection for each method, and examined the correlation between the relative abundance as estimated by each survey method and the known minimum kit fox abundance as determined from radio-collared animals. All surveys were conducted on 15 5-km transects during the 3 biological seasons of the kit fox. Scat deposition surveys had the highest detection probabilities (p = 0.88) and were most closely related to minimum known fox abundance (r2 = 0.50, P = 0.001). The next best method for kit fox detection was the scent station survey (p = 0.73), which had the second highest correlation to fox abundance (r2 = 0.46, P<0.001). For detecting kit foxes in a low density population we suggest using scat deposition transects during the breeding season. Scat deposition surveys have low costs, resilience to weather, low labor requirements, and pose no risk to the study animals. The breeding season was ideal for monitoring kit fox population size, as detections consisted of the resident population and had the highest detection probabilities. Using appropriate monitoring techniques will be critical for future conservation actions for this rare desert carnivore.
Accounting for Incomplete Species Detection in Fish Community Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach type. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed and that the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. number of sites versus occasions).
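For readers unfamiliar with the machinery, a minimal single-season, single-species occupancy likelihood (a far simpler cousin of the covariate models fitted above) can be written and maximized in a few lines; the detection histories below are fabricated.

    # Minimal sketch of a single-season occupancy model: occupancy psi and
    # detection p estimated jointly, so non-detection is not treated as absence.
    import numpy as np
    from scipy.optimize import minimize

    # Rows = sites, columns = survey occasions (1 = detected). Fake data.
    Y = np.array([[1,0,1],[0,0,0],[1,1,0],[0,0,0],[0,1,0],[0,0,0]])

    def neg_log_lik(params):
        psi, p = 1/(1+np.exp(-params))        # logit -> probability
        det = Y.sum(axis=1)
        k = Y.shape[1]
        # likelihood of each detection history if the site is occupied
        occ = psi * p**det * (1-p)**(k-det)
        # all-zero histories may also come from an unoccupied site
        lik = np.where(det > 0, occ, occ + (1-psi))
        return -np.log(lik).sum()

    fit = minimize(neg_log_lik, x0=[0.0, 0.0])
    psi_hat, p_hat = 1/(1+np.exp(-fit.x))
    print(psi_hat, p_hat)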
Inferences about landbird abundance from count data: recent advances and future directions
Nichols, J.D.; Thomas, L.; Conn, P.B.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
We summarize results of a November 2006 workshop dealing with recent research on the estimation of landbird abundance from count data. Our conceptual framework includes a decomposition of the probability of detecting a bird potentially exposed to sampling efforts into four separate probabilities. Primary inference methods are described and include distance sampling, multiple observers, time of detection, and repeated counts. The detection parameters estimated by these different approaches differ, leading to different interpretations of the resulting estimates of density and abundance. Simultaneous use of combinations of these different inference approaches can not only lead to increased precision but also provide the ability to decompose components of the detection process. Recent efforts to test the efficacy of these different approaches using natural systems and a new bird radio test system provide sobering conclusions about the ability of observers to detect and localize birds in auditory surveys. Recent research is reported on efforts to deal with such potential sources of error as bird misclassification, measurement error, and density gradients. Methods for inference about spatial and temporal variation in avian abundance are outlined. Discussion topics include opinions about the need to estimate detection probability when drawing inference about avian abundance, methodological recommendations based on the current state of knowledge, and suggestions for future research.
Tsang, Ming-Kiu; Ye, WeiWei; Wang, Guojing; Li, Jingming; Yang, Mo; Hao, Jianhua
2016-01-26
Ebola outbreaks are currently of great concern, and therefore, the development of effective diagnostic methods is urgently needed. The key for lethal virus detection is high sensitivity, since early-stage detection of the virus may increase the probability of survival. Here, we propose a luminescence scheme of assay consisting of BaGdF5:Yb/Er upconversion nanoparticles (UCNPs) conjugated with an oligonucleotide probe and gold nanoparticles (AuNPs) linked with target Ebola virus oligonucleotide. As a proof of concept, a homogeneous assay was fabricated and tested, yielding a detection limit at the picomolar level. The luminescence resonance energy transfer is ascribed to the spectral overlap of the upconversion luminescence and the absorption characteristics of the AuNPs. Moreover, we anchored the UCNPs and AuNPs on a nanoporous alumina (NAAO) membrane to form a heterogeneous assay. Importantly, the detection limit was greatly improved, reaching a remarkable value at the femtomolar level. The enhancement is attributed to the increased light-matter interaction throughout the nanopore walls of the NAAO membrane. The specificity test suggested that the nanoprobes were specific to Ebola virus oligonucleotides. The strategy combining UCNPs, AuNPs, and the NAAO membrane provides new insight into low-cost, rapid, and ultrasensitive detection of different diseases. Furthermore, we explored the feasibility of clinical application by using inactivated Ebola virus samples. The detection results showed great potential of our heterogeneous design for practical application.
NASA Astrophysics Data System (ADS)
Wang, Fei
2013-09-01
Geiger-mode detectors have single photon sensitivity and picosecond timing resolution, which makes them good candidates for low-light-level ranging applications, especially in flash three-dimensional imaging applications where the received laser power is extremely limited. Another advantage of Geiger-mode APDs is their capability of large output current, which can drive CMOS timing circuits directly, meaning that larger format focal plane arrays can be easily fabricated using mature CMOS technology. However, Geiger-mode detector based FPAs can only measure the range information of a scene, not the reflectivity. Reflectivity is a major characteristic which can help target classification and identification. By the Poisson statistics of photon arrival, detection probability is tightly connected to the incident number of photons. Employing this relation, a signal intensity estimation method based on probability inversion is proposed. Instead of measuring intensity directly, several detections are conducted; the detection probability is then obtained, and the intensity is estimated from it. The relation between the estimator's accuracy, the measuring range, and the number of detections is discussed based on statistical theory. Finally, a Monte Carlo simulation is conducted to verify the correctness of this theory. Using 100 detections, a signal intensity of 4.6 photons per detection can be measured using this method. With slight modification of the measuring strategy, intensity information can be obtained using current Geiger-mode detector based FPAs, which can enrich the information acquired and broaden the application field of the current technology.
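The probability-inversion idea follows directly from Poisson statistics: assuming unit detection efficiency, a pixel fires with probability P = 1 - exp(-N), so the mean photon number N can be recovered from the observed firing fraction. A toy calculation reproducing the 4.6-photon figure quoted above:

    # Sketch of probability inversion for a Geiger-mode pixel, assuming unit
    # detection efficiency and Poisson photon arrivals.
    import math

    n_trials = 100
    n_fires  = 99            # pixel fired in 99 of 100 gates (illustrative)

    p_hat = n_fires / n_trials
    N_hat = -math.log(1.0 - p_hat)     # estimated mean photons per detection
    print(f"estimated intensity: {N_hat:.2f} photons/gate")   # ~4.6 here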
Sensing Methods for Detecting Analog Television Signals
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi
This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals, including NTSC, PAL and SECAM. We propose a correlation based method (CBM) with a single reference signal for sensing any analog TV signal. We also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in the Singapore TV white space (WS) test trial conducted by the Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results for the CBM method are presented, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels are considered. It is shown that the theoretical results closely match those from simulations. Sensing performance of the hardware prototype is also presented in a fading environment by using a fading simulator. We present the performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time, etc., and we also present a comparative study of the various techniques.
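As a companion to the proposed CBM, a baseline energy detector (the class of method the authors improve upon) can be sketched as follows; the sample count, noise power, and false alarm target are arbitrary choices, not values from the paper.

    # Sketch of a basic energy detector: compare received energy in N samples
    # against a threshold set for a target false alarm probability under
    # noise-only (chi-square) statistics.
    import numpy as np
    from scipy.stats import chi2

    N, sigma2, pfa = 1024, 1.0, 0.01
    threshold = sigma2 * chi2.ppf(1 - pfa, df=N)      # noise-only quantile

    rng = np.random.default_rng(1)
    x = rng.normal(0, np.sqrt(sigma2), N)             # noise-only test input
    energy = np.sum(x**2)
    print("signal present" if energy > threshold else "only noise")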
Gastrin-releasing peptide in human nasal mucosa.
Baraniuk, J N; Lundgren, J D; Goff, J; Peden, D; Merida, M; Shelhamer, J; Kaliner, M
1990-04-01
Gastrin-releasing peptide (GRP), the 27 amino acid mammalian form of bombesin, was studied in human inferior turbinate nasal mucosa. The GRP content of the mucosa measured by radioimmunoassay was 0.60 +/- 0.25 pmol/g tissue (n = 9 patients; mean +/- SEM). GRP-immunoreactive nerves detected by the immunogold method of indirect immunohistochemistry were found predominantly in small muscular arteries, arterioles, venous sinusoids, and between submucosal gland acini. 125I-GRP binding sites determined by autoradiography were exclusively and specifically localized to nasal epithelium and submucosal glands. There was no binding to vessels. The effects of GRP on submucosal gland product release were studied in short-term explant culture. GRP (10 microM) significantly stimulated the release of the serous cell-specific product lactoferrin, and [3H]glucosamine-labeled glycoconjugates which are products of epithelial goblet cells and submucosal gland cells. These observations indicate that GRP released from nerve fibers probably acts on glandular GRP receptors to induce glycoconjugate release from submucosal glands and epithelium and lactoferrin release from serous cells, but that GRP would probably not affect vascular permeability.
NASA Technical Reports Server (NTRS)
Hodge, Andrew J.; Walker, James L., II
2008-01-01
A probability of detection study was performed for the detection of impact damage using flash heating infrared thermography on a full scale honeycomb composite structure. The honeycomb structure was an intertank structure from a previous NASA technology demonstration program. The intertank was fabricated from IM7/8552 carbon fiber/epoxy facesheets and aluminum honeycomb core. The intertank was impacted in multiple locations with a range of impact energies utilizing a spherical indenter. In a single-blind study, the intertank was inspected with thermography before and after impact damage was incurred. Following thermographic inspection, several impact sites were sectioned from the intertank and cross-sectioned for microscopic comparison of NDE detection with actual damage incurred. The study concluded that thermographic inspection was a good method of detecting delamination damage incurred by impact. The impact energy at which the 90/95 confidence level on the probability of detection was reached was close to the impact energy at which delaminations were first observed through cross-sectional analysis.
Haller, Julian; Wilkens, Volker; Shaw, Adam
2018-02-01
A method to determine acoustic cavitation probabilities in tissue-mimicking materials (TMMs) is described that uses a high-intensity focused ultrasound (HIFU) transducer for both inducing and detecting the acoustic cavitation events. The method was evaluated by studying acoustic cavitation probabilities in agar-based TMMs with and without scatterers and for different sonication modes like continuous wave, single pulses (microseconds to milliseconds) and repeated burst signals. Acoustic cavitation thresholds (defined here as the peak rarefactional in situ pressure at which the acoustic cavitation probability reaches 50%) at a frequency of 1.06 MHz were observed between 1.1 MPa (for 1 s of continuous wave sonication) and 4.6 MPa (for 1 s of a repeated burst signal with 25-cycle burst length and 10-ms burst period) in a 3% (by weight) agar phantom without scatterers. The method and its evaluation are described, and general terminology useful for standardizing the description of insonation conditions and comparing results is provided. In the accompanying second part, the presented method is used to systematically study the acoustic cavitation thresholds in the same material for a range of sonication modes.
Detection of muramic acid in a carbohydrate fraction of human spleen.
Hoijer, M A; Melief, M J; van Helden-Meeuwsen, C G; Eulderink, F; Hazenberg, M P
1995-01-01
In previous studies, we showed that peptidoglycan polysaccharides from anaerobic bacteria normally present in the human gut induced severe chronic joint inflammation in rats. Our hypothesis is that peptidoglycan from the gut flora is involved in the perpetuation of idiopathic inflammation. However, in the literature, the presence of peptidoglycan or subunits like muramyl peptides in blood or tissues is still a matter of debate. We were able to stain red pulp macrophages in all six available human spleens by immunohistochemical techniques using a monoclonal antibody against gut flora-derived antigens. Therefore, these human spleens were extracted, and after removal of most of the protein, the carbohydrate fraction was investigated for the presence of muramic acid, an amino sugar characteristic of peptidoglycan. Using three different methods for detection of muramic acid, we found per spleen a mean of 3.3 mumol of muramic acid with high-pressure liquid chromatography, 1.9 mumol with a colorimetric method for detection of lactate, and 0.8 mumol with an enzymatic method for detection of D-lactate (D-lactate is a specific group of the muramic acid molecule). It is concluded that peptidoglycan is present in human spleen not as the small muramyl peptides previously searched for by other investigators but as larger macromolecules probably stored in spleen macrophages. PMID:7729869
NASA Astrophysics Data System (ADS)
Zhang, Jun; Cain, Elizabeth Hope; Saha, Ashirbani; Zhu, Zhe; Mazurowski, Maciej A.
2018-02-01
Breast mass detection in mammography and digital breast tomosynthesis (DBT) is an essential step in computerized breast cancer analysis. Deep learning-based methods incorporate feature extraction and model learning into a unified framework and have achieved impressive performance in various medical applications (e.g., disease diagnosis, tumor detection, and landmark detection). However, these methods require large-scale accurately annotated data. Unfortunately, it is challenging to get precise annotations of breast masses. To address this issue, we propose a fully convolutional network (FCN) based heatmap regression method for breast mass detection, using only weakly annotated mass regions in mammography images. Specifically, we first generate heat maps of masses based on human-annotated rough regions for breast masses. We then develop an FCN model for end-to-end heatmap regression with an F-score loss function, where the mammography images are regarded as the input and heatmaps for breast masses are used as the output. Finally, the probability map of mass locations can be estimated with the trained model. Experimental results on a mammography dataset with 439 subjects demonstrate the effectiveness of our method. Furthermore, we evaluate whether we can use mammography data to improve detection models for DBT, since mammography shares similar structure with tomosynthesis. We propose a transfer learning strategy by fine-tuning the learned FCN model from mammography images. We test this approach on a small tomosynthesis dataset with only 40 subjects, and we show an improvement in the detection performance as compared to training the model from scratch.
A conditional probability approach using monitoring data to develop geographic-specific water quality criteria for protection of aquatic life is presented. Typical methods to develop criteria using existing monitoring data are limited by two issues: (1) how to extrapolate to an...
Detection of image structures using the Fisher information and the Rao metric.
Maybank, Stephen J
2004-12-01
In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
Scherer, Luciene Cardoso; Sperhacke, Rosa Dea; Jarczewski, Carla; Cafrune, Patrícia I; Minghelli, Simone; Ribeiro, Marta Osório; Mello, Fernanda CQ; Ruffino-Netto, Antonio; Rossetti, Maria LR; Kritski, Afrânio L
2007-01-01
Background Smear-negative pulmonary tuberculosis (SNPTB) accounts for 30% of Pulmonary Tuberculosis (PTB) cases reported annually in developing nations. Polymerase chain reaction (PCR) may provide an alternative for the rapid detection of Mycobacterium tuberculosis (MTB); however, little data are available regarding the clinical utility of PCR in SNPTB in a setting with a high burden of TB/HIV co-infection. Methods To evaluate the performance of the PCR dot-blot in parallel with pretest probability (Clinical Suspicion) in patients suspected of having SNPTB, a prospective study of 213 individuals with clinical and radiological suspicion of SNPTB was carried out from May 2003 to May 2004 in a TB/HIV reference hospital. Respiratory specialists classified the pretest probability of active disease as high, intermediate, or low. Expectorated sputum was examined by direct microscopy (Ziehl-Neelsen staining), culture (Lowenstein-Jensen) and PCR dot-blot. The gold standard was based on culture positivity combined with the clinical definition of PTB. Results In smear-negative and HIV subjects, active PTB was diagnosed in 28.4% (43/151) and 42.2% (19/45), respectively. In the high, intermediate and low pretest probability categories, active PTB was diagnosed in 67.4% (31/46), 24% (6/25), and 7.5% (6/80), respectively. PCR had a sensitivity of 65% (CI 95%: 50%–78%) and a specificity of 83% (CI 95%: 75%–89%). There was no difference in the sensitivity of PCR in relation to HIV status. PCR sensitivity and specificity among non-previously TB treated subjects and those treated in the past were, respectively: 69%, 43%, 85% and 80%. The high pretest probability, when used as a diagnostic test, had a sensitivity of 72% (CI 95%: 57%–84%) and a specificity of 86% (CI 95%: 78%–92%). Using the PCR dot-blot in parallel with high pretest probability as a diagnostic test, sensitivity, specificity, and positive and negative predictive values were 90%, 71%, 75%, and 88%, respectively. Among non-previously TB treated and HIV subjects, this approach had sensitivity, specificity, and positive and negative predictive values of 91%, 79%, 81%, 90%, and 90%, 65%, 72%, 88%, respectively. Conclusion The PCR dot-blot associated with a high clinical suspicion may provide an important contribution to the diagnosis of SNPTB, mainly in not previously treated patients attended at a TB/HIV reference hospital. PMID:18096069
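The parallel combination reported above behaves as standard test-combination algebra predicts, assuming the two tests err independently: sensitivities combine as 1-(1-s1)(1-s2) and specificities multiply. Plugging in the paper's individual figures roughly reproduces the combined ones.

    # Sketch of parallel test combination (positive if either test is positive),
    # assuming independent errors.
    def parallel(sens1, spec1, sens2, spec2):
        sens = 1 - (1 - sens1) * (1 - sens2)
        spec = spec1 * spec2
        return sens, spec

    # PCR dot-blot (65%, 83%) combined with high pretest probability (72%, 86%)
    print(parallel(0.65, 0.83, 0.72, 0.86))   # ~ (0.90, 0.71), as reported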
[Analysis and experimental verification of sensitivity and SNR of laser warning receiver].
Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue
2009-01-01
In order to counter the increasingly serious threat from hostile lasers in modern warfare, research on laser warning technology and systems is urgently needed; the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on statistical signal detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. First, the probability distributions of the laser signal and the receiver noise were analyzed. Second, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing a detection probability factor and a false alarm rate factor; the mathematical expressions for sensitivity and SNR were then deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed, and the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in the performance analysis of LWRs.
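The Neyman-Pearson step can be illustrated with the textbook known-signal-in-Gaussian-noise case (a simplification of the coherent-detection analysis above); the amplitude, noise level, and false alarm rate below are assumed values.

    # Textbook Neyman-Pearson sketch: fix the false alarm rate, derive the
    # threshold, then read off the detection probability.
    from scipy.stats import norm

    sigma, A, pfa = 1.0, 3.0, 1e-4            # noise std, signal amplitude (assumed)
    gamma = sigma * norm.isf(pfa)             # threshold for the chosen Pfa
    pd = norm.sf((gamma - A) / sigma)         # detection probability at amplitude A
    print(f"threshold={gamma:.2f}, Pd={pd:.3f}")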
A new method to search for high-redshift clusters using photometric redshifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castignani, G.; Celotti, A.; Chiaberge, M.
2014-09-10
We describe a new method (Poisson probability method, PPM) to search for high-redshift galaxy clusters and groups by using photometric redshift information and galaxy number counts. The method relies on Poisson statistics and is primarily introduced to search for megaparsec-scale environments around a specific beacon. The PPM is tailored both to the properties of the FR I radio galaxies in the Chiaberge et al. sample, which are selected within the COSMOS survey, and to the specific data set used. We test the efficiency of our method of searching for cluster candidates against simulations. Two different approaches are adopted. (1) We use two z ∼ 1 X-ray detected cluster candidates found in the COSMOS survey and we shift them to higher redshift up to z = 2. We find that the PPM detects the cluster candidates up to z = 1.5, and it correctly estimates both the redshift and size of the two clusters. (2) We simulate spherically symmetric clusters of different size and richness, and we locate them at different redshifts (i.e., z = 1.0, 1.5, and 2.0) in the COSMOS field. We find that the PPM detects the simulated clusters within the considered redshift range with a statistical 1σ redshift accuracy of ∼0.05. The PPM is an efficient alternative method for high-redshift cluster searches that may also be applied to both present and future wide field surveys such as SDSS Stripe 82, LSST, and Euclid. Accurate photometric redshifts and a survey depth similar to or better than that of COSMOS (e.g., I < 25) are required.
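The Poisson core of a counts-in-cells search like the PPM can be sketched in two lines: the chance of an accidental overdensity is the Poisson survival function at the observed count. The numbers below are invented, not from the COSMOS analysis.

    # Sketch of the Poisson-statistics step behind an overdensity search.
    from scipy.stats import poisson

    lam = 6.0          # expected background galaxies in the aperture (assumed)
    n_obs = 15         # galaxies observed in the photo-z slice

    p_value = poisson.sf(n_obs - 1, lam)     # P(N >= n_obs | lambda)
    print(f"P(>= {n_obs} | lam={lam}) = {p_value:.2e}")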
NASA Astrophysics Data System (ADS)
Vio, R.; Andreani, P.
2016-05-01
The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely-used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated and detections claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional case. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
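The quantitative point, that an unknown source position inflates the false-detection probability, can be illustrated with the crude independent-trials bound 1 - (1 - p)^M (the paper's peak-PDF treatment is more careful); M here is an assumed number of resolution elements searched.

    # Sketch: naive single-position significance vs the chance of a spurious
    # excursion somewhere among M searched positions.
    from scipy.stats import norm

    snr_threshold = 4.0
    p_single = norm.sf(snr_threshold)        # p if the position were known
    M = 100_000                              # independent pixels searched (assumed)
    p_anywhere = 1 - (1 - p_single) ** M
    print(p_single, p_anywhere)              # ~3e-5 vs ~0.96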
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as port scanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time up to arbitrarily pre-specified accuracy.
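A classic member of the family analyzed above is Wald's sequential probability ratio test (SPRT); this sketch uses Wald's approximate thresholds rather than the paper's exact computation, and the failure rates are invented.

    # Sketch of an SPRT deciding between benign (p0) and scanning (p1)
    # connection-failure rates, with Wald's classic threshold approximations.
    import math

    p0, p1 = 0.2, 0.8                  # failure rate: benign vs scanner (assumed)
    alpha, beta = 0.01, 0.01           # false alarm / missed detection targets
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))

    def sprt(observations):
        llr = 0.0
        for i, failed in enumerate(observations, 1):
            llr += math.log((p1 if failed else 1 - p1) / (p0 if failed else 1 - p0))
            if llr >= upper: return "scanner", i
            if llr <= lower: return "benign", i
        return "undecided", len(observations)

    print(sprt([1, 1, 0, 1, 1, 1]))    # flags a scanner after a few events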
Limits of detection and decision. Part 4
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
Probability density functions (PDFs) have been derived for a number of commonly used limit of detection definitions, including several variants of the Relative Standard Deviation of the Background-Background Equivalent Concentration (RSDB-BEC) method, for a simple linear chemical measurement system (CMS) having homoscedastic, Gaussian measurement noise and using ordinary least squares (OLS) processing. All of these detection limit definitions serve as both decision and detection limits, thereby implicitly resulting in 50% rates of Type 2 errors. It has been demonstrated that these are closely related to Currie decision limits, if the coverage factor, k, is properly defined, and that all of the PDFs are scaled reciprocals of noncentral t variates. All of the detection limits have well-defined upper and lower limits, thereby resulting in finite moments and confidence limits, and the problem of estimating the noncentrality parameter has been addressed. As in Parts 1-3, extensive Monte Carlo simulations were performed and all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Specific recommendations for harmonization of detection limit methodology have also been made.
Eads, David A.; Biggins, Dean E.; Doherty, Paul F.; Gage, Kenneth L.; Huyvaert, Kathryn P.; Long, Dustin H.; Antolin, Michael F.
2013-01-01
Ectoparasites are often difficult to detect in the field. We developed a method that can be used with occupancy models to estimate the prevalence of ectoparasites on hosts, and to investigate factors that influence rates of ectoparasite occupancy while accounting for imperfect detection. We describe the approach using a study of fleas (Siphonaptera) on black-tailed prairie dogs (Cynomys ludovicianus). During each primary occasion (monthly trapping events), we combed a prairie dog three consecutive times to detect fleas (15 s/combing). We used robust design occupancy modeling to evaluate hypotheses for factors that might correlate with the occurrence of fleas on prairie dogs, and factors that might influence the rate at which prairie dogs are colonized by fleas. Our combing method was highly effective; dislodged fleas fell into a tub of water and could not escape, and there was an estimated 99.3% probability of detecting a flea on an occupied host when using three combings. While overall detection was high, the probability of detection was always <1.00 during each primary combing occasion, highlighting the importance of considering imperfect detection. The combing method (removal of fleas) caused a decline in detection during primary occasions, and we accounted for that decline to avoid inflated estimates of occupancy. Regarding prairie dogs, flea occupancy was heightened in old/natural colonies of prairie dogs, and on hosts that were in poor condition. Occupancy was initially low in plots with high densities of prairie dogs, but, as the study progressed, the rate of flea colonization increased in plots with high densities of prairie dogs in particular. Our methodology can be used to improve studies of ectoparasites, especially when the probability of detection is low. Moreover, the method can be modified to investigate the co-occurrence of ectoparasite species, and community level factors such as species richness and interspecific interactions.
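The 99.3% figure above is what a removal design predicts when per-pass detection is high: with a per-combing detection probability p, cumulative detection after k combings is 1 - (1 - p)^k. A per-pass p of about 0.81 reproduces the reported value for three combings.

    # Cumulative detection across repeated combings; p is a back-calculated
    # per-pass value consistent with the abstract, not a reported estimate.
    p = 0.81
    for k in (1, 2, 3):
        print(k, round(1 - (1 - p) ** k, 3))   # 0.81, 0.964, 0.993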
Simulation-driven machine learning: Bearing fault classification
NASA Astrophysics Data System (ADS)
Sobie, Cameron; Freitas, Carina; Nicolai, Mike
2018-01-01
Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter free method for race fault detection.
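Dynamic time warping, the parameter-free method proposed above, can be implemented in a few lines and dropped into a nearest-neighbour classifier; the two "templates" below are synthetic stand-ins for real vibration signatures.

    # Minimal dynamic time warping (DTW) distance plus a 1-nearest-neighbour
    # classification, as a sketch of DTW-based fault classification.
    import numpy as np

    def dtw_distance(a, b):
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Classify a test signature by its nearest labelled template.
    templates = {"healthy": np.sin(np.linspace(0, 6, 50)),
                 "race_fault": np.sign(np.sin(np.linspace(0, 6, 50)))}
    test = np.sign(np.sin(np.linspace(0, 6.3, 55)))      # slightly warped copy
    label = min(templates, key=lambda k: dtw_distance(templates[k], test))
    print(label)                                         # "race_fault"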
Comparative dynamics of avian communities across edges and interiors of North American ecoregions
Karanth, K.K.; Nichols, J.D.; Sauer, J.R.; Hines, J.E.
2006-01-01
Aim Based on a priori hypotheses, we developed predictions about how avian communities might differ at the edges vs. interiors of ecoregions. Specifically, we predicted lower species richness and greater local turnover and extinction probabilities for regional edges. We tested these predictions using North American Breeding Bird Survey (BBS) data across nine ecoregions over a 20-year time period. Location Data from 2238 BBS routes within nine ecoregions of the United States were used. Methods The estimation methods used accounted for species detection probabilities < 1. Parameter estimates for species richness, local turnover and extinction probabilities were obtained using the program COMDYN. We examined the difference in community-level parameters estimated from within exterior edges (the habitat interface between ecoregions), interior edges (the habitat interface between two bird conservation regions within the same ecoregion) and interior (habitat excluding interfaces). General linear models were constructed to examine sources of variation in community parameters for five ecoregions (containing all three habitat types) and all nine ecoregions (containing two habitat types). Results Analyses provided evidence that interior habitats and interior edges had on average higher bird species richness than exterior edges, providing some evidence of reduced species richness near habitat edges. Lower average extinction probabilities and turnover rates in interior habitats (five-region analysis) provided some support for our predictions about these quantities. However, analyses directed at all three response variables, i.e. species richness, local turnover, and local extinction probability, provided evidence of an interaction between habitat and region, indicating that the relationships did not hold in all regions. Main conclusions The overall predictions of lower species richness, higher local turnover and extinction probabilities in regional edge habitats, as opposed to interior habitats, were generally supported. However, these predicted tendencies did not hold in all regions.
Various Effects of Embedded Intrapulse Communications on Pulsed Radar
2017-06-01
… specific type of interference that may be encountered by radar; however, this introductory information should suffice to illustrate to the reader why … chapter we seek to not merely understand the overall statistical performance of the radar with embedded intrapulse communications but rather to evaluate … Theory. Probability of detection, discussed in Chapter 4, assesses the statistical probability of a radar accurately identifying a target given a …
Chabirand, Aude; Loiseau, Marianne; Renaudin, Isabelle; Poliakoff, Françoise
2017-01-01
A working group established in the framework of the EUPHRESCO European collaborative project aimed to compare and validate diagnostic protocols for the detection of "Flavescence dorée" (FD) phytoplasma in grapevines. Seven molecular protocols were compared in an interlaboratory test performance study in which each laboratory analyzed the same panel of samples consisting of DNA extracts prepared by the organizing laboratory. The tested molecular methods consisted of universal and group-specific real-time and end-point nested PCR tests. Different statistical approaches were applied to this collaborative study. First, the standard statistical approach consisted of analyzing samples known to be positive and samples known to be negative and reporting the proportions of false-positive and false-negative results to calculate diagnostic specificity and sensitivity, respectively. This approach was supplemented by the calculation of repeatability and reproducibility for qualitative methods based on the notions of accordance and concordance. Other new approaches were also implemented, based on the probability of detection model on the one hand, and on Bayes' theorem on the other. These various statistical approaches are complementary and give consistent results. Their combination, and in particular the introduction of the new statistical approaches, gives overall information on the performance and limitations of the different methods, and is particularly useful for selecting the most appropriate detection scheme with regard to the prevalence of the pathogen. Three real-time PCR protocols (methods M4, M5 and M6, developed respectively by Hren (2007), Pelletier (2009) and with patented oligonucleotides) achieved the highest levels of performance for FD phytoplasma detection. This paper also addresses the issue of indeterminate results and the identification of outlier results. The statistical tools presented in this paper and their combination can be applied to many other studies concerning plant pathogens and other disciplines that use qualitative detection methods.
An Efficient Algorithm for the Detection of Infrequent Rapid Bursts in Time Series Data
NASA Astrophysics Data System (ADS)
Giles, A. B.
1997-01-01
Searching through data for infrequent rapid bursts is a common requirement in many areas of scientific research. In this paper, we present a powerful and flexible analysis method that, in a single pass through the data, searches for statistically significant bursts on a set of specified short timescales. The input data are binned, if necessary, and then quantified in terms of probabilities rather than rates or ratios. Using probability as the measure makes the method relatively independent of count rate. The method has been made computationally efficient by the use of lookup tables and cyclic buffers, and it is therefore particularly well suited to real-time applications. The technique has been developed specifically for use in an X-ray astronomy application to search for millisecond bursts from black hole candidates such as Cyg X-1. We briefly review the few observations of these types of features reported in the literature, as well as the variety of ways in which their statistical reliability has been challenged. The developed technique, termed the burst expectation search (BES) method, is illustrated using some data simulations and archived data obtained during ground testing of the proportional counter array (PCA) experiment detectors on the Rossi X-Ray Timing Explorer (RXTE). A potential application for a real-time BES method on board RXTE is also examined.
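The paper's exact statistic and lookup tables are not reproduced in the abstract. The following sketch assumes Poisson counting statistics and flags windows whose summed counts have a small tail probability, which captures the flavor of quantifying bursts in terms of probabilities rather than rates; it is not the BES implementation.

```python
import numpy as np
from scipy.stats import poisson

def burst_probabilities(counts, window):
    """For each run of `window` consecutive bins, return the Poisson tail
    probability P(X >= observed sum) under the mean rate, so that very
    small values flag candidate bursts (a sketch, not the BES tables)."""
    counts = np.asarray(counts)
    lam = counts.mean() * window                       # expected counts/window
    sums = np.convolve(counts, np.ones(window, dtype=int), mode="valid")
    return poisson.sf(sums - 1, lam)                   # P(X >= k) = sf(k - 1)

rng = np.random.default_rng(0)
data = rng.poisson(5.0, size=1000)
data[500:503] += 30                                    # injected burst
p = burst_probabilities(data, window=3)
print(p.argmin(), p.min())                             # flags bins near 500
```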
Double-observer approach to estimating egg mass abundance of vernal pool breeding amphibians
Grant, E.H.C.; Jung, R.E.; Nichols, J.D.; Hines, J.E.
2005-01-01
Interest in seasonally flooded pools, and the status of associated amphibian populations, has initiated programs in the northeastern United States to document and monitor these habitats. Counting egg masses is an effective way to determine the population size of pool-breeding amphibians, such as wood frogs (Rana sylvatica) and spotted salamanders (Ambystoma maculatum). However, bias is associated with counts if egg masses are missed. Counts unadjusted for the proportion missed (i.e., without adjustment for detection probability) could lead to false assessments of population trends. We used a dependent double-observer method in 2002-2003 to estimate numbers of wood frog and spotted salamander egg masses at seasonal forest pools in 13 National Wildlife Refuges, 1 National Park, 1 National Seashore, and 1 State Park in the northeastern United States. We calculated detection probabilities for egg masses and examined whether detection probabilities varied by species, observers, pools, and in relation to pool characteristics (pool area, pool maximum depth, within-pool vegetation). For the 2 years, model selection indicated that no consistent set of variables explained the variation in data sets from individual Refuges and Parks. Because our results indicated that egg mass detection probabilities vary spatially and temporally, we conclude that it is essential to use estimation procedures, such as double-observer methods with egg mass surveys, to determine population sizes and trends of these species.
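The study used a dependent double-observer protocol, whose estimator is more involved. For intuition only, here is the simpler independent two-observer (Lincoln–Petersen) calculation with hypothetical egg-mass counts; it shows how overlap between observers identifies detection probability.

```python
def independent_double_observer(x1, x2, x12):
    """Independent double-observer estimates (Lincoln-Petersen form):
    x1 = masses seen by observer A, x2 = seen by observer B,
    x12 = seen by both. Returns (pA, pB, N_hat). The dependent protocol
    used in the study requires a different estimator."""
    pA = x12 / x2                 # A's detection rate on B's detections
    pB = x12 / x1                 # B's detection rate on A's detections
    N_hat = x1 * x2 / x12         # estimated total egg masses
    return pA, pB, N_hat

# Hypothetical counts from one pool:
print(independent_double_observer(x1=40, x2=35, x12=28))
# -> (0.8, 0.7, 50.0): 50 masses estimated although neither observer saw all
```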
Wasser, Samuel K; Hayward, Lisa S; Hartman, Jennifer; Booth, Rebecca K; Broms, Kristin; Berg, Jodi; Seely, Elizabeth; Lewis, Lyle; Smith, Heath
2012-01-01
State and federal actions to conserve northern spotted owl (Strix occidentalis caurina) habitat are largely initiated by establishing habitat occupancy. Northern spotted owl occupancy is typically assessed by eliciting their response to simulated conspecific vocalizations. However, proximity of barred owls (Strix varia), a significant threat to northern spotted owls, can suppress northern spotted owl responsiveness to vocalization surveys and hence their probability of detection. We developed a survey method to simultaneously detect both species that does not require vocalization. Detection dogs (Canis familiaris) located owl pellets accumulated under roost sites, within search areas selected using habitat association maps. We compared the success of detection dog surveys to vocalization surveys slightly modified from the U.S. Fish and Wildlife Service's Draft 2010 Survey Protocol. Seventeen 2 km × 2 km polygons were each surveyed multiple times in an area where northern spotted owls were known to nest prior to 1997 and barred owl density was thought to be low. Mitochondrial DNA was used to confirm species from pellets detected by dogs. Spotted owl and barred owl detection probabilities were significantly higher for dog than vocalization surveys. For spotted owls, this difference increased with the number of site visits. Cumulative detection probabilities of northern spotted owls were 29% after session 1, 62% after session 2, and 87% after session 3 for dog surveys, compared to 25% after session 1, increasing to 59% by session 6 for vocalization surveys. Mean detection probability for barred owls was 20.1% for dog surveys and 7.3% for vocal surveys. Results suggest that detection dog surveys can complement vocalization surveys by providing a reliable method for establishing occupancy of both northern spotted and barred owls without requiring owl vocalization. This helps meet the objectives of Recovery Actions 24 and 25 of the Revised Recovery Plan for the Northern Spotted Owl.
Probability of detection of defects in coatings with electronic shearography
NASA Technical Reports Server (NTRS)
Russell, S. S.; Lansing, M. D.; Horton, C. M.; Gnacek, W. J.
1995-01-01
The goal of this research was to utilize statistical methods to evaluate the probability of detection (POD) of defects in coatings using electronic shearography. The coating system utilized in the POD studies was the paint system currently used on the external casings of the NASA space transportation system reusable solid rocket motor boosters. The population of samples was to be large enough to determine the minimum defect size for 90-percent POD with 95-percent confidence on these coatings. Also, the best methods to excite coatings on aerospace components to induce deformations for measurement by electronic shearography were to be determined.
Cetacean population density estimation from single fixed sensors using passive acoustics.
Küsel, Elizabeth T; Mellinger, David K; Thomas, Len; Marques, Tiago A; Moretti, David; Ward, Jessica
2011-06-01
Passive acoustic methods are increasingly being used to estimate animal population density. Most density estimation methods are based on estimates of the probability of detecting calls as functions of distance. Typically these are obtained using receivers capable of localizing calls or from studies of tagged animals. However, both approaches are expensive to implement. The approach described here uses a Monte Carlo model to estimate the probability of detecting calls from single sensors. The passive sonar equation is used to predict signal-to-noise ratios (SNRs) of received clicks, which are then combined with a detector characterization that predicts probability of detection as a function of SNR. Input distributions for source level, beam pattern, and whale depth are obtained from the literature. Acoustic propagation modeling is used to estimate transmission loss. Other inputs for density estimation are call rate, obtained from the literature, and false positive rate, obtained from manual analysis of a data sample. The method is applied to estimate density of Blainville's beaked whales over a 6-day period around a single hydrophone located in the Tongue of the Ocean, Bahamas. Results are consistent with those from previous analyses, which use additional tag data.
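As a hedged sketch of the single-sensor idea (the passive sonar equation feeding a detector curve, averaged by Monte Carlo), the snippet below uses spherical spreading and made-up input distributions; the paper's actual inputs come from propagation modeling and a measured detector characterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_detect_at_range(r_m, n=100_000):
    """Monte Carlo sketch: SNR = SL - TL - NL, with spherical-spreading
    transmission loss TL = 20*log10(r) and a logistic detector curve.
    All distributions and constants are illustrative, not the paper's."""
    SL = rng.normal(200.0, 5.0, n)               # source level, dB re 1 uPa
    NL = rng.normal(50.0, 3.0, n)                # noise level, dB
    TL = 20.0 * np.log10(r_m)                    # transmission loss, dB
    snr = SL - TL - NL
    p_det = 1.0 / (1.0 + np.exp(-(snr - 10.0)))  # detector characterization
    return p_det.mean()

for r in (1_000, 5_000, 20_000):                 # ranges in meters
    print(r, round(p_detect_at_range(r), 3))     # detection prob falls with range
```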
An Optimal Method for Detecting Internal and External Intrusion in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Aliahmadipour, Laya; Javidi, Mohammad M.
A Mobile Ad hoc Network (MANET) is formed by a set of mobile hosts that communicate among themselves through radio waves. The hosts establish their own infrastructure and cooperate to forward data in a multi-hop fashion without central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method for preventing internal intrusion and detecting external intrusion in mobile ad hoc networks using game theory. One solution for reducing the resource consumption of external-intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this moderate mode. Moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDS to detect and thwart intrusions; we call this robust mode. The leader should be neither a malicious nor a selfish node and must detect external intrusion in its cluster at minimum cost. Our proposed method has three steps: the first step builds trust relationships between nodes and estimates a trust value for each node to prevent internal intrusion; in the second step we propose an optimal method for leader election using these trust values; and in the third step we find the threshold value for notifying the victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. By combining game theory, trust values, and an honest leader, our method can effectively improve network security and performance while reducing resource consumption.
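The third step (a threshold probability above which a victim launches its own IDS) has a simple decision-theoretic reading, sketched below with hypothetical costs; the authors' Bayesian game involves richer payoffs than this expected-cost comparison.

```python
def should_launch_ids(p_attack, loss_if_undetected, ids_cost):
    """Sketch of the mode-switching threshold: a victim node moves to
    robust mode (runs its own IDS) once the expected loss from an
    undetected attack exceeds the cost of running the IDS. The implied
    threshold p* = ids_cost / loss_if_undetected is an assumption-laden
    simplification, not the paper's payoff matrix."""
    return p_attack * loss_if_undetected > ids_cost

# With loss 10 and IDS cost 2, the node switches modes at p_attack > 0.2:
print(should_launch_ids(0.15, 10.0, 2.0))  # False -> stay in moderate mode
print(should_launch_ids(0.25, 10.0, 2.0))  # True  -> launch IDS (robust mode)
```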
Converse, Sarah J.; Royle, J. Andrew; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.
2012-01-01
An ecological monitoring program should be viewed as a component of a larger framework designed to advance science and/or management, rather than as a stand-alone activity. Monitoring targets (the ecological variables of interest; e.g. abundance or occurrence of a species) should be set based on the needs of that framework (Nichols and Williams 2006; e.g. Chapters 2–4). Once such monitoring targets are set, the subsequent step in monitoring design involves consideration of the field and analytical methods that will be used to measure monitoring targets with adequate accuracy and precision. Long-term monitoring programs will involve replication of measurements over time, and possibly over space; that is, one location or each of multiple locations will be monitored multiple times, producing a collection of site visits (replicates). Clearly this replication is important for addressing spatial and temporal variability in the ecological resources of interest (Chapters 7–10), but it is worth considering how this replication can further be exploited to increase the effectiveness of monitoring. In particular, defensible monitoring of the majority of animal, and to a lesser degree plant, populations and communities will generally require investigators to account for imperfect detection (Chapters 4, 18). Raw indices of population state variables, such as abundance or occupancy (sensu MacKenzie et al. 2002), are rarely defensible when detection probabilities are < 1, because in those cases detection may vary over time and space in unpredictable ways. Myriad authors have discussed the risks inherent in making inference from monitoring data while failing to correct for differences in detection, resulting in indices that have an unknown relationship to the parameters of interest (e.g. Nichols 1992, Anderson 2001, MacKenzie et al. 2002, Williams et al. 2002, Anderson 2003, White 2005, Kéry and Schmidt 2008). While others have argued that indices may be preferable in some cases due to the challenges associated with estimating detection probabilities (e.g. McKelvey and Pearson 2001, Johnson 2008), we do not attempt to resolve this debate here. Rather, we are more apt to agree with MacKenzie and Kendall (2002) that the burden of proof ought to be on the assertion that detection probabilities are constant. Furthermore, given the wide variety of field methods available for estimating detection probabilities and the inability for an investigator to know, a priori, if detection probabilities will be constant over time and space, we believe that development of monitoring programs ought to include field and analytical methods to account for the imperfect detection of organisms.
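For readers unfamiliar with the correction this chapter advocates, here is a minimal maximum-likelihood fit of the basic MacKenzie et al. (2002) occupancy model with constant occupancy ψ and detection p; real monitoring analyses add covariates and typically use purpose-built software.

```python
import numpy as np
from scipy.optimize import minimize

def occupancy_mle(detections, K):
    """Fit the basic occupancy model: S sites, K surveys each,
    `detections` = number of surveys with a detection at each site.
    Site likelihood: psi * p^d * (1-p)^(K-d) + (1-psi) * 1{d = 0}."""
    d = np.asarray(detections)

    def nll(theta):
        psi, p = 1 / (1 + np.exp(-theta))          # logit -> (0, 1)
        lik = psi * p**d * (1 - p)**(K - d) + (1 - psi) * (d == 0)
        return -np.sum(np.log(lik))

    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return 1 / (1 + np.exp(-res.x))                # (psi_hat, p_hat)

# 20 sites, 4 visits: 12 sites with no detections, 8 with 1-3 detections.
dets = [0]*12 + [1, 1, 2, 2, 2, 3, 3, 3]
print(occupancy_mle(dets, K=4))  # psi_hat exceeds the naive 8/20 = 0.40
```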
Sources of variation in detection of wading birds from aerial surveys in the Florida Everglades
Conroy, M.J.; Peterson, J.T.; Bass, O.L.; Fonnesbeck, C.J.; Howell, J.E.; Moore, C.T.; Runge, J.P.
2008-01-01
We conducted dual-observer trials to estimate detection probabilities (the probability that a group that is present and available is detected) for fixed-wing aerial surveys of wading birds in the Everglades system, Florida. Detection probability ranged from <0.2 to ~0.75 and varied according to species, group size, observer, and the observer's position in the aircraft (front or rear seat). Aerial-survey simulations indicated that incomplete detection can have a substantial effect on assessment of population trends, particularly over relatively short intervals (≤3 years) and small annual changes in population size (≤3%). We conclude that detection bias is an important consideration for interpreting observations from aerial surveys of wading birds, potentially limiting the use of these data for comparative purposes and trend analyses. We recommend that workers conducting aerial surveys for wading birds endeavor to reduce observer and other controllable sources of detection bias and account for uncontrollable sources through incorporation of dual-observer or other calibration methods as part of survey design (e.g., using double sampling).
Cost and detection rate of glaucoma screening with imaging devices in a primary care center
Anton, Alfonso; Fallon, Monica; Cots, Francesc; Sebastian, María A; Morilla-Grasa, Antonio; Mojal, Sergi; Castells, Xavier
2017-01-01
Purpose To analyze the cost and detection rate of a screening program for detecting glaucoma with imaging devices. Materials and methods In this cross-sectional study, a glaucoma screening program was applied in a population-based sample randomly selected from a population of 23,527. Screening targeted the population at risk of glaucoma. Examinations included optic disk tomography (Heidelberg retina tomograph [HRT]), nerve fiber analysis, and tonometry. Subjects who met at least 2 of 3 endpoints (HRT outside normal limits, nerve fiber index ≥30, or tonometry ≥21 mmHg) were referred for glaucoma consultation. The currently established (“conventional”) detection method was evaluated by recording data from primary care and ophthalmic consultations in the same population. The direct costs of screening and conventional detection were calculated by adding the unit costs generated during the diagnostic process. The detection rate of new glaucoma cases was assessed. Results The screening program evaluated 414 subjects; 32 cases were referred for glaucoma consultation, 7 had glaucoma, and 10 had probable glaucoma. The conventional detection method assessed 677 glaucoma suspects in the population, of whom 29 were diagnosed with glaucoma or probable glaucoma. Glaucoma screening and the conventional detection method had detection rates of 4.1% and 3.1%, respectively, and costs per case detected of €1,410 and €1,435, respectively. The cost of screening 1 million inhabitants would be €5.1 million and would allow the detection of 4,715 new cases. Conclusion The proposed screening method, directed at the population at risk, achieves a detection rate of 4.1% at a cost of €1,410 per case detected.
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, is planned for a 1-in-200-year flood, the Shinano River for 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. Methods for setting design floods vary among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges and the probability is 1 in 1,250 years (in the freshwater section). By contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "Why do the methods vary among countries?" and "Why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set by the historical maximum method. The historical maximum method was used until World War 2, but it was replaced by the probability method after the war because of its limitations under the specific socio-economic situation: (1) budgets were limited by the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) struck Japan, breaking the records of historical maximum discharge in the main rivers, and these flood disasters made the flood prevention projects difficult to complete. Japanese hydrologists therefore imported hydrological probability statistics from the West to take the socio-economic situation into account in design floods, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to suit the specific socio-economic and natural situation during the confusion after the war.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, whereas the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
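The report's derivation is not reproduced in the abstract, but the standard formula for the undetected-error probability of a code used purely for detection on a binary symmetric channel makes the computation concrete. The weight enumerator below is for a simple (8,7) single parity-check code, chosen only for illustration.

```python
from math import comb

def p_undetected(weight_enum, p):
    """Probability of undetected error for a linear code used only for
    error detection on a BSC with crossover probability p:
    P_ud = sum_{i >= 1} A_i * p^i * (1-p)^(n-i),
    where A_i counts codewords of Hamming weight i (an undetected error
    occurs exactly when the error pattern is a nonzero codeword)."""
    n = len(weight_enum) - 1
    return sum(A * p**i * (1 - p)**(n - i)
               for i, A in enumerate(weight_enum) if i > 0)

# (8,7) single parity-check code: codewords are the even-weight vectors,
# so A_i = C(8, i) for even i and 0 for odd i.
n = 8
A = [comb(n, i) if i % 2 == 0 else 0 for i in range(n + 1)]
print(p_undetected(A, p=1e-2))   # ~2.6e-3 at a 1% bit error rate
```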
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human–robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera separates people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimate of people's locations. After segmentation, an adaptive contour people model based on people's distance to the robot is used to calculate a probability of detecting people. Finally, people are detected by merging the probabilities from the contour people model and evaluating evidence over time with a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view, with a mobile robot in real world scenarios.
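The temporal-fusion step has a compact recursive Bayesian form. A minimal sketch with invented per-frame likelihoods follows; in the paper, the likelihood terms would come from the contour model and stereo segmentation.

```python
def update_posterior(prior, likelihood_present, likelihood_absent):
    """One recursive Bayes step for fusing per-frame evidence that a
    person is present: posterior = L_present * prior / normalizer."""
    num = likelihood_present * prior
    den = num + likelihood_absent * (1.0 - prior)
    return num / den

belief = 0.5                        # uninformative prior
# Hypothetical (P(frame | person), P(frame | no person)) pairs:
for frame_lik in [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]:
    belief = update_posterior(belief, *frame_lik)
    print(round(belief, 3))         # 0.727, 0.823, 0.954: evidence accumulates
```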
Cornforth, David J; Tarvainen, Mika P; Jelinek, Herbert F
2014-01-01
Cardiac autonomic neuropathy (CAN) is a disease that involves nerve damage leading to an abnormal control of heart rate. An open question is to what extent this condition is detectable from heart rate variability (HRV), which provides information only on successive intervals between heart beats, yet is non-invasive and easy to obtain from a three-lead ECG recording. A variety of measures may be extracted from HRV, including time domain, frequency domain, and more complex non-linear measures. Among the latter, Renyi entropy has been proposed as a suitable measure that can be used to discriminate CAN from controls. However, all entropy methods require estimation of probabilities, and there are a number of ways in which this estimation can be made. In this work, we calculate Renyi entropy using several variations of the histogram method and a density method based on sequences of RR intervals. In all, we calculate Renyi entropy using nine methods and compare their effectiveness in separating the different classes of participants. We found that the histogram method using single RR intervals yields an entropy measure that is either incapable of discriminating CAN from controls, or that it provides little information that could not be gained from the SD of the RR intervals. In contrast, probabilities calculated using a density method based on sequences of RR intervals yield an entropy measure that provides good separation between groups of participants and provides information not available from the SD. The main contribution of this work is that different approaches to calculating probability may affect the success of detecting disease. Our results bring new clarity to the methods used to calculate the Renyi entropy in general, and in particular, to the successful detection of CAN.
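For concreteness, a minimal histogram-based Renyi entropy estimator is sketched below on synthetic RR intervals. The paper compares several estimation routes, including sequence-based density methods not shown here, and the bin count below is an arbitrary choice.

```python
import numpy as np

def renyi_entropy(x, alpha, bins=50):
    """Renyi entropy H_a = log(sum_i p_i^a) / (1 - a), with probabilities
    p_i estimated from a simple histogram of the data; a -> 1 recovers
    Shannon entropy."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    if alpha == 1.0:                          # Shannon limit
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p**alpha)) / (1.0 - alpha)

rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * rng.standard_normal(5000)   # synthetic RR intervals (s)
for a in (0.5, 1.0, 2.0):
    print(a, round(renyi_entropy(rr, a), 3))
```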
Statistical methods for identifying and bounding a UXO target area or minefield
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinstry, Craig A.; Pulsipher, Brent A.; Gilbert, Richard O.
2003-09-18
The sampling unit for minefield or UXO area characterization is typically represented by a geographical block or transect swath that lends itself to characterization by geophysical instrumentation such as mobile sensor arrays. New spatially based statistical survey methods and tools, more appropriate for these unique sampling units, have been developed and implemented at PNNL (Visual Sample Plan software, ver. 2.0) with support from the US Department of Defense. Though originally developed to support UXO detection and removal efforts, these tools may also be used in current form or adapted to support demining efforts and aid in the development of new sensors and detection technologies by explicitly incorporating both sampling and detection error in performance assessments. These tools may be used to (1) determine transect designs for detecting and bounding target areas of critical size, shape, and density of detectable items of interest with a specified confidence probability, (2) evaluate the probability that target areas of a specified size, shape and density have not been missed by a systematic or meandering transect survey, and (3) support post-removal verification by calculating the number of transects required to achieve a specified confidence probability that no UXO or mines have been missed.
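A back-of-the-envelope version of item (1): for parallel transects with spacing s, sensor swath width w, and a circular target area of diameter d at a uniformly random offset, the chance that a single pass intersects the target is min(1, (d + w)/s). The sketch below is this geometric approximation only, not VSP's algorithm, and ignores instrument detection error.

```python
def transect_detection_prob(target_diam, swath_width, spacing):
    """P(a circular target area is traversed by at least one transect)
    for parallel transects at uniform random offset: the target is hit
    when its center lies within (target_diam + swath_width)/2 of a line."""
    return min(1.0, (target_diam + swath_width) / spacing)

def spacing_for_confidence(target_diam, swath_width, conf):
    """Largest transect spacing achieving traversal probability >= conf."""
    return (target_diam + swath_width) / conf

print(transect_detection_prob(20.0, 2.0, 40.0))   # 0.55
print(spacing_for_confidence(20.0, 2.0, 0.95))    # ~23.2 m spacing needed
```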
Bayesian approach for peak detection in two-dimensional chromatography.
Vivó-Truyols, Gabriel
2012-03-20
A new method for peak detection in two-dimensional chromatography is presented. In a first step, the method starts with a conventional one-dimensional peak detection algorithm to detect modulated peaks. In a second step, a sophisticated algorithm is constructed to decide which of the individual one-dimensional peaks originated from the same compound and should therefore be arranged in a two-dimensional peak. The merging algorithm is based on Bayesian inference. The user sets prior information about certain parameters (e.g., second-dimension retention time variability, first-dimension band broadening, chromatographic noise). On the basis of these priors, the algorithm calculates the probability of myriad peak arrangements (i.e., ways of merging one-dimensional peaks), finding which of them holds the highest value. Uncertainty in each parameter can be accounted for by adapting its probability distribution function accordingly, which in turn may change the final decision on the most probable peak arrangement. It has been demonstrated that the Bayesian approach presented in this paper follows the chromatographers' intuition. The algorithm has been applied and tested with LC × LC and GC × GC data and takes around 1 min to process chromatograms with several thousands of peaks.
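A toy version of the merging decision: compare the hypothesis that two adjacent one-dimensional peaks share a compound (small second-dimension retention-time shift) against a diffuse alternative. All priors and scales below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import norm

def merge_posterior(dt2, prior_same=0.5, sigma_t2=0.05, spread=2.0):
    """Posterior probability that two peaks in adjacent modulations came
    from the same compound. Under H_same the second-dimension shift
    dt2 ~ N(0, sigma_t2); under H_diff it is uniform over +/- spread."""
    like_same = norm.pdf(dt2, loc=0.0, scale=sigma_t2)
    like_diff = 1.0 / (2.0 * spread)              # diffuse alternative
    num = like_same * prior_same
    return num / (num + like_diff * (1.0 - prior_same))

for dt2 in (0.01, 0.10, 0.50):                    # shift in seconds
    print(dt2, round(merge_posterior(dt2), 3))    # 0.969, 0.812, ~0.0
```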
Jin, Bo; Krishnan, Balu; Adler, Sophie; Wagstyl, Konrad; Hu, Wenhan; Jones, Stephen; Najm, Imad; Alexopoulos, Andreas; Zhang, Kai; Zhang, Jianguo; Ding, Meiping; Wang, Shuang; Wang, Zhong Irene
2018-05-01
Focal cortical dysplasia (FCD) is a major pathology in patients undergoing surgical resection to treat pharmacoresistant epilepsy. Magnetic resonance imaging (MRI) postprocessing methods may provide essential help for detection of FCD. In this study, we utilized surface-based MRI morphometry and machine learning for automated lesion detection in a mixed cohort of patients with FCD type II from 3 different epilepsy centers. Sixty-one patients with pharmacoresistant epilepsy and histologically proven FCD type II were included in the study. The patients had been evaluated at 3 different epilepsy centers using 3 different MRI scanners. T1-volumetric sequence was used for postprocessing. A normal database was constructed with 120 healthy controls. We also included 35 healthy test controls and 15 disease test controls with histologically confirmed hippocampal sclerosis to assess specificity. Features were calculated and incorporated into a nonlinear neural network classifier, which was trained to identify lesional cluster. We optimized the threshold of the output probability map from the classifier by performing receiver operating characteristic (ROC) analyses. Success of detection was defined by overlap between the final cluster and the manual labeling. Performance was evaluated using k-fold cross-validation. The threshold of 0.9 showed optimal sensitivity of 73.7% and specificity of 90.0%. The area under the curve for the ROC analysis was 0.75, which suggests a discriminative classifier. Sensitivity and specificity were not significantly different for patients from different centers, suggesting robustness of performance. Correct detection rate was significantly lower in patients with initially normal MRI than patients with unequivocally positive MRI. Subgroup analysis showed the size of the training group and normal control database impacted classifier performance. Automated surface-based MRI morphometry equipped with machine learning showed robust performance across cohorts from different centers and scanners. The proposed method may be a valuable tool to improve FCD detection in presurgical evaluation for patients with pharmacoresistant epilepsy.
Correcting length-frequency distributions for imperfect detection
Breton, André R.; Hawkins, John A.; Winkelman, Dana L.
2013-01-01
Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
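The partitioning idea reduces, in its simplest form, to weighting each captured fish by the inverse of its length-specific capture probability (a Horvitz–Thompson correction). The sketch below uses a hypothetical logistic capture curve in place of the study's Huggins-model estimates.

```python
import numpy as np

def corrected_length_frequency(lengths, capture_prob, bins):
    """Horvitz-Thompson-style correction of a length-frequency
    distribution: each captured fish counts as 1/p(L), where p(L) is its
    length-specific capture probability."""
    lengths = np.asarray(lengths, dtype=float)
    w = 1.0 / capture_prob(lengths)                   # per-fish weights
    raw, edges = np.histogram(lengths, bins=bins)
    adj, _ = np.histogram(lengths, bins=bins, weights=w)
    return edges, raw, adj

# Hypothetical logistic capture curve: small fish are poorly detected.
p_of_length = lambda L: 1.0 / (1.0 + np.exp(-(L - 120.0) / 25.0))
rng = np.random.default_rng(3)
catch = rng.uniform(100, 300, 400)                    # lengths in mm
edges, raw, adj = corrected_length_frequency(catch, p_of_length, bins=8)
print(raw)            # observed counts, biased low for small fish
print(np.round(adj))  # adjusted counts inflate poorly detected classes
```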
Multimedia data from two probability-based exposure studies were investigated in terms of how censoring of non-detects affected estimation of population parameters and associations. Appropriate methods for handling censored below-detection-limit (BDL) values in this context were...
A Bayesian Method for Managing Uncertainties Relating to Distributed Multistatic Sensor Search
2006-07-01
… before-detect process. There will also be an increased probability of high signal-to-noise ratio (SNR) detections associated with specular and near … and high target strength and high Doppler opportunities give rise to the expectation of an increased number of detections that could feed a track …
Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making
NASA Technical Reports Server (NTRS)
Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.
2006-01-01
Since numerical weather prediction models are unable to accurately forecast the severity and the location of the storm cells several hours into the future when compared with observation data, there has been a growing interest in probabilistic description of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well known method, known as the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples selected from distributions. The statistical properties of the distributions of the output samples are then used for defining the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method. The main advantage of the Monte Carlo method is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convection precipitation exceeding a specified threshold. The main limitation of this method is that the results are dependent on the chosen dimensions of the box. The examples presented Ref. 5 show that this process is equivalent to low-pass filtering of the forecast data with a finite support spatial filter. References 6 and 7 describe the technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is same as that described in Ref. 5. Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data are given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. 
The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
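The coverage-based probability of Refs. 5 through 7, which the text equates with low-pass filtering, can be written in a few lines: threshold the forecast field and average the resulting binary field over a moving box. The grid, threshold, and box size below are arbitrary stand-ins.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_probability(field, threshold, box):
    """Fraction of grid points within a box x box neighborhood whose
    forecast values exceed `threshold` -- equivalently, a moving-average
    (low-pass) filter applied to the binary exceedance field."""
    exceed = (field >= threshold).astype(float)
    return uniform_filter(exceed, size=box, mode="constant")

rng = np.random.default_rng(4)
forecast = rng.gamma(2.0, 2.0, size=(50, 50))   # synthetic intensity field
prob = neighborhood_probability(forecast, threshold=8.0, box=5)
print(prob.shape, float(prob.max()))            # per-cell coverage in [0, 1]
```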
Patel, Vinod B.; Singh, Ravesh; Connolly, Cathy; Kasprowicz, Victoria; Zumla, Allimudin; Ndungu, Thumbi; Dheda, Keertan
2010-01-01
Background/Objective The diagnosis of tuberculous meningitis (TBM) in resource poor TB endemic environments is challenging. The accuracy of current tools for the rapid diagnosis of TBM is suboptimal. We sought to develop a clinical-prediction rule for the diagnosis of TBM in a high HIV prevalence setting, and to compare performance outcomes to conventional diagnostic modalities and a novel lipoarabinomannan (LAM) antigen detection test (Clearview-TB®) using cerebrospinal fluid (CSF). Methods Patients with suspected TBM were classified as definite-TBM (CSF culture or PCR positive), probable-TBM and non-TBM. Results Of the 150 patients, 84% were HIV-infected (median [IQR] CD4 count = 132 [54; 241] cells/µl). There were 39, 55 and 54 patients in the definite, probable and non-TBM groups, respectively. The LAM sensitivity and specificity (95%CI) were 31% (17;48) and 94% (85;99), respectively (cut-point ≥0.18). By contrast, smear-microscopy was 100% specific but detected none of the definite-TBM cases. LAM positivity was associated with HIV co-infection and low CD4 T cell count (CD4<200 vs. >200 cells/µl; p = 0.03). The sensitivity and specificity in those with a CD4<100 cells/µl were 50% (27;73) and 95% (74;99), respectively. A clinical-prediction rule ≥6 derived from multivariate analysis had a sensitivity and specificity (95%CI) of 47% (31;64) and 98% (90;100), respectively. When LAM was combined with the clinical-prediction rule, the sensitivity increased significantly (p<0.001) to 63% (47;68) and specificity remained high at 93% (82;98). Conclusions Despite its modest sensitivity, the LAM ELISA is an accurate rapid rule-in test for TBM that has incremental value over smear-microscopy. The rule-in value of LAM can be further increased by combination with a clinical-prediction rule, thus enhancing the rapid diagnosis of TBM in HIV-infected persons with advanced immunosuppression.
García-Cortés, M; Lucena, M I; Pachkoria, K; Borraz, Y; Hidalgo, R; Andrade, R J
2008-05-01
Causality assessment in hepatotoxicity is challenging. The current standard liver-specific Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale is complex and difficult to implement in daily practice. The Naranjo Adverse Drug Reactions Probability Scale is a simple and widely used nonspecific scale, which has not been specifically evaluated in drug-induced liver injury. Our aim was to evaluate the accuracy and reproducibility of the Naranjo Adverse Drug Reactions Probability Scale in the diagnosis of hepatotoxicity, by comparison with the standard liver-specific Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale. Two hundred and twenty-five cases of suspected hepatotoxicity submitted to a national registry were evaluated by two independent observers and assessed for between-observer and between-scale differences using percentages of agreement and the weighted kappa (kappa(w)) test. A total of 249 ratings were generated. Between-observer agreement was 45% with a kappa(w) value of 0.17 for the Naranjo Adverse Drug Reactions Probability Scale, while there was higher agreement when using the Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale (72%, kappa(w): 0.71). Concordance between the two scales was 24% (kappa(w): 0.15). The Naranjo Adverse Drug Reactions Probability Scale had low sensitivity (54%) and poor negative predictive value (29%) and showed a limited capability to distinguish between adjacent categories of probability. The Naranjo scale lacks validity and reproducibility in the attribution of causality in hepatotoxicity.
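The between-observer agreement statistic used here, weighted kappa, is available off the shelf. A minimal example with hypothetical ratings on a five-level causality scale:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical causality ratings by two observers on a 5-level scale
# (0 = excluded ... 4 = definite). Linear weights penalize disagreements
# in proportion to their distance, matching the kappa_w idea in the study.
obs1 = [4, 3, 3, 2, 4, 1, 0, 3, 2, 2, 4, 1]
obs2 = [4, 2, 3, 3, 3, 1, 1, 3, 1, 2, 4, 0]
print(round(cohen_kappa_score(obs1, obs2, weights="linear"), 2))
```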
Predictors of specific phobia in children with Williams syndrome.
Pitts, C H; Klein-Tasman, B P; Osborne, J W; Mervis, C B
2016-10-01
Specific phobia (SP) is the most common anxiety disorder among children with Williams syndrome (WS); prevalence rates derived from Diagnostic and Statistical Manual of Mental Disorders-based diagnostic interviews range from 37% to 56%. We evaluated the effects of gender, age, intellectual abilities and/or behaviour regulation difficulties on the likelihood that a child with WS would be diagnosed with SP. A total of 194 6-17 year-olds with WS were evaluated. To best characterise the relations between the predictors and the probability of a SP diagnosis, we explored not only possible linear effects but also curvilinear effects. No gender differences were detected. As age increased, the likelihood of receiving a SP diagnosis decreased. As IQ increased, the probability of receiving a SP diagnosis also decreased. Behaviour regulation difficulties were the strongest predictor of a positive diagnosis. A quadratic relation was detected: the probability of receiving a SP diagnosis gradually rose as behaviour regulation difficulties increased. However, once behaviour regulation difficulties approached the clinical range, the probability of receiving a SP diagnosis asymptoted at a high level. Children with behaviour regulation difficulties in or just below the clinical range were at the greatest risk of developing SP. These findings highlight the value of large samples and the importance of evaluating for nonlinear effects to provide accurate model specification when characterising relations among a dependent variable and possible predictors.
Nano-biosensing approaches on tuberculosis: Defy of aptamers.
Golichenari, Behrouz; Nosrati, Rahim; Farokhi-Fard, Aref; Abnous, Khalil; Vaziri, Farzam; Behravan, Javad
2018-06-11
Tuberculosis is a major global health problem caused by bacteria of the Mycobacterium tuberculosis (Mtb) complex. According to WHO reports, 53 million TB patients died from 2000 to 2016. Therefore, early diagnosis of the disease is of great importance for global health care programs. The limitations of traditional methods have encouraged the development of innovative methods for rapid, reliable, and cost-effective diagnosis of tuberculosis. In recent years, aptamer-based biosensors, or aptasensors, have drawn great attention for sensitive and accessible detection of tuberculosis. Aptamers are short single-stranded molecules of DNA or RNA that fold into a unique form and bind to targets. Once combined with nanomaterials, nano-scale aptasensors provide powerful analytical platforms for diagnosing tuberculosis. Various groups have designed and studied aptamers specific for whole cells of M. tuberculosis, mycobacterial proteins, and IFN-γ for early diagnosis of TB. Advantages such as high specificity and strong affinity, the potential for binding a larger variety of targets, increased stability, lower costs of synthesis and storage, and a lower probability of contamination make aptasensors pivotal alternatives for future TB diagnostics. In recent years, the concept of the SOMAmer has opened new horizons in high-precision detection of tuberculosis biomarkers. This review article describes the research progress of aptamer-based and SOMAmer-based biosensors and nanobiosensors for the detection of tuberculosis.
Standard plane localization in ultrasound by radial component model and selective search.
Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu
2014-11-01
Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy, so it is very challenging for novices and even time consuming for experienced examiners. We propose a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first propose a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduce a novel selective search method which exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detect the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combines the results from the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6% and significantly outperformed both the full abdomen and the separate anatomy detection methods without geometric constraints. These results suggest that our system shows great promise for application to clinical practice.
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
Donato, Mary M.
2000-01-01
As ground water continues to provide an ever-growing proportion of Idaho's drinking water, concerns about the quality of that resource are increasing. Pesticides (most commonly, atrazine/desethyl-atrazine, hereafter referred to as atrazine) and nitrite plus nitrate as nitrogen (hereafter referred to as nitrate) have been detected in many aquifers in the State. To provide a sound hydrogeologic basis for atrazine and nitrate management in southern Idaho—the largest region of land and water use in the State—the U.S. Geological Survey produced maps showing the probability of detecting these contaminants in ground water in the upper Snake River Basin (published in a 1998 report) and the western Snake River Plain (published in this report). The atrazine probability map for the western Snake River Plain was constructed by overlaying ground-water quality data with hydrogeologic and anthropogenic data in a geographic information system (GIS). A data set was produced in which each well had corresponding information on land use, geology, precipitation, soil characteristics, regional depth to ground water, well depth, water level, and atrazine use. These data were analyzed by logistic regression using a statistical software package. Several preliminary multivariate models were developed and those that best predicted the detection of atrazine were selected. The multivariate models then were entered into a GIS and the probability maps were produced. Land use, precipitation, soil hydrologic group, and well depth were significantly correlated with atrazine detections in the western Snake River Plain. These variables also were important in the 1998 probability study of the upper Snake River Basin. The effectiveness of the probability models for atrazine might be improved if more detailed data were available for atrazine application. A preliminary atrazine probability map for the entire Snake River Plain in Idaho, based on a data set representing that region, also was produced. In areas where this map overlaps the 1998 map of the upper Snake River Basin, the two maps show broadly similar probabilities of detecting atrazine. Logistic regression also was used to develop a preliminary statistical model that predicts the probability of detecting elevated nitrate in the western Snake River Plain. A nitrate probability map was produced from this model. Results showed that elevated nitrate concentrations were correlated with land use, soil organic content, well depth, and water level. Detailed information on nitrate input, specifically fertilizer application, might have improved the effectiveness of this model.
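The mapping step described above amounts to fitting a logistic model of detection on well and landscape covariates and then predicting probabilities over GIS cells. A hedged sketch of that workflow follows; the covariates, coefficients, and grid cells are all invented stand-ins, not the USGS data or model.

```python
# Sketch: logistic regression of contaminant detection on two hypothetical
# covariates, then prediction over "grid cells" to build a probability map.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
precip = rng.uniform(200, 600, n)        # annual precipitation (mm), invented
well_depth = rng.uniform(10, 150, n)     # well depth (m), invented
logit = -2.0 + 0.004 * precip - 0.01 * well_depth
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated detections

X = sm.add_constant(np.column_stack([precip, well_depth]))
fit = sm.Logit(y, X).fit(disp=False)

# "Map": predicted detection probability for each grid cell's covariates.
grid = sm.add_constant(np.array([[300.0, 20.0], [500.0, 120.0]]))
print(fit.predict(grid))
```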
A Track Initiation Method for the Underwater Target Tracking Environment
NASA Astrophysics Data System (ADS)
Li, Dong-dong; Lin, Yang; Zhang, Yao
2018-04-01
A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively suppress the disturbances caused by clutter; (b) they may yield a high false alarm probability and a low track detection probability; and (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originating from the target, in order to increase the track detection probability; and, to decrease the false alarm probability, track pruning and track merging, built on an evaluation mechanism, are proposed to reduce the number of false tracks. The TSEPM method can deal with track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. Moreover, the method is fully automatic and does not require any manual input for initializing or tuning any parameter. Simulation results indicate that the new method significantly improves track initiation performance in the harsh underwater target tracking environment.
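The final step named above, least-squares estimation of a confirmed track's initial state, can be illustrated with a minimal sketch. Assuming a constant-velocity target (an assumption of this example, not necessarily of TSEPM), position and velocity follow from a linear fit to the first few measurements; the times and measurements below are invented.

```python
# Sketch: least-squares initial state (position, velocity) for a new track,
# assuming constant velocity. Measurement values are illustrative only.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])                # measurement times (s)
z = np.array([[10.2, 5.1], [12.1, 6.9],
              [13.8, 9.2], [16.1, 11.0]])         # (x, y) measurements

A = np.column_stack([np.ones_like(t), t])         # model: z ≈ p0 + v * t
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
p0, v = coef[0], coef[1]
print("initial position:", p0, "velocity:", v)
```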
Jensen, Dan B; Hogeveen, Henk; De Vries, Albert
2016-09-01
Rapid detection of dairy cow mastitis is important so corrective action can be taken as soon as possible. Automatically collected sensor data used to monitor the performance and the health state of the cow could be useful for rapid detection of mastitis while reducing the labor needs for monitoring. The state of the art in combining sensor data to predict clinical mastitis still does not perform well enough to be applied in practice. Our objective was to combine a multivariate dynamic linear model (DLM) with a naïve Bayesian classifier (NBC) in a novel method using sensor and nonsensor data to detect clinical cases of mastitis. We also evaluated reductions in the number of sensors for detecting mastitis. With the DLM, we co-modeled 7 sources of sensor data (milk yield, fat, protein, lactose, conductivity, blood, body weight) collected at each milking for individual cows to produce one-step-ahead forecasts for each sensor. The observations were subsequently categorized according to the errors of the forecasted values and the estimated forecast variance. The categorized sensor data were combined with other data pertaining to the cow (week in milk, parity, mastitis history, somatic cell count category, and season) using Bayes' theorem, which produced a combined probability of the cow having clinical mastitis. If this probability was above a set threshold, the cow was classified as mastitis positive. To illustrate the performance of our method, we used sensor data from 1,003,207 milkings from the University of Florida Dairy Unit collected from 2008 to 2014. Of these, 2,907 milkings were associated with recorded cases of clinical mastitis. Using the DLM/NBC method, we reached an area under the receiver operating characteristic curve of 0.89, with a specificity of 0.81 when the sensitivity was set at 0.80. Specificities with omissions of sensor data ranged from 0.58 to 0.81. These results are comparable to other studies, but differences in data quality, definitions of clinical mastitis, and time windows make comparisons across studies difficult. We found the DLM/NBC method to be a flexible method for combining multiple sensor and nonsensor data sources to predict clinical mastitis and accommodate missing observations. Further research is needed before practical implementation is possible. In particular, the performance of our method needs to be improved in the first 2 wk of lactation. The DLM method produces forecasts that are based on continuously estimated multivariate normal distributions, which makes forecasts and forecast errors easy to interpret, and new sensors can easily be added. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
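The naive Bayes step described above combines categorized evidence from several cues into one posterior probability of mastitis. The sketch below shows that combination under a conditional-independence assumption; the prior and all likelihood values are invented, not the study's estimates.

```python
# Sketch: combine per-cue likelihoods into P(mastitis | cues) via Bayes'
# theorem with a naive (conditional independence) assumption. Invented values.
prior = 0.003  # assumed prior probability of clinical mastitis at a milking

# (P(cue category | mastitis), P(cue category | healthy)) for each cue,
# e.g. categorized forecast errors from the DLM, SCC category, etc.
likelihoods = [(0.60, 0.05),   # conductivity forecast-error category
               (0.40, 0.10),   # milk-yield forecast-error category
               (0.50, 0.20)]   # somatic cell count category

num, den = prior, 1.0 - prior
for p_sick, p_healthy in likelihoods:
    num *= p_sick
    den *= p_healthy
posterior = num / (num + den)
print(f"P(mastitis | cues) = {posterior:.3f}")  # alarm if above a threshold
```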
Evidential reasoning research on intrusion detection
NASA Astrophysics Data System (ADS)
Wang, Xianpei; Xu, Hua; Zheng, Sheng; Cheng, Anyu
2003-09-01
This paper addresses two fields: the Dempster-Shafer (D-S) theory of evidence and network intrusion detection. It discusses how to apply this probabilistic reasoning, as an AI technology, to intrusion detection systems (IDS). The paper establishes the application model, describes the new mechanism of reasoning and decision-making, and analyses how to implement the model, based on the detection of synscan activities on the network. The results suggest that, provided reasonable probability values are assigned at the outset, the engine can, following the rules of evidence combination and hierarchical reasoning, compute belief values and finally inform the administrators of the nature of the traced activities: intrusions, normal activities or abnormal activities.
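The evidence-combination rule underlying this approach is Dempster's rule, which fuses mass functions from independent sources after renormalizing away conflicting mass. A small self-contained sketch follows; the frame of discernment and the mass values are invented for illustration.

```python
# Sketch: Dempster's rule of combination for two evidence sources over the
# frame {intrusion, normal}. Mass values are illustrative only.
def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb            # mass assigned to conflict
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

I = frozenset(["intrusion"]); N = frozenset(["normal"]); U = I | N
m_scan = {I: 0.6, U: 0.4}          # hypothetical synscan-detector evidence
m_flow = {I: 0.5, N: 0.2, U: 0.3}  # hypothetical traffic-statistics evidence
print(dempster(m_scan, m_flow))    # belief concentrates on "intrusion"
```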
Quantum Biometrics with Retinal Photon Counting
NASA Astrophysics Data System (ADS)
Loulakis, M.; Blatsios, G.; Vrettou, C. S.; Kominis, I. K.
2017-10-01
It is known that the eye's scotopic photodetectors, rhodopsin molecules, and their associated phototransduction mechanism leading to light perception, are efficient single-photon counters. We here use the photon-counting principles of human rod vision to propose a secure quantum biometric identification based on the quantum-statistical properties of retinal photon detection. The photon path along the human eye until its detection by rod cells is modeled as a filter having a specific transmission coefficient. Precisely determining its value from the photodetection statistics registered by the conscious observer is a quantum parameter estimation problem that leads to a quantum secure identification method. The probabilities for false-positive and false-negative identification of this biometric technique can readily approach 10^-10 and 10^-4, respectively. The security of the biometric method can be further quantified by the physics of quantum measurements. An impostor must be able to perform quantum thermometry and quantum magnetometry with energy resolution better than 10^-9 ℏ, in order to foil the device by noninvasively monitoring the biometric activity of a user.
Selby, Thomas H.; Hart, Kristen M.; Fujisaki, Ikuko; Smith, Brian J.; Pollock, Clayton J; Hillis-Star, Zandy M; Lundgren, Ian; Oli, Madan K.
2016-01-01
Submerged passive acoustic technology allows researchers to investigate spatial and temporal movement patterns of many marine and freshwater species. The technology uses receivers to detect and record acoustic transmissions emitted from tags attached to an individual. Acoustic signal strength naturally attenuates over distance, but numerous environmental variables also affect the probability a tag is detected. Knowledge of receiver range is crucial for designing acoustic arrays and analyzing telemetry data. Here, we present a method for testing a relatively large-scale receiver array in a dynamic Caribbean coastal environment intended for long-term monitoring of multiple species. The U.S. Geological Survey and several academic institutions in collaboration with resource management at Buck Island Reef National Monument (BIRNM), off the coast of St. Croix, recently deployed a 52 passive acoustic receiver array. We targeted 19 array-representative receivers for range-testing by submersing fixed delay interval range-testing tags at various distance intervals in each cardinal direction from a receiver for a minimum of an hour. Using a generalized linear mixed model (GLMM), we estimated the probability of detection across the array and assessed the effect of water depth, habitat, wind, temperature, and time of day on the probability of detection. The predicted probability of detection across the entire array at 100 m distance from a receiver was 58.2% (95% CI: 44.0–73.0%) and dropped to 26.0% (95% CI: 11.4–39.3%) 200 m from a receiver indicating a somewhat constrained effective detection range. Detection probability varied across habitat classes with the greatest effective detection range occurring in homogenous sand substrate and the smallest in high rugosity reef. Predicted probability of detection across BIRNM highlights potential gaps in coverage using the current array as well as limitations of passive acoustic technology within a complex coral reef environment.
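A logistic detection-versus-distance curve of the kind estimated above also yields an effective detection range directly from the fitted coefficients. The sketch below uses invented coefficients chosen only to fall roughly in the neighborhood of the probabilities reported; it is not the study's GLMM.

```python
# Sketch: logistic model of detection probability vs. distance, and the
# distance at which p drops to 0.5. Coefficients are assumed values.
import numpy as np

b0, b1 = 1.5, -0.012   # intercept and slope per metre (invented)

def p_detect(d):
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * d)))

print(p_detect(np.array([100.0, 200.0])))   # p at 100 m and 200 m
print("50% detection range:", -b0 / b1, "m")  # distance where logit = 0
```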
Method and apparatus for detecting a desired behavior in digital image data
Kegelmeyer, Jr., W. Philip
1997-01-01
A method for detecting stellate lesions in digitized mammographic image data includes the steps of prestoring a plurality of reference images, calculating a plurality of features for each of the pixels of the reference images, and creating a binary decision tree from features of randomly sampled pixels from each of the reference images. Once the binary decision tree has been created, a plurality of features, preferably including an ALOE feature (analysis of local oriented edges), are calculated for each of the pixels of the digitized mammographic data. Each of these plurality of features of each pixel are input into the binary decision tree and a probability is determined, for each of the pixels, corresponding to the likelihood of the presence of a stellate lesion, to create a probability image. Finally, the probability image is spatially filtered to enforce local consensus among neighboring pixels and the spatially filtered image is output.
Dehghan, Ashraf; Abumasoudi, Rouhollah Sheikh; Ehsanpour, Soheila
2016-01-01
Background: Infertility and errors in the process of its treatment have a negative impact on infertile couples. The present study aimed to identify and assess common errors in the reception process by applying the failure modes and effects analysis (FMEA) approach. Materials and Methods: In this descriptive cross-sectional study, the admission process of the fertility and infertility center of Isfahan was selected for error evaluation based on the team members' decision. First, the admission process was charted through observations and interviews with employees, holding multiple panels, and using the FMEA worksheet, which has been used in many studies worldwide, including in Iran. Its validity was evaluated through content and face validity, and its reliability through review and confirmation of the obtained information by the FMEA team. Possible errors and their causes were then determined, along with three indicators: severity of effect, probability of occurrence, and probability of detection; corrective actions were proposed. Data analysis was based on the risk priority number (RPN), which is calculated by multiplying the severity of effect, probability of occurrence, and probability of detection. Results: Twenty-five errors with RPN ≥ 125 were detected in the admission process, of which six had high priority in terms of severity and occurrence probability and were identified as high-risk errors. Conclusions: The team-oriented FMEA method can be useful for assessing errors and for reducing their probability of occurrence. PMID:28194208
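The RPN calculation used above is a simple product of three ordinal scores, conventionally each on a 1-10 scale. A minimal worked example follows; the failure modes and scores are invented for illustration only.

```python
# Sketch: FMEA risk priority number, RPN = severity x occurrence x detection
# (each scored 1-10; in FMEA convention a higher detection score means the
# failure is harder to detect). Failure modes and scores are invented.
failure_modes = {
    "wrong patient file retrieved": (7, 5, 4),
    "incomplete insurance data":    (4, 6, 3),
    "appointment entered twice":    (3, 7, 2),
}
for mode, (sev, occ, det) in failure_modes.items():
    rpn = sev * occ * det
    flag = "HIGH PRIORITY" if rpn >= 125 else ""
    print(f"{mode}: RPN = {rpn} {flag}")
```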
Statistical context shapes stimulus-specific adaptation in human auditory cortex
Henry, Molly J.; Fromboluti, Elisa Kim; McAuley, J. Devin
2015-01-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. PMID:25652920
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
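The flare shape assumed by this model, a half-Gaussian rise joined continuously to an exponential decay at the peak, is easy to write down. The sketch below generates such a template; all parameter values are invented and the noise is added only to illustrate a synthetic light curve.

```python
# Sketch: flare template with half-Gaussian rise and exponential decay,
# matching the model description above. Parameter values are invented.
import numpy as np

def flare(t, t0=5.0, amp=1.0, sigma_rise=0.5, tau_decay=2.0):
    """Flare amplitude at times t: half-Gaussian before t0, exponential after."""
    t = np.asarray(t, dtype=float)
    rise = amp * np.exp(-0.5 * ((t - t0) / sigma_rise) ** 2)
    decay = amp * np.exp(-(t - t0) / tau_decay)
    return np.where(t < t0, rise, decay)   # continuous at the peak t0

t = np.linspace(0.0, 15.0, 151)
lightcurve = flare(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
```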
Breininger, David R; Breininger, Robert D; Hall, Carlton R
2017-02-01
Seagrasses are the foundation of many coastal ecosystems and are in global decline because of anthropogenic impacts. For the Indian River Lagoon (Florida, U.S.A.), we developed competing multistate statistical models to quantify how environmental factors (surrounding land use, water depth, and time [year]) influenced the variability of seagrass state dynamics from 2003 to 2014 while accounting for time-specific detection probabilities that quantified our ability to determine seagrass state at particular locations and times. We classified seagrass states (presence or absence) at 764 points with geographic information system maps for years when seagrass maps were available and with aerial photographs when seagrass maps were not available. We used 4 categories (all conservation, mostly conservation, mostly urban, urban) to describe surrounding land use within sections of lagoonal waters, usually demarcated by land features that constricted these waters. The best models predicted that surrounding land use, depth, and year would affect transition and detection probabilities. Sections of the lagoon bordered by urban areas had the least stable seagrass beds and lowest detection probabilities, especially after a catastrophic seagrass die-off linked to an algal bloom. Sections of the lagoon bordered by conservation lands had the most stable seagrass beds, which supports watershed conservation efforts. Our results show that a multistate approach can empirically estimate state-transition probabilities as functions of environmental factors while accounting for state-dependent differences in seagrass detection probabilities as part of the overall statistical inference procedure. © 2016 Society for Conservation Biology.
Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing
2017-03-01
Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The computation time for a CT volume is 125 s on average. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.
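The Dice overlap ratio reported above is a standard segmentation metric: twice the intersection of the predicted and ground-truth masks divided by the sum of their sizes. A minimal sketch on toy 2D masks (the masks are invented; the real evaluation is on 3D volumes):

```python
# Sketch: Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True    # 4 voxels
truth = np.zeros((4, 4), bool); truth[1:4, 1:4] = True  # 9 voxels
print(dice(pred, truth))  # 2*4 / (4+9) ≈ 0.615
```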
2010-01-01
Background: Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods: A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results: The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed, and are related to the observations found. Conclusions: Aggregation of motor unit potential conditional probability distributions estimated using quantitative electromyographic analysis may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be able to be applied to other types of electrodiagnostic data. PMID:20156353
Pattern recognition for passive polarimetric data using nonparametric classifiers
NASA Astrophysics Data System (ADS)
Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.
2005-08-01
Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of any given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
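Both classifiers named above are available off the shelf. The sketch below mimics the setup on made-up two-feature "polarimetric" samples: a KNN classifier, and a PNN-like rule built from Parzen (kernel) density estimates per class combined with priors; everything about the data is invented.

```python
# Sketch: k-NN classification, and a PNN-like maximum a posteriori rule using
# per-class kernel density estimates. Data are synthetic two-feature samples.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KernelDensity

rng = np.random.default_rng(0)
X0 = rng.normal([0.2, 0.3], 0.05, size=(50, 2))   # class 0: background
X1 = rng.normal([0.5, 0.6], 0.05, size=(50, 2))   # class 1: target
X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

kde0 = KernelDensity(bandwidth=0.05).fit(X0)      # class-conditional densities
kde1 = KernelDensity(bandwidth=0.05).fit(X1)

def pnn_predict(x, prior1=0.5):
    # maximum a posteriori: compare prior-weighted class likelihoods
    l0 = np.exp(kde0.score_samples(x)) * (1 - prior1)
    l1 = np.exp(kde1.score_samples(x)) * prior1
    return (l1 > l0).astype(int)

query = np.array([[0.45, 0.55]])
print(knn.predict(query), pnn_predict(query))     # both should say class 1
```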
Analysis of capture-recapture models with individual covariates using data augmentation
Royle, J. Andrew
2009-01-01
I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
Algorithms of Crescent Structure Detection in Human Biological Fluid Facies
NASA Astrophysics Data System (ADS)
Krasheninnikov, V. R.; Malenova, O. E.; Yashina, A. S.
2017-05-01
One of the effective methods of early medical diagnosis is based on the image analysis of human biological fluids. In the process of fluid crystallization, characteristic patterns (markers) appear in the resulting layer (facies). Each marker is a highly probable sign of some pathology, even at an early stage of disease development. When mass health examinations are carried out, a large number of images must be analyzed. The development of algorithms and software for automated image processing is therefore a pressing problem. This paper presents algorithms to detect crescent structures in images of blood serum and cervical mucus facies. Such a marker indicates the symptoms of ischemic disease. The algorithm presented detects this marker with high probability while keeping the probability of false alarm low.
Rizzo, Austin A.; Brown, Donald J.; Welsh, Stuart A.; Thompson, Patricia A.
2017-01-01
Population monitoring is an essential component of endangered species recovery programs. The federally endangered Diamond Darter Crystallaria cincotta is in need of an effective monitoring design to improve our understanding of its distribution and track population trends. Because of their small size, cryptic coloration, and nocturnal behavior, along with limitations associated with current sampling methods, individuals are difficult to detect at known occupied sites. Therefore, research is needed to determine if survey efforts can be improved by increasing the probability of individual detection. The primary objective of this study was to determine if there are seasonal and diel patterns in Diamond Darter detectability during population surveys. In addition to temporal factors, we also assessed five habitat variables that might influence individual detection. We used N-mixture models to estimate site abundances and relationships between covariates and individual detectability, and ranked models using Akaike's information criterion. During 2015, three known occupied sites were sampled 15 times each between May and October. The best supported model included water temperature as a quadratic function influencing individual detectability, with temperatures around 22 C resulting in the highest detection probability. Detection probability when surveying at the optimal temperature was approximately 6% and 7.5% greater than when surveying at 16 C and 29 C, respectively. Time of night and day of year were not strong predictors of Diamond Darter detectability. The results of this study will allow researchers and agencies to maximize detection probability when surveying populations, resulting in greater monitoring efficiency and likely more precise abundance estimates.
Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications
NASA Technical Reports Server (NTRS)
Hughes, William O.; Paez, Thomas L.
2006-01-01
This paper discusses the Bootstrap Method for specification of vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the expected acoustic, random vibration and shock environments seen on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of Normal Tolerance Limits to derive these test levels based upon the amount of data available, and the probability and confidence levels desired. The Normal Tolerance Limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of these two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
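The bootstrap procedure described above resamples the measured data with replacement many times and reads the desired level off the distribution of replicate statistics. A minimal sketch follows; the measured levels, the statistic (a mean), and the 95th percentile are all illustrative choices, not the paper's specification.

```python
# Sketch: bootstrap percentile estimate from measured levels (values in dB
# are invented). Resample with replacement, recompute the statistic, and take
# a percentile of the replicates as an upper estimate.
import numpy as np

rng = np.random.default_rng(42)
measured = np.array([128.1, 130.4, 127.5, 131.2, 129.8, 132.0, 128.9])

replicates = [np.mean(rng.choice(measured, size=measured.size, replace=True))
              for _ in range(10_000)]
print(np.percentile(replicates, 95))   # e.g. a 95th-percentile level
```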
NASA Technical Reports Server (NTRS)
Brown, Andrew M.
2014-01-01
Numerical and analytical methods were developed to determine damage accumulation in specific engine components when speed variation is included. The Dither Life Ratio (DLR) is shown to be well over a factor of 2 for a specific example. The steady-state assumption is shown to be accurate for most turbopump cases, allowing rapid calculation of DLR. If hot-fire speed data are unknown, a Monte Carlo method was developed that uses speed statistics for similar engines. Application of these techniques allows the analyst to reduce both uncertainty and excess conservatism. High values of DLR could allow a previously unacceptable part to pass HCF criteria without redesign. Given the benefit and ease of implementation, it is recommended that any finite-life turbomachine component analysis adopt these techniques. Probability values were calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new Excel macros were written to calculate the combined load for any specific probability level. Closed-form curve fits were generated for the widely used 3σ and 2σ probability levels. For the design of lightweight aerospace components, obtaining an accurate, reproducible, statistically meaningful answer is critical.
Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference
Wesson, R.L.; Bakun, W.H.; Perkins, D.M.
2003-01-01
Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
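The Bayes step described above reduces to combining prior fault probabilities with the likelihood of each candidate source given the earthquake's location PDF. A hedged sketch follows; the fault names are real but the prior and likelihood numbers are invented stand-ins for integrals of a location PDF near each fault.

```python
# Sketch: posterior probability that each candidate source produced an
# earthquake, via Bayes' theorem. All probability values are invented.
priors = {"San Andreas": 0.5, "Hayward": 0.3, "background": 0.2}
likelihoods = {"San Andreas": 0.02, "Hayward": 0.10, "background": 0.01}

evidence = sum(priors[f] * likelihoods[f] for f in priors)
posterior = {f: priors[f] * likelihoods[f] / evidence for f in priors}
print(posterior)   # here the Hayward fault dominates despite a smaller prior
```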
Structural and Functional Evaluations for the Early Detection of Glaucoma.
Lucy, Katie A; Wollstein, Gadi
2016-01-01
The early detection of glaucoma is imperative in order to preserve functional vision. Structural and functional methods are utilized to detect and monitor glaucomatous damage and the vision loss it causes. The relationship between these detection measures is complex and differs between individuals, especially in early glaucoma. Using both measures together is advised in order to ensure the highest probability of glaucoma detection, and new testing methods are continuously developed with the goals of earlier disease detection and improvement of disease monitoring. The purpose of this review is to explore the relationship between structural and functional glaucoma detection and discuss important technological advances for early glaucoma detection.
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
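The interval-censoring idea above has a simple likelihood: with a constant detection rate lam and survival function S(t) = exp(-lam * t), a first detection in (t1, t2] contributes S(t1) - S(t2), and a site with no detection by t_max contributes S(t_max). The sketch below maximizes that likelihood on a grid; the intervals and censoring times are invented, and the real model is hierarchical with covariates on lam.

```python
# Sketch: maximum-likelihood detection rate from interval-censored
# time-to-first-detection data, assuming an exponential (constant-rate) model.
import numpy as np

def interval_loglik(lam, intervals, censored_at):
    ll = 0.0
    for t1, t2 in intervals:                 # detections recorded in (t1, t2]
        ll += np.log(np.exp(-lam * t1) - np.exp(-lam * t2))
    ll += -lam * np.sum(censored_at)         # sites with no detection by t_max
    return ll

intervals = [(0.0, 5.0), (5.0, 10.0), (0.0, 5.0)]  # minutes of sampling
censored = np.array([15.0, 15.0])                  # no detection in 15 min
lams = np.linspace(0.01, 0.5, 200)
lls = [interval_loglik(l, intervals, censored) for l in lams]
print("MLE of detection rate:", lams[int(np.argmax(lls))], "per minute")
```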
[How to screen for pheochromocytoma, primary aldosteronism and Cushing's syndrome].
Meyer, Patrick
2009-01-07
Pheochromocytoma, primary aldosteronism and Cushing's syndrome are uncommon disorders and are difficult to diagnose because laboratory tests lack validation and specificity. Despite these limitations, practice guidelines have been proposed to standardize the screening procedure. The most reliable method to diagnose pheochromocytoma is the measurement of plasma and/or urinary metanephrines and normetanephrines, depending on the pre-test probability of the disease. The approach for detection of primary aldosteronism is based on the aldosterone-renin ratio under standard conditions. Finally, three tests are available to establish the diagnosis of Cushing's syndrome: 24-h urinary free cortisol excretion, the low-dose dexamethasone suppression test and the recent and promising late evening salivary cortisol.
Small-target leak detection for a closed vessel via infrared image sequences
NASA Astrophysics Data System (ADS)
Zhao, Ling; Yang, Hongjiu
2017-03-01
This paper focuses on a leak diagnosis and localization method based on infrared image sequences. Two problems are addressed: the high probability of false warnings and the negative effect of marginal information in leak detection. An experimental model is established for leak diagnosis and localization on infrared image sequences. Differential background prediction, based on a kernel regression method, is presented to eliminate the negative effect of marginal information on the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false warnings at leak points. A synthesized leak diagnosis and localization algorithm is proposed based on infrared image sequences. Experimental results show the effectiveness and potential of the developed techniques.
Improving inferences from fisheries capture-recapture studies through remote detection of PIT tags
Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Shively, Rip S.
2010-01-01
Models for capture-recapture data are commonly used in analyses of the dynamics of fish and wildlife populations, especially for estimating vital parameters such as survival. Capture-recapture methods provide more reliable inferences than other methods commonly used in fisheries studies. However, for rare or elusive fish species, parameter estimation is often hampered by small probabilities of re-encountering tagged fish when encounters are obtained through traditional sampling methods. We present a case study that demonstrates how remote antennas for passive integrated transponder (PIT) tags can increase encounter probabilities and the precision of survival estimates from capture-recapture models. Between 1999 and 2007, trammel nets were used to capture and tag over 8,400 endangered adult Lost River suckers (Deltistes luxatus) during the spawning season in Upper Klamath Lake, Oregon. Despite intensive sampling at relatively discrete spawning areas, encounter probabilities from Cormack-Jolly-Seber models were consistently low (< 0.2) and the precision of apparent annual survival estimates was poor. Beginning in 2005, remote PIT tag antennas were deployed at known spawning locations to increase the probability of re-encountering tagged fish. We compare results based only on physical recaptures with results based on both physical recaptures and remote detections to demonstrate the substantial improvement in estimates of encounter probabilities (approaching 100%) and apparent annual survival provided by the remote detections. The richer encounter histories provided robust inferences about the dynamics of annual survival and have made it possible to explore more realistic models and hypotheses about factors affecting the conservation and recovery of this endangered species. Recent advances in technology related to PIT tags have paved the way for creative implementation of large-scale tagging studies in systems where they were previously considered impracticable.
[Optimized application of nested PCR method for detection of malaria].
Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C
2017-04-28
Objective: To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods: A PCR premix, internal primers for further amplification, and newly designed primers targeting the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions, and P. ovale-specific primers on the basis of routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were tested by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results: The optimized method showed good specificity, and its sensitivity reached the pg to fg level. When the two methods were used to test the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but with the optimized method non-specific amplification was clearly reduced, detection rates of the P. ovale subspecies improved, and overall specificity increased. The detection results for 111 malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; there was no statistically significant difference between the two methods in sensitivity (P > 0.05), but there was a statistically significant difference in specificity (P < 0.05). Conclusion: Building on the routine nested PCR, the optimized method improves specificity without reducing sensitivity; with fewer experimental steps, it also reduces costs and increases the efficiency of malaria detection.
Probabilistic detection of volcanic ash using a Bayesian approach
Mackie, Shona; Watson, Matthew
2014-01-01
Airborne volcanic ash can pose a hazard to aviation, agriculture, and both human and animal health. It is therefore important that ash clouds are monitored both day and night, even when they travel far from their source. Infrared satellite data provide perhaps the only means of doing this, and since the hugely expensive ash crisis that followed the 2010 Eyjafjallajökull eruption, much research has been carried out into techniques for discriminating ash in such data and for deriving key properties. Such techniques are generally specific to data from particular sensors, and most approaches result in a binary classification of pixels into “ash” and “ash free” classes with no indication of the classification certainty for individual pixels. Furthermore, almost all operational methods rely on expert-set thresholds to determine what constitutes “ash” and can therefore be criticized for being subjective and dependent on expertise that may not remain with an institution. Very few existing methods exploit available contemporaneous atmospheric data to inform the detection, despite the sensitivity of most techniques to atmospheric parameters. The Bayesian method proposed here does exploit such data and gives a probabilistic, physically based classification. We provide an example of the method's implementation for a scene containing both land and sea observations, and a large area of desert dust (often misidentified as ash by other methods). The technique has already been successfully applied to other detection problems in remote sensing, and this work shows that it will be a useful and effective tool for ash detection. Key points: presentation of a probabilistic volcanic ash detection scheme; method for the calculation of a probability density function for ash observations; demonstration of a remote sensing technique for monitoring volcanic ash hazards. PMID:25844278
Proposal and Implementation of a Robust Sensing Method for DVB-T Signal
NASA Astrophysics Data System (ADS)
Song, Chunyi; Rahman, Mohammad Azizur; Harada, Hiroshi
This paper proposes a sensing method for TV signals of the DVB-T standard to realize effective TV White Space (TVWS) communication. In the TVWS technology trial organized by the Infocomm Development Authority (iDA) of Singapore, with regard to sensing level and sensing time, detection of DVB-T signals at a level of -120 dBm over an 8 MHz channel with a sensing time below 1 second is required. To fulfill this strict sensing requirement, we propose a smart sensing method that combines feature detection and energy detection (CFED) and is characterized by dynamic threshold selection (DTS) based on a threshold table to improve sensing robustness to noise uncertainty. The DTS-based CFED (DTS-CFED) is evaluated by computer simulations and is also implemented in a hardware sensing prototype. The results show that the DTS-CFED achieves a detection probability above 0.9 for a target false alarm probability of 0.1 for DVB-T signals at a level of -120 dBm over an 8 MHz channel with a sensing time of 0.1 second.
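The energy-detection component of such a scheme compares the average received power to a threshold set for a target false alarm probability. The sketch below uses the textbook Gaussian approximation for the noise-only test statistic, not the paper's table-based DTS; the noise power is assumed known, which is exactly the uncertainty DTS is designed to handle.

```python
# Sketch: energy detector with threshold set for a target false alarm
# probability Pfa, assuming known noise power and a Gaussian approximation
# of the averaged test statistic under the noise-only hypothesis.
import numpy as np
from scipy.stats import norm

def energy_detect(samples, noise_power, pfa=0.1):
    n = samples.size
    # mean of |x|^2 over n complex noise samples: mean = sigma^2, std ~ sigma^2/sqrt(n)
    thresh = noise_power * (1.0 + norm.ppf(1.0 - pfa) / np.sqrt(n))
    stat = np.mean(np.abs(samples) ** 2)
    return stat > thresh

rng = np.random.default_rng(0)
noise = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) / np.sqrt(2)
print(energy_detect(noise, noise_power=1.0))   # False about 90% of the time
```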
SAR/multispectral image fusion for the detection of environmental hazards with a GIS
NASA Astrophysics Data System (ADS)
Errico, Angela; Angelino, Cesario Vincenzo; Cicala, Luca; Podobinski, Dominik P.; Persechino, Giuseppe; Ferrara, Claudia; Lega, Massimiliano; Vallario, Andrea; Parente, Claudio; Masi, Giuseppe; Gaetano, Raffaele; Scarpa, Giuseppe; Amitrano, Donato; Ruello, Giuseppe; Verdoliva, Luisa; Poggi, Giovanni
2014-10-01
In this paper we propose a GIS-based methodology, using optical and SAR remote sensing data together with more conventional sources, for the detection of small cattle breeding areas potentially responsible for hazardous littering. This specific environmental problem is very relevant for the Caserta area, in southern Italy, where many small buffalo breeding farms exist which are not even known to the register of productive activities and are not easily monitored and surveyed. Experiments on a test area with available ground truth prove that the proposed system is characterized by a very high detection probability and a negligible false alarm rate.
Use of vectors in sequence analysis.
Ishikawa, T; Yamamoto, K; Yoshikura, H
1987-10-01
Applications of the vector diagram, a new type of representation of protein structure, in homology search of various proteins including oncogene products are presented. The method takes account of various kinds of information concerning the properties of amino acids, such as Chou and Fasman's probability data. The method can detect conformational similarities of proteins which may not be detected by the conventional programs.
A real-time method for autonomous passive acoustic detection-classification of humpback whales.
Abbot, Ted A; Premus, Vincent E; Abbot, Philip A
2010-05-01
This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
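Spectrogram correlation of this kind slides a time-frequency kernel along a spectrogram and flags times where the correlation peaks. The sketch below demonstrates the mechanics on fully synthetic data: the "song unit" is an invented upsweep, and for brevity the kernel is cut from the signal's own spectrogram rather than built from training calls as in the paper.

```python
# Sketch: spectrogram correlation detection on synthetic data. A chirped
# "song unit" is buried in noise; a time-frequency kernel is correlated
# against the spectrogram and the best-matching time is reported.
import numpy as np
from scipy.signal import chirp, spectrogram, correlate2d

fs = 4000
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
sig = 0.2 * rng.normal(size=t.size)                        # background noise
unit = chirp(np.arange(0, 1, 1 / fs), f0=300, t1=1.0, f1=600)  # upsweep
sig[4 * fs:5 * fs] += unit                                 # one unit at 4 s

f, times, S = spectrogram(sig, fs=fs, nperseg=256, noverlap=128)
kernel = S[:, (times >= 4.2) & (times < 4.6)]   # crude kernel, cut from the
kernel = kernel - kernel.mean()                 # signal itself (for brevity)
score = correlate2d(S - S.mean(), kernel, mode="valid").ravel()
print(f"best match near t = {times[int(np.argmax(score))]:.1f} s")  # ~4.2 s
```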
Optimal Sensor Location Design for Reliable Fault Detection in Presence of False Alarms
Yang, Fan; Xiao, Deyun; Shah, Sirish L.
2009-01-01
To improve fault detection reliability, sensor locations should be designed according to an optimization criterion with constraints imposed by issues of detectability and identifiability. Reliability requires minimizing undetectability and the false alarm probability due to random factors in sensor readings, which is related not only to the sensor readings themselves but also to fault propagation. This paper introduces a reliability criterion based on the missed/false alarm probability of each sensor and on the system topology or connectivity derived from the directed graph. The algorithm for the optimization problem is presented as a heuristic procedure. Finally, the proposed method is illustrated on a boiler system. PMID:22291524
Parkinson Disease Detection from Speech Articulation Neuromechanics.
Gómez-Vilda, Pedro; Mekyska, Jiri; Ferrández, José M; Palacios-Alonso, Daniel; Gómez-Rodellar, Andrés; Rodellar-Biarge, Victoria; Galaz, Zoltan; Smekal, Zdenek; Eliasova, Ilona; Kostalova, Milena; Rektorova, Irena
2017-01-01
Aim: The research described is intended to characterize articulation dynamics as a correlate of the kinematic behavior of the jaw-tongue biomechanical system, encoded as a probability distribution of the absolute joint velocity. This distribution may be used in detecting and grading speech from patients affected by neurodegenerative illnesses such as Parkinson Disease. Hypothesis: The working hypothesis is that the probability density function of the absolute joint velocity carries information on the stability of phonation when applied to sustained vowels, as well as on fluency when applied to connected speech. Methods: A dataset of sustained vowels recorded from Parkinson Disease patients is contrasted with similar recordings from normative subjects. The probability distribution of the absolute kinematic velocity of the jaw-tongue system is extracted from each utterance. A Random Least Squares Feed-Forward Network (RLSFN) has been used as a binary classifier working on the pathological and normative datasets in a leave-one-out strategy. Monte Carlo simulations have been conducted to estimate the influence of the stochastic nature of the classifier. Two datasets, one per gender, were tested, including 26 normative and 53 pathological subjects in the male set and 25 normative and 38 pathological subjects in the female set. Results: Male and female data subsets were tested in single runs, yielding equal error rates under 0.6% (accuracy over 99.4%). Owing to the stochastic nature of each experiment, Monte Carlo runs were conducted to test the reliability of the methodology. The average detection results after 200 Monte Carlo runs of an RLSFN with a 200-hyperplane hidden layer are given in terms of Sensitivity (males: 0.9946, females: 0.9942), Specificity (males: 0.9944, females: 0.9941) and Accuracy (males: 0.9945, females: 0.9942). The area under the ROC curve is 0.9947 (males) and 0.9945 (females). The equal error rate is 0.0054 (males) and 0.0057 (females). Conclusions: The proposed methodology shows that the use of highly normalized descriptors, such as the probability distribution of kinematic variables of vowel articulation stability, which has some interesting properties in terms of information theory, boosts the potential of simple yet powerful classifiers to produce quite acceptable detection results in Parkinson Disease.
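For readers unfamiliar with this classifier family, below is a minimal sketch of a random least squares feed-forward network: a fixed random hidden layer with a ridge-regularised least-squares readout. The feature matrix, labels, layer size and regularisation constant are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

# Minimal RLSFN sketch: random hidden weights stay fixed; only the linear
# readout is trained, by regularised least squares. The feature rows stand
# in for binned probability distributions of absolute joint velocity.

rng = np.random.default_rng(0)

def fit_rlsfn(X, y, n_hidden=200, reg=1e-3):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random hidden weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    # ridge-regularised least-squares readout
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_rlsfn(X, W, b, beta, threshold=0.5):
    return (np.tanh(X @ W + b) @ beta > threshold).astype(int)

# toy usage with synthetic 40-bin velocity histograms
X = rng.random((120, 40)); y = rng.integers(0, 2, 120)
W, b, beta = fit_rlsfn(X, y)
print(predict_rlsfn(X[:5], W, b, beta))
```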
Nakamura, Kosuke; Akiyama, Hiroshi; Kawano, Noriaki; Kobayashi, Tomoko; Yoshimatsu, Kayo; Mano, Junichi; Kitta, Kazumi; Ohmori, Kiyomi; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko
2013-12-01
Genetically modified (GM) rice (Oryza sativa) lines, such as the insecticidal Kefeng and Kemingdao, have been developed and found, unauthorised, in processed rice products in many countries. Qualitative detection methods for these GM rice lines are therefore required for GM food regulation. A transgenic construct for expressing cowpea (Vigna unguiculata) trypsin inhibitor (CpTI) was detected in some imported processed rice products contaminated with Kemingdao. The 3' terminal sequence of the identified transgenic construct for expression of CpTI included an endoplasmic reticulum retention signal coding sequence (KDEL) and the nopaline synthase terminator (T-nos); the sequence was identical to that in a report on Kefeng. A novel construct-specific real-time polymerase chain reaction (PCR) method for detecting the junction region sequence between CpTI-KDEL and T-nos was developed. The imported processed rice products were evaluated for contamination with GM rice using the developed construct-specific real-time PCR method, and the detection frequency was compared with that of five event-specific detection methods. The construct-specific method detected the GM rice at a higher frequency than the event-specific methods. We therefore propose the construct-specific detection method as a beneficial tool for screening processed rice products for contamination with GM rice lines, such as Kefeng, for GM food regulation. Copyright © 2013 Elsevier Ltd. All rights reserved.
2014-01-01
Background: Anxiety scales may help primary care physicians to detect specific anxiety disorders among the many emotionally distressed patients presenting in primary care. The anxiety scale of the Four-Dimensional Symptom Questionnaire (4DSQ) consists of an admixture of symptoms of specific anxiety disorders. The research questions were: (1) Is the anxiety scale unidimensional or multidimensional? (2) To what extent does the anxiety scale detect specific DSM-IV anxiety disorders? (3) Which cut-off points are suitable to rule out or to rule in (which) anxiety disorders? Methods: We analyzed 5 primary care datasets with standardized psychiatric diagnoses and 4DSQ scores. Unidimensionality was assessed through confirmatory factor analysis (CFA). We examined mean scores and anxiety score distributions per disorder. Receiver operating characteristic (ROC) analysis was used to determine optimal cut-off points. Results: Total n was 969. CFA supported unidimensionality. The anxiety scale performed slightly better in detecting patients with panic disorder, agoraphobia, social phobia, obsessive compulsive disorder (OCD) and post traumatic stress disorder (PTSD) than patients with generalized anxiety disorder (GAD) and specific phobia. ROC-analysis suggested that ≥4 was the optimal cut-off point to rule out and ≥10 the cut-off point to rule in anxiety disorders. Conclusions: The 4DSQ anxiety scale measures a common trait of pathological anxiety that is characteristic of anxiety disorders, in particular panic disorder, agoraphobia, social phobia, OCD and PTSD. The anxiety score detects the latter anxiety disorders to a slightly greater extent than GAD and specific phobia, without being able to distinguish between the different anxiety disorder types. The cut-off points ≥4 and ≥10 can be used to separate distressed patients in three groups with a relatively low, moderate and high probability of having one or more anxiety disorders. PMID:24761829
Integrating count and detection–nondetection data to model population dynamics
Zipkin, Elise F.; Rossman, Sam; Yackulic, Charles B.; Wiens, David; Thorson, James T.; Davis, Raymond J.; Grant, Evan H. Campbell
2017-01-01
There is increasing need for methods that integrate multiple data types into a single analytical framework as the spatial and temporal scale of ecological research expands. Current work on this topic primarily focuses on combining capture–recapture data from marked individuals with other data types into integrated population models. Yet, studies of species distributions and trends often rely on data from unmarked individuals across broad scales where local abundance and environmental variables may vary. We present a modeling framework for integrating detection–nondetection and count data into a single analysis to estimate population dynamics, abundance, and individual detection probabilities during sampling. Our dynamic population model assumes that site-specific abundance can change over time according to survival of individuals and gains through reproduction and immigration. The observation process for each data type is modeled by assuming that every individual present at a site has an equal probability of being detected during sampling processes. We examine our modeling approach through a series of simulations illustrating the relative value of count vs. detection–nondetection data under a variety of parameter values and survey configurations. We also provide an empirical example of the model by combining long-term detection–nondetection data (1995–2014) with newly collected count data (2015–2016) from a growing population of Barred Owl (Strix varia) in the Pacific Northwest to examine the factors influencing population abundance over time. Our model provides a foundation for incorporating unmarked data within a single framework, even in cases where sampling processes yield different detection probabilities. This approach will be useful for survey design and to researchers interested in incorporating historical or citizen science data into analyses focused on understanding how demographic rates drive population abundance.
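A minimal sketch of the shared observation process described above, assuming site abundances N_i ~ Poisson(lambda), counts observed as Binomial(N_i, p), and detection–nondetection records as Bernoulli with probability 1 − (1 − p)^{N_i}; the truncation bound and data are illustrative, not the authors' model code.

```python
import numpy as np
from scipy import stats

# Joint log-likelihood for count data and detection-nondetection data that
# share abundance N_i ~ Poisson(lam) and a per-individual detection prob p.
# The latent abundance is marginalised by truncated summation over N.

N_MAX = 100

def joint_loglik(lam, p, counts, detections):
    n_range = np.arange(N_MAX + 1)
    prior = stats.poisson.pmf(n_range, lam)          # P(N = n)
    ll = 0.0
    for y in counts:                                 # count survey: y | N ~ Binomial(N, p)
        ll += np.log(stats.binom.pmf(y, n_range, p) @ prior)
    for w in detections:                             # detection survey: w | N ~ Bernoulli(1-(1-p)^N)
        p_det = 1.0 - (1.0 - p) ** n_range
        ll += np.log((p_det if w == 1 else 1.0 - p_det) @ prior)
    return ll

# toy usage: evaluate the likelihood on fabricated data
counts = np.array([3, 5, 2, 4]); detections = np.array([1, 1, 0, 1])
print(joint_loglik(lam=4.0, p=0.5, counts=counts, detections=detections))
```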
Detection of antisalivary duct antibody from Sjögren's syndrome by an autoradiographic method.
Cummings, N A; Tarpley, T M
1978-01-01
A new technique to detect anti-salivary duct antibody (ASDA) has been developed by using autoradiographic, rather than immunofluorescent methods. The antibody activity detected by autoradiography is probably classic ASDA. Both techniques may be consecutively performed on the same tissue section without attenuation of either. Some of the potential advantages of the radiolabelling of ASDA are pointed out, and a few preliminary experiments using the labelled antibody as a marker are presented.
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with focus on algorithmically generated domain names which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method is based on learning a (null hypothesis) probability model based on a large set of domain names that have been white listed by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, and tend to have a distribution of characters, words, word lengths, and number of words that are typical of some language (mostly English), and often consist of words drawn from a known lexicon. On the other hand, in the present day scenario, algorithmically generated domain names typically have distributions that are quite different from that of human-created domain names. We propose a fully generative model for the probability distribution of benign (white listed) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources, often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results, relative to several baseline methods, with higher detection rates and low false positive rates.
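A toy version of such a null-hypothesis model, assuming a character-bigram Markov chain with add-one smoothing rather than the paper's full generative model over words and word lengths; the whitelist and test names are invented.

```python
import math
from collections import defaultdict

# Null model for benign domain names: a bigram character Markov chain with
# add-one smoothing, trained on a whitelist. A low log-likelihood per
# character flags putative algorithmically generated names.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-"

def train_bigram(names):
    counts = defaultdict(lambda: defaultdict(int))
    for name in names:
        for a, b in zip("^" + name, name + "$"):   # ^/$ mark start/end
            counts[a][b] += 1
    return counts

def loglik_per_char(name, counts):
    ll = 0.0
    v = len(ALPHABET) + 1                          # smoothing vocabulary size
    for a, b in zip("^" + name, name + "$"):
        total = sum(counts[a].values())
        ll += math.log((counts[a][b] + 1) / (total + v))
    return ll / (len(name) + 1)

whitelist = ["google", "wikipedia", "weather", "news", "shopping"]
model = train_bigram(whitelist)
for d in ["goggle", "xkqjzvwp"]:                   # human-like vs generated-looking
    print(d, round(loglik_per_char(d, model), 3))
```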
2014-01-01
Background: Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has a relatively high computational complexity. Methods: In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines the analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit-simulation-based probability isomorphism avoids using the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results: The experimental results on datasets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions: The circuit-simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability isomorphic and reduces the search space of probability isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns, which can be efficiently applied to probability motif discovery problems in further studies. PMID:25350277
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault-tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and the method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
McLachlan, G J; Bean, R W; Jones, L Ben-Tovim
2006-07-01
An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
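A small sketch of the two-component mixture just described, with the null component fixed at N(0, 1) and the alternative a free normal, fitted by EM; the starting values and data are illustrative.

```python
import numpy as np
from scipy import stats

# Two-component normal mixture on z-scores:
# f(z) = pi0 * N(z; 0, 1) + (1 - pi0) * N(z; mu, sigma^2).
# EM estimates pi0, mu, sigma; the posterior null probability of gene i is
# tau_i = pi0 * phi(z_i) / f(z_i).

def fit_null_mixture(z, n_iter=200):
    pi0, mu, sigma = 0.9, 2.0, 1.0                # illustrative initial guesses
    for _ in range(n_iter):
        f0 = pi0 * stats.norm.pdf(z, 0.0, 1.0)
        f1 = (1 - pi0) * stats.norm.pdf(z, mu, sigma)
        tau = f0 / (f0 + f1)                      # E-step: P(null | z_i)
        pi0 = tau.mean()                          # M-step updates
        w = 1 - tau
        mu = (w * z).sum() / w.sum()
        sigma = np.sqrt((w * (z - mu) ** 2).sum() / w.sum())
    return pi0, mu, sigma, tau

# toy usage: 90% null genes, 10% differentially expressed
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
pi0, mu, sigma, tau = fit_null_mixture(z)
print(f"pi0={pi0:.2f}, mu={mu:.2f}, sigma={sigma:.2f}")
```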
Pandolfo, Tamara J.; Kwak, Thomas J.; Cope, W. Gregory; Heise, Ryan J.; Nichols, Robert B.; Pacifici, Krishna
2016-01-01
Conservation of freshwater unionid mussels presents unique challenges due to their distinctive life cycle, cryptic occurrence and imperilled status. Relevant ecological information is urgently needed to guide their management and conservation. We adopted a hierarchical modelling approach that borrows data among species to enhance inference on rare species, a novel application to freshwater mussels, and conducted the most comprehensive occurrence analysis for freshwater mussels to date. We incorporated imperfect detection to more accurately examine effects of biotic and abiotic factors at multiple scales on the occurrence of 14 mussel species and the entire assemblage of the Tar River Basin of North Carolina, U.S.A. The single assemblage estimate of detection probability for all species was 0.42 (95% CI, 0.36–0.47), with no species- or site-specific detection effects identified. We empirically observed 15 mussel species in the basin but estimated total species richness at 21 (95% CI, 16–24) when accounting for imperfect detection. Mean occurrence probability among species ranged from 0.04 (95% CI, 0.01–0.16) for Alasmidonta undulata, an undescribed Lampsilis sp., and Strophitus undulatus to 0.67 (95% CI, 0.42–0.86) for Elliptio icterina. Median occurrence probability among sites was <0.30 for all species with the exception of E. icterina. Site occurrence probability generally related to mussel conservation status, with reduced occurrence for endangered and threatened species. Catchment-scale abiotic variables (stream power, agricultural land use) and species traits (brood time, host specificity, tribe) influenced the occurrence of mussel assemblages more than reach- or microhabitat-scale features. Our findings reflect the complexity of mussel ecology and indicate that habitat restoration alone may not be adequate for mussel conservation. Catchment-scale management can benefit an entire assemblage, but species-specific strategies may be necessary for successful conservation. The hierarchical multispecies modelling approach revealed findings that could not be elucidated by other means, and the approach may be applied more broadly to other river basins and regions. Accurate measures of assemblage dynamics, such as occurrence and species richness, are required to create management plans for effective conservation.
Smart sensing surveillance system
NASA Astrophysics Data System (ADS)
Hsu, Charles; Chu, Kai-Dee; O'Looney, James; Blake, Michael; Rutar, Colleen
2010-04-01
Unattended ground sensor (UGS) networks have been widely used in remote battlefield and other tactical applications over the last few decades due to advances in digital signal processing. UGS networks can be applied in a variety of areas including border surveillance, special force operations, perimeter and building protection, target acquisition, situational awareness, and force protection. In this paper, a highly distributed, fault-tolerant, and energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security operation in a situation management environment. The S4 is composed of a number of distributed nodes to collect, process, and disseminate heterogeneous sensor data. Nearly all S4 nodes have passive sensors to provide rapid omnidirectional detection. In addition, Pan-Tilt-Zoom (PTZ) electro-optical/infrared (EO/IR) cameras are integrated into selected nodes to track objects and capture associated imagery. These camera-equipped S4 nodes provide advanced on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations include unattended object detection, human feature and behavior detection, and configurable alert triggers. In the S4, all nodes are connected by a robust, reconfigurable, LPI/LPD (Low Probability of Intercept/Low Probability of Detection) wireless mesh network using ultra-wideband (UWB) RF technology, which provides an ad hoc, secure mesh network and the capability to relay network information, communicate, and pass situational awareness and messages. The S4 utilizes a Service Oriented Architecture such that remote applications can interact with the S4 network and use specific presentation methods. The S4 capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments and near perimeters and borders. The S4 is compliant with Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE®) standards. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods differ strongly in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak- or no-trend scenarios. BAC showed good sensitivity and accuracy but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analyses. Finally, we recommend SCI and RCS, as these methods showed the highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
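As a concrete illustration of the CD step, here is a sketch under the common assumption of a modified negative exponential age trend; the synthetic ring-width series is invented.

```python
import numpy as np
from scipy.optimize import curve_fit

# Conservative detrending (CD) in miniature: fit a modified negative
# exponential to ring width as a function of cambial age and divide the
# series by the fit to obtain dimensionless ring-width indices.

def neg_exp(t, a, b, c):
    return a * np.exp(-b * t) + c

rng = np.random.default_rng(2)
age = np.arange(1, 101)
rw = 3.0 * np.exp(-0.05 * age) + 0.5 + rng.normal(0, 0.05, age.size)

params, _ = curve_fit(neg_exp, age, rw, p0=(2.0, 0.02, 0.3))
rwi = rw / neg_exp(age, *params)      # ring-width index, ~1 when on-trend

print("fitted (a, b, c):", np.round(params, 3))
print("mean index:", round(rwi.mean(), 3))
```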
Research of the orbital evolution of asteroid 2012 DA14 (in Russian)
NASA Astrophysics Data System (ADS)
Zausaev, A. F.; Denisov, S. S.; Derevyanka, A. E.
The orbital evolution of asteroid 2012 DA14 is studied on the time interval from 1800 to 2206; close approaches of the object to the Earth and the Moon are detected, and the probability of an impact with the Earth is calculated. The mathematical model used is consistent with DE405; the integration was performed using a modified Everhart method of 27th order, and the collision probability is calculated using the Monte Carlo method.
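A toy Monte Carlo collision-probability calculation in the same spirit (all numbers invented and the uncertainties deliberately exaggerated; real work of this kind propagates full orbits with an Everhart-type integrator):

```python
import numpy as np

# Toy Monte Carlo collision probability: sample virtual-asteroid positions
# in the encounter plane from a Gaussian uncertainty around the nominal
# miss vector and count the fraction falling inside the Earth's radius.
# Gravitational focusing is ignored; all figures are for illustration only.

rng = np.random.default_rng(3)
R_EARTH = 6371.0                          # km
nominal_miss = np.array([27700.0, 0.0])   # km, order of 2012 DA14's pass
sigma = np.array([15000.0, 15000.0])      # km, exaggerated 1-sigma errors

n = 1_000_000
samples = nominal_miss + rng.normal(0.0, 1.0, (n, 2)) * sigma
hits = np.linalg.norm(samples, axis=1) < R_EARTH
print(f"impact probability ~ {hits.mean():.2e}")
```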
Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle
NASA Technical Reports Server (NTRS)
Peret, L.; Legendre, P.; Delavault, S.; Martin, T.
2007-01-01
In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During the ATV's ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRATCOM catalog will be monitored through collision risk assessment. Usually, collision risk assessment relies on an exclusion volume or a probability threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion volume methods, as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for ATV operations.
[Isolation and identification of Cronobacter (Enterobacter sakazakii) strains from food].
Dong, Xiaohui; Li, Chengsi; Wu, Qingping; Zhang, Jumei; Mo, Shuping; Guo, Weipeng; Yang, Xiaojuan; Xu, Xiaoke
2013-05-04
This study aimed to detect and quantify Cronobacter in 300 powdered milk samples and 50 non-powdered milk samples. In total, 24 Cronobacter (formerly Enterobacter sakazakii) strains isolated from powdered milk and other foods were identified and confirmed. Cronobacter strains were detected quantitatively using the most probable number (MPN) method together with a molecular detection method. The 24 strains were identified using biochemical patterns, including indole production and utilization of dulcitol, malonate, melezitose, turanose, and myo-inositol. The 16S rRNA genes of the 24 strains were sequenced, and a phylogenetic tree was constructed by the neighbour-joining (N-J) method using the 16S rRNA gene sequences of 17 identified Cronobacter strains and 10 non-Cronobacter strains. Quantitative detection showed that Cronobacter strains were present in 23 of 350 samples, a 6.6% detection rate. Twenty-four Cronobacter strains were isolated from the 23 samples, and Cronobacter exceeded 100 MPN/100 g in 4 of the 23 samples. The 24 isolates comprised 19 Cronobacter sakazakii strains, 2 C. malonaticus strains, 2 C. dublinensis subsp. lactaridi strains, and 1 C. muytjensii strain. The combination of the molecular detection method and the MPN method is suitable for detecting Cronobacter in powdered milk, given the low contamination rate and the demand for quantitative detection. The 24 isolated strains were confirmed and identified by biochemical patterns and molecular techniques, and C. sakazakii was the dominant species. Cronobacter in powdered milk is a hidden danger to infants and should receive the attention of government and consumers.
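For reference, a sketch of maximum-likelihood MPN estimation from a generic three-dilution design; the volumes, tube counts and outcomes are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# ML most-probable-number estimate from a dilution series: with inoculum
# masses v_i (g), n_i tubes and x_i positives per dilution, a tube is
# positive with probability 1 - exp(-lambda * v_i) under a Poisson model.

v = np.array([1.0, 0.1, 0.01])        # sample mass per tube, grams
n = np.array([3, 3, 3])               # tubes per dilution
x = np.array([3, 1, 0])               # positive tubes observed

def neg_loglik(lam):
    p = 1.0 - np.exp(-lam * v)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(x * np.log(p) + (n - x) * np.log(1.0 - p))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 1e4), method="bounded")
print(f"MPN ~ {res.x:.2f} per gram ({res.x * 100:.0f} per 100 g)")
```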
Hidden Markov Model-Based CNV Detection Algorithms for Illumina Genotyping Microarrays.
Seiser, Eric L; Innocenti, Federico
2014-01-01
Somatic alterations in DNA copy number have been well studied in numerous malignancies, yet the role of germline DNA copy number variation in cancer is still emerging. Genotyping microarrays generate allele-specific signal intensities to determine genotype, but may also be used to infer DNA copy number using additional computational approaches. Numerous tools have been developed to analyze Illumina genotype microarray data for copy number variant (CNV) discovery, although commonly utilized algorithms freely available to the public employ approaches based upon the use of hidden Markov models (HMMs). QuantiSNP, PennCNV, and GenoCN utilize HMMs with six copy number states but vary in how transition and emission probabilities are calculated. Performance of these CNV detection algorithms has been shown to be variable between both genotyping platforms and data sets, although HMM approaches generally outperform other current methods. Low sensitivity is prevalent with HMM-based algorithms, suggesting the need for continued improvement in CNV detection methodologies.
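A minimal HMM sketch in the spirit of these tools, using six copy-number states and a Viterbi decoder over Log R Ratio values; the means, variances and transition probabilities are invented placeholders, not any published tool's parameters.

```python
import numpy as np
from scipy import stats

# Six-state copy-number HMM over SNP-array Log R Ratio (LRR) values with
# Gaussian emissions and a sticky transition matrix, decoded by Viterbi.

STATES = [0, 1, 2, 3, 4, 5]                 # copy numbers (3 ~ neutral LOH)
LRR_MEAN = np.array([-3.5, -0.66, 0.0, 0.0, 0.4, 0.68])
LRR_SD = np.full(6, 0.25)

def viterbi(lrr, stay=0.999):
    T, K = len(lrr), len(STATES)
    trans = np.full((K, K), (1 - stay) / (K - 1))
    np.fill_diagonal(trans, stay)
    log_e = stats.norm.logpdf(lrr[:, None], LRR_MEAN, LRR_SD)
    delta = np.log(np.full(K, 1.0 / K)) + log_e[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)   # [prev, next]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_e[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):                 # backtrace
        path[t - 1] = back[t, path[t]]
    return path

rng = np.random.default_rng(4)
lrr = np.concatenate([rng.normal(0.0, 0.2, 50),    # copy-neutral
                      rng.normal(-0.66, 0.2, 20),  # hemizygous deletion
                      rng.normal(0.0, 0.2, 50)])
print(viterbi(lrr))
```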
A method for detecting and characterizing outbreaks of infectious disease from clinical reports.
Cooper, Gregory F; Villamarin, Ricardo; Rich Tsui, Fu-Chiang; Millett, Nicholas; Espino, Jeremy U; Wagner, Michael M
2015-02-01
Outbreaks of infectious disease can pose a significant threat to human health. Thus, detecting and characterizing outbreaks quickly and accurately remains an important problem. This paper describes a Bayesian framework that links clinical diagnosis of individuals in a population to epidemiological modeling of disease outbreaks in the population. Computer-based diagnosis of individuals who seek healthcare is used to guide the search for epidemiological models of population disease that explain the pattern of diagnoses well. We applied this framework to develop a system that detects influenza outbreaks from emergency department (ED) reports. The system diagnoses influenza in individuals probabilistically from evidence in ED reports that are extracted using natural language processing. These diagnoses guide the search for epidemiological models of influenza that explain the pattern of diagnoses well. Those epidemiological models with a high posterior probability determine the most likely outbreaks of specific diseases; the models are also used to characterize properties of an outbreak, such as its expected peak day and estimated size. We evaluated the method using both simulated data and data from a real influenza outbreak. The results provide support that the approach can detect and characterize outbreaks early and well enough to be valuable. We describe several extensions to the approach that appear promising. Copyright © 2014 Elsevier Inc. All rights reserved.
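A toy rendering of the model-posterior idea: candidate outbreak curves scored by Poisson likelihood under a uniform prior. The ramp models and daily counts are stand-ins for the paper's epidemiological models and ED diagnoses.

```python
import numpy as np
from scipy import stats

# Score candidate outbreak models (no outbreak vs. outbreaks starting on
# different days, modelled as a simple linear ramp over a baseline) by
# their posterior probability given daily case counts.

days = np.arange(30)
counts = np.concatenate([np.full(20, 5),
                         [6, 8, 9, 12, 14, 17, 20, 24, 27, 31]])

def expected_cases(start, slope, baseline=5.0):
    return baseline + np.clip(days - start, 0, None) * slope

models = [("no outbreak", expected_cases(60, 0.0))] + [
    (f"outbreak from day {s}", expected_cases(s, 2.5)) for s in range(10, 28)
]

# posterior proportional to Poisson likelihood (uniform prior over models)
log_post = np.array([stats.poisson.logpmf(counts, mu).sum() for _, mu in models])
log_post -= log_post.max()
post = np.exp(log_post); post /= post.sum()

best = np.argmax(post)
print(models[best][0], f"posterior={post[best]:.2f}")
```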
Chen, Hao; Chen, Shulin; Lu, Jie; Wang, Xueping; Li, Jianpei; Li, Linfang; Fu, Jihuan; Scheper, Thomas; Meyer, Wolfgang; Peng, Yu-Hui; Liu, Wanli
2017-09-01
In this study, we aimed to use the combined detection of multiple antibodies against Epstein-Barr virus (EBV) antigens to develop a model for screening and diagnosis of nasopharyngeal carcinoma (NPC). Samples from 300 nasopharyngeal carcinoma patients and 494 controls, including 294 healthy subjects (HC), 99 non-nasopharyngeal carcinoma cancer patients (NNPC), and 101 patients with benign nasopharyngeal lesions (BNL), were incubated with the EUROLINE Anti-EBV Profile 2, and band intensities were used to establish a risk prediction model. The nasopharyngeal carcinoma risk probability analysis based on the panel of VCAgp125 IgA, EBNA-1 IgA, EA-D IgA, EBNA-1 IgG, EA-D IgG, and VCAp19 IgG displayed the best performance. When using 26.1% as the cutoff point in ROC analysis, the AUC value and sensitivity/specificity were 0.951 and 90.7%/86.2%, respectively, for nasopharyngeal carcinoma versus all controls. For nasopharyngeal carcinoma versus controls excluding the NNPC and BNL groups, the AUC value and sensitivity/specificity were 0.957 and 90.7%/88.1%, respectively. The diagnostic specificity and sensitivity of the EUROLINE Anti-EBV Profile 2 assay for both nasopharyngeal carcinoma and early-stage nasopharyngeal carcinoma were higher than those of mono-antibody detection by immuno-enzymatic assay and of real-time PCR (EBV DNA). In the VCA-IgA-negative group, 82.6% of nasopharyngeal carcinoma patients showed a high probability for nasopharyngeal carcinoma, and the negative predictive value was 97.1%. In the VCA-IgA-positive group, 73.3% of healthy subjects showed a low probability; the positive predictive value reached 98.2% in this group. The nasopharyngeal carcinoma risk probability value determined by the EUROLINE Anti-EBV Profile 2 might be a suitable tool for nasopharyngeal carcinoma screening. Cancer Prev Res; 10(9); 542-50. ©2017 AACR. ©2017 American Association for Cancer Research.
Assessing bat detectability and occupancy with multiple automated echolocation detectors
Gorresen, P.M.; Miles, A.C.; Todd, C.M.; Bonaccorso, F.J.; Weller, T.J.
2008-01-01
Occupancy analysis and its ability to account for differential detection probabilities is important for studies in which detecting echolocation calls is used as a measure of bat occurrence and activity. We examined the feasibility of remotely acquiring bat encounter histories to estimate detection probability and occupancy. We used echolocation detectors coupled to digital recorders operating at a series of proximate sites on consecutive nights in 2 trial surveys for the Hawaiian hoary bat (Lasiurus cinereus semotus). Our results confirmed that the technique is readily amenable for use in occupancy analysis. We also conducted a simulation exercise to assess the effects of sampling effort on parameter estimation. The results indicated that the precision and bias of parameter estimation were often more influenced by the number of sites sampled than number of visits. Acceptable accuracy often was not attained until at least 15 sites or 15 visits were used to estimate detection probability and occupancy. The method has significant potential for use in monitoring trends in bat activity and in comparative studies of habitat use. © 2008 American Society of Mammalogists.
NASA Astrophysics Data System (ADS)
Sugiyanto; Zukhronah, E.; Sari, S. P.
2018-05-01
Financial crises have hit Indonesia several times, creating the need for an early detection system to minimize their impact. One of many methods that can be used to detect a crisis is to model crisis indicators using a combination of volatility and Markov switching models [5]. Several indicators can be used to detect financial crisis; three of them are the difference between the interest rates on deposits and lending, the real interest rate on deposits, and the difference between the real BI rate and the real Fed rate, which can be referred to as banking condition indicators. The volatility model is used to handle conditional variance that changes over time, and the combination of volatility and Markov switching models is used to detect changes of condition in the data. The smoothed probability from the combined models can be used to detect the crisis. This research found that the best combined volatility and Markov switching model for the three indicators is an MS-GARCH(3,1,1) model under a three-state assumption. The crises from mid-1997 until 1998 were successfully detected within a certain range of smoothed probability values for the three indicators.
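To make the smoothed-probability machinery concrete, here is a sketch of a two-state Gaussian Markov switching model with the Hamilton filter and Kim smoother; the paper's MS-GARCH(3,1,1) with three states is richer, and all parameters below are invented.

```python
import numpy as np
from scipy import stats

# Two-state Gaussian Markov switching model (calm vs. crisis variance):
# forward Hamilton filter, then Kim backward smoother for P(S_t | all data).

P = np.array([[0.98, 0.02],          # transition matrix, rows sum to 1
              [0.05, 0.95]])
MU = np.array([0.0, 0.0])
SD = np.array([0.5, 2.5])            # state 1 = high-volatility "crisis"

def hamilton_smoother(y):
    T = len(y)
    filt = np.zeros((T, 2))
    prob = np.array([0.5, 0.5])                      # initial state probs
    for t in range(T):
        pred = prob @ P                              # one-step prediction
        lik = pred * stats.norm.pdf(y[t], MU, SD)
        prob = lik / lik.sum()                       # filtered P(S_t | y_1..t)
        filt[t] = prob
    smooth = np.zeros_like(filt)
    smooth[-1] = filt[-1]
    for t in range(T - 2, -1, -1):                   # Kim smoother
        pred = filt[t] @ P
        smooth[t] = filt[t] * (P @ (smooth[t + 1] / pred))
    return smooth

rng = np.random.default_rng(5)
y = np.concatenate([rng.normal(0, 0.5, 80), rng.normal(0, 2.5, 40)])
print(np.round(hamilton_smoother(y)[75:85, 1], 2))   # P(crisis state)
```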
Weidhaas, Jennifer L; Macbeth, Tamzen W; Olsen, Roger L; Harwood, Valerie J
2011-03-01
The impact of fecal contamination from human and agricultural animal waste on water quality is a major public health concern. Identification of the dominant source(s) of fecal pollution in a watershed is necessary for assessing the safety of recreational water and protecting water resources. A field study was conducted using quantitative PCR (qPCR) for the 16S rRNA gene of Brevibacterium sp. LA35 to track feces-contaminated poultry litter in environmental samples. Based on sensitivity and specificity characteristics of the qPCR method, the Bayesian conditional probability that detection of the LA35 marker gene in a water sample represented a true-positive result was 93%. The marker's covariance with fecal indicator bacteria (FIB) and metals associated with poultry litter was also assessed in litter, runoff, surface water, and groundwater samples. LA35 was detected in water and soil samples collected throughout the watershed, and its concentration covaried with concentrations of Escherichia coli, enterococci, As, Cu, P, and Zn. Significantly greater concentrations of FIB, As, Cu, P, and Zn were observed in edge-of-field runoff samples in which LA35 was detected, compared to samples in which it was not detected. Furthermore, As, Cu, P, and Zn concentrations covaried in environmental samples in which LA35 was detected and typically did not in samples in which the marker gene was not detected. The covariance of the poultry-specific LA35 marker gene with these known contaminants from poultry feces provides further evidence that it is a useful tool for assessing the impact of poultry-derived fecal pollution in environmental waters.
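The Bayesian true-positive calculation has this simple form; the sketch below is generic, and the sensitivity, specificity and prior are illustrative values, not the study's measured characteristics.

```python
# Bayesian conditional probability that a positive qPCR marker detection
# is a true positive, from assay sensitivity, specificity and a prior
# probability that the sampled water is actually contaminated.

def positive_predictive_value(sens, spec, prior):
    true_pos = sens * prior
    false_pos = (1.0 - spec) * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

print(f"{positive_predictive_value(sens=0.95, spec=0.97, prior=0.30):.2f}")
```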
Estimating site occupancy, colonization, and local extinction when a species is detected imperfectly
MacKenzie, D.I.; Nichols, J.D.; Hines, J.E.; Knutson, M.G.; Franklin, A.B.
2003-01-01
Few species are likely to be so evident that they will always be detected when present. Failing to allow for the possibility that a target species was present, but undetected, at a site will lead to biased estimates of site occupancy, colonization, and local extinction probabilities. These population vital rates are often of interest in long-term monitoring programs and metapopulation studies. We present a model that enables direct estimation of these parameters when the probability of detecting the species is less than 1. The model does not require any assumptions of process stationarity, as do some previous methods, but does require detection/nondetection data to be collected in a manner similar to Pollock's robust design as used in mark–recapture studies. Via simulation, we show that the model provides good estimates of parameters for most scenarios considered. We illustrate the method with data from monitoring programs of Northern Spotted Owls (Strix occidentalis caurina) in northern California and tiger salamanders (Ambystoma tigrinum) in Minnesota, USA.
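A minimal sketch of the single-season core of such a model, fitted by maximum likelihood on fabricated detection histories; the paper's multi-season model adds colonization and local extinction parameters on top of this.

```python
import numpy as np
from scipy.optimize import minimize

# Single-season occupancy likelihood: a site is occupied with probability
# psi and, if occupied, detected on each visit with probability p.
# Rows of H are sites, columns are repeat visits.

H = np.array([[0, 1, 0],
              [0, 0, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

def neg_loglik(theta):
    psi, p = 1 / (1 + np.exp(-theta))             # logit -> (0, 1)
    det = H.sum(axis=1)
    K = H.shape[1]
    # occupied-and-observed term for each site's exact history
    lik_occ = psi * p ** det * (1 - p) ** (K - det)
    # all-zero histories can also come from unoccupied sites
    lik = np.where(det > 0, lik_occ, lik_occ + (1 - psi))
    return -np.log(lik).sum()

res = minimize(neg_loglik, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
```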
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is a 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit/miss and signal-amplitude testing, where signal amplitudes are reduced to hit/miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture-critical inspection are established.
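The binomial 90/95 criterion can be checked with an exact Clopper-Pearson lower bound, as in this sketch (a generic calculation, not the DOEPOD code itself):

```python
from scipy import stats

# One-sided binomial lower confidence bound on POD for hit/miss data via
# the Clopper-Pearson (exact) method: with x hits in n trials at a flaw
# size, the 95% lower bound is the 5th percentile of Beta(x, n - x + 1).
# Demonstrates the classic result that 29 hits in 29 trials gives 90/95.

def pod_lower_bound(hits, trials, confidence=0.95):
    if hits == 0:
        return 0.0
    return stats.beta.ppf(1.0 - confidence, hits, trials - hits + 1)

for hits, trials in [(29, 29), (28, 29), (45, 46)]:
    print(f"{hits}/{trials}: POD lower bound = {pod_lower_bound(hits, trials):.3f}")
```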
Estimating trends in alligator populations from nightlight survey data
Fujisaki, Ikuko; Mazzotti, Frank J.; Dorazio, Robert M.; Rice, Kenneth G.; Cherkiss, Michael; Jeffery, Brian
2011-01-01
Nightlight surveys are commonly used to evaluate status and trends of crocodilian populations, but imperfect detection caused by survey- and location-specific factors makes it difficult to draw population inferences accurately from uncorrected data. We used a two-stage hierarchical model comprising population abundance and detection probability to examine recent abundance trends of American alligators (Alligator mississippiensis) in subareas of Everglades wetlands in Florida using nightlight survey data. During 2001–2008, there were declining trends in abundance of small and/or medium sized animals in a majority of subareas, whereas abundance of large sized animals had either demonstrated an increased or unclear trend. For small and large sized class animals, estimated detection probability declined as water depth increased. Detection probability of small animals was much lower than for larger size classes. The declining trend of smaller alligators may reflect a natural population response to the fluctuating environment of Everglades wetlands under modified hydrology. It may have negative implications for the future of alligator populations in this region, particularly if habitat conditions do not favor recruitment of offspring in the near term. Our study provides a foundation to improve inferences made from nightlight surveys of other crocodilian populations.
Verification of road databases using multiple road models
NASA Astrophysics Data System (ADS)
Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian
2017-08-01
In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions, the first for the state of a database object (correct or incorrect) and the second for the state of the underlying road model (applicable or not applicable). In accordance with Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that a highly reliable semi-automatic approach for road database verification can be designed on the basis of the proposed method.
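A compact sketch of the Dempster-Shafer fusion step described above, treating "unknown" as full ignorance; the module outputs below are invented.

```python
# Each module's two probability distributions (object state and model
# applicability) become a mass function over {correct, incorrect, unknown};
# modules are then fused with Dempster's rule of combination.

FRAME = ("correct", "incorrect", "unknown")   # 'unknown' = full ignorance

def to_mass(p_correct, p_applicable):
    # applicable evidence supports correct/incorrect; the rest is unknown
    return {"correct": p_correct * p_applicable,
            "incorrect": (1 - p_correct) * p_applicable,
            "unknown": 1 - p_applicable}

def dempster(m1, m2):
    fused = {h: 0.0 for h in FRAME}
    conflict = m1["correct"] * m2["incorrect"] + m1["incorrect"] * m2["correct"]
    for h in ("correct", "incorrect"):
        fused[h] = (m1[h] * m2[h] + m1[h] * m2["unknown"]
                    + m1["unknown"] * m2[h])
    fused["unknown"] = m1["unknown"] * m2["unknown"]
    k = 1.0 - conflict                        # normalise out the conflict
    return {h: v / k for h, v in fused.items()}

module_a = to_mass(p_correct=0.8, p_applicable=0.9)
module_b = to_mass(p_correct=0.6, p_applicable=0.4)
print(dempster(module_a, module_b))
```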
McNew, Lance B.; Handel, Colleen M.
2015-01-01
Accurate estimates of species richness are necessary to test predictions of ecological theory and evaluate biodiversity for conservation purposes. However, species richness is difficult to measure in the field because some species will almost always be overlooked due to their cryptic nature or the observer's failure to perceive their cues. Common measures of species richness that assume consistent observability across species are inviting because they may require only single counts of species at survey sites. Single-visit estimation methods ignore spatial and temporal variation in species detection probabilities related to survey or site conditions that may confound estimates of species richness. We used simulated and empirical data to evaluate the bias and precision of raw species counts, the limiting forms of jackknife and Chao estimators, and multi-species occupancy models when estimating species richness to evaluate whether the choice of estimator can affect inferences about the relationships between environmental conditions and community size under variable detection processes. Four simulated scenarios with realistic and variable detection processes were considered. Results of simulations indicated that (1) raw species counts were always biased low, (2) single-visit jackknife and Chao estimators were significantly biased regardless of detection process, (3) multispecies occupancy models were more precise and generally less biased than the jackknife and Chao estimators, and (4) spatial heterogeneity resulting from the effects of a site covariate on species detection probabilities had significant impacts on the inferred relationships between species richness and a spatially explicit environmental condition. For a real dataset of bird observations in northwestern Alaska, the four estimation methods produced different estimates of local species richness, which severely affected inferences about the effects of shrubs on local avian richness. Overall, our results indicate that neglecting the effects of site covariates on species detection probabilities may lead to significant bias in estimation of species richness, as well as the inferred relationships between community size and environmental covariates.
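For concreteness, here is the classic abundance-based Chao estimator (a generic sketch with fabricated counts; the abstract evaluates the limiting single-visit forms of such estimators):

```python
from collections import Counter

# Chao1-style lower-bound estimator of species richness:
# S_hat = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers of
# species detected exactly once and exactly twice.

def chao1(counts):
    f = Counter(counts)
    f1, f2 = f.get(1, 0), f.get(2, 0)
    s_obs = len(counts)
    if f2 == 0:                               # bias-corrected form
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 ** 2 / (2.0 * f2)

# counts of individuals per detected species at one site
site_counts = [14, 9, 7, 4, 2, 2, 1, 1, 1]
print(f"observed = {len(site_counts)}, Chao1 = {chao1(site_counts):.1f}")
```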
Statistical context shapes stimulus-specific adaptation in human auditory cortex.
Herrmann, Björn; Henry, Molly J; Fromboluti, Elisa Kim; McAuley, J Devin; Obleser, Jonas
2015-04-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. Copyright © 2015 the American Physiological Society.
Sampling considerations for disease surveillance in wildlife populations
Nusser, S.M.; Clark, W.R.; Otis, D.L.; Huang, L.
2008-01-01
Disease surveillance in wildlife populations involves detecting the presence of a disease, characterizing its prevalence and spread, and subsequent monitoring. A probability sample of animals selected from the population and corresponding estimators of disease prevalence and detection provide estimates with quantifiable statistical properties, but this approach is rarely used. Although wildlife scientists often assume probability sampling and random disease distributions to calculate sample sizes, convenience samples (i.e., samples of readily available animals) are typically used, and disease distributions are rarely random. We demonstrate how landscape-based simulation can be used to explore properties of estimators from convenience samples in relation to probability samples. We used simulation methods to model what is known about the habitat preferences of the wildlife population, the disease distribution, and the potential biases of the convenience-sample approach. Using chronic wasting disease in free-ranging deer (Odocoileus virginianus) as a simple illustration, we show that using probability sample designs with appropriate estimators provides unbiased surveillance parameter estimates but that the selection bias and coverage errors associated with convenience samples can lead to biased and misleading results. We also suggest practical alternatives to convenience samples that mix probability and convenience sampling. For example, a sample of land areas can be selected using a probability design that oversamples areas with larger animal populations, followed by harvesting of individual animals within sampled areas using a convenience sampling method.
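A toy simulation of the kind of selection bias at issue, assuming (hypothetically) that disease clusters in an accessible stratum that convenience sampling over-represents; all numbers are invented.

```python
import numpy as np

# Compare prevalence estimates from a simple random sample and from a
# convenience sample that over-samples accessible animals when the
# disease is spatially clustered in the accessible stratum.

rng = np.random.default_rng(6)
n_animals = 10_000
accessible = rng.random(n_animals) < 0.3          # 30% live near roads
prev = np.where(accessible, 0.10, 0.02)           # disease clusters near roads
diseased = rng.random(n_animals) < prev

# probability sample: simple random sample of 500 animals
srs = rng.choice(n_animals, 500, replace=False)
# convenience sample: accessible animals are 8x as likely to be taken
w = np.where(accessible, 8.0, 1.0); w /= w.sum()
conv = rng.choice(n_animals, 500, replace=False, p=w)

print(f"true {diseased.mean():.3f}, SRS {diseased[srs].mean():.3f}, "
      f"convenience {diseased[conv].mean():.3f}")
```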
Jung, R.E.; Royle, J. Andrew; Sauer, J.R.; Addison, C.; Rau, R.D.; Shirk, J.L.; Whissel, J.C.
2005-01-01
Stream salamanders in the family Plethodontidae constitute a large biomass in and near headwater streams in the eastern United States and are promising indicators of stream ecosystem health. Many studies of stream salamanders have relied on population indices based on counts rather than population estimates based on techniques such as capture-recapture and removal. Application of estimation procedures allows the calculation of detection probabilities (the proportion of total animals present that are detected during a survey) and their associated sampling error, and may be essential for determining salamander population sizes and trends. In 1999, we applied capture-recapture and removal population estimation methods for Desmognathus salamanders at six streams in Shenandoah National Park, Virginia, USA. Removal sampling appeared more efficient, and detection probabilities from removal data were higher than those from capture-recapture. During 2001-2004, we used removal estimation at eight streams in the park to assess the usefulness of this technique for long-term monitoring of stream salamanders. Removal detection probabilities ranged from 0.39 to 0.96 for Desmognathus, 0.27 to 0.89 for Eurycea and 0.27 to 0.75 for northern spring (Gyrinophilus porphyriticus) and northern red (Pseudotriton ruber) salamanders across stream transects. Detection probabilities did not differ across years for Desmognathus and Eurycea, but did differ among streams for Desmognathus. Population estimates of Desmognathus decreased between 2001-2002 and 2003-2004, which may be related to changes in stream flow conditions. Removal-based procedures may be a feasible approach for population estimation of salamanders, but field methods should be designed to meet the assumptions of the sampling procedures. New approaches to estimating stream salamander populations are discussed.
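The two-pass removal estimator underlying such analyses has a closed form (a sketch with fabricated catches; multi-pass data are usually fitted by maximum likelihood instead):

```python
# Two-pass removal estimator (Zippin/Seber): with catches c1 and c2 from
# two successive removal passes and constant capture probability p,
# N_hat = c1^2 / (c1 - c2) and p_hat = 1 - c2 / c1.

def two_pass_removal(c1, c2):
    if c2 >= c1:
        raise ValueError("estimator requires declining catches (c2 < c1)")
    n_hat = c1 ** 2 / (c1 - c2)
    p_hat = 1.0 - c2 / c1
    return n_hat, p_hat

n_hat, p_hat = two_pass_removal(c1=46, c2=19)
print(f"N = {n_hat:.0f}, detection probability per pass = {p_hat:.2f}")
```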
Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
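A minimal N-mixture likelihood, marginalising the latent abundances by truncated summation; the counts are fabricated, not the oystercatcher data, and the paper fits a Bayesian version with survey covariates.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# N-mixture model: repeated counts y[i, t] at site i are Binomial(N_i, p)
# with N_i ~ Poisson(lam); N_i is marginalised by a truncated sum.

Y = np.array([[3, 2, 3], [0, 1, 0], [5, 4, 6], [2, 2, 1]])
N_MAX = 60

def neg_loglik(theta):
    lam, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    n = np.arange(N_MAX + 1)
    prior = stats.poisson.pmf(n, lam)
    ll = 0.0
    for y_site in Y:
        per_n = prior.copy()
        for y in y_site:                      # independent visits given N
            per_n *= stats.binom.pmf(y, n, p)
        ll += np.log(per_n.sum())
    return -ll

res = minimize(neg_loglik, x0=[1.0, 0.0])
lam_hat = np.exp(res.x[0]); p_hat = 1 / (1 + np.exp(-res.x[1]))
print(f"lambda = {lam_hat:.1f}, detection p = {p_hat:.2f}")
```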
DNA hybridization assay for detection of Salmonella in foods: collaborative study.
Flowers, R S; Klatt, M J; Mozola, M A; Curiale, M S; Gabis, D A; Silliker, J H
1987-01-01
A collaborative study was performed in 11 laboratories to validate a DNA hybridization (DNAH) procedure for detection of Salmonella in foods. The DNAH procedure was compared to the standard culture method for detection of Salmonella in 6 foods: ground pepper, soy flour, dry whole egg, milk chocolate, nonfat dry milk, and raw deboned turkey. With the exception of turkey which was naturally contaminated, uninoculated and inoculated samples of each food group were analyzed. Results for the DNAH method were significantly better than for the standard culture method at the 5% probability level for the detection of Salmonella in turkey. There was no significant difference between the methods for the other 5 foods. The method has been adopted official first action.
Incorporating availability for detection in estimates of bird abundance
Diefenbach, D.R.; Marshall, M.R.; Mattice, J.A.; Brauning, D.W.
2007-01-01
Several bird-survey methods have been proposed that provide an estimated detection probability so that bird-count statistics can be used to estimate bird abundance. However, some of these estimators adjust counts of birds observed by the probability that a bird is detected and assume that all birds are available to be detected at the time of the survey. We marked male Henslow's Sparrows (Ammodramus henslowii) and Grasshopper Sparrows (A. savannarum) and monitored their behavior during May-July 2002 and 2003 to estimate the proportion of time they were available for detection. We found that the availability of Henslow's Sparrows declined in late June to <10% for 5- or 10-min point counts when a male had to sing and be visible to the observer; but during 20 May-19 June, males were available for detection 39.1% (SD = 27.3) of the time for 5-min point counts and 43.9% (SD = 28.9) of the time for 10-min point counts (n = 54). We detected no temporal changes in availability for Grasshopper Sparrows, but estimated availability to be much lower for 5-min point counts (10.3%, SD = 12.2) than for 10-min point counts (19.2%, SD = 22.3) when males had to be visible and sing during the sampling period (n = 80). For distance sampling, we estimated the availability of Henslow's Sparrows to be 44.2% (SD = 29.0) and the availability of Grasshopper Sparrows to be 20.6% (SD = 23.5). We show how our estimates of availability can be incorporated in the abundance and variance estimators for distance sampling and modify the abundance and variance estimators for the double-observer method. Methods that directly estimate availability from bird counts but also incorporate detection probabilities need further development and will be important for obtaining unbiased estimates of abundance for these species.
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
In 1983 and 1984, the Infrared Astronomical Satellite (IRAS) detected 5,425 stellar objects and measured their infrared spectra. In 1987 a program called AUTOCLASS used Bayesian inference methods to discover the classes present in these data and determine the most probable class of each object, revealing unknown phenomena in astronomy. AUTOCLASS has rekindled the old debate on the suitability of Bayesian methods, which are computationally intensive, interpret probabilities as plausibility measures rather than frequencies, and appear to depend on a subjective assessment of the probability of a hypothesis before the data were collected. Modern statistical methods have, however, recently been shown to also depend on subjective elements. These debates bring into question the whole tradition of scientific objectivity and offer scientists a new way to take responsibility for their findings and conclusions.
Non-targeted analysis of unexpected food contaminants using LC-HRMS.
Kunzelmann, Marco; Winter, Martin; Åberg, Magnus; Hellenäs, Karl-Erik; Rosén, Johan
2018-03-29
A non-target analysis method for unexpected contaminants in food is described. Many current methods referred to as "non-target" are capable of detecting hundreds or even thousands of contaminants. However, they will typically still miss all other possible contaminants. Instead, a metabolomics approach might be used to obtain "true non-target" analysis. In the present work, such a method was optimized for improved detection capability at low concentrations. The method was evaluated using 19 chemically diverse model compounds spiked into milk samples to mimic unknown contamination. Other milk samples were used as reference samples. All samples were analyzed with UHPLC-TOF-MS (ultra-high-performance liquid chromatography time-of-flight mass spectrometry), using reversed-phase chromatography and electrospray ionization in positive mode. Data evaluation was performed by the software TracMass 2. No target lists of specific compounds were used to search for the contaminants. Instead, the software was used to sort out all features only occurring in the spiked sample data, i.e., the workflow resembled a metabolomics approach. Procedures for chemical identification of peaks were outside the scope of the study. Method, study design, and settings in the software were optimized to minimize manual evaluation and faulty or irrelevant hits and to maximize hit rate of the spiked compounds. A practical detection limit was established at 25 μg/kg. At this concentration, most compounds (17 out of 19) were detected as intact precursor ions, as fragments or as adducts. Only 2 irrelevant hits, probably natural compounds, were obtained. Limitations and possible practical use of the approach are discussed.
Detecting a trend change in cross-border epidemic transmission
NASA Astrophysics Data System (ADS)
Maeno, Yoshiharu
2016-09-01
A method for a system of Langevin equations is developed for detecting a trend change in cross-border epidemic transmission. The equations represent a standard epidemiological SIR compartment model and a meta-population network model. The method analyzes a time series of the number of new cases reported in multiple geographical regions. The method is applicable to investigating the efficacy of the implemented public health intervention in managing infectious travelers across borders. It is found that the change point of the probability of travel movements was one week after the WHO worldwide alert on the SARS outbreak in 2003. The alert was effective in managing infectious travelers. On the other hand, it is found that the probability of travel movements did not change at all for the flu pandemic in 2009. The pandemic did not affect potential travelers despite the WHO alert.
Electrodermal Activity Based Wearable Device for Drowsy Drivers
NASA Astrophysics Data System (ADS)
Malathi, D.; Dorathi Jayaseeli, JD; Madhuri, S.; Senthilkumar, K.
2018-04-01
Road safety and road accident mortality rates are a serious concern for the government. With the rise in fatal road accidents whose leading cause is the driver being drowsy behind the wheel, measures to alleviate this problem become the prime task. To meet this purpose, the methods adopted must cause minimum discomfort to the driver, be easy to install, provide good detection accuracy, and give a timely alert to circumvent a probable accident. A good candidate to meet these specifications is electrodermal activity (EDA). As it detects the level of sweat, which directly corresponds to the mental state of the person, using EDA for the purposes of driver safety is a good option. The novelty of this work lies in making use of EDA, with sweat as the parameter, to detect whether a person is drowsy. Much of the challenge lies in building a device equipped with the necessary sensors and processing the data in real time; accordingly, an embedded device interfaced with sensors and actuators was developed to detect and alert a driver found drowsy.
Universal phase transition in community detectability under a stochastic block model.
Chen, Pin-Yu; Hero, Alfred O
2015-03-01
We prove the existence of an asymptotic phase-transition threshold on community detectability for the spectral modularity method [M. E. J. Newman, Phys. Rev. E 74, 036104 (2006) and Proc. Natl. Acad. Sci. (USA) 103, 8577 (2006)] under a stochastic block model. The phase transition on community detectability occurs as the intercommunity edge connection probability p grows. This phase transition separates a subcritical regime of small p, where modularity-based community detection successfully identifies the communities, from a supercritical regime of large p where successful community detection is impossible. We show that, as the community sizes become large, the asymptotic phase-transition threshold p* is equal to √(p1·p2), where p_i (i = 1, 2) is the within-community edge connection probability. Thus the phase-transition threshold is universal in the sense that it does not depend on the ratio of community sizes. The universal phase-transition phenomenon is validated by simulations for moderately sized communities. Using the derived expression for the phase-transition threshold, we propose an empirical method for estimating this threshold from real-world data.
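A minimal simulation sketch of this phase transition: generate a two-block stochastic block model, split nodes by the sign of the leading eigenvector of the modularity matrix, and compare agreement with ground truth below and above p* = √(p1·p2). The sizes and probabilities are illustrative, and finite-size effects blur the asymptotic threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

def sbm(n1, n2, p1, p2, p):
    """Adjacency matrix of a two-block SBM: within-community probabilities
    p1 and p2, between-community probability p."""
    n = n1 + n2
    probs = np.full((n, n), p)
    probs[:n1, :n1] = p1
    probs[n1:, n1:] = p2
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(float)

def spectral_modularity_labels(a):
    """Sign of the leading eigenvector of the modularity matrix (Newman 2006)."""
    k = a.sum(axis=1)
    b = a - np.outer(k, k) / k.sum()
    _, vecs = np.linalg.eigh(b)
    return vecs[:, -1] >= 0

n1 = n2 = 300
p1, p2 = 0.20, 0.10
p_star = np.sqrt(p1 * p2)            # asymptotic threshold, about 0.141
truth = np.arange(n1 + n2) < n1
for p in (0.05, 0.12, 0.18):         # below, near, and above the threshold
    labels = spectral_modularity_labels(sbm(n1, n2, p1, p2, p))
    acc = max(np.mean(labels == truth), np.mean(labels != truth))
    print(f"p = {p:.2f} (p* = {p_star:.3f}): agreement = {acc:.2f}")
```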
Paulinelli, Regis R; Oliveira, Luis-Fernando P; Freitas-Junior, Ruffo; Soares, Leonardo R
2016-01-01
The objective of the present study was to compare the accuracy of SONOBREAST for the prediction of malignancy in solid breast nodules detected at ultrasonography with that of the BI-RADS system and to assess the agreement between these two methods. This prospective study included 274 women and evaluated 500 breast nodules detected at ultrasonography. The probability of malignancy was calculated based on the SONOBREAST model, available at www.sonobreast.com.br, and on the BI-RADS system, with results being compared with the anatomopathology report. The lesions were considered suspect in 171 cases (34.20%), according to both SONOBREAST and BI-RADS. Agreement between the methods was perfect, as shown by a Kappa coefficient of 1 (p<0.001). SONOBREAST and BI-RADS were identical with respect to sensitivity (95.40%), specificity (78.69%), positive predictive value (48.54%), negative predictive value (98.78%), and accuracy (81.60%). With respect to the categorical variables (BI-RADS categories 3, 4 and 5), the area under the receiver operating characteristic (ROC) curve was 94.41 for SONOBREAST (range 92.20-96.62) and 89.99 for BI-RADS (range 86.60-93.37). The accuracy of the SONOBREAST model is identical to that found with BI-RADS when the same parameters are used with respect to the cut-off point at which malignancy is suspected. Regarding the continuous probability of malignancy with BI-RADS categories 3, 4 and 5, SONOBREAST permits a more precise and individualized evaluation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Gray, Shelley; Pittman, Andrea; Weinhold, Juliet
2014-01-01
Purpose: In this study, the authors assessed the effects of phonotactic probability and neighborhood density on word-learning configuration by preschoolers with specific language impairment (SLI) and typical language development (TD). Method: One hundred thirty-one children participated: 48 with SLI, 44 with TD matched on age and gender, and 39…
Comparative analysis of methods for detecting interacting loci.
Chen, Li; Yu, Guoqiang; Langefeld, Carl D; Miller, David J; Guy, Richard T; Raghuram, Jayaram; Yuan, Xiguo; Herrington, David M; Wang, Yue
2011-07-05
Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies. Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive comparison of performance and limitations of available interaction detection methods is warranted. We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR) were compared on a large number of simulated data sets, each, consistent with complex disease models, embedding multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First, while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives. In this study, the best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs. This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide, should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
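As a concrete example of the simplest method in the comparison, here is a minimal sketch of logistic regression with an interaction term (LRIT) on simulated SNP data; the 0/1/2 allele coding, effect sizes, and use of statsmodels are illustrative assumptions, not the study's protocol.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: two SNPs coded as minor-allele counts and a binary
# disease status generated with a true SNP-SNP interaction effect.
rng = np.random.default_rng(0)
n = 2000
snp1 = rng.integers(0, 3, n)
snp2 = rng.integers(0, 3, n)
logit = -1.0 + 0.1 * snp1 + 0.1 * snp2 + 0.4 * snp1 * snp2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# LRIT tests the product term on top of the two main effects.
X = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.pvalues[3])   # p-value for the interaction coefficient
```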
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geist, William H.
2017-09-15
The objectives for this presentation are to describe the method that the IAEA uses to determine a sampling plan for nuclear material measurements; describe the terms detection probability and significant quantity; list the three nuclear materials measurement types; describe the sampling method applied to an item facility; and describe multiple method sampling.
Olson, Gail S.; Anthony, Robert G.; Forsman, Eric D.; Ackers, Steven H.; Loschl, Peter J.; Reid, Janice A.; Dugger, Katie M.; Glenn, Elizabeth M.; Ripple, William J.
2005-01-01
Northern spotted owls (Strix occidentalis caurina) have been studied intensively since their listing as a threatened species by the U.S. Fish and Wildlife Service in 1990. Studies of spotted owl site occupancy have used various binary response measures, but most of these studies have made the assumption that detectability is perfect, or at least high and not variable. Further, previous studies did not consider temporal variation in site occupancy. We used relatively new methods for open population modeling of site occupancy that incorporated imperfect and variable detectability of spotted owls and allowed modeling of temporal variation in site occupancy, extinction, and colonization probabilities. We also examined the effects of barred owl (S. varia) presence on these parameters. We used spotted owl survey data from 1990 to 2002 for 3 study areas in Oregon, USA, and we used program MARK to develop and analyze site occupancy models. We found per visit detection probabilities averaged <0.70 and were highly variable among study years and study areas. Site occupancy probabilities for owl pairs declined greatly on 1 study area and slightly on the other 2 areas. For all owls, including singles and pairs, site occupancy was mostly stable through time. Barred owl presence had a negative effect on spotted owl detection probabilities, and it had either a positive effect on local-extinction probabilities or a negative effect on colonization probabilities. We conclude that further analyses of spotted owls must account for imperfect and variable detectability and barred owl presence to properly interpret results. Further, because barred owl presence is increasing within the range of northern spotted owls, we expect to see further declines in the proportion of sites occupied by spotted owls.
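For readers unfamiliar with the machinery, a minimal sketch of the constant-parameter single-season occupancy likelihood that underlies such analyses; the authors' open-population models add extinction and colonization parameters and were fit in program MARK, which this sketch does not reproduce.

```python
import numpy as np

def occupancy_loglik(histories, psi, p):
    """Log-likelihood of detection histories (list of 0/1 visit vectors)
    under a basic occupancy model with occupancy psi and detection p."""
    ll = 0.0
    for h in histories:
        h = np.asarray(h)
        det = np.prod(p ** h * (1 - p) ** (1 - h))
        if h.any():      # at least one detection: site certainly occupied
            ll += np.log(psi * det)
        else:            # all-zero history: occupied-but-missed, or truly absent
            ll += np.log(psi * (1 - p) ** len(h) + (1 - psi))
    return ll

print(occupancy_loglik([[0, 1, 0], [0, 0, 0]], psi=0.6, p=0.4))  # toy data
```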
Liu, Rui; Chen, Pei; Aihara, Kazuyuki; Chen, Luonan
2015-01-01
Identifying early-warning signals of a critical transition for a complex system is difficult, especially when the target system is constantly perturbed by big noise, which makes traditional methods fail due to the strong fluctuations of the observed data. In this work, we show that when the noise is not sufficiently small, which is the ubiquitous case in real systems, the critical transition is not a traditional state transition but a probability-distribution transition. We present a model-free computational method to detect the warning signals before such transitions. The key idea behind it is a strategy of "making big noise smaller" via a distribution-embedding scheme, which transforms the data from the observed state variables with big noise to their distribution variables with small noise, and thus makes the traditional criteria effective because of the significantly reduced fluctuations. Specifically, by increasing the dimension of the observed data through moment expansion, which changes the system from state dynamics to probability-distribution dynamics, we derive new data in a higher-dimensional space but with much smaller noise. We then develop a criterion based on the dynamical network marker (DNM) to signal the impending critical transition using the transformed higher-dimensional data. We also demonstrate the effectiveness of our method in biological, ecological and financial systems. PMID:26647650
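A minimal sketch of the distribution-embedding idea: map a noisy observed series to sliding-window moments, whose sampling fluctuations shrink roughly as 1/√window. The window length is an arbitrary choice here, and the paper's full method additionally builds the DNM criterion on the transformed data, which this sketch omits.

```python
import numpy as np

def moment_series(x, window=100):
    """Transform a noisy scalar time series into a series of distribution
    variables (sliding-window mean and variance). Averaging over the window
    damps the big observational noise, which is the key step that makes
    conventional early-warning criteria usable again."""
    x = np.asarray(x, float)
    n = len(x) - window + 1
    out = np.empty((n, 2))
    for i in range(n):
        w = x[i:i + window]
        out[i] = w.mean(), w.var()
    return out
```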
Roberson, A.M.; Andersen, D.E.; Kennedy, P.L.
2005-01-01
Broadcast surveys using conspecific calls are currently the most effective method for detecting northern goshawks (Accipiter gentilis) during the breeding season. These surveys typically use alarm calls during the nestling phase and juvenile food-begging calls during the fledgling-dependency phase. Because goshawks are most vocal during the courtship phase, we hypothesized that this phase would be an effective time to detect goshawks. Our objective was to improve current survey methodology by evaluating the probability of detecting goshawks at active nests in northern Minnesota in 3 breeding phases and at 4 broadcast distances and to determine the effective area surveyed per broadcast station. Unlike previous studies, we broadcast calls at only 1 distance per trial. This approach better quantifies (1) the relationship between distance and probability of detection, and (2) the effective area surveyed (EAS) per broadcast station. We conducted 99 broadcast trials at 14 active breeding areas. When pooled over all distances, detection rates were highest during the courtship (70%) and fledgling-dependency phases (68%). Detection rates were lowest during the nestling phase (28%), when there appeared to be higher variation in likelihood of detecting individuals. EAS per broadcast station was 39.8 ha during courtship and 24.8 ha during fledgling-dependency. Consequently, in northern Minnesota, broadcast stations may be spaced 712m and 562 m apart when conducting systematic surveys during courtship and fledgling-dependency, respectively. We could not calculate EAS for the nestling phase because probability of detection was not a simple function of distance from nest. Calculation of EAS could be applied to other areas where the probability of detection is a known function of distance.
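The reported station spacings follow directly from the effective area surveyed: treating EAS as a circle around each station, spacing equals twice the radius, 2·√(EAS/π). A quick check reproducing the paper's numbers:

```python
from math import pi, sqrt

def station_spacing_m(eas_ha: float) -> float:
    """Spacing between broadcast stations when each station surveys a
    circle of area EAS: spacing = 2 * radius = 2 * sqrt(EAS / pi)."""
    return 2 * sqrt(eas_ha * 1e4 / pi)   # hectares -> square metres

print(round(station_spacing_m(39.8)))    # 712 m (courtship)
print(round(station_spacing_m(24.8)))    # 562 m (fledgling-dependency)
```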
Estimating abundance of mountain lions from unstructured spatial sampling
Russell, Robin E.; Royle, J. Andrew; Desimone, Richard; Schwartz, Michael K.; Edwards, Victoria L.; Pilgrim, Kristy P.; Mckelvey, Kevin S.
2012-01-01
Mountain lions (Puma concolor) are often difficult to monitor because of their low capture probabilities, extensive movements, and large territories. Methods for estimating the abundance of this species are needed to assess population status, determine harvest levels, evaluate the impacts of management actions on populations, and derive conservation and management strategies. Traditional mark–recapture methods do not explicitly account for differences in individual capture probabilities due to the spatial distribution of individuals in relation to survey effort (or trap locations). However, recent advances in the analysis of capture–recapture data have produced methods estimating abundance and density of animals from spatially explicit capture–recapture data that account for heterogeneity in capture probabilities due to the spatial organization of individuals and traps. We adapt recently developed spatial capture–recapture models to estimate density and abundance of mountain lions in western Montana. Volunteers and state agency personnel collected mountain lion DNA samples in portions of the Blackfoot drainage (7,908 km2) in west-central Montana using 2 methods: snow back-tracking mountain lion tracks to collect hair samples and biopsy darting treed mountain lions to obtain tissue samples. Overall, we recorded 72 individual capture events, including captures both with and without tissue sample collection and hair samples resulting in the identification of 50 individual mountain lions (30 females, 19 males, and 1 unknown sex individual). We estimated lion densities from 8 models containing effects of distance, sex, and survey effort on detection probability. Our population density estimates ranged from a minimum of 3.7 mountain lions/100 km2 (95% CI 2.3–5.7) under the distance only model (including only an effect of distance on detection probability) to 6.7 (95% CI 3.1–11.0) under the full model (including effects of distance, sex, survey effort, and distance x sex on detection probability). These numbers translate to a total estimate of 293 mountain lions (95% CI 182–451) to 529 (95% CI 245–870) within the Blackfoot drainage. Results from the distance model are similar to previous estimates of 3.6 mountain lions/100 km2 for the study area; however, results from all other models indicated greater numbers of mountain lions. Our results indicate that unstructured spatial sampling combined with spatial capture–recapture analysis can be an effective method for estimating large carnivore densities.
He, Jieyue; Wang, Chunyan; Qiu, Kunpu; Zhong, Wei
2014-01-01
Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible-world model and has relatively high computational complexity. In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to non-tree-like subgraph mining, where the probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. The probability isomorphism combines analysis of circuit topology with related physical properties of voltage to evaluate the probability isomorphism between probability subgraphs, thereby avoiding the traditional possible-world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Experimental results on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in experiments published in other related papers. The circuit-simulation-based probability graph isomorphism evaluation excludes most subgraphs that are not probability-isomorphic and reduces the search space of probability-isomorphic subgraphs using the mismatch values in the node voltage set. It is an innovative way to find frequent probability patterns and can be efficiently applied to probability motif discovery problems in further studies.
Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng
2016-08-18
Hyperspectral images provide great potential for target detection; however, they also introduce new challenges, so hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability in limited endmembers, and the latter usually treats target detection as a simple classification problem and pays little attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which avoids an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is applied to the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse-representation-based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than reconstruction-residual-based detection methods. Extensive experimental results with three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.
Fatemeh, Dehghan; Reza, Zolfaghari Mohammad; Mohammad, Arjomandzadegan; Salomeh, Kalantari; Reza, Ahmari Gholam; Hossein, Sarmadian; Maryam, Sadrnia; Azam, Ahmadi; Mana, Shojapoor; Negin, Najarian; Reza, Kasravi Alii; Saeed, Falahat
2014-01-01
Objective: To analyse molecular detection of coliforms and shorten the time of PCR. Methods: Rapid detection of coliforms by amplification of the lacZ and uidA genes in a multiplex PCR reaction was designed and performed in comparison with the most probable number (MPN) method for 16 artificial and 101 field samples. The molecular method was also conducted on coliforms isolated from positive MPN samples; a standard sample for verification of the microbial method (certificated reference material); strains isolated from certificated reference material; and standard bacteria. The PCR and electrophoresis parameters were changed to reduce the operation time. Results: Results of PCR for the lacZ and uidA genes were similar in all standard, operational, and artificial samples, and multiplex PCR showed the 876 bp and 147 bp bands of the lacZ and uidA genes. PCR results were confirmed by the MPN culture method with a sensitivity of 86% (95% CI: 0.71-0.93). The total execution time, with a successful change of factors, was reduced to less than two and a half hours. Conclusions: The multiplex PCR method with shortened operation time was used for the simultaneous detection of total coliforms and Escherichia coli in the distribution system of Arak city. It is recommended at least as an initial screening test, after which positive samples could be randomly tested by MPN. PMID:25182727
Boar taint detection: A comparison of three sensory protocols.
Trautmann, Johanna; Meier-Dinkel, Lisa; Gertheiss, Jan; Mörlein, Daniel
2016-01-01
While recent studies state an important role of human sensory methods for daily routine control of so-called boar taint, the evaluation of different heating methods is still incomplete. This study investigated three common heating methods (microwave (MW), hot-water (HW), hot-iron (HI)) for boar fat evaluation. The comparison was carried out on 72 samples with a 10-person sensory panel. The heating method significantly affected the probability of a deviant rating. Compared to an assumed 'gold standard' (chemical analysis), the performance was best for HI when both sensitivity and specificity were considered. The results show the superiority of the panel result compared to individual assessors. However, the consistency of the individual sensory ratings was not significantly different between MW, HW, and HI. The three protocols showed only fair to moderate agreement. Concluding from the present results, the hot-iron method appears to be advantageous for boar taint evaluation as compared to microwave and hot-water. Copyright © 2015. Published by Elsevier Ltd.
Accounting for imperfect detection in ecology: a quantitative review.
Kellner, Kenneth F; Swihart, Robert K
2014-01-01
Detection in studies of species abundance and distribution is often imperfect. Assuming perfect detection introduces bias into estimation that can weaken inference upon which understanding and policy are based. Despite availability of numerous methods designed to address this assumption, many refereed papers in ecology fail to account for non-detection error. We conducted a quantitative literature review of 537 ecological articles to measure the degree to which studies of different taxa, at various scales, and over time have accounted for imperfect detection. Overall, just 23% of articles accounted for imperfect detection. The probability that an article incorporated imperfect detection increased with time and varied among taxa studied; studies of vertebrates were more likely to incorporate imperfect detection. Among articles that reported detection probability, 70% contained per-survey estimates of detection that were less than 0.5. For articles in which constancy of detection was tested, 86% reported significant variation. We hope that our findings prompt more ecologists to consider carefully the detection process when designing studies and analyzing results, especially for sub-disciplines where incorporation of imperfect detection in study design and analysis so far has been lacking.
Estimating occupancy rates with imperfect detection under complex survey designs
Monitoring the occurrence of specific amphibian species is of interest. Typically, the monitoring design is a complex design that involves stratification and unequal probability of selection. When conducting field visits to selected sites, a common problem is that during a singl...
Buller, G; Lutman, M E
1998-08-01
The increasing use of transiently evoked otoacoustic emissions (TEOAE) in large neonatal hearing screening programmes makes a standardized method of response classification desirable. Until now methods have been either subjective or based on arbitrary response characteristics. This study takes an expert system approach to standardize the subjective judgements of an experienced scorer. The method that is developed comprises three stages. First, it transforms TEOAEs from waveforms in the time domain into a simplified parameter set. Second, the parameter set is classified by an artificial neural network that has been taught on a large database of TEOAE waveforms and corresponding expert scores. Third, additional fuzzy logic rules automatically detect probable artefacts in the waveforms and synchronized spontaneous emission components. In this way, the knowledge of the experienced scorer is encapsulated in the expert system software and thereafter can be accessed by non-experts. Teaching and evaluation of the neural network were based on TEOAEs from a database totalling 2190 neonatal hearing screening tests. The database was divided into learning and test groups with 820 and 1370 waveforms respectively. From each recorded waveform a set of 12 parameters was calculated, representing signal static and dynamic properties. The artificial network was taught with parameter sets of only the learning group. Reproduction of the human scorer classification by the neural net in the learning group showed a sensitivity for detecting screen fails of 99.3% (299 of 301 failed results on subjective scoring) and a specificity for detecting screen passes of 81.1% (421 of 519 pass results). To quantify the post hoc performance of the net (generalization), the test group was then presented to the network input. Sensitivity was 99.4% (474 of 477) and specificity was 87.3% (780 of 893). To check the efficiency of the classification method, a second learning group was selected out of the previous test group, and the previous learning group was used as the test group. Repeating the learning and test procedures yielded 99.3% sensitivity and 80.7% specificity for reproduction, and 99.4% sensitivity and 86.7% specificity for generalization. In all respects, performance was better than for a previously optimized method based simply on cross-correlation between replicate non-linear waveforms. It is concluded that classification methods based on neural networks show promise for application to large neonatal screening programmes utilizing TEOAEs.
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
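A minimal sketch of the general idea, propensity-score stratification to correct verification bias under MAR: fit a model for the probability of verification, bin subjects into propensity quintiles, and recombine bin-wise disease rates into corrected sensitivity and specificity. The binning rule, function names, and the use of scikit-learn are illustrative assumptions, not the authors' exact estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def corrected_sens_spec(test, verified, disease, covars, n_bins=5):
    """test, verified: 0/1 arrays for all subjects; disease: 0/1 array,
    meaningful only where verified == 1; covars: (n, k) covariate matrix."""
    test, verified, disease = map(np.asarray, (test, verified, disease))
    X = np.column_stack([test, covars])
    ps = LogisticRegression().fit(X, verified).predict_proba(X)[:, 1]
    edges = np.quantile(ps, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(ps, edges)
    p_td = np.zeros((2, 2))                  # joint P(T = t, D = d)
    for b in np.unique(bins):
        for t in (0, 1):
            cell = (bins == b) & (test == t)
            ver = cell & (verified == 1)
            if ver.sum() == 0:
                continue                     # sketch: skip empty cells
            p_d1 = disease[ver].mean()       # P(D=1 | T=t, bin), verified only (MAR)
            w = cell.mean()                  # P(T=t, bin), estimated from everyone
            p_td[t, 1] += w * p_d1
            p_td[t, 0] += w * (1 - p_d1)
    sens = p_td[1, 1] / p_td[:, 1].sum()     # P(T=1 | D=1)
    spec = p_td[0, 0] / p_td[:, 0].sum()     # P(T=0 | D=0)
    return sens, spec
```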
Hameed, Mustafa Q; Zurakowski, David; Proctor, Mark R; Stone, Scellig S D; Warf, Benjamin C; Smith, Edward R; Goumnerova, Liliana C; Swoboda, Marek; Anor, Tomer; Madsen, Joseph R
2018-06-16
While a noninvasive flow determination would be desirable in the diagnosis of cerebrospinal fluid shunt malfunction, existing studies have not yet defined a role for thermal flow detection. Our objective was to evaluate a revised test protocol using a micropumper designed to transiently enhance flow during thermal testing, to determine whether thermal detection of flow is associated with progression to shunt revision surgery. Eighty-two unique tests were performed in 71 shunts. The primary outcome, need for revision within 7 d of testing, was compared with results of micropumper-augmented thermal flow detection. Statistical analysis was based on blind interpretation of test results and raw temperature data recorded during testing. The test was sensitive (73%) and specific (68%) in predicting need for revision, with a 5.6-fold higher probability of revision when flow was not detected. Negative predictive value in our sample was 94.2%. The probability of not requiring revision increased with increasing total temperature drop. Analysis of various possible thresholds showed that the optimal temperature cutoff may be lower than suggested by the manufacturer (0.125°C vs 0.2°C). This is the first study to report a strong association between thermal flow evaluation and a clinical impression that a shunt is not malfunctioning. The current recommended threshold may increase the false positive rate unnecessarily, and as clinicians gain experience with the method, they may find value in examining the temperature curves themselves. Multicenter studies are suggested to further define a role for this diagnostic test.
Taylor M. Wilcox; Kevin S. McKelvey; Michael K. Young; Adam J. Sepulveda; Bradley B. Shepard; Stephen F. Jane; Andrew R. Whiteley; Winsor H. Lowe; Michael K. Schwartz
2016-01-01
Environmental DNA sampling (eDNA) has emerged as a powerful tool for detecting aquatic animals. Previous research suggests that eDNA methods are substantially more sensitive than traditional sampling. However, the factors influencing eDNA detection and the resulting sampling costs are still not well understood. Here we use multiple experiments to derive...
Method for oil pipeline leak detection based on distributed fiber optic technology
NASA Astrophysics Data System (ADS)
Chen, Huabo; Tu, Yaqing; Luo, Ting
1998-08-01
Pipeline leak detection remains a difficult problem. Traditional leak detection methods suffer from problems such as high false-alarm or missed-detection rates and poor location-estimation capability. To address these problems, a method for oil pipeline leak detection based on a distributed optical fiber sensor with a special coating is presented. The fiber's coating interacts with hydrocarbon molecules in oil, which alters the refractive index of the coating and thereby modifies the light-guiding properties of the fiber. Thus the pipeline leak location can be determined by OTDR. An oil pipeline leak detection system is designed based on this principle. The system offers real-time operation, simultaneous multi-point detection, and high location accuracy. Finally, some factors that may influence detection are analyzed and preliminary improvements are proposed.
Space moving target detection using time domain feature
NASA Astrophysics Data System (ADS)
Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu
2018-01-01
Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets and cannot make full use of time-domain information. This paper presents a new space moving target detection method based on time-domain features. We first construct the time-spectral data of the star map, then analyze the time-domain features of the main objects (target, stars, and background) in star maps, and finally detect the moving targets using the single-pulse feature of the time-domain signal. Experimental results on real star maps show that the proposed method can effectively detect the trajectories of moving targets in a star map sequence, and the detection probability reaches 99% at a false alarm rate of about 8×10^-5, outperforming the compared algorithms.
Davidovitch, Lior; Stoklosa, Richard; Majer, Jonathan; Nietrzeba, Alex; Whittle, Peter; Mengersen, Kerrie; Ben-Haim, Yakov
2009-06-01
Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Due to the importance of detecting an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory for satisficing (not maximizing) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
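A minimal sketch of the robustness-satisficing trade-off: find the largest horizon of uncertainty at which the worst-case detection probability still meets the requirement. The fractional-error uncertainty model and the toy detection curve are illustrative assumptions, not the paper's model.

```python
import numpy as np

def robustness(e_nominal, pd_required, pd_model, h_grid):
    """Info-gap robustness: the largest uncertainty horizon h such that the
    worst-case detection probability still satisfies (satisfices) the
    requirement, under a fractional-error uncertainty set for efficiency."""
    best = 0.0
    for h in sorted(h_grid):
        worst_case = pd_model(e_nominal * (1 - h))  # least favourable efficiency
        if worst_case >= pd_required:
            best = h
    return best

pd_model = lambda e: 1 - np.exp(-3.0 * e)           # toy efficiency -> P(detect)
for req in (0.6, 0.7, 0.8, 0.9):
    print(req, robustness(0.8, req, pd_model, np.linspace(0, 1, 101)))
# Robustness shrinks as the required detection probability rises: the trade-off.
```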
Bird, Patrick; Fisher, Kiel; Boyle, Megan; Huffman, Travis; Benzinger, M Joseph; Bedinghaus, Paige; Flannery, Jonathan; Crowley, Erin; Agin, James; Goins, David; Benesh, DeAnn; David, John
2013-01-01
The 3M Molecular Detection Assay (MDA) Salmonella is used with the 3M Molecular Detection System for the detection of Salmonella spp. in food, food-related, and environmental samples after enrichment. The assay utilizes loop-mediated isothermal amplification to rapidly amplify Salmonella target DNA with high specificity and sensitivity, combined with bioluminescence to detect the amplification. The 3M MDA Salmonella method was compared using an unpaired study design in a multilaboratory collaborative study to the U.S. Department of Agriculture/Food Safety and Inspection Service-Microbiology Laboratory Guidebook (USDA/FSIS-MLG 4.05), Isolation and Identification of Salmonella from Meat, Poultry, Pasteurized Egg and Catfish Products for raw ground beef and the U.S. Food and Drug Administration/Bacteriological Analytical Manual (FDA/BAM) Chapter 5 Salmonella reference method for wet dog food following the current AOAC guidelines. A total of 20 laboratories participated. For the 3M MDA Salmonella method, raw ground beef was analyzed using 25 g test portions, and wet dog food was analyzed using 375 g test portions. For the reference methods, 25 g test portions of each matrix were analyzed. Each matrix was artificially contaminated with Salmonella at three inoculation levels: an uninoculated control level (0 CFU/test portion), a low inoculum level (0.2-2 CFU/test portion), and a high inoculum level (2-5 CFU/test portion). In this study, 1512 unpaired replicate samples were analyzed. Statistical analysis was conducted according to the probability of detection (POD). For the low-level raw ground beef test portions, the following dLPOD (difference between the POD of the reference and candidate method) values with 95% confidence intervals were obtained: -0.01 (-0.14, +0.12). For the low-level wet dog food test portions, the following dLPOD with 95% confidence intervals were obtained: -0.04 (-0.16, +0.09). No significant differences were observed in the number of positive samples detected by the 3M MDA Salmonella method versus either the USDA/FSIS-MLG or FDA/BAM methods.
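To illustrate the POD arithmetic behind such comparisons, a minimal sketch with a simple Wald interval; the AOAC analysis uses a more elaborate LPOD/dLPOD procedure, and the counts and sign convention below are hypothetical.

```python
import numpy as np

def pod(detected: int, n: int):
    """Probability of detection with a simple Wald 95% interval."""
    p = detected / n
    half = 1.96 * np.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

# Hypothetical low-level counts (candidate vs. reference, unpaired design):
p_cand, ci_cand = pod(38, 72)
p_ref, ci_ref = pod(39, 72)
print(f"dPOD (candidate - reference) = {p_cand - p_ref:+.3f}")  # near zero
```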
Spatial cluster detection using dynamic programming
2012-01-01
Background: The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. Methods: We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. Results: When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. Conclusions: We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm. PMID:22443103
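For orientation, a brute-force baseline for the underlying scan task: score every axis-aligned rectangle of a count grid with a Poisson log-likelihood ratio and return the best. The paper's contribution is a dynamic-programming algorithm supporting MAP estimation and model averaging over far richer cluster configurations; this naive scan only illustrates the search problem being accelerated, and the scoring function is a simplified assumption.

```python
import numpy as np

def best_rectangle_score(counts, baseline):
    """Exhaustive spatial scan over all axis-aligned rectangles of a grid.
    counts: observed event counts per cell; baseline: expected counts.
    Each rectangle is scored by a one-region Poisson log-likelihood ratio."""
    R, C = counts.shape
    best, best_rect = -np.inf, None
    for r1 in range(R):
        for r2 in range(r1 + 1, R + 1):
            for c1 in range(C):
                for c2 in range(c1 + 1, C + 1):
                    obs = counts[r1:r2, c1:c2].sum()
                    exp = baseline[r1:r2, c1:c2].sum()
                    if obs > exp > 0:   # only score regions with excess counts
                        score = obs * np.log(obs / exp) - (obs - exp)
                        if score > best:
                            best, best_rect = score, (r1, r2, c1, c2)
    return best, best_rect
```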
False alarm recognition in hyperspectral gas plume identification
Conger, James L [San Ramon, CA]; Lawson, Janice K [Tracy, CA]; Aimonetti, William D [Livermore, CA]
2011-03-29
According to one embodiment, a method for analyzing hyperspectral data includes collecting first hyperspectral data of a scene using a hyperspectral imager during a no-gas period and analyzing the first hyperspectral data using one or more gas plume detection logics. The gas plume detection logic is executed using a low detection threshold, and detects each occurrence of an observed hyperspectral signature. The method also includes generating a histogram for all occurrences of each observed hyperspectral signature which is detected using the gas plume detection logic, and determining a probability of false alarm (PFA) for all occurrences of each observed hyperspectral signature based on the histogram. Possibly at some other time, the method includes collecting second hyperspectral data, and analyzing the second hyperspectral data using the one or more gas plume detection logics and the PFA to determine if any gas is present. Other systems and methods are also included.
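The PFA computation described above reduces to simple empirical arithmetic: the fraction of no-gas detector responses that exceed a given threshold. A minimal sketch with hypothetical scores:

```python
import numpy as np

def pfa_curve(no_gas_scores, thresholds):
    """Empirical probability of false alarm: fraction of detector responses
    collected during the known no-gas period that exceed each threshold."""
    s = np.asarray(no_gas_scores, float)
    return np.array([(s > t).mean() for t in thresholds])

# Hypothetical detector scores from a no-gas scene:
rng = np.random.default_rng(0)
scores = rng.normal(0.0, 1.0, 10_000)
print(pfa_curve(scores, thresholds=[1.0, 2.0, 3.0]))  # PFA drops as threshold rises
```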
Reliably Detectable Flaw Size for NDE Methods that Use Calibration
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2017-01-01
Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and associated mh1823 POD software gives most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have known size artificial flaws such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are desired to be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses form artificial flaws used in calibration process to determine reliably detectable flaw size.
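A minimal hit/miss POD sketch in the spirit of (but far simpler than) the mh1823 analysis: fit a logistic POD curve against log flaw size and solve for a90, the size detected with 90% probability. The log-size link, the optimizer choice, and the data are illustrative assumptions, and a full analysis would also compute the a90/95 confidence bound, which this sketch omits.

```python
import numpy as np
from scipy.optimize import minimize

def fit_hitmiss_pod(size, hit):
    """Fit POD(a) = logistic(b0 + b1*log a) to hit/miss data and return a90."""
    size, hit = np.asarray(size, float), np.asarray(hit, float)

    def nll(theta):
        b0, b1 = theta
        p = 1 / (1 + np.exp(-(b0 + b1 * np.log(size))))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

    b0, b1 = minimize(nll, x0=np.array([0.0, 1.0])).x
    return np.exp((np.log(0.9 / 0.1) - b0) / b1)    # solve POD(a90) = 0.90

# Hypothetical flaw sizes (mm) and detect/miss outcomes (with some overlap):
a = [0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.5, 2.0, 2.5]
hits = [0, 0, 1, 0, 1, 0, 1, 1, 1, 1]
print(fit_hitmiss_pod(a, hits))
```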
ROKU: a novel method for identification of tissue-specific genes.
Kadota, Koji; Ye, Jiazhen; Nakai, Yuji; Terada, Tohru; Shimizu, Kentaro
2006-06-12
One of the important goals of microarray research is the identification of genes whose expression is considerably higher or lower in some tissues than in others. We would like to have ways of identifying such tissue-specific genes. We describe a method, ROKU, which selects tissue-specific patterns from gene expression data for many tissues and thousands of genes. ROKU ranks genes according to their overall tissue specificity using Shannon entropy and detects tissues specific to each gene if any exist using an outlier detection method. We evaluated the capacity for the detection of various specific expression patterns using synthetic and real data. We observed that ROKU was superior to a conventional entropy-based method in its ability to rank genes according to overall tissue specificity and to detect genes whose expression pattern are specific only to objective tissues. ROKU is useful for the detection of various tissue-specific expression patterns. The framework is also directly applicable to the selection of diagnostic markers for molecular classification of multiple classes.
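A minimal sketch of the entropy-ranking step: a profile concentrated in few tissues has low Shannon entropy, so ranking by entropy surfaces tissue-specific genes. ROKU additionally preprocesses each profile (Tukey biweight centring) and runs an outlier test to name the specific tissues; this sketch shows only the entropy calculation.

```python
import numpy as np

def tissue_entropy(x):
    """Shannon entropy of an expression profile across tissues: low entropy
    means expression is concentrated in a few tissues (tissue-specific)."""
    x = np.abs(np.asarray(x, float))
    p = x / x.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(tissue_entropy([8, 0.1, 0.1, 0.1]))  # low entropy: tissue-specific
print(tissue_entropy([2, 2, 2, 2]))        # log2(4) = 2: uniform expression
```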
Hayer, C.-A.; Irwin, E.R.
2008-01-01
We used an information-theoretic approach to examine the variation in detection probabilities for 87 Piedmont and Coastal Plain fishes in relation to instream gravel mining in four Alabama streams of the Mobile River drainage. Biotic and abiotic variables were also included in candidate models. Detection probabilities were heterogeneous across species and varied with habitat type, stream, season, and water quality. Instream gravel mining influenced the variation in detection probabilities for 38% of the species collected, probably because it led to habitat loss and increased sedimentation. Higher detection probabilities were apparent at unmined sites than at mined sites for 78% of the species for which gravel mining was shown to influence detection probabilities, indicating potential negative impacts to these species. Physical and chemical attributes also explained the variation in detection probabilities for many species. These results indicate that anthropogenic impacts can affect detection probabilities for fishes, and such variation should be considered when developing monitoring programs or routine sampling protocols. © Copyright by the American Fisheries Society 2008.
Areal, F J; Touza, J; MacLeod, A; Dehnen-Schmutz, K; Perrings, C; Palmieri, M G; Spence, N J
2008-12-01
This paper analyses the cut flower market as an example of an invasion pathway along which species of non-indigenous plant pests can travel to reach new areas. The paper examines the probability of pest detection by assessing information on pest detection and detection effort associated with the import of cut flowers. We test the link between the probability of plant pest arrivals, as a precursor to potential invasion, and volume of traded flowers using count data regression models. The analysis is applied to the UK import of specific genera of cut flowers from Kenya between 1996 and 2004. There is a link between pest detection and the Genus of cut flower imported. Hence, pest detection efforts should focus on identifying and targeting those imported plants with a high risk of carrying pest species. For most of the plants studied, efforts allocated to inspection have a significant influence on the probability of pest detection. However, by better targeting inspection efforts, it is shown that plant inspection effort could be reduced without increasing the risk of pest entry. Similarly, for most of the plants analysed, an increase in volume traded will not necessarily lead to an increase in the number of pests entering the UK. For some species, such as Carthamus and Veronica, the volume of flowers traded has a significant and positive impact on the likelihood of pest detection. We conclude that analysis at the rank of plant Genus is important both to understand the effectiveness of plant pest detection efforts and consequently to manage the risk of introduction of non-indigenous species.
Sea Ice Detection Based on Differential Delay-Doppler Maps from UK TechDemoSat-1
Zhu, Yongchao; Yu, Kegen; Zou, Jingui; Wickert, Jens
2017-01-01
Global Navigation Satellite System (GNSS) signals can be exploited to remotely sense the atmosphere and land and ocean surfaces to retrieve a range of geophysical parameters. This paper proposes two new methods, termed power-summation of differential Delay-Doppler Maps (PS-D) and pixel-number of differential Delay-Doppler Maps (PN-D), to distinguish between sea ice and sea water using differential Delay-Doppler Maps (dDDMs). PS-D and PN-D make use of power-summation and pixel-number of dDDMs, respectively, to measure the degree of difference between two DDMs so as to determine the transition state (water-water, water-ice, ice-ice and ice-water) and hence detect ice and water. Moreover, an adaptive incoherent averaging of DDMs is employed to improve the computational efficiency. A large number of DDMs recorded by UK TechDemoSat-1 (TDS-1) over the Arctic region are used to test the proposed sea ice detection methods. Through evaluation against ground-truth measurements from the Ocean Sea Ice SAF, the proposed PS-D and PN-D methods achieve a probability of detection of 99.72% and 99.69% respectively, while the probability of false detection is 0.28% and 0.31% respectively. PMID:28704948
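A minimal sketch of the two difference measures on consecutive DDMs; the per-pixel threshold and array shapes are placeholders, and the paper's full pipeline (adaptive incoherent averaging, transition-state logic) is not reproduced here.

```python
import numpy as np

def ps_d(ddm_prev, ddm_curr):
    """Power-summation of a differential Delay-Doppler Map: total absolute
    pixel difference between consecutive DDMs. A large value flags a surface
    transition (e.g., water -> ice)."""
    return np.abs(ddm_curr - ddm_prev).sum()

def pn_d(ddm_prev, ddm_curr, pixel_thresh):
    """Pixel-number variant: count of pixels whose difference exceeds a
    per-pixel threshold (threshold value is a placeholder assumption)."""
    return int((np.abs(ddm_curr - ddm_prev) > pixel_thresh).sum())
```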
A Pulse Rate Detection Method for Mouse Application Based on Multi-PPG Sensors
Chen, Wei-Hao
2017-01-01
Heart rate is an important physiological parameter for healthcare. Among measurement methods, photoplethysmography (PPG) is an easy and convenient approach to pulse rate detection. However, because the PPG signal is susceptible to motion artifacts and constrained by sensor placement, this paper implements a comfortable, easy-to-use multi-PPG sensor module, combined with a stable and accurate real-time pulse rate detection method, on a computer mouse. A weighted average over the multi-PPG sensors adjusts the weight of each signal channel to raise the accuracy and stability of the detected signal, effectively reducing noise disturbance while the hand is moving. Experimental results show that the proposed method increases the usability and probability of PPG signal detection on the palm. PMID:28708112
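The channel-weighting step lends itself to a short sketch. The inverse-variance weighting rule below is an assumption chosen for illustration; the paper's exact weight-update rule is not reproduced:

```python
import numpy as np

def fuse_ppg(channels):
    """Combine multiple PPG channels into one signal, giving noisier
    channels less weight (noise proxied by the variance of the first
    difference; an assumed rule, not the paper's)."""
    channels = np.asarray(channels, dtype=float)       # (n_sensors, n_samples)
    noise = np.var(np.diff(channels, axis=1), axis=1)  # crude per-channel noise level
    w = 1.0 / (noise + 1e-12)
    w /= w.sum()
    return w @ channels                                # weighted-average PPG signal
```

The fused signal would then feed a standard peak detector to obtain beats per minute.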
Evaluation of aerial survey methods for Dall's sheep
Udevitz, Mark S.; Shults, Brad S.; Adams, Layne G.; Kleckner, Christopher
2006-01-01
Most Dall's sheep (Ovis dalli dalli) population-monitoring efforts use intensive aerial surveys with no attempt to estimate variance or adjust for potential sightability bias. We used radiocollared sheep to assess factors that could affect sightability of Dall's sheep in standard fixed-wing and helicopter surveys and to evaluate feasibility of methods that might account for sightability bias. Work was conducted in conjunction with annual aerial surveys of Dall's sheep in the western Baird Mountains, Alaska, USA, in 2000–2003. Overall sightability was relatively high compared with other aerial wildlife surveys, with 88% of the available, marked sheep detected in our fixed-wing surveys. Total counts from helicopter surveys were not consistently larger than counts from fixed-wing surveys of the same units, and detection probabilities did not differ for the 2 aircraft types. Our results suggest that total counts from helicopter surveys cannot be used to obtain reliable estimates of detection probabilities for fixed-wing surveys. Groups containing radiocollared sheep often changed in size and composition before they could be observed by a second crew in units that were double-surveyed. Double-observer methods that require determination of which groups were detected by each observer will be infeasible unless survey procedures can be modified so that groups remain more stable between observations. Mean group sizes increased during our study period, and our logistic regression sightability model indicated that detection probabilities increased with group size. Mark–resight estimates of annual population sizes were similar to sightability-model estimates, and confidence intervals overlapped broadly. We recommend the sightability-model approach as the most effective and feasible of the alternatives we considered for monitoring Dall's sheep populations.
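The logistic sightability model mentioned here, with detection probability increasing with group size, can be sketched as follows; the survey records are fabricated for illustration, and the Horvitz-Thompson-style correction is one standard way to use such a model:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical records of radiocollared groups: size and whether they were seen
group_size = np.array([1, 2, 2, 3, 4, 5, 6, 8, 10, 12, 3, 1, 7, 9, 2, 5], float)
detected   = np.array([0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1])

X = sm.add_constant(group_size)
fit = sm.GLM(detected, X, family=sm.families.Binomial()).fit()

# Sightability correction: each detected group stands in for 1/p_hat groups
p_hat = fit.predict(X)
corrected_total = (detected / p_hat).sum()
print(fit.params, round(corrected_total, 1))
```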
Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron
2014-01-01
The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996 including amendment 1:2004 in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference method for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, stainless steel, and low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
A Multi-Channel Method for Detecting Periodic Forced Oscillations in Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, James D.; Tuffner, Francis K.
2016-11-14
Forced oscillations in electric power systems are often symptomatic of equipment malfunction or improper operation. Detecting and addressing the cause of the oscillations can improve overall system operation. In this paper, a multi-channel method of detecting forced oscillations and estimating their frequencies is proposed. The method operates by comparing the sum of scaled periodograms from various channels to a threshold. A method of setting the threshold to specify the detector's probability of false alarm while accounting for the correlation between channels is also presented. Results from simulated and measured power system data indicate that the method outperforms its single-channel counterpart and is suitable for real-world applications.
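A stripped-down version of the detector is easy to sketch: sum scaled periodograms across channels and flag frequencies where the sum crosses a chi-squared threshold. Unlike the paper, this sketch assumes independent channels (no inter-channel correlation correction) and sets a per-frequency false-alarm rate:

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import chi2

def forced_osc_detector(X, fs, pfa=0.01):
    """X: (n_channels, n_samples). Under white noise, 2*P/E[P] at each
    frequency is ~chi^2_2 per channel, so the sum over C independent
    channels is ~chi^2_{2C}; E[P] is estimated by each channel's mean
    periodogram level."""
    C = X.shape[0]
    f, P = periodogram(X, fs=fs, axis=1)
    stat = (2.0 * P / P.mean(axis=1, keepdims=True)).sum(axis=0)
    thresh = chi2.ppf(1.0 - pfa, df=2 * C)
    return f[stat > thresh], stat, thresh

fs = 30.0
t = np.arange(0, 60, 1 / fs)
X = np.random.default_rng(2).normal(size=(4, t.size))
X += 0.4 * np.sin(2 * np.pi * 0.7 * t)      # common 0.7 Hz forced oscillation
print(forced_osc_detector(X, fs)[0])        # detected frequencies, ~0.7 Hz among them
```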
Grizzly Bear Noninvasive Genetic Tagging Surveys: Estimating the Magnitude of Missed Detections.
Fisher, Jason T; Heim, Nicole; Code, Sandra; Paczkowski, John
2016-01-01
Sound wildlife conservation decisions require sound information, and scientists increasingly rely on remotely collected data over large spatial scales, such as noninvasive genetic tagging (NGT). Grizzly bears (Ursus arctos), for example, are difficult to study at population scales except with noninvasive data, and NGT via hair trapping informs management over much of grizzly bears' range. Considerable statistical effort has gone into estimating sources of heterogeneity, but detection error, which arises when a visiting bear fails to leave a hair sample, has not been independently estimated. We used camera traps to survey grizzly bear occurrence at fixed hair traps and multi-method hierarchical occupancy models to estimate the probability that a visiting bear actually leaves a hair sample with viable DNA. We surveyed grizzly bears via hair trapping and camera trapping for 8 monthly surveys at 50 (2012) and 76 (2013) sites in the Rocky Mountains of Alberta, Canada. We used multi-method occupancy models to estimate site occupancy, probability of detection, and conditional occupancy at a hair trap. We tested the prediction that detection error in NGT studies could be induced by temporal variability within season, leading to underestimation of occupancy. NGT via hair trapping consistently underestimated grizzly bear occupancy at a site when compared to camera trapping. At best, occupancy was underestimated by 50%; at worst, by 95%. Probability of false absence was reduced through successive surveys, but this mainly accounts for error imparted by movement among repeated surveys, not necessarily missed detections by extant bears. The implications of missed detections and biased occupancy estimates for density estimation, which forms the crux of management plans, require consideration. We suggest hair-trap NGT studies should estimate and correct detection error using independent survey methods such as cameras, to ensure the reliability of the data upon which species management and conservation actions are based.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Vos, Hans J.; Chang, Lei
In judgmental standard setting experiments, it may be difficult to specify subjective probabilities that adequately take the properties of the items into account. As a result, these probabilities are not consistent with each other in the sense that they do not refer to the same borderline level of performance. Methods to check standard setting…
Prediction Metrics for Chemical Detection in Long-Wave Infrared Hyperspectral Imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chilton, Marie C.; Walsh, Stephen J.; Daly, Don S.
2009-01-29
A natural or anthropogenic process often generates a signature gas plume whose chemical constituents may be identified using hyperspectral imagery. A hyperspectral image is a pixel-indexed set of spectra where each spectrum reflects the chemical constituents of the plume, the atmosphere, the bounding background surface, and instrument noise. This study explored the relationship between gas absorbance and background emissivity across the long-wave infrared (LWIR) spectrum and how they affect relative gas detection sensitivity. The physics-based model for the observed radiance shows that high gas absorbance coupled with low background emissivity at a single wavenumber results in a stronger recorded radiance. Two sensitivity measures were developed to predict relative probability of detection using chemical absorbance and background emissivity: one focused on a single wavenumber while another accounted for the entire spectrum. The predictive abilities of these measures were compared to synthetic image analysis. This study simulated images with 499 distinct gases at each of 6 concentrations over 6 different background surfaces with the atmosphere and level of instrument noise held constant. The Whitened Matched Filter was used to define gas detection from an image spectrum. The estimate of a chemical’s probability of detection at a given concentration over a specific background was the proportion of detections in 500 trials. Of the 499 chemicals used in the images, 276 had estimated probabilities of detection below 0.2 across all backgrounds and concentrations; these chemicals were removed from the study. For 92.8 percent of the remaining chemicals, the single channel measure correctly predicted the background over which the chemical had the largest relative probability of detection. Further, the measure which accounted for information across all wavenumbers predicted the background over which the chemical had the largest relative probability of detection for 93.3 percent of the chemicals. These results suggest that the wavenumber with largest gas absorbance has the most influence over gas detection for this data. By furthering the in-silico experimentation with higher concentrations of gases not detectable in this experiment or by standardizing the gas absorbance spectra to unit vectors, these conclusions may be confirmed and generalized to more gases. This will help simplify image acquisition planning and the identification of unknowns in field collected images.
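For reference, the whitened matched filter used to declare detections has a compact textbook form: whiten each pixel spectrum by the background covariance and correlate it with the whitened target signature. This is the generic filter, assumed to match the study's detector only in its standard form:

```python
import numpy as np

def whitened_matched_filter(X, b, mu, Sigma):
    """WMF scores for pixel spectra X (n_pixels, n_bands) against a target
    signature b, with background mean mu and covariance Sigma. Scores are
    scaled so background pixels have unit variance; detection is a simple
    threshold on the score (threshold selection not shown)."""
    Si = np.linalg.inv(Sigma)
    return (X - mu) @ Si @ b / np.sqrt(b @ Si @ b)
```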
Calibrating random forests for probability estimation.
Dankowski, Theresa; Ziegler, Andreas
2016-09-30
Probabilities can be consistently estimated using random forests. It is, however, unclear how random forests should be updated to make predictions for other centers or at different time points. In this work, we present two approaches for updating random forests for probability estimation. The first method has been proposed by Elkan and may be used for updating any machine learning approach yielding consistent probabilities, so-called probability machines. The second approach is a new strategy specifically developed for random forests. Using the terminal nodes, which represent conditional probabilities, the random forest is first translated to logistic regression models. These are, in turn, used for re-calibration. The two updating strategies were compared in a simulation study and are illustrated with data from the German Stroke Study Collaboration. In most simulation scenarios, both methods led to similar improvements. In the simulation scenario in which the stricter assumptions of Elkan's method were not met, the logistic regression-based re-calibration approach for random forests outperformed Elkan's method. It also performed better on the stroke data than Elkan's method. The strength of Elkan's method is its general applicability to any probability machine. However, if the strict assumptions underlying this approach are not met, the logistic regression-based approach is preferable for updating random forests for probability estimation. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
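A minimal sketch of the recalibration idea, using plain logistic re-calibration of the forest's predicted probabilities on data from the "new" center; this is a simpler stand-in for the paper's terminal-node-to-logistic-regression translation, with synthetic data throughout:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
XA, XB, yA, yB = train_test_split(X, y, test_size=0.3, random_state=1)  # "centers" A and B

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(XA, yA)
p_raw = rf.predict_proba(XB)[:, 1]                       # forest probabilities at center B

# Re-calibration map fitted on center-B outcomes, applied on the logit scale
logit = np.log(np.clip(p_raw, 1e-6, None) / np.clip(1 - p_raw, 1e-6, None))
recal = LogisticRegression().fit(logit.reshape(-1, 1), yB)
p_cal = recal.predict_proba(logit.reshape(-1, 1))[:, 1]  # updated probabilities
```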
Chapple, T K; Chambert, T; Kanive, P E; Jorgensen, S J; Rotella, J J; Anderson, S D; Carlisle, A B; Block, B A
2016-12-01
Spatial segregation of animals by class (i.e., maturity or sex) within a population due to differential rates of temporary emigration (TE) from study sites can be an important life history feature to consider in population assessment and management. However, such rates are poorly known; new quantitative approaches to address these knowledge gaps are needed. We present a novel application of multi-event models that takes advantage of two sources of detections to differentiate temporary emigration from apparent absence to quantify class segregation within a study population of double-marked (photo-identified and tagged with coded acoustic transmitters) white sharks (Carcharodon carcharias) in central California. We use this model to test if sex-specific patterns in TE result in disparate apparent capture probabilities (p_o) between male and female white sharks, which can affect the observed sex ratio. The best-supported model showed a contrasting pattern of Pr(TE) from coastal aggregation sites between sexes (for males Pr[TE] = 0.015 [95% CI = 0.00, 0.31]; for females Pr[TE] = 0.57 [0.40, 0.72]), but not maturity classes. Additionally, by accounting for Pr(TE) and imperfect detection, we were able to estimate class-specific values of true capture probability (p*) for tagged and untagged sharks. The best-supported model identified differences between maturity classes but no difference between sexes or tagging impacts (tagged mature sharks p* = 0.55 [0.46, 0.63] and sub-adult sharks p* = 0.36 [0.25, 0.50]; untagged mature sharks p* = 0.50 [0.39, 0.61] and sub-adults p* = 0.18 [0.10, 0.31]). Estimated sex-based differences in p_o were linked to sex-specific differences in Pr(TE) but not in p*; once Pr(TE) is accounted for, p* between sexes was not different. These results indicate that the observed sex ratio is not a consequence of unequal detectability and that sex-specific values of Pr(TE) are important drivers of the observed male-dominated sex ratio. Our modeling approach reveals complex class-specific patterns in Pr(TE) and p* in a mark-recapture data set, and highlights challenges for the population modeling and conservation of white sharks in central California. The model we develop here can be used to estimate rates of temporary emigration and class segregation when two detection methods are used. © 2016 by the Ecological Society of America.
Chis Ster, Irina; Dodd, Peter J; Ferguson, Neil M
2012-08-01
This paper uses statistical and mathematical models to examine the potential impact of within-farm transmission dynamics on the spread of the 2001 foot and mouth disease (FMD) outbreak in Great Britain. We partly parameterize a simple within farm transmission model using data from experimental studies of FMD pathogenesis, embed this model within an existing between-farm transmission model, and then estimate unknown parameters (such as the species-specific within-farm reproduction number) from the 2001 epidemic case data using Markov Chain Monte-Carlo (MCMC) methods. If the probability of detecting an infected premises depends on farm size and species mix then the within-farm species specific basic reproduction ratios for baseline models are estimated to be 21 (16, 25) and 14 (10, 19) for cattle and sheep, respectively. Alternatively, if detection is independent of farm size, then the corresponding estimates are 49 (41, 61) and 10 (1.4, 21). Both model variants predict that the average fraction of total farm infectiousness accumulated prior to detection of infection on an IP is about 30-50% in cattle or mixed farms. The corresponding estimate for sheep farms depended more on the detection model, being 65-80% if detection was linked to the farms' characteristics, but only 25% if not. We highlighted evidence which reinforces the role of within-farm dynamics in contributing to the long tail of the 2001 epidemic. Copyright © 2012 Elsevier B.V. All rights reserved.
Toward General Software Level Silent Data Corruption Detection for Parallel Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berrocal, Eduardo; Bautista-Gomez, Leonardo; Di, Sheng
Silent data corruption (SDC) poses a great challenge for high-performance computing (HPC) applications as we move to extreme-scale systems. Mechanisms have been proposed that are able to detect SDC in HPC applications by using the peculiarities of the data (more specifically, its “smoothness” in time and space) to make predictions. However, these data-analytic solutions are still far from fully protecting applications to a level comparable with more expensive solutions such as full replication. In this work, we propose partial replication to overcome this limitation. More specifically, we have observed that not all processes of an MPI application experience the same level of data variability at exactly the same time. Thus, we can smartly choose and replicate only those processes for which the lightweight data-analytic detectors would perform poorly. In addition, we propose a new evaluation method based on the probability that a corruption will pass unnoticed by a particular detector (instead of just reporting overall single-bit precision and recall). In our experiments, we use four applications dealing with different explosions. Our results indicate that our new approach can protect the MPI applications analyzed with 7–70% less overhead (depending on the application) than that of full duplication with similar detection recall.
NASA Astrophysics Data System (ADS)
Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.
2018-03-01
A method is outlined and tested to detect low level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.
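The core regression-screening step, sliding a simulated signature along the measured series and recording a goodness-of-fit metric, can be sketched generically. Window handling and the specific metrics in the study differ in detail; R² against a scaled-plus-baseline fit is an assumed choice here:

```python
import numpy as np

def fit_metric_series(measured, signature):
    """For each segment of `measured` the length of `signature`, fit
    segment ~ a*signature + b and record R^2; peaks mark segments that
    resemble the simulated source signature."""
    m = len(signature)
    A = np.column_stack([signature, np.ones(m)])     # scale + baseline design matrix
    r2 = np.empty(len(measured) - m + 1)
    for i in range(len(r2)):
        seg = measured[i:i + m]
        coef, *_ = np.linalg.lstsq(A, seg, rcond=None)
        resid = seg - A @ coef
        ss_tot = ((seg - seg.mean()) ** 2).sum()
        r2[i] = 1.0 - resid @ resid / ss_tot if ss_tot > 0 else 0.0
    return r2
```

Multiplying metric series from several independent segments (or stations), as the abstract describes, then approximates a joint probability of common origin.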
Wiens, J. David; Kolar, Patrick S.; Fuller, Mark R.; Hunt, W. Grainger; Hunt, Teresa
2015-01-01
We used a multistate occupancy sampling design to estimate occupancy, breeding success, and abundance of territorial pairs of golden eagles (Aquila chrysaetos) in the Diablo Range, California, in 2014. This method uses the spatial pattern of detections and non-detections over repeated visits to survey sites to estimate probabilities of occupancy and successful reproduction while accounting for imperfect detection of golden eagles and their young during surveys. The estimated probability of detecting territorial pairs of golden eagles and their young was less than 1 and varied with time of the breeding season, as did the probability of correctly classifying a pair’s breeding status. Imperfect detection and breeding classification led to a sizeable difference between the uncorrected, naïve estimate of the proportion of occupied sites where successful reproduction was observed (0.20) and the model-based estimate (0.30). The analysis further indicated a relatively high overall probability of landscape occupancy by pairs of golden eagles (0.67, standard error = 0.06), but that areas with the greatest occupancy and reproductive potential were patchily distributed. We documented a total of 138 territorial pairs of golden eagles during surveys completed in the 2014 breeding season, which represented about one-half of the 280 pairs we estimated to occur in the broader 5,169-square kilometer region sampled. The study results emphasize the importance of accounting for imperfect detection and spatial heterogeneity in studies of site occupancy, breeding success, and abundance of golden eagles.
Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibbett, Taylor; Moradi, Hussein; Farhang-Boroujeny, Behrouz
This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
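One simplified reading of the maximum-consistency idea: if a packet is present, the matched-filter output peaks at the same offset in every spreading period, whereas noise-only input produces uniformly random peak locations. Block length, count, and tolerance below are assumptions:

```python
import numpy as np

def acquire_by_max_consistency(mf_out, period, n_blocks, tol=1):
    """Split the matched-filter output into blocks of one spreading period,
    find each block's peak index, and declare detection when the peak
    location is (nearly) constant across blocks; no SNR estimate needed.
    Peaks wrapping across the block boundary are ignored in this sketch."""
    blocks = mf_out[:period * n_blocks].reshape(n_blocks, period)
    peaks = blocks.argmax(axis=1)
    detected = peaks.max() - peaks.min() <= tol
    return detected, int(np.median(peaks))    # (packet present?, timing estimate)
```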
Burgos, Joaquin; Garcia-Pérez, Jorge N; di Lauro, Sabina González; Falcó, Vicenç; Pumarola, Tomás; Almirante, Benito; Teresa Martín Gomez, M
2018-04-13
The Sofia Pneumococcal FIA® test is a recently introduced, automatically read immunofluorescent assay for detecting Streptococcus pneumoniae antigen in urine. The aim of this study was to evaluate the usefulness of the SofiaFIA® urinary antigen test (UAT) in comparison with the classical immunochromatographic BinaxNOW® test for the diagnosis of pneumococcal pneumonia (PP). An observational study was conducted at the Hospital Universitari Vall d'Hebron from December 2015 to August 2016. Consecutive adult patients diagnosed with pneumonia and admitted to the emergency department in whom a UAT was requested were prospectively enrolled. Paired pneumococcal UATs (BinaxNOW® and SofiaFIA®) were performed on urine samples. To assess the performance of both tests, patients were categorized into proven PP (isolation of S. pneumoniae in sterile fluid) or probable PP (isolation of S. pneumoniae in respiratory secretion). Sensitivity, specificity, and concordance were calculated. A total of 219 patients with pneumonia were enrolled, of whom 14% had a proven or probable PP, 22% a non-pneumococcal etiology, and 64% an unidentified pathogen. Concordance between tests was good (κ = 0.81). Sensitivity of SofiaFIA® and BinaxNOW® UAT was 78.6 and 50% for proven PP (p = 0.124), and 74.2 and 58% for proven/probable PP (p = 0.063). Specificity for both tests was 83.3 and 85.5% for proven and proven/probable PP. In patients without an identified pathogen, the SofiaFIA® test was positive in 33 (23.6%) cases and BinaxNOW® in 25 (17.8%), so Sofia Pneumococcal FIA® detected 32.6% more cases than BinaxNOW® (p = 0.001). The Sofia Pneumococcal FIA® test showed improved sensitivity over visual reading of the BinaxNOW® test without a noticeable loss of specificity.
Guest, Rebecca; Tran, Yvonne; Gopinath, Bamini; Cameron, Ian D; Craig, Ashley
2018-02-21
Physical injury and psychological disorder following a motor vehicle crash (MVC) is a public health concern. The objective of this research was to determine rates of major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) in adults with MVC-related injury engaged in compensation, and to determine the capacity (e.g. sensitivity and specificity) of two psychometric scales for estimating the presence of MDD and PTSD. Participants included 109 adults with MVC-related injury engaged in compensation during 2015 to 2017, in Sydney, Australia. The mean time from MVC to baseline assessment was 11 weeks. Comprehensive assessment was conducted at baseline, and the Depression Anxiety Stress Scales (DASS-21) and the Impact of Event Scale-Revised (IES-R) were administered to determine probable MDD and PTSD. An online psychiatric interview, based on Diagnostic and Statistical Manual for Mental Disorders (DSM-5), was used to diagnose actual MDD and PTSD, acknowledged as gold standard diagnostic criteria. One-way multivariate analyses of variance established criterion validity of the DASS-21 and IES-R, and sensitivity and specificity analyses were conducted to determine the most sensitive cut-off points for detecting probable MDD and PTSD. Substantial rates of MDD (53.2%) and PTSD (19.3%) were found. The DASS-21 and IES-R were shown to have excellent criterion validity for detecting MDD and PTSD in injured participants. A range of cut-off points were investigated and shown to have acceptable sensitivity and specificity for detecting MDD and PTSD in an injured population engaged in compensation. The preferred cut-off points based on this study are: to detect MDD, a DASS-21 total score of 30 and/or a DASS-21 depression score of 10; to detect PTSD, IES-R scores of 33-40 and/or a DASS-21 anxiety score of 7-8. Major psychological disorder is prevalent following a MVC. Results suggest the DASS-21 and IES-R are suitable for use in clinical/compensation settings to detect probable MDD and PTSD soon after a MVC in physically injured people engaged in compensation. These results provide positive direction in the public health arena for improving mental health outcomes. Clinical Trials registration number: ANZCTR - ACTRN12615000326594 (9th April 2015).
Cho, Gyu-Sung; Krauss, Sabrina; Huch, Melanie; Du Toit, Maret; Franz, Charles M A P
2011-12-01
A quantitative, real-time PCR method was developed to enumerate Lactobacillus plantarum IWBT B 188 during the malolactic fermentation (MLF) in Grauburgunder wine. The qRT-PCR was strain-specific when based on primers targeting a plasmid DNA sequence, or L. plantarum-specific when targeting a chromosomally located plantaricin gene sequence. Two 50 l wine fermentations were prepared. One was inoculated with 15 g/hl Saccharomyces cerevisiae, followed by L. plantarum IWBT B 188 at 3.6 × 10^6 CFU/ml, whereas the other was not inoculated (control). Viable cell counts were performed for up to 25 days on MRS agar, and the same cells were enumerated by qRT-PCR with both the plasmid- and the chromosomally encoded gene primers. The L. plantarum strain survived under the harsh conditions in the wine fermentation at levels above 10^5/ml for approx. 10 days, after which cell numbers decreased to 10^3 CFU/ml at day 25, and to below the detection limit thereafter. In the control, no lactic acid bacteria could be detected throughout the fermentation, with the exception of two sampling points where ca. 1 × 10^2 CFU/ml was detected. The minimum detection level for quantitative PCR in this study was 1 × 10^2 to 1 × 10^3 CFU/ml. The qRT-PCR counts generally overestimated the plate counts by about 1 log unit, probably as a result of the presence of DNA from dead cells. Overall, qRT-PCR appeared to be well suited for specifically enumerating Lactobacillus plantarum starter cultures in the MLF in wine.
Cruz, Cristina D; Win, Jessicah K; Chantarachoti, Jiraporn; Mutukumira, Anthony N; Fletcher, Graham C
2012-02-15
The standard Bacteriological Analytical Manual (BAM) protocol for detecting Listeria in food and on environmental surfaces takes about 96 h. Some studies indicate that rapid methods, which produce results within 48 h, may be as sensitive and accurate as the culture protocol. As they only give presence/absence results, it can be difficult to compare the accuracy of results generated. We used the Most Probable Number (MPN) technique to evaluate the performance and detection limits of six rapid kits for detecting Listeria in seafood and on an environmental surface compared with the standard protocol. Three seafood products and an environmental surface were inoculated with similar known cell concentrations of Listeria and analyzed according to the manufacturers' instructions. The MPN was estimated using the MPN-BAM spreadsheet. For the seafood products no differences were observed among the rapid kits and efficiency was similar to the BAM method. On the environmental surface the BAM protocol had a higher recovery rate (sensitivity) than any of the rapid kits tested. Clearview™, Reveal®, TECRA® and VIDAS® LDUO detected the cells but only at high concentrations (>10^2 CFU/10 cm^2). Two kits (VIP™ and Petrifilm™) failed to detect 10^4 CFU/10 cm^2. The MPN method was a useful tool for comparing the results generated by these presence/absence test kits. There remains a need to develop a rapid and sensitive method for detecting Listeria in environmental samples that performs as well as the BAM protocol, since none of the rapid tests used in this study achieved a satisfactory result. Copyright © 2011 Elsevier B.V. All rights reserved.
Jenkins, M B; Endale, D M; Fisher, D S; Gay, P A
2009-02-01
To better understand the transport and enumeration of dilute densities of Escherichia coli O157:H7 in agricultural watersheds, we developed a culture-based, five-tube multiple-dilution most probable number (MPN) method. The MPN method combined a filtration technique for large volumes of surface water with standard selective media, biochemical and immunological tests, and a TaqMan confirmation step. This method determined E. coli O157:H7 concentrations as low as 0.1 MPN per litre, with a 95% confidence interval of 0.01-0.7 MPN per litre. Escherichia coli O157:H7 densities ranged from not detectable to 9 MPN per litre for pond inflow, from not detectable to 0.9 MPN per litre for pond outflow and from not detectable to 8.3 MPN per litre for within pond. The MPN methodology was extended to mass flux determinations. Fluxes of E. coli O157:H7 ranged from <27 to >10^4 MPN per hour. This culture-based method can detect small numbers of viable/culturable E. coli O157:H7 in surface waters of watersheds containing animal agriculture and wildlife. This MPN method will improve our understanding of the transport and fate of E. coli O157:H7 in agricultural watersheds, and can be the basis of collections of environmental E. coli O157:H7.
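The MPN arithmetic underlying a multiple-dilution series has a clean maximum-likelihood form: each tube of volume v is positive with probability 1 - exp(-λv). A generic sketch (not the MPN-BAM spreadsheet mentioned in the previous abstract, nor this study's exact tables):

```python
import numpy as np
from scipy.optimize import brentq

def mpn_estimate(volumes, n_tubes, n_positive):
    """Solve the MPN score equation for lambda (organisms per unit volume)
    given, per dilution: sample volume per tube, tubes used, tubes positive."""
    v = np.asarray(volumes, float)
    n = np.asarray(n_tubes, float)
    x = np.asarray(n_positive, float)

    def score(lam):   # derivative of the log-likelihood in lambda
        p = 1.0 - np.exp(-lam * v)
        return (x * v * np.exp(-lam * v) / p - (n - x) * v).sum()

    return brentq(score, 1e-9, 1e9)   # fails if every tube is positive (no root)

# Five tubes at each of three 10-fold dilutions, with (5, 3, 1) positives:
print(mpn_estimate([10.0, 1.0, 0.1], [5, 5, 5], [5, 3, 1]))  # MPN per ml if volumes are in ml
```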
Automated reference-free detection of motion artifacts in magnetic resonance images.
Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios
2018-04-01
Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
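The spatial separation step, overlapping patches classified individually and averaged back into a map, is easy to sketch. The patch size, stride, and stand-in classifier below are placeholders for the trained CNN, which is not reproduced here:

```python
import numpy as np

def probability_map(img, patch=64, stride=32, predict=None):
    """Average per-patch artifact probabilities over every pixel each patch
    covers, yielding a spatially resolved probability map."""
    if predict is None:
        predict = lambda p: float(p.std() > 0.5)   # placeholder for the CNN
    H, W = img.shape
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            prob = predict(img[i:i + patch, j:j + patch])
            acc[i:i + patch, j:j + patch] += prob
            cnt[i:i + patch, j:j + patch] += 1
    return acc / np.maximum(cnt, 1)
```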
NASA Astrophysics Data System (ADS)
Brakensiek, Joshua; Ragozzine, D.
2012-10-01
The transit method for discovering extra-solar planets relies on detecting regular diminutions of light from stars due to the shadows of planets passing in between the star and the observer. NASA's Kepler Mission has successfully discovered thousands of exoplanet candidates using this technique, including hundreds of stars with multiple transiting planets. In order to estimate the frequency of these valuable systems, our research concerns the efficient calculation of geometric probabilities for detecting multiple transiting extrasolar planets around the same parent star. In order to improve on previous studies that used numerical methods (e.g., Ragozzine & Holman 2010, Tremaine & Dong 2011), we have constructed an efficient, analytical algorithm which, given a collection of conjectured exoplanets orbiting a star, computes the probability that any particular group of exoplanets are transiting. The algorithm applies theorems of elementary differential geometry to compute the areas bounded by circular curves on the surface of a sphere (see Ragozzine & Holman 2010). The implemented algorithm is more accurate and orders of magnitude faster than previous algorithms, based on comparison with Monte Carlo simulations. Expanding this work, we have also developed semi-analytical methods for determining the frequency of exoplanet mutual events, i.e., the geometric probability two planets will transit each other (Planet-Planet Occultation) and the probability that this transit occurs simultaneously as they transit their star (Overlapping Double Transits; see Ragozzine & Holman 2010). The latter algorithm can also be applied to calculating the probability of observing transiting circumbinary planets (Doyle et al. 2011, Welsh et al. 2012). All of these algorithms have been coded in C and will be made publicly available. We will present and advertise these codes and illustrate their value for studying exoplanetary systems.
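The geometric condition behind these calculations is simple for circular orbits: an observer direction u sees planet k transit when |u · n_k| < R*/a_k, where n_k is the orbit normal. A Monte Carlo over isotropic observer directions gives the brute-force joint probability that the analytic algorithm is checked against; the orbital elements below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def joint_transit_prob(normals, ratios, n=1_000_000):
    """Fraction of isotropic observer directions from which every planet
    transits; `ratios` holds R_star/a for each planet."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    ok = np.ones(n, dtype=bool)
    for nk, rk in zip(normals, ratios):
        ok &= np.abs(u @ nk) < rk
    return ok.mean()

# Two planets with R*/a of 0.02 and 0.01 and a 2-degree mutual inclination
th = np.radians(2.0)
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([0.0, np.sin(th), np.cos(th)])
print(joint_transit_prob([n1, n2], [0.02, 0.01]))   # a single planet alone transits with prob R*/a
```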
Teramura, H; Sota, K; Iwasaki, M; Ogihara, H
2017-07-01
Sanita-kun™ CC (coliform count) and EC (Escherichia coli/coliform count), sheet quantitative culture systems which can avoid chromogenic interference by lactase in food, were evaluated in comparison with conventional methods for these bacteria. Based on the results of inclusivity and exclusivity studies using 77 micro-organisms, the sensitivity and specificity of both Sanita-kun™ media met the criteria of ISO 16140. Both media were compared with deoxycholate agar, violet red bile agar, Merck Chromocult™ coliform agar (CCA), 3M Petrifilm™ CC and EC (PEC) and 3-tube MPN, as reference methods, in 100 naturally contaminated food samples. The correlation coefficients of both Sanita-kun™ media for coliform detection were more than 0·95 for all comparisons. For E. coli detection, Sanita-kun™ EC was compared with CCA, PEC and MPN in 100 artificially contaminated food samples. The correlation coefficients of Sanita-kun™ EC for E. coli detection were more than 0·95 for all comparisons. There were no significant differences in any comparison by one-way analysis of variance (ANOVA). Both Sanita-kun™ media significantly inhibited colour interference by lactase when inhibition of enzymatic staining was assessed using 40 natural cheese samples spiked with coliform. Our results demonstrated that Sanita-kun™ CC and EC are suitable alternatives for the enumeration of coliforms and E. coli/coliforms, respectively, in a variety of foods, and specifically in fermented foods. Current chromogenic media for coliforms and Escherichia coli/coliforms suffer enzymatic coloration caused by the breakdown of chromogenic substrates by food lactase. Novel sheet culture media with a film layer that avoids coloration by food lactase have been developed for enumeration of coliforms and E. coli/coliforms, respectively. In this study, we demonstrated that these media had comparable performance to reference methods and less interference by food lactase. These media may serve as useful alternatives and contribute to accurate enumeration of these bacteria in a variety of foods, and specifically in fermented foods. © 2017 The Society for Applied Microbiology.
Sargeant, Glen A.; Sovada, Marsha A.; Slivinski, Christiane C.; Johnson, Douglas H.
2005-01-01
Accurate maps of species distributions are essential tools for wildlife research and conservation. Unfortunately, biologists often are forced to rely on maps derived from observed occurrences recorded opportunistically during observation periods of variable length. Spurious inferences are likely to result because such maps are profoundly affected by the duration and intensity of observation and by methods used to delineate distributions, especially when detection is uncertain. We conducted a systematic survey of swift fox (Vulpes velox) distribution in western Kansas, USA, and used Markov chain Monte Carlo (MCMC) image restoration to rectify these problems. During 1997–1999, we searched 355 townships (ca. 93 km²) 1–3 times each for an average cost of $7,315 per year and achieved a detection rate (probability of detecting swift foxes, if present, during a single search) of 0.69 (95% Bayesian confidence interval [BCI] = [0.60, 0.77]). Our analysis produced an estimate of the underlying distribution, rather than a map of observed occurrences, that reflected the uncertainty associated with estimates of model parameters. To evaluate our results, we analyzed simulated data with similar properties. Results of our simulations suggest negligible bias and good precision when probabilities of detection on ≥1 survey occasions (cumulative probabilities of detection) exceed 0.65. Although the use of MCMC image restoration has been limited by theoretical and computational complexities, alternatives do not possess the same advantages. Image models accommodate uncertain detection, do not require spatially independent data or a census of map units, and can be used to estimate species distributions directly from observations without relying on habitat covariates or parameters that must be estimated subjectively. These features facilitate economical surveys of large regions, the detection of temporal trends in distribution, and assessments of landscape-level relations between species and habitats. Requirements for the use of MCMC image restoration include study areas that can be partitioned into regular grids of mapping units, spatially contagious species distributions, reliable methods for identifying target species, and cumulative probabilities of detection ≥0.65.
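The cumulative-detection requirement quoted here follows directly from the per-search rate: with independent searches, the chance of at least one detection in k searches is 1 - (1 - p)^k. A two-line check with the study's p = 0.69:

```python
p = 0.69
for k in (1, 2, 3):
    print(k, round(1 - (1 - p) ** k, 3))   # 0.69, 0.904, 0.97 -- one search already exceeds 0.65
```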
Bird, Patrick; Fisher, Kiel; Boyle, Megan; Huffman, Travis; Benzinger, M Joseph; Bedinghaus, Paige; Flannery, Jonathon; Crowley, Erin; Agin, James; Goins, David; Benesh, DeAnn; David, John
2014-01-01
The 3M™ Molecular Detection Assay (MDA) Salmonella utilizes isothermal amplification of nucleic acid sequences, with high specificity, efficiency, and rapidity, and bioluminescence to detect amplification, identifying Salmonella spp. in food, food-related, and environmental samples after enrichment. A method modification and matrix extension study of the previously approved AOAC Official Method 2013.09 was conducted, and approval of the modification was received on March 20, 2014. Using an unpaired study design in a multilaboratory collaborative study, the 3M MDA Salmonella method was compared to the U.S. Department of Agriculture/Food Safety and Inspection Service (USDA/FSIS) Microbiology Laboratory Guidebook (MLG) 4.05 (2011), Isolation and Identification of Salmonella from Meat, Poultry, Pasteurized Egg, and Catfish Products for raw ground beef and the U.S. Food and Drug Administration (FDA)/Bacteriological Analytical Manual (BAM) Chapter 5, Salmonella reference method for wet dog food following the current AOAC guidelines. A total of 20 laboratories participated. For the 3M MDA Salmonella method, raw ground beef was analyzed using 25 g test portions, and wet dog food was analyzed using 375 g test portions. For the reference methods, 25 g test portions of each matrix were analyzed. Each matrix was artificially contaminated with Salmonella at three inoculation levels: an uninoculated control level (0 CFU/test portion), a low inoculum level (0.2-2 CFU/test portion), and a high inoculum level (2-5 CFU/test portion). In this study, 1512 unpaired replicate samples were analyzed. Statistical analysis was conducted according to the probability of detection (POD). For the low-level raw ground beef test portions, the following dLPOD (difference between the LPODs of the reference and candidate methods) values with 95% confidence intervals were obtained: -0.01 (-0.14, +0.12). For the low-level wet dog food test portions, the following dLPOD values with 95% confidence intervals were obtained: -0.04 (-0.16, +0.09). No significant differences were observed in the number of positive samples detected by the 3M MDA Salmonella method versus either the USDA/FSIS-MLG or FDA/BAM methods.
Probabilistic resident space object detection using archival THEMIS fluxgate magnetometer data
NASA Astrophysics Data System (ADS)
Brew, Julian; Holzinger, Marcus J.
2018-05-01
Ground-based optical and radar measurements have recently been demonstrated as viable methods for detecting small space objects at geosynchronous altitudes. In general, however, these methods are limited to the detection of objects greater than 10 cm. This paper examines the use of magnetometers to detect plausible flyby encounters with charged space objects using a matched filter signal existence binary hypothesis test approach. Relevant data-set processing and reduction of archival fluxgate magnetometer data from the NASA THEMIS mission is discussed in detail. Using the proposed methodology and a false alarm rate of 10%, 285 plausible detections with probability of detection greater than 80% are claimed and several are reviewed in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, J; Zhang, L; Samei, E
Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
Molaee, Neda; Abtahi, Hamid; Ghannadzadeh, Mohammad Javad; Karimi, Masoude; Ghaznavi-Rad, Ehsanollah
2015-01-01
Polymerase chain reaction (PCR) is preferred to other methods for detecting Escherichia coli (E. coli) in water in terms of speed, accuracy and efficiency. False positive results are considered the major disadvantage of PCR. For this reason, reverse transcriptase-polymerase chain reaction (RT-PCR) can be used to solve this problem. The aim of the present study was to determine the efficiency of RT-PCR for rapid detection of viable Escherichia coli in drinking water samples and to enhance its sensitivity through application of different filter membranes. Specific primers were designed for the 16S rRNA and elongation factor Tu (EF-Tu) genes. Different concentrations of bacteria were passed through FHLP and HAWP filters. Then, RT-PCR was performed using the 16S rRNA and EF-Tu primers. Contamination of 10 wells in Arak city was determined by RT-PCR. To evaluate RT-PCR efficiency, the results were compared with the most probable number (MPN) method. RT-PCR is able to detect bacteria at different concentrations. Application of the EF-Tu primers reduced false positive results compared to the 16S rRNA primers. The FHLP hydrophobic filters have a higher ability to absorb bacteria than the HAWP hydrophilic filters, so the use of hydrophobic filters will increase the sensitivity of RT-PCR. RT-PCR shows a higher sensitivity compared to the conventional water contamination detection method. Unlike PCR, RT-PCR does not lead to false positive results. The use of EF-Tu primers can reduce the incidence of false positive results. Furthermore, hydrophobic filters have a higher ability to absorb bacteria compared to hydrophilic filters.
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to obtain the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic (input) image, 26 images were created (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detection of abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
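As a point of reference, the per-frame operation described above (Poisson noise injection, thresholding, averaging) can be sketched in a few lines. The study used MATLAB; the numpy version below, with illustrative image values, threshold factor, and frame count, is a minimal sketch of the same idea, not the authors' code.

```python
import numpy as np

def ssr_enhance(image, threshold_factor=1.5, n_frames=100, seed=None):
    """Add Poisson noise, binarize at a count threshold, average the frames."""
    rng = np.random.default_rng(seed)
    threshold = threshold_factor * image.mean()
    frames = np.zeros((n_frames,) + image.shape)
    for i in range(n_frames):
        noisy = rng.poisson(image)         # random Poisson variate per pixel
        frames[i] = noisy >= threshold     # suprathreshold binarization
    return frames.mean(axis=0)             # averaged final image

# Toy example: a faint hot spot on a uniform background
img = np.full((64, 64), 10.0)
img[30:34, 30:34] += 6.0
enhanced = ssr_enhance(img, threshold_factor=1.2)
```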
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which examine a potential anomaly only at a single time point. In contrast, our method considers all suspected anomaly points jointly, using the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
Sri Lankan FRAX model and country-specific intervention thresholds.
Lekamwasam, Sarath
2013-01-01
There is wide variation in the fracture probabilities estimated by Asian FRAX models, although the outputs of the South Asian models are concordant. Clinicians can choose either fixed or age-specific intervention thresholds when making treatment decisions in postmenopausal women; the cost-effectiveness of such an approach, however, needs to be addressed. This study examined suitable fracture probability intervention thresholds (ITs) for Sri Lanka, based on the Sri Lankan FRAX model. Fracture probabilities were estimated and compared using all Asian FRAX models for a postmenopausal woman with a BMI of 25 kg/m² and no clinical risk factors apart from a fragility fracture. Age-specific ITs were estimated from the Sri Lankan FRAX model using the method followed by the National Osteoporosis Guideline Group in the UK. Using the age-specific ITs as the reference standard, suitable fixed ITs were also estimated. Fracture probabilities estimated by the different Asian FRAX models varied widely. The Japanese and Taiwanese models gave higher fracture probabilities, while the Chinese, Philippine, and Indonesian models gave lower fracture probabilities; outputs of the remaining FRAX models were generally similar. Age-specific ITs for major osteoporotic fracture probability (MOFP) based on the Sri Lankan FRAX model varied from 2.6% to 18% between 50 and 90 years. ITs for hip fracture probability (HFP) varied from 0.4% to 6.5% between 50 and 90 years. Among candidate fixed ITs, a MOFP of 11% and a HFP of 3.5% gave the lowest misclassification and highest agreement. The Sri Lankan FRAX model behaves similarly to other Asian FRAX models such as the Indian, Singapore-Indian, Thai, and South Korean models. Clinicians may use either the fixed or the age-specific ITs in making therapeutic decisions in postmenopausal women; the economic aspects of such decisions, however, need to be considered.
NASA Technical Reports Server (NTRS)
Melcher, Kevin J.; Cruz, Jose A.; Johnson Stephen B.; Lo, Yunnhon
2015-01-01
This paper describes a quantitative methodology for bounding the false positive (FP) and false negative (FN) probabilities associated with a human-rated launch vehicle abort trigger (AT) that includes sensor data qualification (SDQ). In this context, an AT is a hardware and software mechanism designed to detect the existence of a specific abort condition. Also, SDQ is an algorithmic approach used to identify sensor data suspected of being corrupt so that suspect data does not adversely affect an AT's detection capability. The FP and FN methodologies presented here were developed to support estimation of the probabilities of loss of crew and loss of mission for the Space Launch System (SLS) which is being developed by the National Aeronautics and Space Administration (NASA). The paper provides a brief overview of system health management as being an extension of control theory; and describes how ATs and the calculation of FP and FN probabilities relate to this theory. The discussion leads to a detailed presentation of the FP and FN methodology and an example showing how the FP and FN calculations are performed. This detailed presentation includes a methodology for calculating the change in FP and FN probabilities that result from including SDQ in the AT architecture. To avoid proprietary and sensitive data issues, the example incorporates a mixture of open literature and fictitious reliability data. Results presented in the paper demonstrate the effectiveness of the approach in providing quantitative estimates that bound the probability of a FP or FN abort determination.
Identification and resolution of artifacts in the interpretation of imprinted gene expression.
Proudhon, Charlotte; Bourc'his, Déborah
2010-12-01
Genomic imprinting refers to genes that are epigenetically programmed in the germline to express exclusively or preferentially one allele in a parent-of-origin manner. Expression-based genome-wide screening for the identification of imprinted genes has failed to uncover a significant number of new imprinted genes, probably because of the high tissue- and developmental-stage specificity of imprinted gene expression. A very large number of technical and biological artifacts can also lead to the erroneous evidence of imprinted gene expression. In this article, we focus on three common sources of potential confounding effects: (i) random monoallelic expression in monoclonal cell populations, (ii) genetically determined monoallelic expression and (iii) contamination or infiltration of embryonic tissues with maternal material. This last situation specifically applies to genes that occur as maternally expressed in the placenta. Beside the use of reciprocal crosses that are instrumental to confirm the parental specificity of expression, we provide additional methods for the detection and elimination of these situations that can be misinterpreted as cases of imprinted expression.
Antagonistic effect of helpers on breeding male and female survival in a cooperatively breeding bird
Paquet, Matthieu; Doutrelant, Claire; Hatchwell, Ben J; Spottiswoode, Claire N; Covas, Rita
2015-01-01
1. Cooperatively breeding species are typically long lived and hence, according to theory, are expected to maximize their lifetime reproductive success through maximizing survival. Under these circumstances, the presence of helpers could be used to lighten the effort of current reproduction for parents to achieve higher survival. 2. In addition, individuals of different sexes and ages may follow different strategies, but whether male and female breeders and individuals of different ages benefit differently from the presence of helpers has often been overlooked. Moreover, only one study that investigated the relationship between parental survival and the presence of helpers used capture–mark–recapture (CMR) analyses. These methods are important because they account for individuals that are alive in the population but go undetected, and thus allow the effects of helpers on survival and on recapture probability to be disentangled. 3. Here, we used multi-event CMR methods to investigate whether the number of helpers was associated with an increase in survival probability for male and female breeders of different ages in the sociable weaver Philetairus socius. In this species, both sexes reduce their feeding rate in the presence of helpers. We therefore predicted that the presence of helpers should increase the breeders' survival in both sexes, especially early in life when individuals potentially have more future breeding opportunities. In addition, sociable weaver females reduce their investment in eggs in the presence of helpers, so we predicted a stronger effect of helpers on female than on male survival. 4. As expected, we found that females had a higher survival probability when breeding with more helpers. Unexpectedly, however, male survival probability decreased with increasing number of helpers. This antagonistic effect diminished as the breeders grew older. 5. These results illustrate the complexity of the fitness costs and benefits underlying cooperative behaviours and how these may vary with an individual's sex and age. They also highlight the need for further studies on the sex-specific effects of helpers on survival. PMID:25850564
Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.
Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu
2018-02-01
Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
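For orientation, the inversion that the hyperbolic approximation targets can be written exactly under a Poisson dark-count model: the detector false-fires if at least one noise photon arrives in the range gate, so Pf = 1 - exp(-N). The sketch below inverts this exact relation; it stands in for, and is not, the paper's hyperbolic approximation.

```python
import math

def noise_photons_from_false_rate(p_false):
    """Invert Pf = 1 - exp(-N) for the mean noise-photon number N per gate."""
    return -math.log(1.0 - p_false)

# e.g. a 5% false-firing percentage implies ~0.051 noise photons per gate
print(noise_photons_from_false_rate(0.05))
```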
Mirzadeh, Abolfazl; Saadatnia, Geita; Golkar, Majid; Babaie, Jalal; Noordin, Rahmah
2017-05-01
SAG1-related sequence 3 (SRS3) is one of the major Toxoplasma gondii tachyzoite surface antigens and has been shown to be potentially useful for the detection of toxoplasmosis. This protein is highly conformational due to the presence of six disulfide bonds, and its solubility and antigenicity depend on proper disulfide bond formation. The aim of this study was to over-express the SRS3 protein with correct folding for use in serodiagnosis of the disease. To achieve this, a truncated SRS3 fusion protein (rtSRS3) was produced, containing six histidyl residues at both terminals, and purified by immobilized metal affinity chromatography. The refolding process was performed by three methods: dialysis in the presence of chemical additives with reduced/oxidized glutathione, and drop-wise dilution with either reduced/oxidized glutathione or reduced DTT/oxidized glutathione. Ellman's assay and ELISA showed that the dialysis method yielded the most favorable, probably correct, folding. Subsequently, serum samples from individuals with chronic infection (n = 76), probable acute infection (n = 14), and healthy controls (n = 81) were used to determine the usefulness of the refolded rtSRS3 for Toxoplasma serodiagnosis. The developed IgG-ELISA showed a diagnostic specificity of 91% and sensitivities of 82.89% and 100% for chronic and acute serum samples, respectively. In conclusion, correctly folded rtSRS3 has the potential to be used as a soluble antigen for the detection of human toxoplasmosis.
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2016-05-01
We have developed a new approach for detection and tracking of human full bodies and body parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme was developed to deal with body size changes, illumination condition changes, and cross-camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain a high probability of detection and a low probability of false alarm for full-body detection. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body parts (arms, legs, torso, head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a subtractive I/Q image fusion with morphological operations. With this method, we can reliably detect and track human skin color related body parts such as the face, neck, arms, and legs. Reliable body-part (e.g., head) detection allows us to continuously track an individual person even when multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back into individual detections based on the detected head positions. Detected body parts also allow us to extract important local constellation features of body-part positions and angles relative to the full body. These features are useful for human walking gait pattern recognition and human pose (e.g., standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced by our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance face resolution for improved human face recognition performance.
Amundson, Courtney L.; Royle, J. Andrew; Handel, Colleen M.
2014-01-01
Imperfect detection during animal surveys biases estimates of abundance and can lead to improper conclusions regarding distribution and population trends. Farnsworth et al. (2005) developed a combined distance-sampling and time-removal model for point-transect surveys that addresses both availability (the probability that an animal is available for detection; e.g., that a bird sings) and perceptibility (the probability that an observer detects an animal, given that it is available for detection). We developed a hierarchical extension of the combined model that provides an integrated analysis framework for a collection of survey points at which both distance from the observer and time of initial detection are recorded. Implemented in a Bayesian framework, this extension facilitates evaluating covariates on abundance and detection probability, incorporating excess zero counts (i.e. zero-inflation), accounting for spatial autocorrelation, and estimating population density. Species-specific characteristics, such as behavioral displays and territorial dispersion, may lead to different patterns of availability and perceptibility, which may, in turn, influence the performance of such hierarchical models. Therefore, we first test our proposed model using simulated data under different scenarios of availability and perceptibility. We then illustrate its performance with empirical point-transect data for a songbird that consistently produces loud, frequent, primarily auditory signals, the Golden-crowned Sparrow (Zonotrichia atricapilla); and for 2 ptarmigan species (Lagopus spp.) that produce more intermittent, subtle, and primarily visual cues. Data were collected by multiple observers along point transects across a broad landscape in southwest Alaska, so we evaluated point-level covariates on perceptibility (observer and habitat), availability (date within season and time of day), and abundance (habitat, elevation, and slope), and included a nested point-within-transect and park-level effect. Our results suggest that this model can provide insight into the detection process during avian surveys and reduce bias in estimates of relative abundance but is best applied to surveys of species with greater availability (e.g., breeding songbirds).
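To make the two detection components concrete, a minimal sketch follows: overall detection probability is availability times perceptibility, with perceptibility modelled here by a conventional half-normal distance function. The functional form and all parameter values are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def p_detect(distance_m, p_available=0.7, sigma=120.0):
    """P(detect) = availability x half-normal perceptibility at a distance."""
    perceptibility = np.exp(-(distance_m ** 2) / (2.0 * sigma ** 2))
    return p_available * perceptibility

# Detection probability falls with distance even at fixed availability
for d in (25, 100, 250):
    print(d, round(float(p_detect(d)), 3))
```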
Guo, Jinchao; Yang, Litao; Liu, Xin; Guan, Xiaoyan; Jiang, Lingxi; Zhang, Dabing
2009-08-26
Genetically modified (GM) papaya (Carica papaya L.) Huanong No. 1 was approved for commercialization in Guangdong province, China, in 2006, and a detection method for Huanong No. 1 papaya is necessary for implementing genetically modified organism (GMO) labeling regulations. In this study, we report the characterization of the exogenous integration in GM Huanong No. 1 papaya by means of conventional polymerase chain reaction (PCR) and thermal asymmetric interlaced (TAIL)-PCR strategies. The results suggest that one intact copy of the initial construct was integrated into the papaya genome, probably resulting in a 38 bp deletion of the host genomic DNA. In addition, an unintended insertion of a 92 bp truncated NptII fragment was observed at the 5' end of the exogenous insert. Furthermore, we resolved the 5' and 3' flanking sequences between the insert DNA and the papaya genomic DNA, and developed event-specific qualitative and quantitative PCR assays for GM Huanong No. 1 papaya based on the 5' integration flanking sequence. The relative limit of detection (LOD) of the qualitative PCR assay was about 0.01% in 100 ng of total papaya genomic DNA, corresponding to about 25 copies of the papaya haploid genome. In the quantitative PCR, the limits of detection and quantification (LOD and LOQ) were as low as 12.5 and 25 copies of the papaya haploid genome, respectively. In practical sample quantification, the biases between the tested and true values of three samples ranged from 0.44% to 4.41%. Collectively, these results are useful for the identification and quantification of Huanong No. 1 papaya and its derivatives.
NASA Astrophysics Data System (ADS)
Sheet, Debdoot; Karamalis, Athanasios; Kraft, Silvan; Noël, Peter B.; Vag, Tibor; Sadhu, Anup; Katouzian, Amin; Navab, Nassir; Chatterjee, Jyotirmoy; Ray, Ajoy K.
2013-03-01
Breast cancer is the most common form of cancer in women. Early diagnosis can significantly improve life expectancy and allow different treatment options. Clinicians favor 2D ultrasonography for breast tissue abnormality screening due to its high sensitivity and specificity compared to competing technologies. However, inter- and intra-observer variability in the visual assessment and reporting of lesions often handicaps its performance. Existing computer-assisted diagnosis (CAD) systems, though able to detect solid lesions, are often restricted in performance: they are unable to (1) detect lesions of multiple sizes and shapes, and (2) differentiate hypo-echoic lesions from their posterior acoustic shadowing. In this work we present a completely automatic system for detection and segmentation of breast lesions in 2D ultrasound images. We employ random forests to learn a tissue-specific primal to discriminate breast lesions from surrounding normal tissues. This enables the system to detect lesions of multiple shapes and sizes and to discriminate hypo-echoic lesions from associated posterior acoustic shadowing. The primal comprises (i) multiscale estimated ultrasonic statistical physics and (ii) scale-space characteristics. The random forest learns the lesion vs. background primal from a database of 2D ultrasound images with labeled lesions. For segmentation, the posterior probabilities of lesion pixels estimated by the learnt random forest are hard thresholded to provide a random walks segmentation stage with starting seeds. Our method achieves detection with 99.19% accuracy and segmentation with mean contour-to-contour error < 3 pixels on a set of 40 images with 49 lesions.
Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo
2016-12-13
In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using VMD, yielding numerous components. Based on the probability density function (PDF), an adaptive de-noising algorithm using VMD is proposed for processing the noise components and reconstructing the de-noised components. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. Analysis of pipeline leakage signals from 1 mm and 2 mm leaks showed that the proposed method can detect a small leak accurately and effectively. Moreover, the experimental results showed that the proposed method achieved better performance than support vector machine (SVM) and back-propagation neural network (BP) methods. PMID:27983577
Model-Free CUSUM Methods for Person Fit
ERIC Educational Resources Information Center
Armstrong, Ronald D.; Shi, Min
2009-01-01
This article demonstrates the use of a new class of model-free cumulative sum (CUSUM) statistics to detect person fit given the responses to a linear test. The fundamental statistic being accumulated is the likelihood ratio of two probabilities. The detection performance of this CUSUM scheme is compared to other model-free person-fit statistics…
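A minimal sketch of a CUSUM of this kind is given below: the accumulated statistic is the log-likelihood ratio of each item response under an aberrant versus a fitting response probability. The Bernoulli likelihoods, one-sided accumulation, and decision limit are illustrative assumptions rather than the article's exact scheme.

```python
import math

def cusum_llr(responses, p_fit, p_aberrant, limit=3.0):
    """Flag misfit when the cumulative log-likelihood ratio exceeds the limit."""
    s = 0.0
    for x, p0, p1 in zip(responses, p_fit, p_aberrant):
        # Bernoulli log-likelihood ratio for one item response x in {0, 1}
        llr = math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        s = max(0.0, s + llr)          # one-sided CUSUM accumulation
        if s > limit:
            return True                # person-fit violation signalled
    return False

# Example: correct answers on hard items but misses on easy ones raise the CUSUM
print(cusum_llr([1, 1, 0, 0, 1], [0.2, 0.3, 0.9, 0.95, 0.25],
                [0.7, 0.7, 0.4, 0.4, 0.7]))
```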
Suedee, Roongnapa; Intakong, Wimon; Dickert, Franz L
2006-08-15
An alternative screening method for haloacetic acid (HAA) disinfection by-products in drinking water is described. The method is based on a piezoelectric quartz crystal microbalance (QCM) transducing system in which the electrode is coated with a trichloroacetic acid-molecularly imprinted polymer (TCAA-MIP). This MIP comprises a crosslinked poly(ethyleneglycoldimethacrylate-co-4-vinylpyridine). The coated QCM is able to specifically detect the analytes in water samples in terms of the mass change arising from acid-base interactions of the analytes with the MIP. The TCAA-MIP-coated QCM showed high specificity for the determination of TCAA in aqueous solutions containing inorganic anions, but its sensitivity was reduced in water samples containing hydrochloric acid due to a mass loss at the sensor surface. Cross-reactivity studies with HAA analogs (dichloro-, monochloro-, tribromo-, dibromo-, and monobromo-acetic acids) and non-structurally related molecules (acetic acid and malonic acid) indicated that recognition of the structurally related TCAA compounds by the TCAA-MIP-based QCM is due to the carboxylic acid functional group, and probably involves a combination of size and shape selectivity. The total response time of the sensor is on the order of 10 min. The achieved limits of detection for HAAs (20-50 µg l⁻¹) are at present higher than the concentrations found in real-life samples, but below the guidelines for the maximum permissible levels (60 µg l⁻¹ for mixed HAAs). Recovery studies with drinking water samples spiked with TCAA or with mixtures of HAAs demonstrated the reproducibility and precision of the method. The present work has demonstrated that the proposed assay can be a fast, reliable and inexpensive screening method for HAA contaminants in water samples, but further refinement is required to improve the limits of detection.
Phallometric Diagnosis of Pedophilia.
ERIC Educational Resources Information Center
Freund, Kurt; Blanchard, Ray
1989-01-01
Investigated sensitivity/specificity of phallometric test for pedophilia and hebephilia. Total samples were 154 men accused of sexual offenses against minors and 56 men accused of sexual offenses against adult women. Concluded that sensitivity of test in detecting pedophilia or hebephilia in complete nonadmitters is probably greater than or equal…
Health Monitoring Survey of Bell 412EP Transmissions
NASA Technical Reports Server (NTRS)
Tucker, Brian E.; Dempsey, Paula J.
2016-01-01
Health and usage monitoring systems (HUMS) use vibration-based condition indicators (CIs) to assess the health of helicopter powertrain components. A fault is detected when a CI exceeds its threshold value. The effectiveness of fault detection can be judged by assessing the condition of actual components from fleet aircraft. The Bell 412 HUMS-equipped helicopter was chosen for such an evaluation. A sample of 20 aircraft included 12 aircraft with confirmed transmission and gearbox faults (detected by CIs) and eight aircraft with no known faults. The associated CI data were classified into "healthy" and "faulted" populations based on actual condition, and these populations were compared against their CI thresholds to quantify the probability of false alarm and the probability of missed detection. Receiver operating characteristic (ROC) analysis was used to optimize thresholds. Based on the results of the analysis, shortcomings in the classification method were identified for slow-moving CI trends. Recommendations for improving classification using time-dependent ROC methods are put forth. Finally, lessons learned regarding OEM-operator communication are presented.
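The classification step described above reduces to sweeping a CI threshold over the two empirical populations. The sketch below traces that trade-off and picks an operating point by Youden's J statistic, which is one common ROC criterion and not necessarily the one used in this survey.

```python
import numpy as np

def optimize_threshold(healthy, faulted):
    """Return the CI threshold maximizing (true positive - false positive) rate."""
    candidates = np.unique(np.concatenate([healthy, faulted]))
    best_t, best_j = None, -np.inf
    for t in candidates:
        p_fa = np.mean(healthy >= t)      # probability of false alarm
        p_md = np.mean(faulted < t)       # probability of missed detection
        j = (1 - p_md) - p_fa             # Youden's J statistic
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Simulated CI samples for the two populations (illustrative values only)
rng = np.random.default_rng(0)
print(optimize_threshold(rng.normal(1.0, 0.3, 200), rng.normal(2.2, 0.5, 40)))
```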
NASA Astrophysics Data System (ADS)
Gharibnezhad, Fahit; Mujica, Luis E.; Rodellar, José
2015-01-01
Using principal component analysis (PCA) for structural health monitoring (SHM) has received considerable attention over the past few years. PCA has been used not only as a direct method to identify, classify and localize damage but also as a significant primary step for other methods. Despite its many strengths, PCA is very sensitive to outliers. Outliers are anomalous observations that can affect the variance and the covariance, which are vital ingredients of the PCA method; results based on PCA in the presence of outliers are therefore not fully satisfactory. As a main contribution, this work suggests the use of a robust variant of PCA, insensitive to outliers, as an effective way to deal with this problem in the SHM field. In addition, robust PCA is compared with classical PCA in terms of detecting probable damage. The comparison shows that robust PCA can distinguish damage much better than the classical method, and in many cases allows detection where classical PCA is unable to discern between damaged and undamaged structures. Moreover, different types of robust PCA are compared with each other, as well as with their classical counterpart, in terms of damage detection. All results were obtained through experiments with an aircraft turbine blade using piezoelectric transducers as sensors and actuators and adding simulated damage.
Monitoring Butterfly Abundance: Beyond Pollard Walks
Pellet, Jérôme; Bried, Jason T.; Parietti, David; Gander, Antoine; Heer, Patrick O.; Cherix, Daniel; Arlettaz, Raphaël
2012-01-01
Most butterfly monitoring protocols rely on counts along transects (Pollard walks) to generate species abundance indices and track population trends. It is still too often ignored that a population count results from two processes: the biological process (true abundance) and the statistical process (our ability to properly quantify abundance). Because individual detectability tends to vary in space (e.g., among sites) and time (e.g., among years), it remains unclear whether index counts truly reflect population sizes and trends. This study compares capture-mark-recapture (absolute abundance) and count-index (relative abundance) monitoring methods in three species (Maculinea nausithous and Iolana iolas: Lycaenidae; Minois dryas: Satyridae) in contrasted habitat types. We demonstrate that intraspecific variability in individual detectability under standard monitoring conditions is probably the rule rather than the exception, which questions the reliability of count-based indices to estimate and compare specific population abundance. Our results suggest that the accuracy of count-based methods depends heavily on the ecology and behavior of the target species, as well as on the type of habitat in which surveys take place. Monitoring programs designed to assess the abundance and trends in butterfly populations should incorporate a measure of detectability. We discuss the relative advantages and inconveniences of current monitoring methods and analytical approaches with respect to the characteristics of the species under scrutiny and resources availability. PMID:22859980
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
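A minimal sketch of this thresholding logic follows, assuming a vector of signature signals measured on known-negative samples. Fitting a Gaussian density to the negatives is an illustrative assumption; its survival function plays the role of the false-positive-rate curve, and the threshold is the point where that curve meets the chosen false-positive criterion.

```python
import numpy as np
from scipy import stats

def threshold_for_fpr(negative_signals, fp_criterion=0.01):
    """Threshold where the negative-sample FPR curve meets the criterion."""
    mu = np.mean(negative_signals)
    sd = np.std(negative_signals, ddof=1)
    # FPR(t) = P(negative signal >= t); invert at the criterion
    return stats.norm.isf(fp_criterion, loc=mu, scale=sd)

rng = np.random.default_rng(0)
neg = rng.normal(100.0, 8.0, size=500)    # simulated negative samples
print(threshold_for_fpr(neg))             # roughly mu + 2.33*sd for a 1% FPR
```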
Fourier Method for Calculating Fission Chain Neutron Multiplicity Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, David H.; Chandrasekaran, Hema; Walston, Sean E.
2017-03-27
Here, a new way of utilizing the fast Fourier transform is developed to compute the probability distribution for a fission chain to create n neutrons. We then extend this technique to compute the probability distributions for detecting n neutrons. Lastly, our technique can be used for fission chains initiated by either a single neutron inducing a fission or by the spontaneous fission of another isotope.
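The FFT building block behind this technique is the fact that convolution becomes multiplication in Fourier space. The sketch below computes the distribution of total neutrons from k independent fissions by that route; the single-fission multiplicity values are illustrative, and the full fission-chain recursion of the paper is more involved than this one step.

```python
import numpy as np

def k_fission_multiplicity(nu, k):
    """Distribution of total neutrons from k independent fissions via FFT."""
    n = k * (len(nu) - 1) + 1          # support of the k-fold convolution
    f = np.fft.rfft(nu, n)             # zero-padded transform
    p = np.fft.irfft(f ** k, n)        # product in Fourier space = convolution
    return np.clip(p, 0.0, None)       # clip tiny negative FFT round-off

nu = np.array([0.03, 0.16, 0.33, 0.30, 0.14, 0.04])  # illustrative P(nu)
print(k_fission_multiplicity(nu, 3).round(4))
```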
Spatial cluster detection using dynamic programming.
Sverchkov, Yuriy; Jiang, Xia; Cooper, Gregory F
2012-03-25
The task of spatial cluster detection involves finding spatial regions where some property deviates from the norm or the expected value. In a probabilistic setting this task can be expressed as finding a region where some event is significantly more likely than usual. Spatial cluster detection is of interest in fields such as biosurveillance, mining of astronomical data, military surveillance, and analysis of fMRI images. In almost all such applications we are interested both in the question of whether a cluster exists in the data, and if it exists, we are interested in finding the most accurate characterization of the cluster. We present a general dynamic programming algorithm for grid-based spatial cluster detection. The algorithm can be used for both Bayesian maximum a-posteriori (MAP) estimation of the most likely spatial distribution of clusters and Bayesian model averaging over a large space of spatial cluster distributions to compute the posterior probability of an unusual spatial clustering. The algorithm is explained and evaluated in the context of a biosurveillance application, specifically the detection and identification of Influenza outbreaks based on emergency department visits. A relatively simple underlying model is constructed for the purpose of evaluating the algorithm, and the algorithm is evaluated using the model and semi-synthetic test data. When compared to baseline methods, tests indicate that the new algorithm can improve MAP estimates under certain conditions: the greedy algorithm we compared our method to was found to be more sensitive to smaller outbreaks, while as the size of the outbreaks increases, in terms of area affected and proportion of individuals affected, our method overtakes the greedy algorithm in spatial precision and recall. The new algorithm performs on-par with baseline methods in the task of Bayesian model averaging. We conclude that the dynamic programming algorithm performs on-par with other available methods for spatial cluster detection and point to its low computational cost and extendability as advantages in favor of further research and use of the algorithm.
Glucosinolate structures in evolution.
Agerbirk, Niels; Olsen, Carl Erik
2012-05-01
By 2000, around 106 natural glucosinolates (GSLs) were probably documented. In the past decade, 26 additional natural GSL structures have been elucidated and documented. Hence, the total number of documented GSLs from nature by 2011 can be estimated at around 132. A considerable number of additional suggested structures are concluded not to be sufficiently documented. In many cases, NMR spectroscopy would have provided the missing structural information. Of the GSLs documented in the past decade, several are of previously unexpected structures and occur at considerable levels. Most originate from just four species: Barbarea vulgaris, Arabidopsis thaliana, Eruca sativa and Isatis tinctoria. Acyl derivatives of known GSLs comprised 15 of the 26 newly documented structures, while the remainder exhibited new substitution patterns or chain lengths, or contained a mercapto group or related thio-functionality. GSL identification methods are reviewed, and the importance of using authentic references and structure-sensitive detection methods such as MS and NMR is stressed, especially when species with relatively unknown chemistry are analyzed. An example of qualitative GSL analysis is presented with experimental details (group separation and HPLC of both intact and desulfated GSLs, detection and structure determination by UV, MS, NMR and susceptibility to myrosinase) with emphasis on the use of NMR for structure elucidation of even minor GSLs and GSL hydrolysis products. The example includes identification of a novel GSL, (R)-2-hydroxy-2-(3-hydroxyphenyl)ethylglucosinolate. Recent investigations of GSL evolution, based on investigations of species with well established phylogeny, are reviewed. From the relatively few such investigations, it is already clear that GSL profiles are regularly subject to evolution. This result is compatible with natural selection for specific GSL side chains. The probable existence of structure-specific GSL catabolism in intact plants suggests that biochemical evolution of GSLs has more complex implications than the mere liberation of a different hydrolysis product upon tissue disruption.
ROKU: a novel method for identification of tissue-specific genes
Kadota, Koji; Ye, Jiazhen; Nakai, Yuji; Terada, Tohru; Shimizu, Kentaro
2006-01-01
Background One of the important goals of microarray research is the identification of genes whose expression is considerably higher or lower in some tissues than in others. We would like to have ways of identifying such tissue-specific genes. Results We describe a method, ROKU, which selects tissue-specific patterns from gene expression data for many tissues and thousands of genes. ROKU ranks genes according to their overall tissue specificity using Shannon entropy and detects tissues specific to each gene if any exist using an outlier detection method. We evaluated the capacity for the detection of various specific expression patterns using synthetic and real data. We observed that ROKU was superior to a conventional entropy-based method in its ability to rank genes according to overall tissue specificity and to detect genes whose expression pattern are specific only to objective tissues. Conclusion ROKU is useful for the detection of various tissue-specific expression patterns. The framework is also directly applicable to the selection of diagnostic markers for molecular classification of multiple classes. PMID:16764735
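The entropy ranking at the core of this approach is easy to sketch: normalize a gene's cross-tissue expression profile and compute its Shannon entropy, with low entropy indicating tissue-specific expression. ROKU additionally preprocesses the profile and applies an outlier test, both of which the sketch below omits.

```python
import numpy as np

def expression_entropy(x):
    """Shannon entropy (bits) of a nonnegative cross-tissue expression profile."""
    p = np.asarray(x, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                          # treat 0 * log(0) as 0
    return float(-(p * np.log2(p)).sum())

print(expression_entropy([95, 1, 1, 1, 1, 1]))      # low entropy: tissue-specific
print(expression_entropy([10, 10, 10, 10, 10, 10])) # high entropy: ubiquitous
```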
Early indices of deviance detection in humans and animal models.
Grimm, Sabine; Escera, Carles; Nelken, Israel
2016-04-01
Detecting unexpected stimuli in the environment is a critical function of the auditory system. Responses to unexpected "deviant" sounds are enhanced compared to responses to expected stimuli. At the human scalp, deviance detection is reflected in the mismatch negativity (MMN) and in an enhancement of the middle-latency response (MLR). Single neurons often respond more strongly to a stimulus when rare than when common, a phenomenon termed stimulus-specific adaptation (SSA). Here we compare stimulus-specific adaptation with scalp-recorded deviance-related responses. We conclude that early markers of deviance detection in the time range of the MLR could be a direct correlate of cortical SSA. Both occur at an early level of cortical activation, both are robust findings with low-probability stimuli, and both show properties of genuine deviance detection. Their causal relation with the later scalp-recorded MMN is a key question in this field.
Probabilistic Model for Untargeted Peak Detection in LC-MS Using Bayesian Statistics.
Woldegebriel, Michael; Vivó-Truyols, Gabriel
2015-07-21
We introduce a novel Bayesian probabilistic peak detection algorithm for liquid chromatography-mass spectrometry (LC-MS). The final probabilistic result allows the user to make a final decision about which points in a chromatogram are affected by a chromatographic peak and which ones are affected only by noise. The use of probabilities contrasts with the traditional approach, in which a binary answer is given by relying on a threshold. With the Bayesian peak detection presented here, the probability values can be further propagated into other preprocessing steps, increasing (or decreasing) the importance of chromatographic regions in the final results. The present work uses the statistical overlap theory of component overlap of Davis and Giddings (Davis, J. M.; Giddings, J. Anal. Chem. 1983, 55, 418-424) as the prior probability in the Bayesian formulation. The algorithm was tested on LC-MS Orbitrap data and was able to successfully distinguish chemical noise from actual peaks without any data preprocessing.
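A two-hypothesis toy version of this probabilistic output is sketched below: each chromatogram point receives a posterior probability of belonging to a peak rather than a binary call. The Gaussian noise and peak likelihoods and the flat prior value are illustrative assumptions; the paper instead derives its prior from the Davis-Giddings statistical overlap theory.

```python
import numpy as np
from scipy import stats

def peak_posterior(signal, noise_mu=0.0, noise_sd=1.0,
                   peak_mu=5.0, peak_sd=2.0, prior_peak=0.1):
    """Posterior P(peak | intensity) under two Gaussian hypotheses."""
    like_noise = stats.norm.pdf(signal, noise_mu, noise_sd)
    like_peak = stats.norm.pdf(signal, peak_mu, peak_sd)
    return (like_peak * prior_peak
            / (like_peak * prior_peak + like_noise * (1 - prior_peak)))

y = np.array([0.1, 0.3, 2.5, 6.0, 7.2, 3.1, 0.2])
print(peak_posterior(y).round(3))   # probabilities, not a thresholded answer
```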
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
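The central trade-off can be made concrete with the standard occupancy identity: with per-survey detection probability p and K surveys per site, the probability an occupied site is detected at least once is p* = 1 - (1 - p)^K. The sketch below, with an illustrative survey budget rather than GENPRES output, shows how adding repeat visits costs sites.

```python
def p_star(p, K):
    """Probability an occupied site is detected in at least one of K surveys."""
    return 1.0 - (1.0 - p) ** K

budget = 120                      # total surveys available (illustrative)
for K in (2, 3, 4, 6):
    sites = budget // K           # more repeat visits -> fewer sample sites
    print(f"K={K}: {sites} sites, p*={p_star(0.4, K):.2f}")
```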
A new multistage groundwater transport inverse method: presentation, evaluation, and implications
Anderman, Evan R.; Hill, Mary C.
1999-01-01
More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.
Evaluation of the Thermo Scientific™ SureTect™ Listeria species Assay.
Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko
2014-03-01
The Thermo Scientific™ SureTect™ Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods℠ program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
Evaluation of the Thermo Scientific SureTect Listeria monocytogenes Assay.
Cloke, Jonathan; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron; Evans, Katharine; Crabtree, David; Hughes, Annette; Hopper, Craig; Simpson, Helen; Withey, Sophie; Oleksiuk, Milena; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko
2014-01-01
The Thermo Scientific SureTect Listeria monocytogenes Assay is a new real-time PCR assay for the detection of Listeria monocytogenes in food and environmental samples. This assay was validated using the AOAC Research Institute (AOAC-RI) Performance Tested Methods program in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including Amendment 1:2004 with the following foods and food contact surfaces: smoked salmon, processed cheese, fresh bagged spinach, fresh cantaloupe, cooked prawns (chilled product), cooked sliced turkey meat (chilled product), ice cream, pork frankfurters, salami, ground raw beef meat (12% fat), plastic, and stainless steel. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, bagged lettuce, and stainless steel) were analyzed independently as part of the AOAC-RI controlled laboratory study by the University of Guelph, Canada. Using probability of detection (POD) statistical analysis, a significant difference was demonstrated between the candidate and reference methods for salami, cooked sliced turkey and ice cream in favor of the SureTect assay. For all other matrixes, no significant difference by POD was seen between the two methods during the study. Inclusivity and exclusivity testing was also conducted with 53 and 30 isolates, respectively, which demonstrated that the SureTect assay was able to detect all serotypes of L. monocytogenes. None of the exclusivity isolates analyzed were detected by the SureTect assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside the recommended parameters open to variation, i.e., enrichment time and temperature and lysis temperature, which demonstrated that the assay gave reliable performance. Accelerated stability testing was also conducted, validating the assay shelf life.
NASA Astrophysics Data System (ADS)
Canino, Lawrence S.; Shen, Tongye; McCammon, J. Andrew
2002-12-01
We extend the self-consistent pair contact probability method to the evaluation of the partition function for a protein complex at thermodynamic equilibrium. Specifically, we adapt the method for multichain models and introduce a parametrization for amino acid-specific pairwise interactions. This method is similar to the Gaussian network model but allows for the adjusting of the strengths of native state contacts. The method is first validated on a high resolution x-ray crystal structure of bovine Pancreatic Phospholipase A2 by comparing calculated B-factors with reported values. We then examine binding-induced changes in flexibility in protein-protein complexes, comparing computed results with those obtained from x-ray crystal structures and molecular dynamics simulations. In particular, we focus on the mouse acetylcholinesterase:fasciculin II and the human α-thrombin:thrombomodulin complexes.
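Since the abstract positions the method relative to the Gaussian network model (GNM), a minimal GNM sketch is included below as a point of reference: predicted relative B-factors are read off the diagonal of the pseudo-inverse of the Kirchhoff (contact) matrix. The 7 Å cutoff, unit spring constants, and toy C-alpha trace are conventional illustrative choices, not the paper's parametrized amino acid-specific contacts.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Relative B-factors from the diagonal of the Kirchhoff pseudo-inverse."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)   # -1 for residues in contact
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # diagonal = degree
    gamma_inv = np.linalg.pinv(kirchhoff)     # pseudo-inverse drops the zero mode
    return np.diag(gamma_inv)                 # fluctuation ~ B-factor, up to scale

rng = np.random.default_rng(1)
ca = np.cumsum(rng.normal(0, 2.0, size=(50, 3)), axis=0)  # toy C-alpha trace
print(gnm_bfactors(ca)[:5].round(3))
```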
Mandhaniya, Sushil; Iqbal, Sobuhi; Sharawat, Surender Kumar; Xess, Immaculata; Bakhshi, Sameer
2012-07-01
Invasive fungal infections (IFI) lead to morbidity and mortality in neutropenic patients and in allogeneic stem cell transplantation. Serum-based fungal detection assays are limited in specificity or sensitivity, and studies of fungal DNA detection using real-time PCR in childhood leukaemia are lacking. The aim of this study was to develop a sensitive and specific diagnostic tool for IFI in paediatric acute leukaemia patients using real-time PCR. Of 100 randomised paediatric acute leukaemia patients receiving antifungal prophylaxis with voriconazole/amphotericin B, a single peripheral whole blood sample in EDTA was used for the Pan-AC real-time PCR assay (which detects nine Candida and six Aspergillus species) in patients who failed prophylaxis due to proven, probable, possible or suspected fungal infections. PCR results were retrospectively correlated with the clinical profile. The real-time PCR test was positive in 18/29 (62%) patients who failed prophylaxis. In the only patient with proven IFI (mucormycosis), the real-time PCR assay was negative. Real-time PCR was positive in 2/4 (50%) patients with possible IFI, 16/24 (66.6%) with suspected IFI, and 5/10 (50%) patients with pneumonia. By methods A/B, sensitivity and positive predictive value could not be assessed because there were no proven Aspergillus or Candida infections; specificity and negative predictive value (NPV) were 41% and 100%, respectively. By method C (which counted episodes of possible IFI as true positives), sensitivity, specificity, PPV and NPV were 50%, 36%, 11% and 81%, respectively. Among those with suspected IFI, 8/24 (33.3%) were PCR negative and unnecessarily received empirical antifungal therapy (EAFT). Real-time PCR is a practical, rapid, non-invasive screening test for excluding IFI in paediatric leukaemia. The high NPV makes real-time PCR a promising tool to use prior to initiating EAFT in antibiotic-resistant febrile neutropenic patients; this would avoid the toxicity, cost and hospitalisation associated with EAFT (ClinicalTrials.gov identifier: NCT00624143).
Tanabe, Soichi; Miyauchi, Eiji; Muneshige, Akemi; Mio, Kazuhiro; Sato, Chikara; Sato, Masahiko
2007-07-01
A PCR method to detect porcine DNA was developed for verifying the allergen labeling of foods and for identifying hidden pork ingredients in processed foods. The primer pair, F2/R1, was designed to detect the gene encoding porcine cytochrome b for the specific detection of pork with high sensitivity. The amplified DNA fragment (130 bp) was specifically detected from porcine DNA, while no amplification occurred with other species such as cattle, chicken, sheep, and horse. When the developed PCR method was used for investigating commercial food products, porcine DNA was clearly detected in those containing pork in the list of ingredients. In addition, 100 ppb of pork in heated gyoza (pork and vegetable dumpling) could be detected by this method. This method is rapid, specific and sensitive, making it applicable for detecting trace amounts of pork in processed foods.
Probabilistic double guarantee kidnapping detection in SLAM.
Tian, Yang; Ma, Shugen
2016-01-01
To determine whether kidnapping has happened, and of which type, while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method was previously proposed and shown to perform well in relatively small environments. However, our recent work revealed a limitation of DGKD in large-scale environments. To increase the adaptability of DGKD to large-scale environments, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper, combining the probability of features' positions with the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.
Detection of communities with Naming Game-based methods
Ribeiro, Carlos Henrique Costa
2017-01-01
Complex networks are often organized in groups or communities of agents that share the same features and/or functions, and this structural organization is built naturally as the system forms. In social networks, we argue that the dynamics of linguistic interactions of agreement among people can be a crucial factor in generating this community structure, given that sharing opinions with another person bonds them together, whereas disagreeing constantly would probably weaken the relationship. We present here a computational model of opinion exchange that uncovers the community structure of a network. Our aim is not to present a new community detection method proper, but to show how a model of social communication dynamics can reveal the (simple and overlapping) community structure in an emergent way. Our model is based on a standard Naming Game, but takes into consideration three social features (trust, uncertainty and opinion preference) that are built over time as agents communicate among themselves. We show that the separate addition of each social feature to the Naming Game results in gradual improvements in community detection. In addition, the resulting uncertainty and trust values classify nodes and edges according to role and position in the network. Our model has also shown a degree of accuracy, for both non-overlapping and overlapping communities, that is comparable with most algorithms specifically designed for topological community detection. PMID:28797097
Bottari, Fabio; Boveri, Sara; Iacobone, Anna Daniela; Gulmini, Chiara; Igidbashian, Sarah; Cassatella, Maria Cristina; Landoni, Fabio; Sandri, Maria Teresa
2018-02-20
High-risk (HR) human papillomavirus (HPV) tests differ in sensitivity and specificity. In this study, we evaluated the sensitivity and specificity of the HC2 HR HPV Test and the Cobas 4800 HPV Test in consecutive cervical samples collected from a referral population with a high prevalence of disease, using CIN2+ histology as the clinical outcome. Ten thousand two hundred and thirteen consecutive cervical samples were assayed for HR-HPV in the Laboratory Medicine Division of IEO: 5140 from January 2012 to June 2013 with HC2, and 5073 from July 2013 to December 2014 with the Cobas HPV Test. These two assays differ in their target genes and testing methods. The test positivity rates for HC2 and Cobas 4800 were 29.5% (1515/5135, 95% CI 28.3-30.8%) and 23.9% (1212/5069, 95% CI 22.7-25.1%), respectively. The detection rates of CIN2+ in the two periods were 2.8% (145/5140, 95% CI 2.4-3.3%) and 1.6% (79/5073, 95% CI 1.2-1.9%), respectively. The sensitivity for CIN2+ of HC2 and Cobas 4800 was 95.2% (138/145, 95% CI 91.7-98.7%) and 93.7% (74/79, 95% CI 88.3-99.0%), respectively. The specificity for CIN2+ of HC2 and Cobas 4800 was 72.4% (3613/4990, 95% CI 71.2-73.6%) and 77.2% (3852/4990, 95% CI 76.0-78.4%), respectively. There were 23 cases of cancer in each period: HC2 detected 100% (23/23) and Cobas 4800 detected 82.6% (19/23). The detection rate of CIN2+ was higher in the first period than in the second. There was no significant difference between HC2 and Cobas 4800 in sensitivity for CIN2+. The specificity for CIN2+ of Cobas 4800 in the second period was higher than that of HC2 in the first period, probably due to the lower prevalence of CIN2+ in the second period.
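The figures quoted above are binomial proportions with normal-approximation confidence intervals, which can be reproduced directly; the sketch below recovers the HC2 sensitivity line as a check.

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation 95% confidence interval."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = proportion_ci(138, 145)   # HC2 sensitivity for CIN2+
print(f"{100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f}%)")  # 95.2% (91.7-98.7%)
```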